
UTA Site Report


Presentation Transcript


  1. UTA Site Report Jae Yu Univ. of Texas, Arlington 3rd DOSAR Workshop University of Oklahoma Sept. 21 – 22, 2006

  2. Introduction • UTA’s conversion to the ATLAS experiment is in its final throes • Kaushik De is co-leading Panda development • Part of the ATLAS SW Tier 2 is at UTA • Its Phase I implementation is in progress • Jae Yu focuses on MonALISA-based Panda monitoring • No significant DØ production work has been done during the past several months • The HEP group is working with other disciplines on shared use of existing computing resources, notably DPCC

  3. UTA DPCC • UTA HEP-CSE + UTSW Medical joint project • NSF MRI supported • Hardware capacity • Linux system • 197 CPUs, a mixture of P4 Xeon 2.4 – 2.6 GHz • Total disk: 76.2 TB • Total memory: 1 GB/CPU • Network bandwidth: 68 Gb/s • Additional equipment will be purchased (about 2 more racks) • 3 IBM PS157 Series shared-memory systems • 8 1.5 GHz Power5 processors • 32 GB RAM • 6 140 GB internal disk drives • 1 2 Gb Fibre Channel adapter • 2 Gigabit Ethernet NICs

  4. UTA DPCC • Participated strongly in DØ and ATLAS MC production as well as DØ data reprocessing • Other disciplines also use this facility • Biology, Geology, UTSW Medical, etc. • Being converted over for more focused ATLAS tasks • Will use opportunistic computing tactics for DØ and other tasks • The old DØ farm has been shut down and taken apart into a number of test clusters

  5. UTA – RAC (DPCC) • 84 P4 Xeon 2.4 GHz CPUs = 202 GHz • 5 TB of FBC + 3.2 TB IDE internal • GFS file system • 100 P4 Xeon 2.6 GHz CPUs = 260 GHz • 64 TB of IDE RAID + 4 TB internal • NFS file system • Total CPU: 462 GHz • Total disk: 76.2 TB • Total memory: 168 GB • Network bandwidth: 68 Gb/s • HEP – CSE joint project • DØ + ATLAS • CSE research
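The aggregate figures on this slide follow directly from the two clusters' specs; a quick arithmetic check (cluster labels are shorthand for the GFS and NFS systems above):

```python
# Sanity-check the aggregate capacity figures quoted on the slide.
gfs_cluster = {"cpus": 84, "ghz_per_cpu": 2.4, "disk_tb": 5.0 + 3.2}    # FBC + IDE internal
nfs_cluster = {"cpus": 100, "ghz_per_cpu": 2.6, "disk_tb": 64.0 + 4.0}  # IDE RAID + internal

total_ghz = sum(c["cpus"] * c["ghz_per_cpu"] for c in (gfs_cluster, nfs_cluster))
total_disk = sum(c["disk_tb"] for c in (gfs_cluster, nfs_cluster))

print(round(total_ghz))      # 462  (201.6 + 260 GHz, quoted as 202 + 260 = 462)
print(round(total_disk, 1))  # 76.2 (TB)
```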

  6. SWT2 • Joint effort between UTA, OU, LU and UNM • Phase I is complete, up and running • Ready to receive DA jobs • Equipment for Phase II, to be located in our new CPB room, is being looked into

  7. SWT2 Phase II

  8. Networks • Network • Had DS3 (44.7 Mbit/s) until late 2004 • Increased to OC3 (155 Mbit/s) in early 2005 • OC12 (622 Mbit/s) as of early 2006 • Expected to be connected to NLR (10 Gb/s) through LEARN soon (http://www.tx-learn.org/) • $9.8M ($7.3M for the optical fiber network) in state of Texas funds approved in Sept. 2004
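To put the successive link upgrades in perspective, a back-of-the-envelope transfer-time calculation (link rates from the bullets above; the 1 TB dataset size is an arbitrary illustration, not a figure from the talk):

```python
# Time to move a 1 TB dataset at each link speed, ignoring protocol overhead.
DATASET_BITS = 1e12 * 8  # 1 TB expressed in bits

links_mbps = {
    "DS3": 44.7,
    "OC3": 155.0,
    "OC12": 622.0,
    "NLR lambda": 10_000.0,  # 10 Gb/s
}

# Hours = bits / (rate in bit/s) / 3600
results = {name: DATASET_BITS / (mbps * 1e6) / 3600 for name, mbps in links_mbps.items()}

for name, hours in results.items():
    print(f"{name}: {hours:.1f} h")  # e.g. DS3: 49.7 h
```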

  9. Status

  10. Software Development Activities • MonALISA-based ATLAS distributed analysis monitoring • Feasibility has been investigated • Scalability has been tested within the UTA domain

  11. Results of Scalability Test in SI95 [Chart: throughput (up to 2 kHz) for the machines running ApMons, the machine hosting the repository, and the machine hosting the MonALISA server]

  12. Software Development Activities • MonALISA-based ATLAS distributed analysis monitoring • Feasibility has been investigated • Scalability has been tested within the UTA domain • Hired a software specialist to focus on development of the distributed analysis system • Located at BNL • Works closely with the Panda team as an integral part of it • Completed the code modifications to hook ApMon into Panda
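As an illustrative sketch only, the kind of flat metric map a Panda pilot could assemble for ApMon to ship to a site's MonALISA service. The field names and helper here are assumptions for illustration, not the actual Panda schema:

```python
# Hypothetical sketch: shape of a per-job report for ApMon-style monitoring.
# Field names are illustrative assumptions, not the real Panda schema.

def build_job_report(job_id, state, cpu_seconds, wall_seconds):
    """Collect one job's metrics into the flat name->value map that an
    ApMon-style sender expects (one cluster, one node, many parameters)."""
    return {
        "job_id": job_id,
        "state": state,
        "cpu_sec": cpu_seconds,
        "wall_sec": wall_seconds,
        "efficiency": cpu_seconds / wall_seconds if wall_seconds else 0.0,
    }

report = build_job_report("panda.12345", "running", 3400.0, 4000.0)
# In the real integration this map would be handed to the ApMon client,
# roughly: apmon.sendParameters(cluster_name, node_name, report)
print(report["efficiency"])  # 0.85
```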

  13. Proposed MonALISA Based Panda Monitoring System [Diagram: on the client side (pilot, job, scheduler), an ApMon-based logging mechanism reports to a MonALISA service, which feeds a MonALISA repository or web-service client; a file/HTTP logging mechanism feeds the DashBoard DB and the DashBoard]

  14. ATLAS DA Dashboard • LCG sites report to one MonALISA service and one repository • CERN colleagues implemented an ATLAS DA dashboard • OSG sites are different • Extremely democratic: each site has its own MonALISA server and repository • An ApMon is developed for each job to report to the site's MonALISA server • The MonALISA server responds when prompted by the Dashboard • Code for ATLAS OSG sites is complete and undergoing testing before release and deployment
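The "democratic" OSG layout described above can be sketched as a dashboard that polls each site's own MonALISA server on demand and merges the answers. Site names, endpoints, and the query stub below are hypothetical placeholders, not real MonALISA endpoints or APIs:

```python
# Sketch of the per-site OSG monitoring layout: each site runs its own
# MonALISA server/repository; the dashboard polls them and aggregates.
# All names and numbers below are illustrative placeholders.

SITE_SERVERS = {
    "UTA_SWT2": "uta.example:8884",
    "OU_OSCER": "ou.example:8884",
}

def query_site(site, endpoint):
    # Placeholder stub: a real client would contact the site's MonALISA
    # web-service interface at `endpoint` here.
    return {"site": site, "running": 10, "queued": 4}

def dashboard_snapshot():
    """Merge per-site answers into one view, as the DA dashboard would."""
    per_site = [query_site(s, ep) for s, ep in SITE_SERVERS.items()]
    return {
        "total_running": sum(r["running"] for r in per_site),
        "sites": per_site,
    }

print(dashboard_snapshot()["total_running"])  # 20
```

The design point is that no central collector is required: the dashboard only fans out queries when prompted, so each site keeps full ownership of its own monitoring data.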

  15. CSE Student Exchange Program • Joint effort between HEP and CSE • David Levine is the primary contact at CSE • A total of 10 CSE MS students have worked in the SAM-Grid team • Five generations of students • This program ended as of Aug. 31, 2006 • A new program with BNL is being implemented • The first set of two students started working in summer 2006 • Participating in ATLAS Panda projects • Will write theses for documentation

  16. Conclusions • UTA’s transition from DØ to ATLAS is in its final throes • ATLAS DA work with the Panda team is moving along well • Our proposal for MonALISA-based DA monitoring was adopted as the initial direction for the ATLAS monitoring system • Actively participating in ATLAS CSC analyses • The move to 10 Gb/s network capacity is moving along
