
The ATLAS Computing Model and USATLAS Tier-2/Tier-3 Meeting



  1. The ATLAS Computing Model and USATLAS Tier-2/Tier-3 Meeting Shawn McKee University of Michigan Joint Techs, FNAL July 16th, 2007

  2. Overview • The ATLAS collaboration has only ~1 year before it must manage large amounts of “real” data for its globally distributed collaboration. • ATLAS physicists need the software and physical infrastructure required to: • Calibrate and align detector subsystems to produce well-understood data • Realistically simulate the ATLAS detector and its underlying physics • Provide access to ATLAS data globally • Define, manage, search and analyze data-sets of interest • I will give a quick view of ATLAS plans & highlight the processing workflow we envision. This will be brief; most info is available from our recent USATLAS Tier-2/3 meeting presentations

  3. The ATLAS Computing Model • The Computing Model is well evolved and documented in the C-TDR • http://doc.cern.ch//archive/electronic/cern/preprints/lhcc/public/lhcc-2005-022.pdf • There are many areas with significant questions/issues to be resolved: • Calibration and alignment strategy is still evolving • Physics data access patterns are only partially exercised • Unlikely to know the real patterns until 2008! • There are still uncertainties in the event sizes and reconstruction time • How best to integrate ongoing “infrastructure” improvements from research efforts into our operating model? • Lesson from the previous round of experiments at CERN (LEP, 1989-2000): • Reviews in 1988 underestimated the computing requirements by an order of magnitude!

  4. ATLAS Computing Model Overview • We have a hierarchical model (EF-T0-T1-T2) with specific roles and responsibilities • Data will be processed in stages: RAW->ESD->AOD->TAG (a rough sketch of this reduction chain follows this slide) • Data “production” is well-defined and scheduled • Roles and responsibilities are assigned within the hierarchy • Users will send jobs to the data and extract relevant data • typically DPDs (Derived Physics Data) or similar • Goal is a production and analysis system with seamless access to all ATLAS grid resources • All resources need to be managed effectively to ensure ATLAS goals are met and resource providers' policies are enforced. Grid middleware must provide this
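The reduction chain above can be made concrete with a minimal sketch. The per-event sizes below are rough C-TDR-era estimates (RAW ~1.6 MB, ESD ~0.5 MB, AOD ~0.1 MB, TAG ~1 kB), and the event count assumes ~200 Hz over a ~1e7-second run year; slide 3 notes these numbers were still uncertain, so treat everything here as illustrative, not official ATLAS figures.

```python
# Illustrative sketch of the RAW -> ESD -> AOD -> TAG reduction chain.
# Per-event sizes are rough C-TDR-era estimates, not authoritative numbers.
EVENT_SIZE_MB = {
    "RAW": 1.6,    # raw detector output
    "ESD": 0.5,    # Event Summary Data
    "AOD": 0.1,    # Analysis Object Data
    "TAG": 0.001,  # event-level metadata used for selection
}

def dataset_size_tb(n_events: int, stage: str) -> float:
    """Total dataset size in TB for n_events at a given processing stage."""
    return n_events * EVENT_SIZE_MB[stage] / 1e6  # MB -> TB

if __name__ == "__main__":
    n_events = 2 * 10**9  # ~200 Hz over a ~1e7 s run year (assumed)
    for stage in ("RAW", "ESD", "AOD", "TAG"):
        print(f"{stage}: {dataset_size_tb(n_events, stage):8,.1f} TB")
```

With these assumptions a year of RAW data is ~3.2 PB, shrinking to ~200 TB of AOD and only a few TB of TAG, which is why the model pushes analysis toward the later stages.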

  5. ATLAS Facilities and Roles (summarized in the sketch after this slide) • Event Filter Farm at CERN • Assembles data (at CERN) into a stream to the Tier 0 Center • Tier 0 Center at CERN • Data archiving: Raw data to mass storage at CERN and to Tier 1 centers • Production: Fast production of Event Summary Data (ESD) and Analysis Object Data (AOD) • Distribution: ESD, AOD to Tier 1 centers and mass storage at CERN • Tier 1 Centers distributed worldwide (10 centers) • Data stewardship: Re-reconstruction of the raw data they archive, producing new ESD, AOD • Coordinated access to full ESD and AOD (all AOD, 20-100% of ESD depending upon site) • Tier 2 Centers distributed worldwide (approximately 30 centers) • Monte Carlo simulation, producing ESD and AOD; ESD and AOD sent to Tier 1 centers • On-demand user physics analysis of shared datasets • Tier 3 Centers distributed worldwide • Physics analysis • A CERN Analysis Facility • Analysis • Enhanced access to ESD and RAW/calibration data on demand
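For reference, the hierarchy above can be written down as a plain data structure. This is purely descriptive, taken verbatim from the slide; it is not an interface to any real ATLAS service.

```python
# Descriptive sketch of the ATLAS facility hierarchy and roles (this slide).
ATLAS_TIERS = {
    "Event Filter (CERN)": [
        "Assemble data into a stream to the Tier 0 center",
    ],
    "Tier 0 (CERN)": [
        "Archive RAW data to mass storage at CERN and to Tier 1 centers",
        "Fast production of ESD and AOD",
        "Distribute ESD/AOD to Tier 1 centers and CERN mass storage",
    ],
    "Tier 1 (10 sites)": [
        "Re-reconstruct archived RAW data into new ESD and AOD",
        "Coordinated access to full ESD/AOD (all AOD, 20-100% of ESD per site)",
    ],
    "Tier 2 (~30 sites)": [
        "Monte Carlo simulation; ESD and AOD sent to Tier 1 centers",
        "On-demand user physics analysis of shared datasets",
    ],
    "Tier 3": ["Physics analysis"],
    "CERN Analysis Facility": [
        "Analysis; enhanced access to ESD and RAW/calibration data on demand",
    ],
}

for tier, roles in ATLAS_TIERS.items():
    print(tier)
    for role in roles:
        print(f"  - {role}")
```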

  6. USATLAS Tier-2/Tier-3 Meeting • In mid-June 2007 we held our first joint USATLAS Tier-2/Tier-3 Meeting • Hosted at Indiana University (Bloomington), June 20-22, 2007 • Indico has the agenda and talks available: • http://indico.cern.ch/conferenceDisplay.py?confId=15523 • The first half of the meeting focused on Tier-3 concerns • The second half concentrated on Tier-2 issues and planning • See the slides from Amir Farbin, which provide a very good overview of the analysis needs from the point of view of a physicist: • http://indico.cern.ch/getFile.py/access?contribId=30&sessionId=4&resId=0&materialId=slides&confId=15523 • http://indico.cern.ch/getFile.py/access?contribId=22&sessionId=8&resId=0&materialId=slides&confId=15523

  7. Slide From Amir Farbin

  8. ATLAS Resource Requirements for 2008 • As estimated in the Computing TDR • Recent (July 2006) updates have reduced the expected contributions

  9. Slide From Amir Farbin

  10. Slide From Amir Farbin

  11. Network and Resource Implications • The ATLAS computing model assumes 12 Tier-2 “cores” per physicist • This won't be able to provide a timely turn-around for most analysis work • The assumption is that Tier-3s should additionally provide 25 more cores and around 50 TB/year • Networks for “Tier-3”-scale analysis should provide ~10 MBytes/sec per core (worked through below) • A typical 8-core machine requires gigabit “end-to-end” connectivity, but in bursts • Will Tier-2s and Tier-3s have sufficient usable bandwidth (end-to-end issues)?
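The bandwidth rule of thumb on this slide works out as follows; the input numbers are taken directly from the slide, and the rest is unit conversion.

```python
# Worked version of this slide's bandwidth rule of thumb:
# ~10 MB/s per analysis core, 8 cores per typical node.
MB_PER_SEC_PER_CORE = 10   # figure quoted on the slide
CORES_PER_NODE = 8         # "typical 8-core machine"

node_burst_mbps = MB_PER_SEC_PER_CORE * CORES_PER_NODE * 8  # MB/s -> Mbit/s
print(f"Per-node burst demand: {node_burst_mbps} Mbit/s")   # 640 -> ~gigabit

# The 25 extra Tier-3 cores assumed on this slide would burst at:
tier3_burst_mbps = MB_PER_SEC_PER_CORE * 25 * 8
print(f"25-core burst demand : {tier3_burst_mbps} Mbit/s")  # 2000 Mbit/s
```

So a single busy analysis node already saturates most of a gigabit path, which is why the end-to-end question on this slide matters.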

  12. Planning for 2008 • To date, most requirements envisioned for LHC-scale physics from the network have yet to be realized • Once real data is flowing, this will change quickly • End-sites (Tier-2 or Tier-3) must be ready to accommodate the needs • Physicists will need very high network performance in “bursts” (see the duty-cycle estimate below) • Ideally, a multiplexed form of network access/usage could provide sufficient capabilities • End-to-end issues will need to be addressed
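To see why multiplexed (shared) access can plausibly work, here is a back-of-envelope duty-cycle estimate. It combines the ~50 TB/year Tier-3 figure from slide 11 with the ~640 Mbit/s per-node burst rate computed above; the seconds-per-year constant and the comparison itself are added framing, not from the slides.

```python
# Back-of-envelope duty cycle: average rate needed for ~50 TB/year
# versus the ~640 Mbit/s per-node burst rate computed earlier.
SECONDS_PER_YEAR = 3.15e7  # assumed constant, not from the slides

avg_mbps = 50e12 * 8 / SECONDS_PER_YEAR / 1e6  # bytes/yr -> Mbit/s, ~12.7
burst_mbps = 640

print(f"Average rate: {avg_mbps:.1f} Mbit/s")
print(f"Duty cycle  : {avg_mbps / burst_mbps:.1%}")  # ~2% of burst capacity
```

A site that bursts at gigabit speed but averages ~13 Mbit/s uses its peak capacity only ~2% of the time, so many such sites can statistically share the same backbone if end-to-end paths are clean.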

  13. Conclusions • Within a year, real LHC data will begin flowing • Physicists globally will be intently working to access and process data…there will be implications for networks, storage systems and computing resources • Planning should provide for reasonable network infrastructure: • Typical Tier-2: 10+ Gbps • Typical Tier-3: 1 (to 10) Gbps, depending on the number of physicists and size of resources (see the sizing sketch below) • Network services incorporated from research areas may be needed to ensure end-to-end capabilities and effective resource management • Shortly we will be living in “Interesting Times”…
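As a closing illustration, here is a hypothetical sizing helper that reproduces the 1-10 Gbps Tier-3 range quoted above. It reads slide 11's "25 more cores" as per-physicist and reuses the ~10 MB/s-per-core figure; the concurrency factor (the fraction of cores pulling data at once) is invented here to model bursty, non-simultaneous usage. Every parameter is an illustrative assumption, not an ATLAS specification.

```python
# Hypothetical Tier-3 link-sizing sketch; all parameters are assumptions.
def tier3_link_gbps(n_physicists: int,
                    cores_per_physicist: int = 25,   # read off slide 11
                    mb_per_sec_per_core: float = 10.0,  # slide 11 rule of thumb
                    concurrency: float = 0.25) -> float:  # invented factor
    """Estimated burst link demand in Gbit/s for a small Tier-3 site."""
    active_cores = n_physicists * cores_per_physicist * concurrency
    return active_cores * mb_per_sec_per_core * 8 / 1000  # MB/s -> Gbit/s

for n in (2, 5, 10):
    print(f"{n:2d} physicists -> ~{tier3_link_gbps(n):.1f} Gbit/s")
# 2 -> ~1.0, 5 -> ~2.5, 10 -> ~5.0: consistent with the 1-10 Gbps range above
```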
