
US LHC Tier-2s on behalf of US ATLAS, US CMS, OSG

Ruth Pordes, Fermilab, Nov 17th, 2007.



Presentation Transcript


  1. US LHC Tier-2s, on behalf of US ATLAS, US CMS, OSG. Ruth Pordes, Fermilab, Nov 17th, 2007. Supported by the Department of Energy Office of Science SciDAC-2 program from the High Energy Physics, Nuclear Physics, and Advanced Software and Computing Research programs, and by the National Science Foundation Math and Physical Sciences, Office of CyberInfrastructure, and Office of International Science and Engineering Directorates.

  2. US LHC Tier-2s Resources (2007/2008)

  3. Issues
  • The fast ramp-up puts stress on purchasing and operational teams.
  • ATLAS targets for 2010 and 2011 are not met by current plans.

  4. All US LHC Tier-2s are part of OSG
  • Experiments are responsible for end-to-end systems.
  • The Operations Center dispatches and “owns” problems until they are solved.
  • Activities provide common forums for software technical and operational issues.
  • [OSG software stack diagram] Applications: user science codes and interfaces. VO Middleware: Astrophysics (data replication etc.), Biology (portals, databases etc.), HEP (data and workflow management etc.); ATLAS and CMS software and services are installed on sites through Grid interfaces. OSG Release Cache: OSG-specific configurations, utilities, etc. Infrastructure: Virtual Data Toolkit (VDT), core Grid technologies plus stakeholder needs (Condor, Globus, MyProxy; shared with and supporting EGEE and TeraGrid; accounting, authz, monitoring, VOMS and others). Resource: existing operating systems, batch systems, and utilities; batch queue configurations ensure priority to ATLAS or CMS jobs (see the sketch after this slide).
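The point that batch queue configurations give priority to ATLAS or CMS jobs can be illustrated with a minimal sketch, assuming an HTCondor-style site that enforces per-VO group quotas; the accounting group name, toy executable, and file names below are hypothetical placeholders, not details from the slides.

```python
import subprocess
import tempfile
import textwrap

# Minimal sketch, assuming an HTCondor site with per-VO group quotas.
# The group "group_atlas.prod" and the toy executable are placeholders.
submit_file = textwrap.dedent("""\
    universe         = vanilla
    executable       = /bin/hostname
    output           = job.out
    error            = job.err
    log              = job.log
    +AccountingGroup = "group_atlas.prod"
    queue
""")

with tempfile.NamedTemporaryFile("w", suffix=".sub", delete=False) as f:
    f.write(submit_file)
    submit_path = f.name

# condor_submit hands the job to the local scheduler; the site's negotiator
# then applies its group quotas so the owning experiment's work gets priority.
subprocess.run(["condor_submit", submit_path], check=True)
```

The slide only states the outcome (priority for ATLAS or CMS jobs); the group-quota mechanism shown here is one plausible way a site could implement it.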

  5. US LHC Tier-2s are fully integrated into the experiments
  • All sites are funded through the US NSF research program, except for DOE support of SLAC.
  • Provide Monte Carlo processing, analysis and Monte Carlo data hosting, and CPU.
  • Distribute data to/from the Tier-1s; provide analysis centers for Tier-3->N physicists (a transfer sketch follows this slide).
  • All Tier-2s are successfully reporting to the WLCG Tier-2 accounting reports.
  • US ATLAS and US CMS Tier-2s are contributing at least their share to the ATLAS analysis challenge and the CMS Computing, Software and Analysis challenge (CSA07).
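As a hedged illustration of the Tier-1 to Tier-2 data distribution mentioned above, the sketch below runs a single GridFTP transfer with globus-url-copy; the hostnames and paths are placeholders, and production transfers are driven by the experiments' data management systems rather than hand-run copies.

```python
import subprocess

# Hypothetical single-file GridFTP transfer from a Tier-1 to local Tier-2 disk.
# Hostnames and paths are placeholders; a valid grid proxy is assumed.
SRC = "gsiftp://gridftp.tier1.example.gov/atlas/datasets/file0001.root"
DEST = "file:///data/tier2/atlas/file0001.root"

# "-p 4" requests four parallel data streams for the transfer.
subprocess.run(["globus-url-copy", "-p", "4", SRC, DEST], check=True)
```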

  6. ATLAS report - Michael Ernst, Rob Gardner
  • Robust data distribution supported by the BNL Tier-1.
  • Support for the Panda pilot job infrastructure, with DQ servers locally or remotely.
  • Athena analysis framework available locally.
  • The Facility Integration program provides a forum for Tier-2 administrators to communicate and adopt common solutions.
  • Computing Integration and Operations meetings and mailing lists are effective forums.
  • Tier-2 workshops are held semi-annually and help newer Tier-2s get up to speed more quickly.
  • A mix of dCache- and xrootd-based storage elements (a read sketch follows this slide).
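For the xrootd-based storage elements mentioned above, here is a minimal sketch of reading analysis input directly over the root:// protocol, using plain PyROOT rather than the Athena framework named in the slide; the host and file path are placeholders.

```python
import ROOT  # PyROOT, distributed with ROOT

# Hypothetical xrootd URL; host and file path are placeholders, not real endpoints.
url = "root://se.tier2.example.edu//atlas/aod/AOD.example.root"

# TFile.Open understands the root:// protocol, so an analysis job can read
# directly from an xrootd storage element without first staging the file locally.
f = ROOT.TFile.Open(url)
if f and not f.IsZombie():
    f.ls()      # list the file's contents
    f.Close()
```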

  7. ATLAS Concerns
  • Performance and scalability of dCache for analysis I/O needs.
  • End-to-end performance of data distribution and management tools.

  8. The ATLAS Tier-1 distributes data to Tier-2s
  • Data distribution is driven by Tier-2 processing and analysis needs (e.g. BNL to University of Chicago data distribution).

  9. ATLAS Jobs [charts: US 33%; UTA; 96% walltime efficiency]

  10. US CMS - report from Ken Bloom, Tier-2 coordinator
  • Funding of $500K/site provides 2 FTE/site for support.
  • Site-specific configurations (e.g. different batch systems), but all sites take common approaches (see the sketch after this slide).
  • All sites use dCache to manage the storage; 1 FTE (of the 2) per site is needed for this support.
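As a purely illustrative sketch of "common approaches across site-specific batch systems", the snippet below hides two different local submit commands behind one interface; the site names, command choices, and job file are hypothetical and do not represent US CMS tooling.

```python
import subprocess

# Illustrative only: map each (hypothetical) site to its local submit command,
# so the same workflow code can run against different batch systems.
SITE_BATCH = {
    "T2_Example_A": ["condor_submit"],   # HTCondor site
    "T2_Example_B": ["qsub"],            # PBS/Torque site
}

def submit(site: str, job_file: str) -> None:
    # Run the site's native submit command on the prepared job description.
    cmd = SITE_BATCH[site] + [job_file]
    subprocess.run(cmd, check=True)

if __name__ == "__main__":
    submit("T2_Example_A", "analysis.sub")
```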

  11. CMS Concerns
  • Robustness and performance of the Tier-1 sites hosting data: all Tier-1s serve data to US Tier-2s, and the majority of CMS data will live across an ocean, so reliability is crucial.
  • Will the grid be sufficiently robust when we get to analysis? Can user jobs get to the sites, and can the output get back?
  • Are we being challenged enough in advance of 2000 users showing up?

  12. Tier-2s Data Movement

  13. Job hosting 11/07

  14. Summary
  • US LHC Tier-2s are full participants in the US ATLAS, US CMS, and OSG organizations.
  • Communication between the collaborations and the projects is good.
  • The two collaborations use each other's resources when capacity is available.
  • Mechanisms are in place to ensure that priorities don't get inverted.
  • The Tier-2s are ready to contribute to CCRC and data commissioning.
