Grids for US-CMS and CMS
Paul Avery, University of Florida (avery@phys.ufl.edu)
US-CMS Meeting (UC Riverside), May 19, 2001

Presentation Transcript


  1. Grids for US-CMS and CMS
  Paul Avery, University of Florida, avery@phys.ufl.edu
  US-CMS Meeting (UC Riverside), May 19, 2001

  2. LHC Data Grid Hierarchy (diagram)
  • Tier0 = CERN
  • Tier1 = National Lab
  • Tier2 = Regional Center at University
  • Tier3 = University workgroup
  • Tier4 = Workstation
  Tasks
  • R&D
  • Tier2 centers
  • Software integration
  • Unified IT resources

  3. CMS Grid Hierarchy (diagram)
  • CERN/Outside resource ratio ~1:2; Tier0 : (sum of Tier1) : (sum of Tier2) ~ 1:1:1
  • Experiment: one bunch crossing per 25 nsec; ~100 triggers per second; ~1 MByte per event
  • Online System -> CERN Computer Center (Tier 0+1, >20 TIPS, HPSS) at ~100 MBytes/sec
  • Tier 0+1 -> Tier 1 centers (France, Italy, UK, USA; each with HPSS) at >10 Gbits/sec
  • Tier 1 -> Tier 2 centers at 2.5-10 Gbits/sec
  • Tier 2 -> Tier 3 (institutes, ~0.25 TIPS each, with physics data cache) at 0.6-2.5 Gbits/sec
  • Tier 3 -> Tier 4 (workstations, other portals) at 0.1-1 Gbits/sec
  • Physicists work on analysis "channels"; each institute has ~10 physicists working on one or more channels
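  The online rate above is simple arithmetic: ~100 triggers per second at ~1 MByte per event gives ~100 MBytes/sec out of the online system. A minimal back-of-envelope check in Python (the figure of about 10^7 seconds of effective running time per year is an assumed rule of thumb, not a number from the slide):

    # Back-of-envelope CMS data-rate check using the figures on this slide.
    trigger_rate_hz = 100        # ~100 triggers (accepted events) per second
    event_size_bytes = 1e6       # ~1 MByte per event
    seconds_per_year = 1e7       # assumed effective running time per year

    online_rate = trigger_rate_hz * event_size_bytes    # bytes/sec from the online system
    yearly_volume = online_rate * seconds_per_year       # bytes recorded per year

    print(f"Online output rate: {online_rate / 1e6:.0f} MBytes/sec")   # ~100 MBytes/sec
    print(f"Yearly raw volume:  {yearly_volume / 1e15:.1f} PBytes")    # ~1 PByte/year

  The same numbers motivate the tiered link speeds: a Tier1 link at 10 Gbits/sec (~1.25 GBytes/sec) can carry the full online output with headroom for reprocessing and replication traffic.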

  4. Grid Projects
  • Funded projects
    • GriPhyN (USA): NSF, $11.9M
    • PPDG I (USA): DOE, $2M
    • PPDG II (USA): DOE, $9.5M
    • EU DataGrid (EU): $9.3M
  • Proposed projects
    • iVDGL (USA): NSF, $15M + $1.8M + UK
    • DTF (USA): NSF, $45M + $4M/yr
    • DataTag (EU): EC, $?
  • Other national projects
    • PPARC e-Science (UK): PPARC, $40M
    • UK e-Science (UK): > $100M
    • Italy, France, Japan: ?
    • EU networking initiatives (Géant, Danté, SURFNet)

  5. Major Grid News Since May 2000
  • Sep. 2000: GriPhyN proposal approved ($11.9M)
  • Nov. 2000: First outline of US-CMS Tier2 plan
  • Nov. 2000: Caltech-UCSD proto-T2 hardware installed
  • Dec. 2000: Submit iVDGL preproposal to NSF
  • Jan. 2001: EU DataGrid approved ($9.3M)
  • Mar. 2001: 1st Grid coordination meeting
  • Mar. 2001: Submit PPDG proposal to DOE ($12M)
  • Apr. 2001: Submit iVDGL proposal to NSF ($15M)
  • Apr. 2001: Submit DTF proposal to NSF ($45M, $4M/yr)
  • Apr. 2001: Submit DataTag proposal to EU
  • May 2001: PPDG proposal approved ($9.5M)
  • May 2001: Initial hardware for Florida proto-T2 installed
  • Jun. 2001: 2nd Grid coordination meeting
  • Aug. 2001: DTF approved?
  • Aug. 2001: iVDGL approved?

  6. Grid Timeline (chart, Q2 2000 - Q3 2001)
  • Milestones plotted by quarter, from the GriPhyN proposal submission ($12.5M) in Q2 2000 through Q3 2001: GriPhyN approved ($11.9M); outline of US-CMS Tier plan; Caltech-UCSD proto-T2 installed; iVDGL preproposal submitted; EU DataGrid approved ($9.3M); 1st and 2nd Grid coordination meetings; PPDG proposal submitted ($12M) and approved ($9.5M); iVDGL proposal submitted ($15M); DTF proposal submitted ($45M); initial Florida proto-T2 installed; DTF and iVDGL approval decisions pending

  7. Why Do We Need All These Projects?
  • Agencies see LHC Grid computing in a wider context (next slide)
  • DOE priorities
    • LHC, D0, CDF, BaBar, RHIC, JLAB
    • Computer science
    • ESnet
  • NSF priorities
    • Computer science
    • Networks
    • LHC, other physics, astronomy
    • Other basic sciences
    • Education and outreach
    • International reach
    • Support for universities

  8. Projects (cont.)
  • We must justify investment
    • Benefit to wide scientific base
    • Education and outreach
    • Oversight from Congress always present
    • Much more competitive funding environment
  • We have no choice anyway
    • This is the mechanism by which we will get funds
  • Cons
    • Diverts effort from mission, makes management more complex
  • Pros
    • Exploits initiatives, brings new funds & facilities (e.g., DTF)
    • Drives deployment of high-speed networks
    • Brings many new technologies, tools
    • Attracts attention/help from computing experts, vendors

  9. US-CMS Grid Facilities
  • Caltech-UCSD implemented proto-Tier2 (Fall 2000)
    • 40 dual-PIII boxes in racks
    • RAID disk
    • Tape resources
  • Florida now implementing second proto-Tier2
    • 72 dual-PIII boxes in racks
    • Inexpensive RAID
    • Ready June 1, 2001 for production?
  • Fermilab about to purchase equipment (Vivian)
  • Distributed Terascale Facility (DTF)
    • Not approved yet
    • MOUs being signed with GriPhyN, CMS
    • Massive CPU and storage resources at 4 sites, 10 Gb/s networks
    • Early prototype of Tier1 in 2006

  10. Particle Physics Data Grid
  • Recently funded at $9.5M for 3 years (DOE MICS/HENP)
  • High Energy & Nuclear Physics projects (DOE labs)
  • Database/object replication, caching, catalogs, end-to-end
  • Practical orientation: networks, instrumentation, monitoring

  11. PPDG: Remote Database Replication (diagram)
  • Primary site (data acquisition, CPU, disk, tape robot) linked to a secondary site (CPU, disk, tape robot) by a site-to-site data replication service at 100 MBytes/sec
  • Satellite sites (tape, CPU, disk, robot) and universities (CPU, disk, users) share data through a multi-site cached file access service
  • First-round goal: optimized cached read access to 10-100 GBytes drawn from a total data set of 0.1 to ~1 PByte
  • Matchmaking, co-scheduling: SRB, Condor, Globus services; HRM, NWS
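  The "multi-site cached file access service" above boils down to: look in the local physics data cache first, and only stage a file in from the primary or a satellite site on a miss. A minimal illustrative sketch, assuming nothing about the actual PPDG software; the transfer step is a placeholder where the SRB/Globus/HRM services named on the slide would be invoked, and the cache path is hypothetical:

    from pathlib import Path

    CACHE_DIR = Path("/data/cache")      # local physics data cache (hypothetical path)

    def fetch_from_primary(logical_name: str, dest: Path) -> None:
        """Placeholder for the wide-area transfer (SRB, Globus, or HRM in PPDG)."""
        raise NotImplementedError("plug in the site-to-site replication service here")

    def open_cached(logical_name: str):
        """Return a local file handle, staging the file in on a cache miss."""
        local = CACHE_DIR / logical_name
        if not local.exists():                      # cache miss: pull from the primary site
            local.parent.mkdir(parents=True, exist_ok=True)
            fetch_from_primary(logical_name, local)
        return local.open("rb")                     # cache hit, or freshly staged copy

  Condor matchmaking and NWS network forecasts would enter at a different level: deciding which site should serve or run a request given current load and network conditions.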

  12. EU DataGrid Project (slide content shown as a figure)

  13. GriPhyN = App. Science + CS + Grids
  • GriPhyN = Grid Physics Network
    • US-CMS (High Energy Physics)
    • US-ATLAS (High Energy Physics)
    • LIGO/LSC (gravity wave research)
    • SDSS (Sloan Digital Sky Survey)
  • Strong partnership with computer scientists
  • Design and implement production-scale grids
    • Develop common infrastructure, tools and services (Globus based)
    • Integration into the 4 experiments
    • Broad application to other sciences via "Virtual Data Toolkit"
  • Multi-year project
    • R&D for grid architecture (funded at $11.9M)
    • "Tier 2" center hardware, personnel
    • Integrate Grid infrastructure into experiments
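  The "Virtual Data Toolkit" name comes from the virtual data idea: a derived data product is interchangeable with the recipe (transformation plus inputs) that produces it, so a request can be satisfied either by fetching an existing replica or by re-running the transformation. A toy sketch of that idea; the catalog structure and names are invented for illustration and are not the Toolkit's API:

    # Toy virtual-data catalog: a logical product name maps to either a
    # materialized replica or a transformation that can (re)produce it.

    def run_selection(raw_dataset: str) -> str:
        # Placeholder for the real reconstruction/selection job.
        return f"derived({raw_dataset})"

    catalog = {
        "higgs_candidates_v1": {
            "replicas": [],                                   # not materialized anywhere yet
            "transform": lambda: run_selection("raw_2001A"),  # recipe to derive it
        },
    }

    def materialize(product: str) -> str:
        entry = catalog[product]
        if entry["replicas"]:                 # a replica already exists: just use it
            return entry["replicas"][0]
        result = entry["transform"]()         # otherwise re-derive it from raw data
        entry["replicas"].append(result)      # and register the new replica
        return result

    print(materialize("higgs_candidates_v1"))  # runs the transform the first time
    print(materialize("higgs_candidates_v1"))  # second call hits the replica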

  14. GriPhyN Institutions
  • U Florida, U Chicago, Boston U, Caltech, U Wisconsin-Madison, USC/ISI, Harvard, Indiana, Johns Hopkins, Northwestern, Stanford, U Illinois at Chicago, U Penn, U Texas-Brownsville, U Wisconsin-Milwaukee, UC Berkeley, UC San Diego, San Diego Supercomputer Center, Lawrence Berkeley Lab, Argonne, Fermilab, Brookhaven

  15. GriPhyN: PetaScale Virtual Data Grids (diagram)
  • Users (production teams, individual investigators, research groups) interact through interactive user tools
  • Requests pass through request planning & scheduling tools and request execution & management tools, built on virtual data tools
  • These are supported by resource management services, security and policy services, and other grid services
  • Underneath sit the transforms, the distributed resources (code, storage, computers, and network), and the raw data source
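  The planning layer in this diagram is where a user request gets mapped onto the distributed resources. A toy planner in the spirit of the diagram, with invented site names, loads, and holdings, that prefers a site already holding the input data and otherwise falls back to the least-loaded site:

    # Toy request planner: move the job to the data when possible,
    # otherwise pick the least-loaded site. All values are illustrative.

    sites = {
        "fnal_tier1":    {"load": 0.7, "holds": {"raw_2001A"}},
        "florida_tier2": {"load": 0.3, "holds": set()},
        "caltech_tier2": {"load": 0.5, "holds": {"raw_2001A"}},
    }

    def plan(request_input: str) -> str:
        with_data = [s for s, info in sites.items() if request_input in info["holds"]]
        candidates = with_data or list(sites)     # any site will do if none holds the data
        return min(candidates, key=lambda s: sites[s]["load"])

    print(plan("raw_2001A"))   # -> caltech_tier2 (has the data, lighter load than the Tier1)

  A real planner would also consult the security/policy services and resource monitoring shown in the diagram before committing a request.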

  16. GriPhyN Progress
  • New hires
    • 3 physicists at Florida (1 PD, 2 scientists)
    • 0.5 Tier2 support person at Caltech
  • CMS requirements document
    • 33 pages, K. Holtman
  • Major meetings held (http://www.griphyn.org/)
    • Oct. 2000: All-hands meeting
    • Dec. 2000: Architecture meeting
    • Apr. 2001: All-hands meeting
    • Aug. 2001: Applications meeting
  • CMS – CS groups will need more frequent meetings
    • Further develop requirements, update architecture
    • Distributed databases
    • More discussion of integration

  17. Common Grid Infrastructure
  • GriPhyN + PPDG + EU DataGrid + national efforts
    • France, Italy, UK, Japan
  • Have agreed to collaborate and develop joint infrastructure
    • Initial meeting March 4 in Amsterdam to discuss issues
    • Next meeting June 23
  • Preparing management document
    • Joint management, technical boards + steering committee
    • Coordination of people, resources
    • An expectation that this will lead to real work
  • Collaborative projects
    • Grid middleware
    • Integration into applications
    • Grid "laboratory": iVDGL
    • Network testbed: T3 = Transatlantic Terabit Testbed

  18. iVDGL
  • International Virtual-Data Grid Laboratory
    • A place to conduct Data Grid tests "at scale"
    • A mechanism to create common Grid infrastructure
    • National and international scale Data Grid tests, operations
  • Components
    • Tier1 sites (laboratories)
    • Tier2 sites (universities and others)
    • Selected Tier3 sites (universities)
    • Distributed Terascale Facility (DTF)
    • Fast networks: US, Europe, transatlantic
  • Who
    • Initially US-UK-EU
    • Japan, Australia
    • Other world regions later
    • Discussions with Russia, China, Pakistan, India, South America

  19. iVDGL Proposal to NSF
  • Submitted to NSF ITR2001 program April 25
    • ITR2001 program is more application oriented than ITR2000
    • $15M, 5 years @ $3M per year (huge constraint)
    • CMS + ATLAS + LIGO + SDSS/NVO + computer science
  • Scope of proposal
    • Tier2 hardware, Tier2 support personnel
    • Integration of Grid software into applications
    • CS support teams (+ 6 UK Fellows)
    • Grid Operations Center (iGOC)
    • Education and outreach (3 minority institutions)
  • Budget (next slide)
    • Falls short of US-CMS Tier2 needs (Tier2 support staff)
    • Need to address problem with NSF (Lothar, Irwin talks)

  20. iVDGL Budget (budget table shown as a figure)

  21. iVDGL Map, circa 2002-2003 (map figure)
  • Legend: Tier0/1 facility, Tier2 facility, Tier3 facility; 10 Gbps, 2.5 Gbps, 622 Mbps, and other links

  22. iVDGL as a Laboratory
  • Grid exercises
    • "Easy", intra-experiment tests first (20-40% of resources, national, transatlantic)
    • "Harder" wide-scale tests later (50-100% of all resources)
    • CMS is already conducting transcontinental productions
  • Local control of resources vitally important
    • Experiments, politics demand it
    • Resource hierarchy: (1) national + experiment, (2) inter-experiment
  • Strong interest from other disciplines
    • HENP experiments
    • Virtual Observatory (VO) community in Europe/US
    • Gravity wave community in Europe/US/Australia/Japan
    • Earthquake engineering
    • Bioinformatics
    • Our CS colleagues (wide-scale tests)
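  "Local control of resources" can be phrased as a per-site admission policy: the owning experiment's jobs always run, while inter-experiment (iVDGL-wide) exercises only get the slice of the site that local policy exposes, e.g. the 20-40% used in the early tests. A toy policy check with invented numbers, not a description of any actual iVDGL scheduler:

    # Toy site policy reflecting the resource hierarchy on this slide:
    # (1) the owning experiment keeps priority, (2) inter-experiment
    # exercises are capped at whatever fraction the site chooses to expose.

    def admit(job_expt: str, site_expt: str, grid_use: float, grid_cap: float = 0.4) -> bool:
        """Admit a job if it belongs to the site's experiment, or if the
        inter-experiment share of the site is still below the local cap."""
        if job_expt == site_expt:
            return True                 # local experiment is always admitted
        return grid_use < grid_cap      # wide-area exercises only up to the cap

    print(admit("CMS",  "CMS",  grid_use=0.90))  # True  (local experiment)
    print(admit("LIGO", "CMS",  grid_use=0.35))  # True  (under the 40% cap)
    print(admit("LIGO", "CMS",  grid_use=0.45))  # False (cap exceeded)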
