
US-CMS User Facility and Regional Center at Fermilab




  1. US-CMS User Facility and Regional Center at Fermilab. Matthias Kasemann, FNAL

  2. Outline • User Facility • Goals + Schedule + Organization • US-CMS Regional Center at Fermilab • Hardware and Networking • Data required • Support Functions • Size of Regional Center • Cost and Personnel Profile • Summary US-CMS User facility Matthias Kasemann, FNAL

  3. User Facility: Goals • Scope and size: support the US-CMS collaboration (20% of full CMS). • Provide the enabling infrastructure of software and computing to allow US physicists to fully participate in the physics program of CMS. • Support the development and data analysis activities of US-CMS: • acquire, develop, install, integrate, commission and operate hardware and software. • This will include a major ‘Tier 1’ regional computing center at Fermilab to support US physicists working on CMS.

  4. US-CMS RC1 at FNAL: Schedule + Organization • 1999 - 2003: R&D Phase • 2003 - 2005: Implementation Phase • 2006 onwards: Operations Phase • All phases of the US-CMS RC1 will be managed and operated within the FNAL Computing Division. • FNAL-CD has gained highly relevant experience from Tevatron Run II preparations and operations. This puts us in a unique position to support a collaboration of 400-500 CMS scientists doing pp physics.

  5. Fermilab Run II Computing and Software • Run II parameters for DØ: • Trigger rate 50 Hz (LHC / 2) • Raw data event size 250 kB (LHC / 4) • Data collection 6 × 10^8 events/yr (LHC / 1.6) • Summary event size 150 kB (LHC × 1.5) • Physics summary event size 10 kB (LHC) • Total dataset size 300 TB/yr (LHC / 3) • Bottom line: Computing project ~ O(Run I × 20) ~ O(LHC / 2-3). This will be accomplished with resources available in 2000.
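The per-experiment parameters above can be cross-checked with simple arithmetic; this sketch (our illustration, not from the slides) derives the yearly data volumes from the quoted trigger rate and event sizes and checks them against the 300 TB/yr total:

```python
# Back-of-the-envelope check of the DZero Run II figures quoted above.
# All inputs are taken from the slide; 1 TB = 1e12 bytes here.

TRIGGER_RATE_HZ = 50
RAW_EVENT_KB = 250
SUMMARY_EVENT_KB = 150
PHYSICS_SUMMARY_KB = 10
EVENTS_PER_YEAR = 6e8

def yearly_tb(n_events, event_kb):
    """Yearly volume in TB for a given per-event size in kB."""
    return n_events * event_kb * 1e3 / 1e12

raw_tb = yearly_tb(EVENTS_PER_YEAR, RAW_EVENT_KB)          # 150 TB
summary_tb = yearly_tb(EVENTS_PER_YEAR, SUMMARY_EVENT_KB)  # 90 TB
physics_tb = yearly_tb(EVENTS_PER_YEAR, PHYSICS_SUMMARY_KB)  # 6 TB

# Implied live time: 6e8 events at 50 Hz is 1.2e7 s, roughly 40% of a year.
live_time_s = EVENTS_PER_YEAR / TRIGGER_RATE_HZ

print(raw_tb, summary_tb, physics_tb, live_time_s)
```

Raw plus summary plus physics-summary data comes to about 246 TB/yr, consistent with the quoted 300 TB/yr total once overheads and derived datasets are included.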

  6. US-CMS Regional Center at FNAL: Hardware and Networking • Implement the main US-CMS regional center at Fermilab: • substantial CPU for analysis, reprocessing and simulation • data storage • data access facilities. • Networks and networking will play a key role in this distributed computing model: • Deliver data to other US-CMS institutions through high-speed networks. • High-bandwidth network connection to CERN. Technology and policy in this area are in a state of flux. We have to track technology and participate in prototyping work.

  7. Data required at US-CMS RC1 • Event data available at the regional center: • event samples for testing and code development • a collection of very interesting events • the full set of the Analysis Object Data (AOD) • about 10% of the raw and reconstructed data (ESD) • all available event metadata • Non-event data: • detector databases for calibration, monitoring, geometry, cabling • data-related databases: production log, run conditions, trigger setups

  8. US-CMS Regional Center at FNAL: Support Functions • Include user support personnel: • training, documentation, code distribution, ... • Personnel to manage licenses and license acquisition. • Contract for needed services. • Responsibility and personnel to develop or acquire any software required to carry out its production and operation activities. • Provide support for the many development activities during the detector construction period, before data taking begins. • Provide ongoing support during the operations phase of the experiment.

  9. R&D Phase activities • Participate in R&D to prove the concept of LHC regional centers: • MONARC testbed • Object database testbeds • Prototype regional center by 2002 • Networking testbeds • In the R&D phase and beyond, provide CMS user support: • documentation • code management and distribution • training • user help desk

  10. Size of the Tier 1 Regional Center • The base for these estimates is a set of figures for CMS offline computing at CERN made available by CMS in mid-1998. • CPU (1 TIPS = 10^6 MIPS ≈ 25k SpecInt95) • in 2005: 3.6 TIPS • 2006 on: 1.2 TIPS new + 1.2 TIPS replacement • Disk storage • in 2005: 108 TB • 2006 on: 40 TB new + 27 TB replacement • Serial storage • in 2005: 1 robot + 0.4 PB • 2006 on: 0.2 PB new + robot every 3 years
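The "new + replacement" figures imply a steady growth of installed capacity; this sketch (our reading of the slide, with the three-year hardware lifetime inferred from 3.6 / 3 = 1.2 TIPS/yr of replacement) projects the installed CPU forward:

```python
# Installed CPU capacity implied by the sizing figures above.
# Assumption (ours): "1.2 TIPS replacement" per year corresponds to a
# three-year hardware lifetime, since 3.6 TIPS / 3 yr = 1.2 TIPS/yr.

TIPS_2005 = 3.6
NEW_PER_YEAR = 1.2           # TIPS of added capacity per year from 2006
SPECINT95_PER_TIPS = 25_000  # 1 TIPS = 10^6 MIPS, approx. 25k SpecInt95

replacement_per_year = TIPS_2005 / 3  # 1.2 TIPS/yr, matching the slide

# Replacement keeps existing capacity alive; only the "new" share grows it.
capacity = {2005: TIPS_2005}
for year in range(2006, 2009):
    capacity[year] = round(capacity[year - 1] + NEW_PER_YEAR, 2)

print(capacity)  # 2006 comes out at 4.8 TIPS
print(round(capacity[2006] * SPECINT95_PER_TIPS))  # same figure in SpecInt95
```

The 2006 value of 4.8 TIPS agrees with the integrated-capacity table on slide 12.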

  11. Hardware cost estimation • Costs are de-escalated according to technology-cost development expectations. • Actual costs for CPU and disk are based on our Run II experience: • CPU: $3.5M/TIPS in 2003 (extrapolated from 1999) • disk: $26.90/GB in 2003 (extrapolated from 1999) • The costs reflect the use of expensive SMP machines for analysis in Run II. • Cost for serial storage is also based on Run II: • $1.1M per robot plus $0.5M/PB for media
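De-escalation here means projecting a known unit cost forward under an assumed price/performance curve. The slide does not state the rate used, so the 1.5-year price-halving time below is purely our illustrative assumption:

```python
# Illustrative cost de-escalation. The halving time is an assumption
# for illustration only; the slide does not give the actual rate used.

HALVING_TIME_YEARS = 1.5  # assumed Moore's-law-like price halving

def deescalate(cost, from_year, to_year, halving=HALVING_TIME_YEARS):
    """Project a unit cost forward, halving every `halving` years."""
    return cost * 0.5 ** ((to_year - from_year) / halving)

# Working backwards from the quoted 2003 figures gives the implied
# 1999 unit costs under this assumed curve:
cpu_1999_per_tips = 3.5e6 / 0.5 ** ((2003 - 1999) / HALVING_TIME_YEARS)
disk_1999_per_gb = 26.90 / 0.5 ** ((2003 - 1999) / HALVING_TIME_YEARS)

print(round(cpu_1999_per_tips / 1e6, 1), round(disk_1999_per_gb, 2))
```

Under this assumption the 2003 figures correspond to roughly $22M/TIPS and $170/GB in 1999 money, which is the kind of extrapolation the slide describes.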

  12. Regional Center Schedule: Implementation and Operation Phase • To spread costs over 2003, 2004 and 2005 and profit from price/performance evolution, we plan to acquire 10% of disk and CPU in 2003, 30% in 2004, and 60% in 2005. • Integrated capacity installed per year:

               2003    2004    2005    2006 (CMS operation)
      CPU      0.4     1.44    3.6     4.8      TIPS
      disk     11      43      108     148      TB
      tape     0.03    0.05    0.4     0.6      PB
      robots   -       -       1       +0.33/yr
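The 10% / 30% / 60% ramp, applied to the 2005 targets from the previous slide, reproduces the integrated-capacity table above (the quoted 0.4 TIPS and 11/43 TB are rounded values). A minimal sketch of that arithmetic:

```python
# Cumulative acquisition under the 10/30/60 ramp, applied to the
# 2005 CPU and disk targets from slide 10.

TARGETS_2005 = {"cpu_tips": 3.6, "disk_tb": 108}
RAMP = {2003: 0.10, 2004: 0.30, 2005: 0.60}  # fraction acquired per year

installed = {name: {} for name in TARGETS_2005}
for name, total in TARGETS_2005.items():
    cumulative = 0.0
    for year, fraction in RAMP.items():
        cumulative += fraction * total
        installed[name][year] = round(cumulative, 2)

print(installed["cpu_tips"])  # 0.36, 1.44, 3.6 TIPS
print(installed["disk_tb"])   # 10.8, 43.2, 108.0 TB
```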

  13. Cost and Personnel Profile

  14. Cost Summary • Total costs through 2005: • hardware: $10.6 M • networking in US: $2.1 M • licenses: $0.1 M / year • Operating phase, 2006 onward: $3.9 M hardware + 35 people • Note: • These hardware costs are based on what it costs to serve 400-500 people in CDF and DØ, under the assumption of exponential price/performance evolution ($9.1M for CDF or DØ). • Numbers are conservative (SMP CPUs, …)

  15. US-CMS User Facility: Summary • The User Facility has to provide US-CMS scientists with a competitive infrastructure to fully participate in the science program of CMS. • FNAL, with its experience from the Tevatron experiments, is well suited to host the major US-CMS ‘Tier 1’ Regional Center. • Setup and support of the Regional Center will happen during 2003 - 2005. Initial hardware cost estimates (without networking) are $10.6M + 35 FTEs (by 2005). • Operations cost estimates are $2.7M/year for hardware (without networking) and about 35 people. • Networking costs will be substantial.
