
CMS-HI at the LHC: High Density QCD with Heavy Ions

The LHC opens a new energy frontier for exploring high density QCD with heavy ions: a longer-lived Quark Gluon Plasma state is expected, accompanied by much enhanced yields of hard probes, and CMS offers advanced capabilities as a heavy ion collisions detector.



Presentation Transcript


  1. CMS-HI Offline Computing
  Charles F. Maguire, Vanderbilt University, for the CMS-HI US Institutions

  2. CMS-HI at the LHC for High Density QCD with Heavy Ions
  • LHC: New Energy Frontier for Relativistic Heavy Ion Physics
    • Quantum Chromodynamics at extreme conditions (density, temperature, …)
    • Pb+Pb collisions at 5.5 TeV, a collision energy nearly thirty times that of Au+Au at RHIC
    • Expecting a longer-lived Quark Gluon Plasma state accompanied by much enhanced yields of hard probes with high mass and/or transverse momentum
  • CMS: Excels as a Heavy Ion Collisions Detector at the LHC
    • Sophisticated high level trigger for getting rare important events at a rapid rate
    • Best momentum resolution and tracking granularity
    • Large acceptance tracking and calorimetry --> proven jet finding in HI events

  3. CMS-HI in the US
  • 10 Participating Institutions
    • Colorado, Iowa, Kansas, LANL, Maryland, Minnesota, MIT, UC Davis, UIC, Vanderbilt
    • Projected to contribute ~60 FTEs (Ph.D. and students) as of 2012
    • MIT as lead US institution with Boleslaw Wyslouch as Project Manager
  • US-CMS-HI Tasks (Research Management Plan in 2007)
    • Completion of the HLT CPU Farm Upgrade for HI events at CMS (MIT)
    • Construction of the Zero Degree Calorimeter at CMS (Kansas and Iowa)
    • Development of the CMS-HI Compute Center in the US (task force established)
      • The task force recommended that the Vanderbilt University group lead the proposal composition
      • Also want to retain the expertise developed by the MIT HI group at their CMS Tier2 site

  4. Basis of Compute Model for CMS-HI
  • Heavy Ion Data Operations for CMS at the LHC
    • Heavy ion collisions expected in 2009, the second year of running for the LHC
    • Heavy ion running takes place during a 1 month period (10^6 seconds)
    • At design DAQ bandwidth the CMS detector will write 225 TBytes of raw data per heavy ion running period, plus ~75 TBytes of support files (see the rate check after this slide)
    • Raw data will likely stay resident on CERN Tier0 disks for a few days at most, while transfers take place to the CMS-HI compute center in the US
    • Possibility to have 300 TBytes of disk dedicated to HI data (NSF REDDnet project)
    • Raw data will not be reconstructed at the Tier0, but will be written to a write-only (emergency archive) tape system before deletion from Tier0 disks
  • Projected Data Volumes 2009 - 2011 (optimistic scenario)
    • 50 - 100 TBytes in first year of HI operations
    • 100 - 200 TBytes in second year of HI operations
    • 300 TBytes nominal size achieved in third year of HI operations
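The average data rates implied by these figures can be checked directly. Below is a minimal sketch, assuming only the quoted 225 TBytes of raw data, ~75 TBytes of support files, and the 10^6 second running period; the unit conversions are standard.

    # Back-of-envelope rate check for the heavy ion running period.
    # Inputs (225 TB raw, ~75 TB support, 1e6 s) are taken from the slide above.

    RUN_SECONDS = 1.0e6        # one-month heavy ion running period
    RAW_TBYTES = 225.0         # raw data written at design DAQ bandwidth
    SUPPORT_TBYTES = 75.0      # accompanying support files

    def avg_rate_gbps(tbytes: float, seconds: float) -> float:
        """Average rate in Gbit/s needed to write or move `tbytes` in `seconds`."""
        return tbytes * 1e12 * 8 / seconds / 1e9

    print(f"raw data only : {avg_rate_gbps(RAW_TBYTES, RUN_SECONDS):.2f} Gbps")
    print(f"raw + support : {avg_rate_gbps(RAW_TBYTES + SUPPORT_TBYTES, RUN_SECONDS):.2f} Gbps")
    # raw data only : 1.80 Gbps
    # raw + support : 2.40 Gbps

These averages, roughly 1.8 to 2.4 Gbps sustained, sit well below a 10 Gbps link rating, but the transfer path out of the Tier0 must keep pace during the running month since the raw data remain on Tier0 disks for only a few days.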

  5. Data Transport Options for CMS-HI, Following ESnet-NP 2008 Workshop Recommendations
  • CMS-HEP Raw Data Transport from CERN to FNAL Tier1
    • Using LHCnet to cross the Atlantic
    • LHCnet terminating at the Starlight hub
    • ESnet transport from Starlight into the FNAL Tier1 center
    • Links are rated at 10 Gbps
  • CMS-HI Raw Data Transport from CERN to US Compute Center
    • Network topology has not been established at this time
    • Vanderbilt is establishing a 10 Gbps path to SOX-Atlanta for end of 2008
  • Network Options (DOE requires a non-LHCnet backup plan); a transfer-window check follows this slide
    • Use LHCnet to Starlight during the one month when HI beams are being collided; transport data from Starlight to the Vanderbilt compute center via ESnet/Internet2; transfer links will still be rated at 10 Gbps so that the data are transferred within ~1 month
    • Do not use LHCnet but use other trans-Atlantic links supported by NSF, with links rated at 10 Gbps such that the data are transferred over ~1 month
    • Install a 300 TByte disk buffer at CERN and use non-LHCnet trans-Atlantic links to transfer the data over 4 months (US-ALICE model) at ~2.5 Gbps
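As a rough sanity check of the transfer windows quoted above, here is a minimal sketch comparing how long the nominal 300 TByte data set would take to move at the two quoted link ratings. The 300 TB volume and the 10 / 2.5 Gbps figures come from the slides; the 50% average utilization factor is an illustrative assumption, not a measured number.

    # Rough transfer-window check for the network options listed above.
    # The 300 TB volume and the link ratings come from the slides; the average
    # utilization factor is an assumed, illustrative value.

    DATASET_TBYTES = 300.0

    def transfer_days(link_gbps: float, utilization: float = 0.5) -> float:
        """Days needed to move the nominal data set over a link of `link_gbps`,
        sustained at the given average `utilization`."""
        seconds = DATASET_TBYTES * 1e12 * 8 / (link_gbps * 1e9 * utilization)
        return seconds / 86400.0

    for gbps in (10.0, 2.5):
        print(f"{gbps:>4} Gbps link: ~{transfer_days(gbps):.0f} days at 50% utilization")
    # 10.0 Gbps link: ~6 days at 50% utilization
    #  2.5 Gbps link: ~22 days at 50% utilization

Under these assumptions, both the ~1 month window on a 10 Gbps path and the ~4 month window on a ~2.5 Gbps path leave substantial margin for protocol overhead and shared use of the links.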

  6. Data Transport Issues for CMS-HI, Following ESnet-NP 2008 Workshop Recommendations
  • To Use LHCnet or Not To Use LHCnet
    • Use of LHCnet to the US, following the CMS-HEP path, is the simplest approach
    • A separate trans-Atlantic link will require dedicated CMS-HI certifications
    • DOE requires that a non-LHCnet plan be discussed in the CMS-HI compute proposal
  • Issues With the Use of LHCnet by CMS-HI
    • ESnet officials believe that LHCnet is already fully subscribed by FNAL
    • The HI month was supposed to be used for getting the final sets of HEP data transferred
    • FNAL was quoted as having only 5% “headroom” left with use of LHCnet
    • The HI data volume is 10% of the HEP data volume (see the headroom check after this slide)
  • Issues With the Non-Use of LHCnet by CMS-HI
    • Non-use of LHCnet would be a new path for data out of the LHC
    • CERN computer management would have to approve (same for US-ALICE)
    • Installing a 300 TByte buffer system at CERN (ESnet recommendation) would also have to be approved and integrated into the CERN Tier0 operations
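To illustrate the headroom concern quantitatively, here is a minimal sketch. It assumes that the quoted 5% headroom can be read as a fraction of the 10 Gbps LHCnet rating; that reading, and the ~1 month transfer window, are assumptions rather than figures stated on the slide.

    # Illustration of the LHCnet headroom concern raised above.
    # ASSUMPTION: the quoted 5% "headroom" is read as 5% of the 10 Gbps link rating.

    LHCNET_GBPS = 10.0
    HEADROOM_FRACTION = 0.05          # quoted FNAL headroom on LHCnet
    HI_TBYTES = 300.0                 # nominal HI data volume (raw + support)
    HI_WINDOW_SECONDS = 2.6e6         # ~1 month transfer window (assumed)

    headroom_gbps = LHCNET_GBPS * HEADROOM_FRACTION
    needed_gbps = HI_TBYTES * 1e12 * 8 / HI_WINDOW_SECONDS / 1e9

    print(f"available headroom : {headroom_gbps:.2f} Gbps")
    print(f"HI average need    : {needed_gbps:.2f} Gbps")
    # available headroom : 0.50 Gbps
    # HI average need    : 0.92 Gbps

Under that reading, moving the HI data within the running month would need roughly twice the quoted headroom, which illustrates why the non-LHCnet options on the previous slide are being considered.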

  7. Data Processing for CMS-HI
  • Data Processing Scenario
    • [Physics event choices, using Gunther’s tables]
    • [Reconstruction and analysis pass schedules]
    • [Role and access of other CMS-HI institutions]

  8. Implementation of CMS-HI Compute Center
  • Total Hardware Resources Ultimately Required
    • [Number of CPUs (SpecInt Units)]
    • [Disk space and models of use]
    • [Tape space]
  • Construction Scenario
    • [Plan to reach the totals above in 5 years]
    • [Compatibility with the expected data accumulation]
    • [Operations plan]
