
Future O2 Installation @ LHC P2



  1. Future O2 Installation @ LHC P2 (U. Fuchs, P. Vande Vyvre)

  2. ALICE O2 Project
  • Data of all interactions shipped from the detector to the online farm in trigger-less continuous mode; HI run: 1.1 TByte/s into the FLPs
  • FLP: data volume reduction by cluster finder, no event discarded, average factor 2.2 (factor 2.5 for the TPC data); output 500 GByte/s to the EPNs
  • EPN: data volume reduction by tracking, all events go to data storage, average factor 5.5 (factor 8 for the TPC data); output 90 GByte/s
  • DS (Data Storage): 1 year of compressed data; bandwidth: write 90 GB/s, read 90 GB/s; capacity: 60 PB; 20 GByte/s exported to Tier 0 (Computing Centre Meyrin)
  • Tiers 1 and Analysis Facilities: asynchronous event reconstruction with final calibration, with a delay of a few hours
  • Global computing needs: ~100 k CPU cores; use of accelerators: ~5000 GPUs and ~500 FPGAs; ~60 PB of storage
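As a quick sanity check, the quoted data rates follow directly from the average compression factors; a minimal sketch using only the figures quoted on the slide:

```python
# Back-of-the-envelope check of the O2 compression chain quoted above.
# All input figures come from the slide; the calculation is illustrative only.

detector_rate_gb_s = 1100        # HI run: 1.1 TByte/s into the FLPs
flp_factor = 2.2                 # average reduction by the cluster finder
epn_factor = 5.5                 # average reduction by tracking

flp_output = detector_rate_gb_s / flp_factor   # to the EPNs
epn_output = flp_output / epn_factor           # to data storage

print(f"FLP output: {flp_output:.0f} GB/s")    # 500 GB/s
print(f"EPN output: {epn_output:.0f} GB/s")    # ~91 GB/s, quoted as 90 GB/s
```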

  3. O2 Facility @ Pt2
  The O2 facility will consist of 3 major parts:
  • Read-Out – First-Level Processors (FLP)
    • Connected to the detector front-end
    • Will stay in CR1 (the current computing room located in the LHC access shaft in the SX hall, bld 2285) and use the racks, power and cooling upgraded/installed during LS1
  • Event Processing Nodes (EPN), Quality Control (QC), Data Storage (DS)
    • Connected to the Read-Out via the IT network (fibers)
    • Large installation (~2 MW), needs a new computing room
    • The present CRs (CR1 and CR2) were built for LEP/L3; they do not provide an adequate environment for the whole O2 facility: lack of rack space, power and cooling
    • Weight issue (the CRs are suspended in the access shaft)
  • Services, Network

  4. Computing room CR0
  A new computing room (“CR0”) is needed:
  • Space requirement
    • Min. 2000 U, possible in 45 large racks
  • Power requirement
    • IT: 115 kW UPS + 1937 kW normal power
    • Total (IT + infrastructure): 240 kW UPS + 2062 kW normal power
  • Cooling requirement
    • Min. 2.5 MW cooling capacity
  • Foreseen placement
    • Immediate vicinity of hall 2285, to re-use a maximum of the services already present and to reduce the cost of cabling
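A rough consistency check of these figures (the breakdown is taken directly from the slide; the arithmetic is only illustrative):

```python
# Rough consistency check of the CR0 power and cooling figures quoted above.
# The breakdown comes straight from the slide; the arithmetic is illustrative only.

it_power_kw    = 115 + 1937    # IT load: UPS + normal power
total_power_kw = 240 + 2062    # IT + infrastructure: UPS + normal power
cooling_kw     = 2500          # minimum cooling capacity requested

print(f"IT load:        {it_power_kw} kW")                    # 2052 kW
print(f"Total load:     {total_power_kw} kW")                 # 2302 kW
print(f"Cooling margin: {cooling_kw - total_power_kw} kW")    # ~200 kW of headroom
```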

  5. CR0 Tentative Location

  6. CR0 Tentative Location (27 m x 24 m)

  7. Power Needs

  8. Power Needs Evolution

  9. O2 Facility @ Pt2, Power Needs

  10. O2 Facility @ Pt2, Power Needs
  The power needs of the O2 system:
  • UPS power (ALICE: 500 kW available for O2; see the sketch below):
    • 228 kW needed in the present CR1 (all in place)
    • 225 kW needed outside, for CR0 (TBD)
  • Normal power:
    • No need for normal power in CR1
    • 2 MW of additional normal power needed outside, for CR0 (TBD)
    • Possibility of adding 500 kW? (hardware evolution)
  • The installation will start in ~Q2 2018, so the power should be available ~1 year before.
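A minimal sketch of the UPS budget implied by the figures above (numbers from the slide; illustrative only):

```python
# Quick check of the O2 UPS power budget quoted above (illustrative only).

ups_available_kw = 500    # UPS power available to O2 at ALICE
ups_cr1_kw       = 228    # needed in the present CR1 (already in place)
ups_cr0_kw       = 225    # needed outside, for CR0 (TBD)

ups_needed_kw = ups_cr1_kw + ups_cr0_kw
print(f"UPS needed: {ups_needed_kw} kW")                       # 453 kW
print(f"UPS margin: {ups_available_kw - ups_needed_kw} kW")    # 47 kW of headroom
```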

  11. Schedule: Q2 2018, start of installation in CR0

  12. Computing Room 0 (CR0) Options

  13. CR0: building options
  Option (I): Container-based
  • A foundation (concrete slab) and services (electricity and water) are needed
  • A container has different noise screening than a building; additional noise screens are probably needed
  • Containers are fast to deploy (6-8 months including site preparation) and to extend (4 months)
  Option (II): A new building
  • Civil engineering work for a new building (can be a simple “sandwich wall” structure)
  • Electricity distribution in the building and to the racks (e.g. Canalis)
  • Engineering of a cooling system, depending on the cooling solution chosen

  14. Examples of container computing rooms

  15. Cooling Options

  16. Operating temperatures of IT equipment
  • IT equipment is quite efficient at exhausting its own heat; the operating temperature is the inlet temperature
  • IT fans generally bottom out at around 24 C
  • Dell recommended temperature in 2009: 24-27 C (1)
  • Most of today’s IT equipment can run at > 30 C inlet temperature, some even up to 43 C
  References:
  (1) Data Center Operating Temperature: What Does Dell Recommend, Dell, 2009
  (2) Data Center Operating Temperature: The Sweet Spot, Dell, 2011
  (3) 2011 Thermal Guidelines for Data Processing Environments – Expanded Data Center Classes and Usage Guidance, ASHRAE TC9.9, 2011

  17. CR0: cooling options
  • CR0 shall have a Power Usage Effectiveness (PUE) value better (i.e. lower) than 1.15 (see the sketch below)
  • The IT power needs are the same for all solutions
  • Several options are being studied, based on the cooling technology:
    • A container-based solution: chilled-water cooling, primary-water cooling, or free-air cooling
    • A new building: rack cooling with chilled water, rack cooling with primary water, or free-air cooling
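To make the target concrete, PUE is the ratio of total facility power to IT power; a minimal sketch of what the 1.15 limit means for an IT load of roughly 2 MW (the load figure is taken from the earlier power-needs slides, the chilled-water range from the next slide; the calculation itself is illustrative):

```python
# Illustration of the PUE target quoted above.
# PUE = total facility power / IT power; the ~2 MW IT load is taken from
# the earlier power-needs slides, everything else is illustrative.

def overhead_kw(it_load_kw: float, pue: float) -> float:
    """Power spent on cooling and other infrastructure for a given PUE."""
    return it_load_kw * (pue - 1.0)

it_load_kw = 2000.0                  # approximate O2 IT load
for pue in (1.15, 1.4, 2.0):         # target vs. typical chilled-water values
    print(f"PUE {pue:4.2f}: {overhead_kw(it_load_kw, pue):6.0f} kW overhead")
# PUE 1.15:    300 kW overhead
# PUE 1.40:    800 kW overhead
# PUE 2.00:   2000 kW overhead
```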

  18. O2 Facility @ Pt2, Cooling Options
  Cooling Option (1): Chilled Water
  • In general:
    • Needs a chilled-water computing room that can accept a load of 2.5 MW
    • Running chillers is not a “GreenIT” practice; the PUE will be in the range [1.4 .. 2]
    • Needs additional power to be installed for the chillers
  • In a building:
    • Racks with a water-cooled back door exist up to 32 kW only
    • e.g. KNURR CoolTherm: 38U usable, 80 cm wide, 130 cm deep (74/84 usable – not deep enough, min. 92 cm needed)
    • Not very space efficient; the footprint of the computing room will be big
  • In a container:
    • Different rack, back-door or in-row cooling options exist
    • Same constraints (kW/rack, usable U, ..) as for a building

  19. O2 Facility @ Pt2, Cooling Options
  Cooling Option (2): Primary Water (24 C, max 28 C)
  • In general:
    • Can primary water be used directly to cool the equipment?
    • If not: an additional heat exchanger is needed (+3 C)
    • Direct Liquid Cooling (DLC) is excluded due to the type of equipment (no solution for the main heat contributors: disks, GPUs)
  • In a building:
    • No rack-cooling solution found for racks > 32 kW using 22/24/27 C water
  • In a container:
    • A solution exists for water at ~22 C, so additional chillers would be needed to cool down from the primary-water temperature
    • Better than chilled water, but still not GreenIT-friendly

  20. O2 Facility @ Pt2, Cooling Options
  Cooling Option (3): Free Air Cooling (FAC)
  • In general:
    • State-of-the-art solution, used by many large computing centers (eBay, Google (1), Microsoft, Yahoo (2), ..)
    • The O2 power density is probably higher
  • In a building:
    • Needs a special building geometry to optimize the airflow
    • Hot-aisle separation and immediate heat evacuation (roof)
    • Challenge: temperature/humidity control
  References:
  (1) Google’s Chiller-Less Data Center, Data Center Knowledge, July 15, 2009
  (2) Yahoo Computing Coop: Shape of Things to Come?, Data Center Knowledge, April 26, 2010

  21. O2 Facility @ Pt2, Cooling Options
  Cooling Option (3): Free Air Cooling (FAC)
  • In a container:
    • Two major suppliers identified, DELL and HP, following different philosophies:
      • DELL: outside airflow with recycling to control temperature and humidity; chillers can be added but are not an integral part
      • HP: closed air volume for the IT, cooled by chillers running in FreeCooling mode

  22. O2 Facility @ Pt2, Cooling Options
  Cooling Option (3): Free Air Cooling (FAC)
  • In a container, cont’d:
    • The PUE value depends on the IT installed in the container
      • The DELL solution can achieve 1.05-1.07
      • HP runs at 1.15 year-average (due to the chillers)
    • Additional cooling is available (if it is too hot outside)
      • DELL: evaporative cooling (default), DX chillers if needed
      • HP: air-air chillers
    • The racks are vendor-neutral vis-à-vis the installed IT equipment
    • A power density of 50 kW/rack is achievable (see the sketch below)
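A rough cross-check of what 50 kW/rack implies for the rack count, reusing the IT power figure from the earlier CR0 slide (an illustrative sketch only):

```python
# Rough cross-check of the rack count implied by 50 kW/rack.
# The IT load is taken from the earlier CR0 slide; the arithmetic is illustrative.
import math

it_load_kw  = 115 + 1937   # CR0 IT power requirement (UPS + normal), ~2052 kW
kw_per_rack = 50           # power density achievable in the container solution

racks_needed = math.ceil(it_load_kw / kw_per_rack)
print(f"Racks needed at 50 kW/rack: {racks_needed}")
# 42 racks, consistent with the ~45 large racks foreseen for CR0
```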

  23. Free cooling map: free air in Geneva is <= 27 C for 85% of the year and <= 40 C for 100% of the year

  24. Summary

  25. O2 Facility @ Pt2
  • Power:
    • The needs are well established
  • Cooling:
    • Preference for Free-Air Cooling due to its simplicity and running costs; CERN has a commitment to Green Technologies
    • The available solutions for chilled/primary-water cooling will require more racks
  • Computing room:
    • Both solutions (container and new building) need to be evaluated
    • The choice depends on features, price and delivery lead time
