
Power and Cooling Challenges at CERN, IHEPCCC Meeting, April 24th 2007. Tony Cass.






Presentation Transcript


1. Power and Cooling Challenges at CERN, IHEPCCC Meeting, April 24th 2007. Tony Cass

2. Basic Issues
• Computing equipment is not becoming more energy efficient
  • or, rather, not as rapidly as performance improves
• Rack power density is increasing
  • from 1.5kW to 8kW now, with 15+kW foreseen
• Power demand will grow with the increasing requirements of LHC computing
  • conservative assumptions lead to ~20MW by 2020
  • "Moore's law" growth in capacity, as seen for LEP, leads to a prediction of ~100MW by 2020 (the extrapolation arithmetic is sketched below)
• "Critical" IT loads are at the planned capacity limit of 250kW now, and demand is growing
  • "critical" => with indefinite diesel backup in the event of a severe power outage
  • the "physics" load loses power after <10 minutes if both French and Swiss power are unavailable
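The two 2020 figures above are compound-growth extrapolations. Below is a minimal sketch of that arithmetic; the 2007 starting load and the growth rates are illustrative assumptions chosen only to reproduce numbers of the same order as those quoted (~20MW and ~100MW), not figures from CERN planning documents.

```python
# Minimal sketch: compound-growth extrapolation of computer-centre power demand.
# The starting load and growth rates are illustrative assumptions, not CERN figures.

def project_power(p0_mw: float, annual_growth: float, years: int) -> float:
    """Power demand after `years` of compound growth at `annual_growth` per year."""
    return p0_mw * (1.0 + annual_growth) ** years

start_mw = 2.5   # assumed 2007 load in MW
horizon = 13     # 2007 -> 2020

conservative = project_power(start_mw, 0.17, horizon)   # ~17%/year growth
moores_law = project_power(start_mw, 0.33, horizon)     # ~33%/year, LEP-era capacity growth

print(f"Conservative growth:  ~{conservative:.0f} MW by 2020")
print(f"'Moore's law' growth: ~{moores_law:.0f} MW by 2020")
```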

3. Evolution of power and performance [chart: Intel and AMD processors installed at CERN, with generations ranging from 2 cores at 1 GHz, 2.4 GHz and 2.8 GHz, through 4 cores at 2.2 GHz and 3 GHz, to 8 cores at 2.33 GHz]

4. Basic Issues (repeat of slide 2)

5. Project Power Evolution [figure not captured in the transcript]

6. Basic Issues (repeat of slide 2)

7. Follow-on issues
• The Meyrin site (with the Computer Centre) is at ~maximum consumption
  • 66MVA; limited by the autotransfer system (between the French & Swiss supplies) and the feed from Prévessin
• Diesel capacity is limited to 350kW for the Computer Centre
  • we will couple the two 300kVA UPS modules to gain headroom at the expense of redundancy; no clear growth path thereafter
  • (redundant) local diesel capacity??
• B513 is very poorly designed from a modern HVAC standpoint
  • cooling 2.5MW will be a struggle, although there are a number of optimisations still to make
  • CFD simulations are interesting, but hampered by the lack of real data on server air flow rates (a first-order airflow estimate is sketched below)
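Even without measured server data, a first-order air-flow requirement follows from the heat balance Q = P / (ρ·cp·ΔT). The sketch below applies this to the rack densities quoted on the Basic Issues slide; the assumed 12 K exhaust-to-intake temperature rise is an illustrative value, not a CERN measurement.

```python
# First-order estimate of the air flow needed to carry away a rack's heat load:
#   volumetric flow = P / (rho_air * cp_air * delta_T)
# The 12 K temperature rise is an assumed illustrative value.

RHO_AIR = 1.2     # kg/m^3, air density at roughly 20 C
CP_AIR = 1005.0   # J/(kg K), specific heat of air

def required_airflow_m3s(power_w: float, delta_t_k: float) -> float:
    """Volumetric air flow (m^3/s) to remove `power_w` at a temperature rise `delta_t_k`."""
    return power_w / (RHO_AIR * CP_AIR * delta_t_k)

for rack_kw in (1.5, 8.0, 15.0):   # rack densities quoted on the Basic Issues slide
    flow = required_airflow_m3s(rack_kw * 1e3, delta_t_k=12.0)
    print(f"{rack_kw:4.1f} kW rack: {flow:.2f} m^3/s (~{flow * 2119:.0f} CFM)")
```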

8. What are we doing?
• Convincing ourselves air cooling is OK
  • mostly done; a power density of up to ~20kW/rack looks achievable within an optimally designed building (long and thin, not square; unobstructed airflows; rigorous hot/cold aisle separation); a rough sizing at this density is sketched below
  • as an alternative, we prefer "open" racks with a heat exchanger on the back to "closed" racks with internal air flow
  • better able to cope with failures or, more likely, door openings
• Studying future options
  • using the existing computer centre, for example by installing equipment at a higher power/m2 density
  • using the "barn" [adjacent to the current machine room]
  • using alternative buildings on one of the CERN sites, e.g. the former water tank (B226), the B186 assembly hall and B927 on the Prévessin site
  • renting or purchasing space in a computing centre in the Geneva area
  • purchasing the full computing service from a service provider (e.g. Amazon's Computing Cloud)
• Shipping container options (stop-gap for 2010/11 if we don't have a definitive solution by then???)
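To put the ~20kW/rack figure in context, the sketch below works out how many racks, and roughly how much machine-room floor, a given IT load implies at that density. The per-rack footprint and the hot/cold-aisle allowance are assumptions for illustration only, not CERN building data.

```python
# Rough sizing: rack count and floor area for a given IT load at a given rack density.
# Rack footprint and aisle allowance are illustrative assumptions.

import math

def racks_and_floor_area(load_kw: float, kw_per_rack: float,
                         rack_footprint_m2: float = 0.6 * 1.2,
                         aisle_factor: float = 2.5) -> tuple[int, float]:
    """Return (number of racks, floor area in m^2 including hot/cold aisle space)."""
    racks = math.ceil(load_kw / kw_per_rack)
    area = racks * rack_footprint_m2 * aisle_factor
    return racks, area

for load_mw in (2.5, 20.0):
    racks, area = racks_and_floor_area(load_mw * 1e3, kw_per_rack=20.0)
    print(f"{load_mw:5.1f} MW at 20 kW/rack: {racks} racks, ~{area:.0f} m^2 of machine-room floor")
```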

9. Summary
• Power demand will exceed capacity by 2010 at the latest
• We are considering options to deliver increased capacity
  • but we are behind schedule to meet the 2010 crossover, so stop-gap solutions may be necessary
• Money will be needed
  • Intel puts the construction cost of a new centre at $6/W. 40-60M€ for a 20MW facility? But a modular design would spread costs
  • operation is 350k€/compute-MW/year, assuming an HVAC overhead of 30% (the arithmetic is sketched below)
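The operating figure above relates an IT ("compute") load to the total electrical load via the 30% HVAC overhead. A minimal sketch of that arithmetic, applying the slide's 350k€/compute-MW/year figure to an assumed 20MW compute load:

```python
# Sketch of the operating-cost arithmetic quoted on the summary slide.
# The 20 MW compute load is an assumption; 350 kEUR/compute-MW/year and the
# 30% HVAC overhead are the figures quoted on the slide.

def operating_cost(compute_mw: float,
                   keur_per_compute_mw_year: float = 350.0,
                   hvac_overhead: float = 0.30) -> tuple[float, float]:
    """Return (total electrical load in MW, annual operating cost in MEUR)."""
    total_load_mw = compute_mw * (1.0 + hvac_overhead)
    annual_cost_meur = compute_mw * keur_per_compute_mw_year / 1000.0
    return total_load_mw, annual_cost_meur

load, cost = operating_cost(20.0)
print(f"20 MW of compute implies ~{load:.0f} MW total load and ~{cost:.0f} MEUR/year to operate")
```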
