
Power and Cooling Requirements of HPC Data Centers



Presentation Transcript


  1. Power and Cooling Requirements of HPC Data Centers
  Roger A. Panton, Avetec Executive Director, DICE
  rpanton@avetec.org

  2. Background and Objective
  • Avetec is under contract to evaluate power and cooling requirements for HPC data centers:
    • To survey power and cooling constraints and solutions, both current and future, from:
      • Representative HPC data centers
      • Vendors of HPC systems, equipment, and facilities
  • Avetec contracted with IDC to conduct a survey covering:
    • The current power and cooling situation
    • Planning in place to address requirements
    • Forecasted solutions for the next three to five years

  3. Survey Sample and Methodology
  • The survey includes 41 respondents:
    • 28 HPC data centers
    • 13 vendors of products and services
  • The response rate was approximately 20%
  • Respondents were from the US, Europe, and Asia
  • HPC data centers were selected from the Top500 list
    • Centers selected fell between numbers 50 and 250
  • Interviews were conducted by phone or in person
    • Respondents had the option to complete the survey on their own

  4. Initial General Findings
  • HPC data center averages:
    • Available floor space: about 26,000 ft²
    • Used floor space: about 17,000 ft²
    • Cooling capacity: 22.7 million BTU/hr (1,839 tons)
    • Average power consumption: 6.356 MW
  • HPC data center costs:
    • Annual power cost: $2.9 million, or about $456 per kW (see the arithmetic check below)
    • Ten sites provided the percentage of their budget spent on power; the average was 23%
    • Two-thirds of the sites had a budget for power and cooling upgrades; the average amount budgeted was $6.87 million
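The per-kW cost and the tons/BTU figures above can be rechecked with a short calculation. The sketch below is an editorial illustration rather than part of the study; it assumes the standard definition of a refrigeration ton (12,000 BTU/hr) and uses the rounded survey averages quoted on the slide.

```python
# Editorial sketch (not from the IDC survey): recheck the arithmetic behind the
# slide's facility averages, assuming 12,000 BTU/hr per ton of refrigeration.

BTU_PER_HR_PER_TON = 12_000           # standard definition of a refrigeration ton

annual_power_cost_usd = 2_900_000     # $2.9 million per year (slide figure)
average_power_draw_kw = 6_356         # 6.356 MW expressed in kW (slide figure)
cooling_tons = 1_839                  # cooling capacity in tons (slide figure)

# Cost per kW of average draw: 2,900,000 / 6,356 ≈ $456 per kW, matching the slide.
cost_per_kw = annual_power_cost_usd / average_power_draw_kw

# Cooling capacity implied by the tonnage figure; the small gap from the quoted
# "22.7 million BTU/hr" likely reflects independently rounded survey averages.
cooling_btu_per_hr = cooling_tons * BTU_PER_HR_PER_TON

print(f"Annual power cost per kW: ${cost_per_kw:,.0f}")                            # ~$456
print(f"Implied cooling capacity: {cooling_btu_per_hr / 1e6:.1f} million BTU/hr")  # ~22.1
```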

  5. Initial Key Findings
  • The study revealed a wealth of information
  • Key findings will be summarized in the following four areas:
    • Current Situation
    • Challenges and Expansion Constraints
    • Current Approaches
    • Future Solutions and Technologies

  6. Current Situation
  • Over 96% of the centers consider “green” design important
  • The majority of sites expect power and cooling to impact future HPC center planning
  • The majority of respondents have studied or implemented “greener” operations
  • Most centers have used software models to analyze heat flow and/or power consumption
  • Approximately half of the centers paid for power and cooling out of their own budgets

  7. Challenges and Expansion Constraints
  • The majority of centers are starting to consider power and cooling efficiency equal to or more important than HPC computing performance
  • Power and cooling issues are becoming the biggest barriers to expansion and upgrades
  • HPC vendors are starting to see power and cooling as a brake on performance

  8. Current Approaches
  • Power and cooling costs are becoming a key factor in upgrade decisions
  • The majority of centers have completed an airflow analysis to improve air-cooling efficiency
  • Use of chilled water for cooling is increasing
  • Power and cooling issues are being discussed across the HPC community, i.e., data centers, HPC system vendors, and processor vendors

  9. Future Solutions and Technologies
  • Approximately two-thirds of centers plan to expand or build new data centers
  • About half of the data centers have distributed, or are planning to distribute, HPC resources
  • Liquid cooling is being considered as an alternative
  • HPC centers and vendors differ sharply on the likelihood of any game-changing cooling technologies emerging in the next 3-5 years

  10. Viewpoint
  • Current status: June 2008 marked a new milestone, the first petaflop (Pflop) HPC system made the Top500
  • The Top500 list reports the summed performance of the top 500 systems:
    • 1993: the sum of the Top500 equaled 1.5 Tflops
    • 2004: the sum of the Top500 equaled 1 Pflop
    • If current growth is maintained, the sum will reach 100 Pflops by 2012 (the extrapolation is sketched below)
  • The balance between new systems and the facilities and infrastructure needed to accommodate them does not exist
  • This imbalance leads to a policy question: what should the supercomputing community's response be to restore the balance?
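As a quick check of the extrapolation on this slide: the quoted data points (a Top500 sum of roughly 1.5 Tflops in 1993 and 1 Pflop in 2004) imply an annual growth factor of about 1.8x, which, held constant through 2012, lands on the order of 100 Pflops. The sketch below is an editorial illustration of that arithmetic, not part of the original presentation.

```python
# Editorial sketch: derive the implied Top500 growth rate from the slide's two
# data points and extrapolate it to 2012. The inputs are the slide's rounded
# figures, not exact Top500 list values.

sum_1993_tflops = 1.5           # sum of the Top500, 1993 (slide figure)
sum_2004_tflops = 1_000.0       # sum of the Top500, 2004: 1 Pflop = 1,000 Tflops

# Implied annual growth factor over the 11 years between the two data points.
annual_growth = (sum_2004_tflops / sum_1993_tflops) ** (1 / 11)   # ~1.8x per year

# Extrapolate 8 more years (2004 -> 2012) at the same rate, reported in Pflops.
sum_2012_pflops = sum_2004_tflops * annual_growth ** 8 / 1_000

print(f"Implied growth: ~{annual_growth:.2f}x per year")          # ~1.81x
print(f"Extrapolated 2012 sum: ~{sum_2012_pflops:.0f} Pflops")    # on the order of 100
```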

  11. Policy Discussion
  • Should the community take a proactive position through collaborative discussions and then recommend a set of public policies?
  • To start the discussions, should the Federal Government establish a timeframe and fund the following research areas?
    • Investment to maintain the current performance growth in HPC
    • Investment in new cooling technologies to improve efficiencies
    • Investment in low-power, higher-performance processors
    • Investment in new materials research for chips
  • HPC data centers need to become more accountable for power and cooling consumption

  12. Join Us to Learn Other Findings
  • The findings were extensive and cannot be fully covered today; additional topics include:
    • What alternative approaches are you exploring to meet future power & cooling requirements?
    • Describe any new power & cooling solutions you expect to implement in the next 2-3 years.
    • What special cooling methods do you expect to use in the future?
    • Power & cooling requirements for future HPC system acquisitions.
    • And more!
  • Final study results will be unveiled at DICE Alliance 2009 on May 20, along with a panel discussion with some of the study participants

  13. Want More Information?
  We invite you to attend DICE Alliance 2009, May 18-20
  Wittenberg University, Barbara Kuss Science Center, Springfield, Ohio
  Register at www.diceprogram.org

  14. Other DICE Alliance Topics
  • Monday, May 18th, 6:00-8:00: Opening Reception
  • Tuesday, May 19th:
    • Keynote: Be Careful What You Wish For (Jay Boisseau, PhD)
    • America’s Need for HPC (Congressman Dave Hobson, retired)
    • Panel: Power & Cooling Efficiency in Data Centers (Ed Wahl, lead)
    • Panel: Public Policy in HPC (Charlie Hayes, lead)
  • Wednesday, May 20th:
    • Multicore Architecture: Panacea or Nightmare? (Earl Joseph, PhD)
    • Panel: The Integration of Scheduling, HPC Resource Management & Data Lifecycle Management Tools (Jay Blair)
    • Panel: Power & Cooling Trends and Technology (Earl Joseph, PhD)
