
Power and Cooling at Texas Advanced Computing Center


Presentation Transcript


  1. Power and Cooling at Texas Advanced Computing Center Tommy Minyard, Ph.D. Director of Advanced Computing Systems 42nd HPC User Forum September 8, 2011

  2. TACC Mission & Strategy
     The mission of the Texas Advanced Computing Center is to enable scientific discovery and enhance society through the application of advanced computing technologies. To accomplish this mission, TACC:
     • Evaluates, acquires & operates advanced computing systems
     • Provides training, consulting, and documentation to users
     • Collaborates with researchers to apply advanced computing techniques
     • Conducts research & development to produce new computational technologies
     Resources & Services · Research & Development

  3. Recent History of Systems at TACC
     • 2001 – IBM Power4 system, 1 TFlop, ~300 kW
     • 2003 – Dell Linux cluster, 5 TFlops, ~300 kW
     • 2006 – Dell Linux blade cluster, 62 TFlops, ~500 kW, 16 kW per rack
     • 2008 – Sun Linux blade cluster, Ranger, 579 TFlops, 2.4 MW, 30 kW per rack
     • 2011 – Dell Linux blade cluster, Lonestar 4, 302 TFlops, 800 kW, 20 kW per rack
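For a sense of scale, the per-rack and total power figures above imply rough compute-rack counts for each blade system. A minimal back-of-the-envelope sketch in Python, assuming the quoted total power is all compute racks (storage, networking, and cooling overhead are not broken out on the slide):

```python
# Rough rack-count estimate implied by the quoted figures above.
# Assumes all quoted power goes to compute racks (storage/network
# racks and cooling overhead are ignored).
systems = {
    # name: (total power in kW, power per rack in kW)
    "2006 Dell blade cluster": (500, 16),
    "2008 Ranger": (2400, 30),
    "2011 Lonestar 4": (800, 20),
}

for name, (total_kw, kw_per_rack) in systems.items():
    racks = total_kw / kw_per_rack
    print(f"{name}: ~{racks:.0f} racks implied "
          f"({total_kw} kW total / {kw_per_rack} kW per rack)")
```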

  4. TACC Data Centers
     • Commons Center (CMS)
       • Originally built in 1986 with 3,200 sq. ft.
       • Designed to house large Cray systems
       • Retrofitted multiple times to increase power/cooling infrastructure, ~1 MW total power
       • 18” raised floor, standard CRAC cooling units
     • Research Office Complex (ROC)
       • Built in 2007 as part of new office building
       • 6,400 sq. ft., 1 MW original designed power
       • Refitted to support 4 MW total power for Ranger
       • 30” raised floor, CRAC and APC In-Row Coolers
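The floor areas and power figures above also show how much denser ROC is than CMS. A rough sketch, assuming the design power is spread evenly over the listed floor area (actual usable white space and IT load would be lower):

```python
# Floor power density implied by the slide's figures: design power
# spread evenly over the listed floor area (an assumption).
centers = {
    # name: (design power in kW, floor area in sq. ft.)
    "CMS (1986, retrofitted)": (1_000, 3_200),
    "ROC (2007, refitted for Ranger)": (4_000, 6_400),
}

for name, (kw, sqft) in centers.items():
    print(f"{name}: ~{1000 * kw / sqft:.0f} W per sq. ft.")
```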

  5. CMS Data Center Previously

  6. CMS Data Center Now

  7. Lonestar 4
     Dell Intel 64-bit Xeon Linux cluster: 22,656 CPU cores (302 TFlops), 44 TB memory, 1.8 PB disk
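As a quick consistency check on these specs, the peak rating and memory size work out to roughly 13 GFlops and 2 GB per core. A small sketch; the 3.33 GHz Westmere clock and 4 double-precision flops per cycle used for the cross-check are assumptions, not figures from the slide:

```python
# Per-core figures implied by the Lonestar 4 specs above.
cores = 22_656
peak_tflops = 302
memory_tb = 44

gflops_per_core = peak_tflops * 1_000 / cores   # ~13.3 GFlops per core
gb_per_core = memory_tb * 1_024 / cores         # ~2 GB per core

# Cross-check against an assumed 3.33 GHz Westmere core doing 4
# double-precision flops per cycle (an assumption, not on the slide).
assumed_peak = 3.33 * 4                         # 13.32 GFlops per core

print(f"{gflops_per_core:.1f} GFlops/core, {gb_per_core:.1f} GB/core "
      f"(assumed part peaks at {assumed_peak:.2f} GFlops/core)")
```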

  8. Lonestar 4 Front Row

  9. Lonestar 4 End of Rows

  10. Lonestar 4 Electrical Panels

  11. ROC Data Center
     Houses Ranger, Longhorn, Corral, and other support systems. Built in 2007 and already nearing capacity.

  12. Ranger

  13. Data Center of the Future
     • Exploring flexible and efficient data center designs
     • Planning for 50 kW per rack, 10 MW total system power in the near future
     • Prefer 480 V power distribution to racks (see the current-draw sketch below)
     • Exotic cooling ideas not excluded:
       • Thermal storage tanks
       • Immersion cooling
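A rough sketch of why 480 V distribution matters at these densities: at 50 kW per rack (and on the order of 200 such racks for a 10 MW system), a higher line voltage keeps per-rack current, conductor size, and distribution losses manageable. The 0.95 power factor and the 208 V comparison point are assumptions, not figures from the slide:

```python
import math

# Line current for a balanced three-phase load: I = P / (sqrt(3) * V * PF).
# The 0.95 power factor and the 208 V comparison point are assumptions.
def three_phase_current(power_w, line_voltage_v, power_factor=0.95):
    """Return line current in amps for a balanced three-phase load."""
    return power_w / (math.sqrt(3) * line_voltage_v * power_factor)

rack_load_w = 50_000
for volts in (208, 480):
    amps = three_phase_current(rack_load_w, volts)
    print(f"{rack_load_w // 1000} kW rack at {volts} V: ~{amps:.0f} A per rack")
```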

  14. Immersion Cooling – Green Revolution Cooling
     • Servers suspended in mineral oil
     • Improves heat transfer and transports heat more efficiently than air
     • Requires refitting servers to remove fans
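To put the heat-transfer claim in rough numbers, mineral oil stores on the order of a thousand times more heat per unit volume than air. A small sketch using typical textbook property values, which are assumptions rather than figures from the presentation:

```python
# Rough volumetric heat capacity comparison behind the heat-transfer
# claim, using typical textbook property values (assumptions, not
# figures from the presentation).
fluids = {
    # name: (density in kg/m^3, specific heat in kJ/(kg*K))
    "air": (1.2, 1.005),
    "mineral oil": (850, 1.8),
}

volumetric = {name: rho * cp for name, (rho, cp) in fluids.items()}

for name, c in volumetric.items():
    print(f"{name}: ~{c:,.1f} kJ per cubic metre per kelvin")
print(f"oil/air ratio: ~{volumetric['mineral oil'] / volumetric['air']:.0f}x")
```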

  15. Summary
     • Data center and rack power densities are increasing
     • The efficiency of delivering power and removing the heat generated is becoming a substantial concern
     • Air cooling is reaching the limits of its cooling capability
     • Future data centers will require more “exotic” or customized cooling solutions for very high power densities
