Power and Cooling at Texas Advanced Computing Center
Tommy Minyard, Ph.D.
Director of Advanced Computing Systems
42nd HPC User Forum
September 8, 2011
TACC Mission & Strategy
The mission of the Texas Advanced Computing Center is to enable scientific discovery and enhance society through the application of advanced computing technologies. To accomplish this mission, TACC:
• Evaluates, acquires & operates advanced computing systems
• Provides training, consulting, and documentation to users
• Collaborates with researchers to apply advanced computing techniques
• Conducts research & development to produce new computational technologies
Resources & Services | Research & Development
Recent History of Systems at TACC
• 2001 – IBM Power4 system, 1 TFlop, ~300 kW
• 2003 – Dell Linux cluster, 5 TFlops, ~300 kW
• 2006 – Dell Linux blade cluster, 62 TFlops, ~500 kW, 16 kW per rack
• 2008 – Sun Linux blade cluster, Ranger, 579 TFlops, 2.4 MW, 30 kW per rack
• 2011 – Dell Linux blade cluster, Lonestar 4, 302 TFlops, 800 kW, 20 kW per rack
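One way to read these figures is compute delivered per unit of power. A minimal sketch in Python, using only the numbers quoted on this slide (peak TFlops, total power, and kW per rack where given):

```python
# Back-of-the-envelope comparison of the systems listed above.
# Efficiency here is peak TFlops per kW of total system power.
systems = [
    # (year, name,                tflops, power_kw, kw_per_rack or None)
    (2001, "IBM Power4",               1,   300, None),
    (2003, "Dell Linux cluster",       5,   300, None),
    (2006, "Dell blade cluster",      62,   500, 16),
    (2008, "Sun Ranger",             579,  2400, 30),
    (2011, "Lonestar 4",             302,   800, 20),
]

for year, name, tflops, kw, kw_rack in systems:
    eff = tflops / kw  # TFlops per kW of system power
    rack = f"{kw_rack} kW/rack" if kw_rack else "n/a"
    print(f"{year}  {name:<20} {eff:6.3f} TFlops/kW  {rack}")
```

The same data shows both trends the talk highlights: efficiency (TFlops/kW) improving by two orders of magnitude over the decade, while per-rack power density roughly doubles from 2006 to 2008.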
TACC Data Centers
• Commons Center (CMS)
  – Originally built in 1986 with 3,200 sq. ft.
  – Designed to house large Cray systems
  – Retrofitted multiple times to increase power/cooling infrastructure, ~1 MW total power
  – 18” raised floor, standard CRAC cooling units
• Research Office Complex (ROC)
  – Built in 2007 as part of a new office building
  – 6,400 sq. ft., 1 MW original design power
  – Refitted to support 4 MW total power for Ranger
  – 30” raised floor, CRAC units and APC In-Row Coolers
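The quoted power and floor-space figures imply high floor power densities for both rooms. A quick sketch (gross square footage as quoted above, so the usable white-space density is somewhat higher):

```python
# Rough floor power density for the two rooms, from the figures above.
rooms = {
    "CMS (1986, retrofitted)": (1.0e6, 3200),  # ~1 MW over 3,200 sq. ft.
    "ROC (2007, refitted)":    (4.0e6, 6400),  # 4 MW over 6,400 sq. ft.
}
for name, (watts, sqft) in rooms.items():
    print(f"{name}: {watts / sqft:.0f} W per sq. ft.")
# CMS: ~312 W/sq. ft.; ROC: 625 W/sq. ft. after the Ranger refit.
```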
Lonestar 4
Dell Intel 64-bit Xeon Linux Cluster
22,656 CPU cores (302 TFlops)
44 TB memory, 1.8 PB disk
ROC Data Center
Houses Ranger, Longhorn, Corral, and other support systems
Built in 2007 and already nearing capacity
Data Center of the Future
• Exploring flexible and efficient data center designs
• Planning for 50 kW per rack and 10 MW total system power in the near future (see the sizing sketch below)
• Prefer 480 V power distribution to racks
• Exotic cooling ideas not excluded:
  – Thermal storage tanks
  – Immersion cooling
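A minimal sizing sketch for the stated target, assuming, as is usual in cooling design, that essentially all input power must be removed as heat; the 3.517 kW-per-ton figure is the standard refrigeration-ton conversion:

```python
# Sizing sketch: 10 MW total system power at 50 kW per rack.
total_kw    = 10_000   # 10 MW planned total system power
kw_per_rack = 50
racks = total_kw / kw_per_rack
cooling_tons = total_kw / 3.517  # 1 refrigeration ton = 3.517 kW
print(f"~{racks:.0f} racks, ~{cooling_tons:,.0f} tons of cooling")
# -> ~200 racks, ~2,843 tons of cooling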
Immersion Cooling – Green Revolution Cooling
• Servers suspended in mineral oil
• Improves heat transfer; oil transports heat far more efficiently than air (see the sketch below)
• Requires refitting servers to remove their fans
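To give a rough sense of the "more efficient transport" claim, here is a volumetric heat-capacity comparison using typical handbook values for light mineral oil and air (the fluid properties are generic assumptions, not Green Revolution Cooling's specifications):

```python
# Volumetric heat capacity: how much heat a given volume of fluid
# carries per degree of temperature rise. Property values are
# typical handbook approximations.
oil_density, oil_cp = 850.0, 1.9    # kg/m^3, kJ/(kg*K) -- light mineral oil
air_density, air_cp = 1.2,   1.005  # kg/m^3, kJ/(kg*K) -- air at ~20 C

oil_vol_cp = oil_density * oil_cp   # ~1,600 kJ/(m^3*K)
air_vol_cp = air_density * air_cp   # ~1.2   kJ/(m^3*K)
print(f"oil carries ~{oil_vol_cp / air_vol_cp:,.0f}x more heat per unit volume")
```

Roughly three orders of magnitude per unit volume, which is why immersed servers no longer need their fans.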
Summary
• Data center and per-rack power densities are increasing
• The efficiency of delivering power and removing the generated heat is becoming a substantial concern
• Air cooling is reaching the limits of its capability
• Future data centers will require more “exotic” or customized cooling solutions for very high power densities