The CDCE Project @ BNL

Presentation Transcript


  1. The CDCE Project @ BNL HEPIX – LBL October 28, 2009 Tony Chan - BNL

  2. Background • Rapid growth in the last few years caused space, power and cooling problems • Increasing capacity for RHIC/ATLAS and other activities cannot be accommodated with current facility infrastructure • Search for additional data center space began in 2007 • Update of talk originally given at HEPIX in St. Louis (Nov. 2007)

  3. Vital Statistics • Currently housing 165 racks of equipment (disk storage, CPU, network, etc.) plus 9 robotic silos • Approximately 35 PB of tape storage, 9 PB of disk storage capacity and 10,200 computing cores • Average power usage ~650 kW (~60% of maximum UPS capacity) with a peak load of ~790 kW • Cooling capacity for a maximum of ~1,000 kW
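
Taking the figures above at face value, a quick back-of-the-envelope check (a Python sketch; the input numbers come from the slide, the derivation is ours) shows the implied UPS capacity is roughly 1.1 MW and that the peak load already consumes close to 80% of the available cooling:

```python
# Sanity check of the slide's power figures (values taken from the slide;
# this is an illustrative sketch, not official BNL data).
avg_load_kw = 650            # average power usage
avg_fraction_of_ups = 0.60   # stated ~60% of maximum UPS capacity
peak_load_kw = 790           # peak load
cooling_capacity_kw = 1000   # maximum cooling capacity

ups_capacity_kw = avg_load_kw / avg_fraction_of_ups
print(f"Implied UPS capacity : ~{ups_capacity_kw:.0f} kW")   # ~1083 kW
print(f"Peak load vs. UPS    : {peak_load_kw / ups_capacity_kw:.0%}")
print(f"Peak load vs. cooling: {peak_load_kw / cooling_capacity_kw:.0%}")
```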

  4. The Growth of Computing

  5. Total Distributed Storage Capacity

  6. Evolution of Space Usage (chart annotations: "Intel dual and quad-core deployed"; "Capacity of old data center")

  7. Evolution of Power Usage (chart annotation: "Existing UPS Capacity")

  8. The Search For Solutions (1) • Engaged Lab management – Spring 2007 • Discussion on possible solutions – Summer 2007 • Cost • Time • Location • Recommendation to Lab management – Fall 2007/Winter 2008 • Two-phase solution to meet cost and time constraints • Identify and renovate existing space to meet short-term requirements • New building to meet long-term requirements • Funding for two construction projects approved – Spring 2008

  9. The Search for Solutions (2) • Renovate existing floor space (US $0.6 million) • Tender award – April 2008 • Renovations begin – June 2008 • Renovations end – October 2008 • Occupancy – November 2008 • New building (US $5 million) • Finalize design – May 2008 • Tender award – June 2008 • Construction starts – August 2008 • Construction ends – August 2009 • Occupancy – October 2009 • From first proposal to occupancy took 2½ years

  10. Facility Development Timeline • Recent Past (2006-2007) • More efficient use of facility resources • Supplemental cooling system in existing facility • Near-Term (2008 to present) • Renovation of 2,000 ft2 (185 m2) of unused floor space with 300 kW of power • New building with 6,600 ft2 (622 m2) and 1.0 MW of power • Mission-specific facility (redundant cooling, deep raised floors, etc.) • Room for ~150 racks and 7 robotic silos • Long-Term (2017 and beyond) • New BNL data center with 25,000 ft2 (2,300 m2) after 2018
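
The mixed imperial/metric figures in this timeline can be cross-checked with a simple unit conversion (a minimal sketch using 1 ft2 = 0.0929 m2; small differences from the slide's metric values presumably reflect rounding in the original):

```python
# Convert the timeline's square-footage figures to square metres
# (1 ft^2 = 0.09290304 m^2); purely an illustrative cross-check.
FT2_TO_M2 = 0.09290304

areas_ft2 = {
    "Renovated floor space":  2_000,
    "New building (CDCE)":    6_600,
    "Future BNL data center": 25_000,
}
for label, ft2 in areas_ft2.items():
    print(f"{label:24s}: {ft2:>7,} ft2 ~ {ft2 * FT2_TO_M2:,.0f} m2")
```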

  11. Rack-Top Cooling Units

  12. Rear Door Heat Exchanger

  13. Data Center Layout in 2007

  14. Data Center Layout in 2009

  15. Data Center Expansion – Part 1 September 12, 2008

  16. Data Center Expansion – Part 1 July 15, 2009

  17. Data Center Expansion – Part 2 January 14, 2009

  18. Data Center Expansion – Part 2 October 10, 2009

  19. Data Center Expansion – Part 2 October 10, 2009

  20. Data Center Expansion – Part 2 October 10, 2009

  21. New Data Center Layout

  22. Where We Are Today (1) • Data Center Expansion (Part 1) is similar to the existing facility • 12-in (30.48 cm) raised floor • Redundant cooling capacity • No support for racks > 10 kW • No support for supplemental cooling • Cable trays for power and network • Data Center Expansion (Part 2) was designed for high-density equipment • 30-in (76.2 cm) raised floor • Support for racks > 10 kW (blades, half-depths, etc.) • Redundant cooling capacity • Support for racks > 2,500 lbs (1,135 kg) • 13-ft (4 m) ceiling for high-profile racks • Cable trays for power and network • Support for supplemental cooling • Environmentally friendly building

  23. Where We Are Today (2) • Facility expanded from 5,000 ft2 (465 m2) to 13,600 ft2 (1,260 m2) of floor space • Equipment capacity • from ~150 to ~300 racks • from 6 to 13 robotic silos • Infrastructure support • from ~1.0 to ~2.0 MW of UPS-backed power (up to 4 MW capacity) • Cooling capacity grew from ~1.0 to ~2.0 MW (up to 4 MW capacity) • 3 robotic silos and 6 racks of worker nodes were the first occupants of CDCE (October 2009) • Is this sufficient until 2018?
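
Put side by side, the before/after figures on this slide amount to roughly a 2-3x growth in every dimension of the facility (a small sketch; all numbers are taken directly from the slide):

```python
# Facility growth from 2007 to the 2009 expansion, using this slide's figures.
before = {"floor_ft2": 5_000,  "racks": 150, "silos": 6,  "ups_mw": 1.0, "cooling_mw": 1.0}
after  = {"floor_ft2": 13_600, "racks": 300, "silos": 13, "ups_mw": 2.0, "cooling_mw": 2.0}

for key in before:
    growth = after[key] / before[key]
    print(f"{key:11s}: {before[key]:>8,} -> {after[key]:>8,}  (x{growth:.1f})")
```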

  24. Unresolved Issues • Insufficient funds to: • add a 2nd flywheel UPS for CDCE • add a diesel generator to support the additional flywheel UPS units • install an additional 1 MW of cooling capacity (equipment already purchased) • Estimated cost is an additional US $2-3 million • Estimate CDCE will exceed 2 MW of UPS power and cooling by 2012 • Lead time to approve funds and complete pre-installation is 12 months, so a decision is needed by 2011
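
The slide does not spell out how the 2012 estimate was derived, but a simple compound-growth projection illustrates the reasoning and the resulting 2011 decision point. The starting load and the growth rate below are hypothetical assumptions chosen only to make the example concrete; they are not BNL planning figures:

```python
# Illustrative projection of when the ~2 MW UPS/cooling ceiling is reached.
# The starting load and growth rate are ASSUMPTIONS for illustration only.
start_year = 2009
load_kw = 790            # assumed: the 2009 peak load quoted earlier in the talk
annual_growth = 0.40     # hypothetical compound annual growth rate
ceiling_kw = 2_000       # UPS-backed power / cooling available after the expansion
lead_time_years = 1      # ~12 months to approve funds and pre-install (this slide)

year = start_year
while load_kw < ceiling_kw:
    year += 1
    load_kw *= 1 + annual_growth

print(f"~{ceiling_kw} kW ceiling crossed around {year}")        # ~2012
print(f"Funding decision needed by {year - lead_time_years}")   # ~2011
```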

  25. Reason for Optimism? • Multi-core CPUs and increasing storage density have helped restrain what was feared to be unsustainable growth in power and space needs • Rack counts have not increased at the same rate as computing and storage deployments • Somewhat hopeful that continued technological gains will further restrain data center growth

  26. Summary • Facility footprint has nearly tripled since 2007 • Applied lessons learned in the design of Data Center Expansion (Part 2) • Must increase cooling efficiency with new technologies • Rack-top cooling units • Rear-door heat exchanger • Hot-aisle containment • Significant gains in power efficiency and technology (power supplies, multi-core CPUs, etc.) are a positive development, but some unresolved issues remain
