
Reducing Data Center Energy Costs


Presentation Transcript


  1. Reducing Data Center Energy Costs. Presenter: David Rosicke, Manager, Network Engineering at Connecticut Children’s Medical Center.

  2. CT Children’s Medical Center
  • We are one of only two independent children’s hospitals in New England, and one of 59 independent children’s hospitals in the US.
  • We are a 110-bed, eight-story, 351,000-square-foot facility with 15 remote sites, including a school, an ambulatory surgery center, and a Center for Motion Analysis.
  • We handle over 300,000 patient visits and over 10,000 surgeries per year.
  • We manage neonatal intensive care units at three other hospitals in Connecticut.
  • Originally founded in 1898 in Newington, Connecticut; moved to a brand-new facility in 1996.
  • We are making children in Connecticut the healthiest in the country.

  3. Common DC Cooling Issues
  • Common problems seen in raised-floor DCs:
    • Cutting excessively large floor tiles for cable access without installing airflow clamps/brush kits.
    • Removing the rubber tile gaskets.
    • Placing cabinets too close to cooling units.
    • Facing racks and/or cooling equipment the wrong way.
  • Most overlooked and easiest to do:
    • Blocking plates in server cabinets.
    • Solid end-cabinet sides, or covers for vented sides.
  • Worst to fix after the fact:
    • Under-floor cabling that is not in cable trays or troughs.
    • Server cabinets venting hot air into the intake of the next row.

  4. Cooling Design and Airflow
  • Proper airflow techniques can greatly improve energy efficiency for any data center not already using them. Even a shelf of servers in a basement with a window unit in the door can be cooled more efficiently if the airflow is correct and the window unit is mounted high up, not at chest height.
  • We stood up our initial data center (including SAN and two 10 kVA UPSs) in one rack of a network IDF with two 1.5-ton cooling units and lots of cardboard ductwork. We never went above 74 degrees, and that was with one unit cycling on and off.
  • Cardboard works if you don’t have funding for Subzero™ inserts. You can always paint it black!

  5. CT Children’s Data Centers
  • Legacy Data Center (1996)
    • 453 square feet, on slab, in the basement.
    • Eight cabinets, one shelving unit, two open-frame racks, and one four-cabinet ROLM 9751 (Siemens MOD 70).
    • Building cooling plus a suspended, independent 5-ton unit.
    • One Telecom guy with his equipment storage.
  • New Data Center (2011)
    • 1,634 square feet, an on-slab renovation of the old HP/DEC lab.
    • 44 total cabinets: 37 donated Wright Line APW cabinets, four Ortronics DATACAB cabinets, and three 3Par SAN array cabinets.
    • Six 10-ton in-row Liebert units with scroll compressors and variable-speed fans.
    • The UPS/transformer room has suspended glycol-loop cooling with DX backup.
  • DR site (2012)
    • Co-lo with ChimeNet in Wallingford, CT.
    • Three cabinets of leased APC infrastructure, plus one SAN array cabinet.
    • APC infrastructure cooling provided.

  6. Data Center Power Design
  • Most people scale their power and cooling design too high. If you design for growth, use systems that scale along the way without a retrofit:
    • Scroll-compressor cooling.
    • External air inlets where possible.
    • Variable-speed fans can be retrofitted.
    • Multiple smaller units for targeted cooling, including units that cool on exhaust.
  • Constantly review your existing design to see if changes will improve efficiency.
  • A single-UPS design (N+1, not N+N) is usually sufficient if the system is designed properly. The key is to NOT rely on the UPS for 100% of your power source.
  • Run as few step-down transformers as possible: less loss and less heat.
  • Server options for efficiency:
    • Virtualize, and add lots of RAM (virtual or physical) to reduce disk requests.
    • Local SSDs can also reduce power/heat and increase performance.
    • High-efficiency power supplies.
    • Migrate to 208V power. Running 208V can remove step-down transformers, reducing conversion loss and cooling requirements.
    • Higher voltage also reduces wire size. Doubling the voltage doubles the wattage over the same diameter wire at the same current; the same rule can be used to reduce your wire cost (see the sketch below).
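
A minimal sketch of the voltage point above, assuming a hypothetical 30 A branch circuit and the common 80% continuous-load derating (both are illustrative assumptions, not figures from the talk):

```python
# Sketch (illustrative): deliverable power at 120 V vs. 208 V on the same
# conductor ampacity, per P = V * I. The 80% continuous derating and the
# 30 A circuit size are assumptions for the example.

def deliverable_watts(voltage: float, breaker_amps: float, derate: float = 0.8) -> float:
    """Continuous power available from a circuit of a given voltage and ampacity."""
    return voltage * breaker_amps * derate

for volts in (120, 208):
    watts = deliverable_watts(volts, breaker_amps=30)
    print(f"{volts} V / 30 A circuit: ~{watts:,.0f} W continuous")

# 120 V / 30 A circuit: ~2,880 W continuous
# 208 V / 30 A circuit: ~4,992 W continuous
# Same wire gauge, roughly 1.7x the power; stepping 120 V up to 240 V would double it.
```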

  7. Primary DC: Efficient New Design
  • 298 kW total design capacity, 6 kW nominal per cabinet.
  • Actual generator-measured load is 28 kW (see the utilization numbers below).
  • Hot aisle containment includes the tops of cabinets. Cold aisle temperature is maintained at 72 +/- 4 degrees.
  • One 300 kW CAT flywheel UPS with a 600 kW CAT diesel generator.
    • Supports the DC, NOC, and some building systems.
  • 44 cabinets in total, including SAN.
  • Current load is 14 kW on UPS, 19 kW total equipment load.
  • Note: Donated cabinets were modified by us. We reversed the back rails to provide adequate cable management and mounted the PDUs via hinge screws. Rails are set to 24”, with two cabinets set at 25” for special needs.
  • Each cabinet has two L6-30 metered PDUs.
    • One is street/generator, the other is UPS/generator.
  • Every other cabinet has a 20A 120V automatic transfer switch for single-power-supply or 120V-only devices.
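
A quick sketch using the figures on this slide, to make the over-scaling point from the previous slide concrete:

```python
# Sketch: how far below design capacity the new room actually runs.
# All numbers come from the slide; the percentages are just arithmetic.

design_kw = 298          # total design capacity
generator_load_kw = 28   # measured load at the generator
equipment_load_kw = 19   # measured IT equipment load

print(f"Generator load: {generator_load_kw / design_kw:.1%} of design capacity")
print(f"IT equipment:   {equipment_load_kw / design_kw:.1%} of design capacity")
# Generator load: 9.4% of design capacity
# IT equipment:   6.4% of design capacity
```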

  8. Primary Data Center pictures: hot aisle storage rows and hot aisle server rows.

  9. CT Children’s Server Infrastructure
  • 83 physical servers across all three data centers, along with 21 desktops and tower servers. Under 20 total servers in other buildings and departments.
  • 23 VMware 5.0 hosts across all three data centers with approximately 430 guests.
  • Four IBM e-Series hosts with 12 LPARs each.
  • Two 3Par V400 SAN arrays with 87 TB each, plus 3 TB of SSD at the primary data center. 56 TB/52 TB allocated, thin on thin.
  • Two 5 TB Netgear NAS units with a 3 TB EqualLogic array for a cardiology system. NAS at the primary DC, EqualLogic at the legacy data center, all repurposed from another project that has since been virtualized.

  10. L-2 Network Decision Matrix

  11. Network Power Consumption
  The previous slide highlights several items:
  • As manufacturers produce new equipment, it generally gets more energy efficient, even with the addition of PoE+.
  • More features don’t always translate to increased heat load.
  • Most data center L-2 switches don’t need the extra features, just speed and QoS.
  • Switch decisions should be based on reviews and environmental concerns such as front-to-back or back-to-front airflow.
  • Pay attention to noise in both data centers and user locations. Some switch models are so noisy we could not put them in clinics!
  • Pay attention to rated consumption (see the sketch below):
    • A switch rated at 4.5A might only consume 130W.
    • The same switch with PoE enabled: 340W.
    • That is with no PoE load on the switch.
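
A minimal sketch of the rated-versus-measured point, assuming a 120 V input for the nameplate conversion (an assumption for the example; the 130 W and 340 W figures are from the slide):

```python
# Sketch: why sizing cooling from nameplate amps overstates real heat load.
# 1 W of electrical load is about 3.412 BTU/hr of heat.

BTU_PER_WATT = 3.412

def heat_btu_per_hr(watts: float) -> float:
    """Cooling load (BTU/hr) produced by a given electrical draw."""
    return watts * BTU_PER_WATT

nameplate_w = 4.5 * 120    # 540 W if you size from the label (assumes 120 V input)
measured_w = 130           # slide's measured draw, PoE disabled
measured_poe_w = 340       # slide's measured draw, PoE enabled, no PoE load

for label, w in [("nameplate", nameplate_w),
                 ("measured, PoE off", measured_w),
                 ("measured, PoE on", measured_poe_w)]:
    print(f"{label}: {w:.0f} W -> {heat_btu_per_hr(w):,.0f} BTU/hr")
```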

  12. Networking Infrastructure
  • Legacy Data Center
    • Avaya 5698 IST cluster with EqualLogic iSCSI @ 10 Gig.
    • It also holds our new Avaya (Red) telephony infrastructure.
  • Each off-site data center
    • 10 Gbit Brocade MRP ring.
    • 8 Gbit Brocade FC switches supporting the 3Par SAN.
  • All DCs connected via a 10 Gbit managed service.
  • Hospital: 45 Avaya 5XXX-series 48- and 96-port edge switches.
  • Hospital core is an Avaya 8600 IST cluster.
  • 15 remote sites
    • Mostly Avaya 5XXX switches.
    • Some Cisco 3560s. 65 total switches in production.
  • Around 310 Juniper access points in local switching mode running WPA2 AES 802.1X, supporting 5GHz N and 2.4GHz B/G (N disabled).

  13. Legacy DC – Case Study
  • 1996
    • Telecom MDF, originally with an MGE VRLA 80 kW UPS and the ROLM 9751.
    • One network rack, three shelving units for computers. 2 kW load.
  • 1996-2010 – Temp set @ 72 degrees.
    • Added secondary 5-ton independent cooling.
    • Added a second network rack and converted to rack-mount servers.
    • Installed an 80 kW flywheel UPS outside of the room (reduced heat load).
  • End of 2011 – Temp set @ 72, maintains 74 degrees, 6 kW.
    • New cardiology system with an iSCSI NAS and an iSCSI EqualLogic SAN.
    • Added five cabinets for the new phone system and additional networking equipment.
  • Summer 2012 – Building cooling unit down for 5 days.
    • Temperature spiked to 97 degrees.
    • Facilities provided a portable cooling unit with floor fans; temperature dropped to 88 degrees.
    • I installed blocking plates in each cabinet, removed the fans, and stabilized the room at 76 degrees using only the 5-ton and portable cooling.
  • 2014 – 5-ton unit down for 6 hours overnight due to a plugged condensate line; the room stayed at 86 degrees with no additional cooling required.
  • Today: Room maintains 72 degrees with building cooling and the 5-ton unit – 9 kW.

  14. Legacy Data Center. The Telecom guy’s office is behind the ROLM cabinets.

  15. The Case for Virtualization

  16. Remote Connectivity – Cost Savings
  • We have 15 remote sites ranging from houses with six people to full multi-story ambulatory surgical centers.
  • Primary path is a Fibertech Lit Services 1 Gbit ring.
  • Secondary path is Internet-connected Aruba RAP-5 units.
  • OSPF is our routing protocol for redundancy/failover.
  • Larger sites are modeled after the main hospital: Avaya or Cisco L3 switching with Juniper wireless.
  • Smaller sites have L-3 switching too, but if the site is under 1,500 square feet, we enable 5GHz wireless on the Aruba RAP.
  • Remote office practices rely on a single RAP-5, which also provides the wireless. Local analog lines are used for phones for E911 purposes. Credit card machines run IP through the cable modem. Printers and hard-wired PCs connect to the four ports available on the back of the RAP.

  17. Network Environment (IDF/MDF)
  • What’s the average manufacturer-recommended continuous-duty maximum operating temperature for edge switches?
    • 110 degrees F.
  • What’s the average manufacturer-recommended continuous-duty maximum operating temperature for rack-mount UPSs?
    • 82 degrees F.
  • What’s the recommended battery replacement schedule for UPSs kept at 82 degrees with no automatic testing?
    • Two years.
  • What’s the average reduction in a UPS’s battery life for every 15 degrees over 77 degrees?
    • A 50% reduction. That means at 92 degrees continuous duty, you are replacing your batteries every year (see the sketch below).
  • Lesson to learn – Get your UPSs out of uncooled network closets!
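
A minimal sketch of the rule of thumb quoted above: battery life halves for every 15°F of continuous operation above 77°F. The 4-year design life used below is an illustrative assumption, not a figure from the talk:

```python
# Sketch: estimated VRLA battery service life vs. closet temperature,
# using the "halve per 15°F over 77°F" rule from the slide.

def battery_life_years(ambient_f: float, design_life_years: float = 4.0) -> float:
    """Estimated service life; design_life_years is an assumed 77°F baseline."""
    excess = max(0.0, ambient_f - 77.0)
    return design_life_years * 0.5 ** (excess / 15.0)

for temp in (77, 82, 92, 107):
    print(f"{temp}°F: ~{battery_life_years(temp):.1f} years")

# 77°F: ~4.0 years, 82°F: ~3.2, 92°F: ~2.0, 107°F: ~1.0 -- hot closets eat batteries.
```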

  18. The Case for a Centralized UPS System
  • Average rack-mount 3 kVA, 0.6 PF, line-interactive UPS:
    • 80 watts per unit to maintain battery charge.
    • 280 watts per unit consumed when recharging.
    • Heat load when discharging is 4x the maintenance BTU output. So no building power means increased heat load, and it’s usually when the cooling is down.
  • UPS maintenance schedule (batteries and IDF cleaning):
    • Line-interactive units serviced every three years; two years in high-temperature IDFs.
    • Online double-conversion units serviced every four years.
  • Scenario: 11 units (one in each of 9 IDFs plus two in the MDF) were due for replacement at a purchase cost of $27,000. Maintenance cost was $2,124 per unit per service on a three-year schedule. Each unit had external battery packs due to a mandate of 45 minutes of runtime at end of battery life. Only the MDF had cooling.
  • Direct replacement: $73,728, 9 years total life (see the worked numbers below).
    • $27,000 for units in all areas.
    • $23,364 per three-year maintenance cycle, two services performed.
  • Central UPS solution: $53,654, 12 years total life.
    • $50,000 for one online double-conversion UPS on generator power.
    • IDFs would be centrally wired to the MDF.
    • $1,827 per four-year maintenance cycle, two services performed.
  • Total power savings: 280 W while maintaining charge, and no temperature spikes when discharging.
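
A short sketch re-running the cost comparison above. The dollar figures come from the slide itself; grouping the maintenance spend as two services over each option’s life is my reading of the bullet breakdown:

```python
# Sketch: distributed line-interactive UPSs vs. one central double-conversion UPS.

units = 11                  # one per 9 IDFs plus two in the MDF
per_unit_service = 2_124    # maintenance cost per unit, per service

# Option A: direct replacement of distributed units (9-year life, serviced twice).
distributed_service = units * per_unit_service            # $23,364 per service
distributed_total = 27_000 + 2 * distributed_service

# Option B: one central online double-conversion UPS on generator (12-year life).
central_total = 50_000 + 2 * 1_827

print(f"Distributed: ${distributed_total:,} over 9 years (~${distributed_total / 9:,.0f}/yr)")
print(f"Central:     ${central_total:,} over 12 years (~${central_total / 12:,.0f}/yr)")
# Distributed: $73,728 over 9 years (~$8,192/yr)
# Central:     $53,654 over 12 years (~$4,471/yr)
```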

  19. UPS batteries after two years at 90+ degree temperatures.

  20. Questions?
  My contact information:
  David Rosicke
  Manager of Network Engineering
  CT Children’s Medical Center
  drosick@connecticutchildrens.org
  860-837-5868
  http://www.youtube.com/watch?v=CbAFjZ-4ACU
