
CIT 470: Advanced Network and System Administration




  1. CIT 470: Advanced Network and System Administration Data Centers

  2. Topics Data Center: A facility for housing a large amount of computer or communications equipment. • Racks • Power • PUE • Cooling • Containers • Economics

  3. Google DC in The Dalles Located near a 3.1 GW hydroelectric power station on the Columbia River.

  4. Google DC in The Dalles

  5. Inside a Data Center

  6. Inside a Container Data Center

  7. A data center is composed of: • A physically safe and secure space • Racks that hold computer, network, and storage devices • Electric power sufficient to operate the installed devices • Cooling to keep the devices within their operating temperature ranges • Network connectivity throughout the data center and to places beyond

  8. Data Center Components

  9. Data Center Tiers See http://uptimeinstitute.org/ for more details about tiers.

  10. Racks: The Skeleton of the DC • 19” rack standard • EIA-310-D, among other standard numbers • NEBS 21” racks for telecom equipment • 2-post or 4-post • Air circulation (fans) • Cable management • Doors or open frame

  11. Rack Units
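For reference, the rack-unit arithmetic is simple; here is a minimal sketch (the 1U = 1.75 in figure is the standard EIA value, and the 42U rack height is just an illustrative example):

```python
# One rack unit (1U) is 1.75 inches (44.45 mm) of vertical mounting space.
RU_INCHES = 1.75

def usable_inches(rack_units):
    """Vertical mounting space available in a rack of the given height in U."""
    return rack_units * RU_INCHES

print(usable_inches(42))   # 73.5 inches of mounting space in a 42U rack
```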

  12. Rack Sizes http://www.gtweb.net/rackframe.html

  13. Rack Purposes Organize equipment • Increase density with vertical stacking. Cooling • Internal airflow in rack cools servers. • Data center airflow determined by arrangement of racks. Wiring Organization • Cable guides keep cables within racks.

  14. Rack Power Infrastructure • Different power sockets can be on different circuits • Individual outlet control (remote power cycling) • Current monitoring and alarms • Network managed (web or SNMP)

  15. Rack-Mount Servers (images of 1U and 4U servers)

  16. Blade Servers

  17. Buying a Rack Buy the right size • Space for servers, power, patch panels, etc. Be sure it fits your servers. • Appropriate mounting rails. • Shelves for non-rack servers. Environment options • Locking front and back doors. • Sufficient power and cooling. • Power/environment monitors. • Console if needed.

  18. Space Aisles • Wide enough to move equipment. • Separate hot and cold aisles. Hot spots • Result from poor air flow. • Servers in hot spots can overheat even when the average room temperature is low. Work space • A place for SAs to work on servers. • Desk space, tools, etc. Capacity • Room to grow.

  19. Data Center Power Distribution http://www.42u.com/power/data-center-power.htm

  20. UPS (Uninterruptible Power Supply) • Provides emergency power when utility power fails • Most use batteries to store energy • Conditions power, removing voltage spikes

  21. Standby UPS • Power will be briefly interrupted during switch • Computers may lockup/reboot during interruption • No power conditioning • Short battery life • Very inexpensive http://myuninterruptiblepowersupply.com/toplogy.htm

  22. Online UPS • AC -> DC -> AC conversion design • True uninterrupted power without switching • Extremely good power conditioning • Longer battery life • Higher price http://myuninterruptiblepowersupply.com/toplogy.htm
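To make the battery-life tradeoffs above concrete, here is a minimal sketch of estimating UPS runtime from battery capacity and load; all of the numbers (battery capacity, load, inverter efficiency, usable depth of discharge) are illustrative assumptions, not figures from the slides:

```python
# Rough UPS runtime estimate (all inputs are illustrative assumptions).
def ups_runtime_minutes(battery_wh, load_w, inverter_eff=0.9, usable_fraction=0.8):
    """Estimate runtime: usable battery energy delivered through the inverter."""
    usable_wh = battery_wh * usable_fraction        # avoid fully discharging the batteries
    return 60 * usable_wh * inverter_eff / load_w   # hours -> minutes

# Example: 2,000 Wh of batteries feeding a 1,500 W rack load.
print(f"{ups_runtime_minutes(2000, 1500):.1f} minutes")   # ~57.6 minutes
```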

  23. Power Distribution Unit (PDU) Takes a high-voltage feed and divides it into many 110/120 V circuits that feed servers. • Similar to the breaker panel in a house.

  24. Estimating Per-Rack Power
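As a rough sketch of how a per-rack power estimate might be put together (the server count, per-server wattage, supply voltage, and circuit ratings below are illustrative assumptions, not values from the course):

```python
# Per-rack power estimate (all inputs are illustrative assumptions).
servers_per_rack   = 40      # 1U servers in a 42U rack, leaving room for switches
watts_per_server   = 350     # measured or nameplate draw per server
network_gear_watts = 300     # top-of-rack switches, console gear, etc.

rack_watts = servers_per_rack * watts_per_server + network_gear_watts
amps_208v  = rack_watts / 208                 # current on a 208 V feed
circuits   = rack_watts / (208 * 30 * 0.8)    # 30 A circuits derated to 80% continuous load

print(f"{rack_watts} W, {amps_208v:.1f} A @ 208 V, needs {circuits:.1f} 30 A circuits")
```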

  25. The Power Problem • 4-year power cost = server purchase price. • Upgrades may have to wait until more electrical capacity is available. • Power is a major data center cost: • $5.8 billion for server power in 2005. • $3.5 billion for server cooling in 2005. • $20.5 billion for purchasing hardware in 2005.

  26. Measuring Power Efficiency • PUE: ratio of total building power to IT power; measures the efficiency of the data center's building infrastructure. • SPUE: ratio of total server input power to its useful power, where useful power is the power consumed by CPU, DRAM, disk, motherboard, etc.; excludes losses due to power supplies, fans, etc. • Computation efficiency: depends on software and workload; measures useful work done per watt.

  27. Power Usage Effectiveness (PUE) PUE = Data center power / IT equipment power • PUE = 2 indicates that for each watt used to power IT equipment, one watt is used for HVAC, power distribution, etc. • Decreases towards 1 as the DC becomes more efficient. PUE variation • Industry average > 2 • Microsoft = 1.22 • Google = 1.19
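To see what these ratios mean in practice, here is a small sketch; the PUE values are the ones quoted on the slide, while the kilowatt figures are made up for illustration:

```python
# PUE = total data center power / IT equipment power.
def pue(total_kw, it_kw):
    return total_kw / it_kw

# Illustrative facility drawing 2,400 kW total to run 1,200 kW of IT load.
print(pue(2400, 1200))   # 2.0 -> one watt of overhead for every watt of IT power

# Overhead watts per IT watt for the PUE values quoted on the slide.
for name, value in [("industry avg", 2.0), ("Microsoft", 1.22), ("Google", 1.19)]:
    print(f"{name}: {value - 1:.2f} W overhead per IT watt")
```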

  28. Data Center Energy Usage

  29. Sources of Efficiency Losses UPS • 88-94% efficiency • Less if lightly loaded PDU voltage transformation • 0.5% loss or less Cables from PDU to racks • 1-3% loss depending on distance and cable type Computer Room Air Conditioning (CRAC) • Delivery of cool air over long distances uses fan power and increases air temperature
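Chaining these losses shows how they compound on the way from the utility feed to the racks; a minimal sketch using loss figures picked from the ranges above:

```python
# Fraction of utility power that actually reaches the servers,
# using figures from the ranges on the slide (pessimistic picks).
ups_eff    = 0.88    # UPS efficiency
pdu_loss   = 0.005   # PDU voltage transformation loss
cable_loss = 0.03    # PDU-to-rack cabling loss

delivered = ups_eff * (1 - pdu_loss) * (1 - cable_loss)
print(f"{delivered:.1%} of input power reaches the racks")   # ~84.9%
```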

  30. Cooling a Data Center • Keep temperatures within 18-27 °C • Cooling equipment is rated in BTUs • 1 kW ≈ 3,412 BTUH (BTUH = British Thermal Units per hour) • Keep humidity between 30-55% • High = condensation • Low = static shock • Avoid hot/cold spots • Cold spots can produce condensation

  31. Computer Room Air Conditioning • Large scale, highly reliable air conditioning units from companies like Liebert. • Cooling capacity measured in tons.

  32. Waterworks for Data Center

  33. Estimating Heat Load
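A minimal sketch of the heat-load arithmetic, assuming essentially all electrical power drawn by the equipment becomes heat; the rack count and per-rack draw are illustrative assumptions:

```python
# Heat load estimate: essentially all electrical power becomes heat.
BTU_PER_HR_PER_KW = 3412        # 1 kW of load ~ 3,412 BTU/hr of heat to remove
TON_OF_COOLING_BTU_HR = 12000   # 1 "ton" of cooling capacity = 12,000 BTU/hr

racks = 20
kw_per_rack = 8                 # illustrative per-rack draw

heat_btu_hr = racks * kw_per_rack * BTU_PER_HR_PER_KW
print(f"{heat_btu_hr:,} BTU/hr ≈ {heat_btu_hr / TON_OF_COOLING_BTU_HR:.1f} tons of cooling")
```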

  34. Hot-Cold Aisle Architecture • Servers draw intake air from cold aisles • Servers exhaust air into hot aisles • Improves efficiency by reducing mixing of hot and cold air

  35. Free Cooling • Cooling towers dissipate heat by evaporating water, reducing or eliminating need to run chillers • Google Belgium DC uses 100% free cooling

  36. Improving Cooling Efficiency Air flow handling: Hot air exhausted by servers does not mix with cold air, and the path to the cooling coil is very short, so little energy is spent moving air. Elevated cold aisle temperatures: Cold aisles of containers are kept at 27 °C rather than 18-20 °C. Use of free cooling: In moderate climates, cooling towers can eliminate the majority of chiller runtime.

  37. Server PUE (SPUE) Primary sources of inefficiency • Power Supply Unit (PSU): 70-75% efficiency • Voltage Regulator Modules (VRMs): can lose more than 30% of power in conversion losses • Cooling fans: software can reduce fan RPM when not needed SPUE ratios of 1.6-1.8 are common today
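Combining PUE and SPUE (as defined on the earlier slide) gives the fraction of utility power that reaches the components doing useful work; a sketch with illustrative values in the ranges above:

```python
# Fraction of utility power that reaches the components doing useful work.
pue  = 1.5    # building overhead (illustrative value)
spue = 1.7    # server overhead: PSU, VRM, fan losses (mid-range of 1.6-1.8)

useful_fraction = 1 / (pue * spue)
print(f"{useful_fraction:.1%} of utility power reaches CPU/DRAM/disk")   # ~39%
```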

  38. Power Supply Unit Efficiency 80 PLUS initiative to promote PSU efficiency • 80+% efficiency at 20%, 50%, 100% of rated load • Can be less than 80% efficient at idle power load First 80 PLUS PSU shipped in 2005
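To see what a PSU efficiency figure means at the wall, here is a small sketch; the DC load and efficiency points are illustrative, roughly spanning the pre- and post-80 PLUS range:

```python
# Wall power drawn for a given DC load at different PSU efficiencies.
def wall_watts(dc_load_w, psu_efficiency):
    return dc_load_w / psu_efficiency

for eff in (0.70, 0.80, 0.90):
    print(f"300 W DC load at {eff:.0%} efficiency -> {wall_watts(300, eff):.0f} W at the wall")
```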

  39. Server Useful Power Consumption The best method to determine power usage is to measure it https://www.wattsupmeters.com/

  40. Server Utilization ~10-50% It is surprisingly hard to achieve high levels of utilization of typical servers (and your home PC is even worse). "The Case for Energy-Proportional Computing," Luiz André Barroso, Urs Hölzle, IEEE Computer, December 2007. Figure 1. Average CPU utilization of more than 5,000 servers during a six-month period. Servers are rarely completely idle and seldom operate near their maximum utilization, instead operating most of the time at between 10 and 50 percent of their maximum.

  41. Server Power Usage Range: 50-100% "The Case for Energy-Proportional Computing," Luiz André Barroso, Urs Hölzle, IEEE Computer, December 2007. Energy efficiency = Utilization / Power. Figure 2. Server power usage and energy efficiency at varying utilization levels, from idle to peak performance. Even an energy-efficient server still consumes about half its full power when doing virtually no work.
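The energy-proportionality argument can be seen with a short sketch, assuming a server that draws about half of its peak power when idle (in line with the figure's description) and whose power scales linearly with utilization:

```python
# Energy efficiency = utilization / power, for a server whose power
# scales linearly from 50% (idle) to 100% (peak) of its maximum draw.
def relative_power(utilization, idle_fraction=0.5):
    return idle_fraction + (1 - idle_fraction) * utilization

for u in (0.1, 0.3, 0.5, 1.0):
    eff = u / relative_power(u)
    print(f"utilization {u:.0%}: efficiency {eff:.2f} (peak = 1.00)")
```

At the 10-50% utilization levels typical of the servers in Figure 1, this model gives efficiencies of only about 0.18-0.67 of peak, which is the paper's case for energy-proportional hardware.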

  42. Server Utilization vs. Latency (figure: latency vs. utilization, 0-100%)

  43. Improving Power Efficiency

  44. Improving Power Efficiency Application consolidation • Reduce the number of applications by eliminating old applications in favor of new ones that can serve the purpose of multiple old ones. • Allows elimination of old app servers. Server consolidation • Use a single DB for multiple applications. • Move light services like NTP onto shared boxes. Use SAN storage • Local disks are typically highly underused. • Use a SAN so servers share a single storage pool.

  45. Improving Power Efficiency Virtualization • Host services on VMs instead of on physical servers • Host multiple virtual servers on a single physical server Only-as-needed servers • Power down servers when not in use • Works best with cloud computing Granular capacity planning • Measure computing needs carefully • Buy minimal CPU, RAM, and disk configurations based on your capacity measurements and forecasts

  46. Containers Data Center in a shipping container. • 4-10X normal data center density. • 1000s of servers. • 100s of kW of power. Advantages • Efficient cooling • High server density • Rapid deployment • Scalability Vendor offerings: http://www.datacentermap.com/blog/datacenter-container-55.html

  47. Microsoft Chicago Data Center

  48. Google Container Patents • Containers docked at a central power spine • Container air flow diagram, with a center cold aisle and hot air return behind the servers • Vertical stack of containers

  49. Data Center Failure Events

  50. Key Points Data center components • Physically secure space • Racks, the DC skeleton • Power, including UPS and PDU • Cooling • Networking Power efficiency (4-year power cost ≈ server purchase price on average) • PUE = Data center power / IT equipment power • Most power in a traditional DC goes to cooling and UPS • SPUE = Server PUE; inefficiencies from PSU, VRMs, fans Cooling • Heat load estimation • Air flow control (hot/cold aisle architecture or containers) • Higher cold air temperatures (27 °C vs. 20 °C) • Free cooling (cooling towers) TCO = DC depreciation + DC opex + server depreciation + server opex
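As a rough sketch of the TCO formula above, with all dollar figures and amortization periods being illustrative assumptions:

```python
# TCO = DC depreciation + DC opex + server depreciation + server opex
# Monthly view with illustrative numbers.
dc_build_cost, dc_life_months    = 12_000_000, 144   # facility amortized over 12 years
server_cost, server_life_months  =  4_000_000,  36   # servers amortized over 3 years
dc_opex_per_month     = 80_000    # staff, maintenance, ...
server_opex_per_month = 90_000    # mostly power and cooling

tco_per_month = (dc_build_cost / dc_life_months + dc_opex_per_month
                 + server_cost / server_life_months + server_opex_per_month)
print(f"${tco_per_month:,.0f} per month")
```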
