
Green Computing. Omer Rana, o.f.rana@cs.cardiff.ac.uk


Presentation Transcript


  1. Green Computing. Omer Rana, o.f.rana@cs.cardiff.ac.uk

  2. The Need Bill St. Arnaud (CANARIE, Inc)

  3. Impact of ICT Industry Bill St. Arnaud (CANARIE, Inc)

  4. Virtualization Techniques Bill St. Arnaud (CANARIE, Inc)

  5. Virtual Machine Monitors (IBM, 1960s). A thin software layer that sits between the hardware and the operating system, virtualizing and managing all hardware resources. [Diagram: applications running on CMS and MVS guest operating systems on top of IBM VM/370 on an IBM mainframe.] Ed Bugnion, VMware

  6. Old idea from the 1960s • IBM VM/370 – a VMM for the IBM mainframe • Multiple OS environments on expensive hardware • Desirable when few machines were around • Popular research idea in the 1960s and 1970s • Entire conferences on virtual machine monitors • Hardware/VMM/OS designed together • Interest died out in the 1980s and 1990s: • Hardware got cheap • Operating systems got more powerful (e.g. multi-user) Ed Bugnion, VMware

  7. A return to Virtual Machines • Disco: Stanford research project (1996-): • Run commodity OSes on scalable multiprocessors • Focus on high-end: NUMA, MIPS, IRIX • Hardware has changed: • Cheap, diverse, graphical user interfaces • Designed without virtualization in mind • System software has changed: • Extremely complex • Advanced networking protocols • But even today: • Not always multi-user • With limitations, incompatibilities, … Ed Bugnion, VMware

  8. The Problem Today. [Diagram: a single operating system tied directly to the Intel architecture.] Ed Bugnion, VMware

  9. The VMware Solution. [Diagram: two operating systems, each running on its own virtualized Intel architecture.] Ed Bugnion, VMware

  10. VMware™ MultipleWorlds™ Technology. A thin software layer that sits between Intel hardware and the operating system, virtualizing and managing all hardware resources. [Diagram: applications running on Windows 2000, Windows NT and Linux guests on top of the VMware MultipleWorlds layer on the Intel architecture.] Ed Bugnion, VMware

  11. MultipleWorlds Technology – a “world”. [Diagram: Windows 2000, Windows NT and Linux worlds, each running its own applications, on top of the VMware MultipleWorlds layer on the Intel architecture.] A world is an application execution environment with its own operating system. Ed Bugnion, VMware

  12. Virtual Hardware. Devices presented by the virtual machine monitor (VMM): parallel ports, serial/COM ports, floppy disks, Ethernet, keyboard, sound card, mouse, IDE controller, SCSI controller. Ed Bugnion, VMware

  13. Attributes of MultipleWorlds Technology • Software compatibility • Runs pretty much all software • Low overheads/High performance • Near “raw” machine performance • Complete isolation • Total data isolation between virtual machines • Encapsulation • Virtual machines are not tied to physical machines • Resource management Ed Bugnion, VMWare

  14. Hosted VMware Architecture. VMware achieves both near-native execution speed and broad device support by transparently switching between Host Mode and VMM Mode (typically about 1000 mode switches per second). [Diagram: guest applications on a guest operating system inside a virtual machine managed by the virtual machine monitor; the VMware application and VMware driver run alongside host OS applications on the host OS, above the PC hardware (NIC, disks, memory, CPU).] Ed Bugnion, VMware

  15. Hosted VMM Architecture • Advantages: • Installs and runs like an application • Portable – host OS does I/O access • Coexists with applications running on the host • Limits: • Subject to Host OS: • Resource management decisions • OS failures • Usenix 2001 paper: J. Sugerman, G. Venkitachalam and B.-H. Lim, “Virtualizing I/O on VMware Workstation’s Hosted Architecture”. Ed Bugnion, VMWare

  16. Virtualizing a Network Interface. [Diagram: the guest OS NIC driver connects through the VMM, VMDriver and VMApp to a virtual network hub; a virtual bridge over the host OS NIC driver links it to the physical NIC and the physical Ethernet.] Ed Bugnion, VMware

  17. The rise of data centers • A single place for hosting servers and data • ISPs now have their machines hosted at data centers • Run by large companies – like BT • They manage: • Power • Computation + data • Cooling systems • Systems admin + network admin

  18. Data Centre in Tokyo From: Satoshi Matsuoka

  19. http://www.attokyo.co.jp/eng/facility.html

  20. Martin J. Levy (Tier1 Research) and Josh Snowhorn (Terremark)

  21. Martin J. Levy (Tier1 Research) and Josh Snowhorn (Terremark)

  22. Martin J. Levy (Tier1 Research) and Josh Snowhorn (Terremark)

  23. Requirements • Power is an important design constraint: • Electricity costs • Heat dissipation • Two key options in clusters – enable scaling of: • Operating frequency (square relation) • Supply voltage (cubic relation) • Balance QoS requirements – e.g. the fraction of workload to process locally – with power management
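To make the two scaling knobs concrete, here is a minimal sketch that evaluates the conventional dynamic-power model P = ½·C·V²·f (the model slide 28 also uses) at a few operating points. The capacitance constant and the frequency/voltage gear table are illustrative assumptions, not values from the slides.

```python
# Minimal sketch: dynamic CPU power at a few DVFS operating points,
# using the P = 1/2 * C * V^2 * f model from slide 28.
# The capacitance C and the gear table are illustrative assumptions.

C = 4.4e-8  # effective switched capacitance (farads), assumed

# Hypothetical gears: (clock frequency in Hz, supply voltage in volts)
GEARS = [
    (2.0e9, 1.5),
    (1.6e9, 1.4),
    (1.2e9, 1.3),
    (0.8e9, 1.1),
]

def dynamic_power(freq_hz: float, volts: float, cap: float = C) -> float:
    """Dynamic power in watts: P = 1/2 * C * V^2 * f."""
    return 0.5 * cap * volts ** 2 * freq_hz

top_power = dynamic_power(*GEARS[0])
top_freq = GEARS[0][0]
for freq, volts in GEARS:
    power = dynamic_power(freq, volts)
    print(f"{freq / 1e9:.1f} GHz @ {volts:.1f} V: {power:5.1f} W "
          f"({power / top_power:4.0%} of top-gear power at "
          f"{freq / top_freq:4.0%} of top frequency)")
```

With these made-up gears, power falls much faster than frequency because the voltage drops along with the clock, which is the point of having both knobs.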

  24. From: Salim Hariri, Mazin Yousif

  25. From: Justin Moore, Ratnesh Sharma, Rocky Shih, Jeff Chase, Chandrakant Patel, Partha Ranganathan (HP Labs)

  26. Martin J. Levy (Tier1 Research) and Josh Snowhorn (Terremark)

  27. The case for power management in HPC • Power/energy consumption is a critical issue • Energy = heat; heat dissipation is costly • Limited power supply • Non-trivial amount of money • Consequence: • Performance is limited by available power • Fewer nodes can operate concurrently • Opportunity: bottlenecks • A bottleneck component limits the performance of other components • Reduce the power of some components without reducing overall performance • Today, the CPU is: • a major power consumer (~100 W), • rarely the bottleneck, and • scalable in power/performance (frequency & voltage) – power/performance “gears”

  28. Is CPU scaling a win? Two reasons it can be: • Frequency and voltage scaling – the performance reduction is less than the power reduction (dynamic CPU power P = ½CV²f) • Application throughput – the throughput reduction is less than the performance reduction • Assumptions: • the CPU is a large power consumer • the CPU is the driver • diminishing throughput gains [Plots: (1) CPU power vs. performance (frequency); (2) application throughput vs. performance (frequency).]
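The "win" test can be sketched as follows: a lower gear is a win when application throughput drops by less than power, i.e. when energy per unit of work decreases. The per-gear power and throughput figures below are hypothetical placeholders (shaped roughly like a memory-bound application), not measurements from the slides.

```python
# Sketch: is dropping to a lower CPU gear a win? It is when the
# throughput reduction is smaller than the power reduction, i.e. when
# energy per unit of work (power / throughput) goes down.
# All figures below are illustrative assumptions.

GEARS = [
    # (label, CPU power in watts, throughput normalized to the top gear)
    ("2.0 GHz", 99.0, 1.00),
    ("1.6 GHz", 69.0, 0.95),  # memory-bound app: throughput barely drops
    ("1.2 GHz", 45.0, 0.87),
    ("0.8 GHz", 21.0, 0.65),
]

base_power, base_tput = GEARS[0][1], GEARS[0][2]
for label, power, tput in GEARS[1:]:
    power_ratio = power / base_power   # fraction of top-gear power
    tput_ratio = tput / base_tput      # fraction of top-gear throughput
    energy_per_work = power / tput     # relative energy per unit of work
    win = tput_ratio > power_ratio     # throughput falls less than power
    print(f"{label}: power {power_ratio:.0%}, throughput {tput_ratio:.0%}, "
          f"energy/work {energy_per_work:4.1f} -> {'win' if win else 'no win'}")
```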

  29. AMD Athlon-64 • x86 ISA • 64-bit technology • HyperTransport technology – fast memory bus • Performance: • slower clock frequency • shorter pipeline (12 vs. 20 stages) • SPEC2K results: • a 2 GHz AMD-64 is comparable to a 2.8 GHz P4 • P4 better on average by 10% & 30% (INT & FP) • Frequency and voltage scaling: • 2000 – 800 MHz • 1.5 – 1.1 volts From: Vincent W. Freeh (NCSU)

  30. LMBench results • LMBench • Benchmarking suite • Low-level, micro data • Test each “gear” From: Vincent W. Freeh (NCSU)

  31. Operating system functions From: Vincent W. Freeh (NCSU)

  32. Communication From: Vincent W. Freeh (NCSU)

  33. The problem • Peak power limit, P: • rack power • room/utility • heat dissipation • Static solution: the number of servers is N = P/Pmax, where Pmax is the maximum power of an individual node • Problems: • peak power > average power (Pmax > Paverage) • does not use all the power – N · (Pmax − Paverage) goes unused • underperforms – performance is proportional to N • power consumption is not predictable From: Vincent W. Freeh (NCSU)
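A quick sketch of the static rule and its unused headroom, with made-up values for P, Pmax and Paverage (these are not numbers from the slides):

```python
# Sketch of static provisioning: N = P / Pmax, leaving
# N * (Pmax - Paverage) watts unused on average.
# P, P_MAX and P_AVERAGE are illustrative assumptions.

P = 10_000.0       # total power budget for the rack/room (watts)
P_MAX = 250.0      # peak power draw of one node (watts)
P_AVERAGE = 180.0  # average power draw of one node (watts)

N = int(P // P_MAX)               # nodes provisioned for the worst case
unused = N * (P_MAX - P_AVERAGE)  # headroom never used on average

print(f"N = {N} nodes")
print(f"average draw = {N * P_AVERAGE:.0f} W of the {P:.0f} W budget")
print(f"unused on average = {unused:.0f} W ({unused / P:.0%} of the budget)")
```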

  34. Safe over-provisioning in a cluster • Allocate and manage power among M > N nodes • Pick M > N, e.g. M = P/Paverage • Then M·Pmax > P, so enforce a per-node limit Plimit = P/M • Goal: • use more of the power, safely under the limit • reduce the power (and peak CPU performance) of individual nodes • increase overall application performance (a numerical sketch follows slide 36 below) [Plots: node power P(t) over time, showing Pmax, Paverage and Plimit for the static and over-provisioned cases.] From: Vincent W. Freeh (NCSU)

  35. Safe over-provisioning in a cluster • Benefits: • less “unused” power/energy • more efficient power use • more performance under the same power limitation • Let Perf be the full-speed performance of a node and Perf* its performance under the limit; more overall performance means M·Perf* > N·Perf, i.e. Perf*/Perf > N/M = Plimit/Pmax [Plots: node power P(t) over time, with the unused energy between Paverage and Pmax shaded.] From: Vincent W. Freeh (NCSU)

  36. When is this a win? It is a win when Perf*/Perf > N/M = Plimit/Pmax (equivalently Paverage/Pmax) – in words, when the power reduction is greater than the performance reduction. Two reasons this holds: • frequency and voltage scaling • application throughput [Plots: (1) power vs. performance (frequency); (2) application throughput vs. performance (frequency), annotated with Perf*/Perf > Paverage/Pmax.] From: Vincent W. Freeh (NCSU)
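Continuing with the same made-up numbers, here is a sketch of safe over-provisioning and the win test from slides 34-36: pick M = P/Paverage nodes, cap each at Plimit = P/M, and check Perf*/Perf > N/M. The assumed per-node performance ratio at the capped gear (0.85) is illustrative.

```python
# Sketch of safe over-provisioning (slides 34-36): run M = P / Paverage
# nodes, cap each at Plimit = P / M via gear scaling, and check the win
# condition Perf*/Perf > N/M (approximately Plimit/Pmax).
# All numbers are illustrative assumptions.

P = 10_000.0       # total power budget (watts)
P_MAX = 250.0      # peak power of one node (watts)
P_AVERAGE = 180.0  # average power of one node (watts)
PERF_RATIO = 0.85  # assumed per-node performance at the capped gear (Perf*/Perf)

N = int(P // P_MAX)      # static provisioning
M = int(P // P_AVERAGE)  # over-provisioned node count
P_LIMIT = P / M          # per-node power cap

win = PERF_RATIO > N / M
print(f"N = {N}, M = {M}, per-node cap = {P_LIMIT:.0f} W")
print(f"win condition: {PERF_RATIO:.2f} > N/M = {N / M:.2f} ? {win}")
print(f"total performance vs. static cluster: {M * PERF_RATIO / N:.2f}x")
```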

  37. Feedback-directed, adaptive power control • Uses feedback to control power/energy consumption • Given a power goal: • monitor energy consumption • adjust the power/performance of the CPU • Several policies: • average power • maximum power • energy efficiency: select the slowest gear (g) such that … From: Vincent W. Freeh (NCSU)
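The feedback idea can be sketched as a small control loop that keeps a running average of measured power near a goal by stepping the gear down when over budget and back up when under it. This is a minimal threshold-based sketch, not the controller from the slide; read_power_watts and set_gear are assumed platform hooks, and the gear table, smoothing factor and slack are illustrative.

```python
import time
from typing import Callable

# Minimal sketch of feedback-directed power control: track an
# exponentially weighted average of measured power and step the DVFS
# gear to keep it near a power goal. The hooks read_power_watts() and
# set_gear() are assumed platform interfaces, not a real API.

GEARS = [(2.0e9, 1.5), (1.6e9, 1.4), (1.2e9, 1.3), (0.8e9, 1.1)]

def control_loop(read_power_watts: Callable[[], float],
                 set_gear: Callable[[float, float], None],
                 power_goal_watts: float,
                 interval_s: float = 1.0,
                 slack: float = 0.05) -> None:
    gear = 0                # start in the fastest gear
    avg = power_goal_watts  # seed the running average at the goal
    set_gear(*GEARS[gear])
    while True:
        avg = 0.9 * avg + 0.1 * read_power_watts()  # smooth the measurement
        if avg > power_goal_watts * (1 + slack) and gear < len(GEARS) - 1:
            gear += 1       # over budget: shift to a slower gear
            set_gear(*GEARS[gear])
        elif avg < power_goal_watts * (1 - slack) and gear > 0:
            gear -= 1       # under budget: shift back up
            set_gear(*GEARS[gear])
        time.sleep(interval_s)
```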

  38. A more holistic approach: Managing a Data Center CRAC: Computer Room Air Conditioning units From: Justin Moore, Ratnesh Sharma, Rocky Shih, Jeff Chase, Chandrakant Patel, Partha Ranganathan (HP Labs)

  39. From: Justin Moore, Ratnesh Sharma, Rocky Shih, Jeff Chase, Chandrakant Patel, Partha Ranganathan (HP Labs)

  40. Location of Cooling Units: six CRAC units are serving 1000 servers, consuming 270 kW of power out of a total capacity of 600 kW. http://blogs.zdnet.com/BTL/?p=4022
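  (That is, the CRAC units are operating at 270/600 = 45% of their rated cooling capacity.)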

  41. From: Justin Moore, Ratnesh Sharma, Rocky Shih, Jeff Chase, Chandrakant Patel, Partha Ranganathan (HP Labs)

  42. From: Justin Moore, Ratnesh Sharma, Rocky Shih, Jeff Chase, Chandrakant Patel, Partha Ranganathan (HP Labs)

  43. From: Justin Moore, Ratnesh Sharma, Rocky Shih, Jeff Chase, Chandrakant Patel, Partha Ranganathan (HP Labs)

  44. From: Justin Moore, Ratnesh Sharma, Rocky Shih, Jeff Chase, Chandrakant Patel, Partha Ranganathan (HP Labs)

  45. From: Justin Moore, Ratnesh Sharma, Rocky Shih, Jeff Chase, Chandrakant Patel, Partha Ranganathan (HP Labs)

  46. From: Satoshi Matsuoka

  47. From: Satoshi Matsuoka

  48. From: Satoshi Matsuoka

  49. From: Satoshi Matsuoka

  50. From: Satoshi Matsuoka
