
Energy Efficient Ethernet: An Overview


Presentation Transcript


  1. Energy Efficient Ethernet: An Overview • Mike Bennett, Lawrence Berkeley National Lab

  2. Agenda • What is Energy Efficient Ethernet (EEE)? • How we got this far • Progress • What’s next? • Ethernet Alliance as an incubator for EEE

  3. What is EEE? • An idea • Reduce energy consumption during periods of low link utilization • Low-hanging fruit • Minimizes impact on industry • Minimizes impact on the standard • An IEEE 802.3 Study Group • Formed in November 2006 to study the idea • Technical and economic feasibility • Compatibility and distinct identity • Broad market potential

  4. Things to be studied • Rapid PHY Selection • Switch to a lower-speed (lower-power) PHY during periods of low link utilization • How to minimize transition time? • How to avoid thrashing between speeds? • Interaction with higher-layer protocols • Link utilization on servers • Interaction with control policy • But, how did we get to this point?

  5. The road to EEE • Tutorial given in July 2005 to IEEE 802 • Bruce Nordman and Ken Christensen discuss the problem and possible ways to solve it • They propose the idea of Adaptive Link Rate (ALR) • This includes a control policy • This raises awareness within IEEE 802 • Gauge interest between July ’05 and July ’06 • ALR whitepaper posted on EA site August ’06 • http://www.ethernetalliance.org/technology/white_papers/alr_v10.pdf • CFI team forms in August ’06

  6. The CFI – identify the problem • “Big IT” – all electronics • PCs/etc., consumer electronics, telephony • Residential, commercial, industrial • 200 TWh/year • $16 billion/year • Nearly 150 million tons of CO2 per year • Roughly equivalent to 30 million cars! • Notes: numbers represent U.S. only; one central baseload power plant produces about 7 TWh/yr; PCs etc. are digitally networked now, and consumer electronics (CE) will be soon

  7. The CFI – more about the problem • “Little IT” – office equipment, network equipment, servers • 97 TWh/year • 3% of national electricity • 9% of commercial building electricity • Almost $8 billion/year • Breakdown (TWh/year): Network equipment 13, Servers 12, PCs/Workstations 20, Imaging (printers, etc.) 15, Monitors/Displays 22, UPS/Other 16 • However: this is old data (energy use has risen) and it doesn’t include residential IT or networked CE products • Note: Year 2000 data taken from Energy Consumption by Office and Telecommunications Equipment in Commercial Buildings, Volume I: Energy Consumption Baseline, Roth et al., 2002. Available at: http://www.eren.doe.gov/buildings/documents

  8. The CFI – even more about the problem Unrestrained IT power consumption could eclipse hardware costs and put great pressure on affordability, data center infrastructure, and the environment. Source: Luiz André Barroso (Google), “The Price of Performance,” ACM Queue, Vol. 2, No. 7, pp. 48-53, September 2005. (Modified with permission.)

  9. The CFI - motivation to start EEE • Electric utility rebates • Appliances, HVAC systems, lighting, … • … and now PC power supplies (2005) and server computers (2006) • PG&E (California) provides rebates for more energy-efficient servers • Reference: http://www.pge.com/docs/pdfs/biz/rebates/hightech/DataCenters_slides.pdf

  10. EEE – motivation • April 19, 2006: “Green Grid” formed • “A group of technology industry leaders form The Green Grid to help reduce growing power and cooling demands in enterprise datacenters” • Energy efficiency gets U.S. congressional recognition • July 12, 2006: House Resolution 5646 passes • “To study and promote the use of energy efficient computer servers in the United States” • Green500 supercomputer list created Nov. ’06 • To “take energy efficiency into account when ranking supercomputers” • Using flops/watt (total system power) as the metric

  11. EEE – motivation • Energy Star • Requirements coming in 2009 • “All computers shall reduce their network link speeds during times of low data traffic levels in accordance with any industry standards that provide for quick transitions among link rates” • The market for Energy Efficient Ethernet • Driven by customers’ desire to save energy costs • Ethernet is used in markets where saving energy is crucial • Accelerate deployment for new applications • Enables use of incentives by the energy industry • Ultimately these translate to increased demand

  12. The CFI – possible energy savings • Results from rough, first-order measurements of incremental AC power • Typical switch with 24 ports of 10/100/1000 Mb/s • Various computer NICs averaged

  13. The CFI – possible solution • Desktop-to-switch links • Are mostly idle • Lots of very low bandwidth “chatter” • High bandwidth needed for bursts • Bursts are often seconds to hours apart • Server links are also often not fully utilized • Higher speed links offer more opportunity to save energy • This is an area where more data is needed • Evidence of low utilization (desktop users) • LAN link utilization is generally in the range of 1 to 5% [1, 2] • Utilization for the “busiest” user at USF was 4% of 100 Mb/s • [1] A. Odlyzko, “Data Networks are Lightly Utilized, and Will Stay That Way,” Review of Network Economics, Vol. 2, No. 3, pp. 210-237, September 2003. • [2] R. Pang, M. Allman, M. Bennett, J. Lee, V. Paxson, and B. Tierney, “A First Look at Modern Enterprise Traffic,” Proceedings of IMC 2005, October 2005.

  14. The CFI – possible solution • Snapshot of a typical 100 Mb/s Fast Ethernet link • Shows time versus utilization (trace from Portland State Univ.) • Typical bursty usage (utilization = 1.0%)

  15. The CFI – possible solution (desktop PC to switch link) • Control policy determines when to transition the data rate • RPS can support many different control policies • Need to consider, but not define, the control policy • Trade-off of energy saved versus packet delay • Energy savings achieved by operating at the low data rate • Delay occurs during the transition from low to high data rate • Must prevent the link from thrashing • Control policy runs on both ends of the link • Control policy can be based on queue length thresholds (see the sketch below)
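A minimal sketch of one way such a queue-length-threshold policy could look, with a hold timer added to prevent thrashing. The rates, watermark values, hold time, and all names below are illustrative assumptions, not anything the study group has defined:

```python
# Illustrative sketch of a dual-watermark rate-control policy (assumed values).
# One instance of this would run at each end of the link.

LOW_RATE_MBPS = 10
HIGH_RATE_MBPS = 100
HIGH_WATERMARK_BYTES = 32 * 1024   # queue backing up: a burst has arrived
LOW_WATERMARK_BYTES = 0            # queue fully drained: the link is idle again
MIN_HOLD_SECONDS = 1.0             # minimum dwell time at a rate (anti-thrashing)


class RateController:
    def __init__(self):
        self.rate_mbps = LOW_RATE_MBPS
        self.last_change_s = 0.0

    def update(self, now_s: float, queue_bytes: int) -> int:
        """Called periodically with the current output-queue depth; returns
        the data rate the PHY should run at."""
        if now_s - self.last_change_s < MIN_HOLD_SECONDS:
            return self.rate_mbps  # hysteresis: refuse to thrash between rates
        if self.rate_mbps == LOW_RATE_MBPS and queue_bytes > HIGH_WATERMARK_BYTES:
            # A burst is queuing up: pay the ~1 ms transition to get bandwidth.
            self.rate_mbps, self.last_change_s = HIGH_RATE_MBPS, now_s
        elif self.rate_mbps == HIGH_RATE_MBPS and queue_bytes <= LOW_WATERMARK_BYTES:
            # Link has gone quiet: drop back to the low rate to save energy.
            self.rate_mbps, self.last_change_s = LOW_RATE_MBPS, now_s
        return self.rate_mbps
```

The two watermarks plus the dwell time give the hysteresis the slide calls for: the upward and downward decisions are taken at different queue depths, so small fluctuations around a single threshold cannot cause the link to thrash.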

  16. The CFI – possible solution • Speed-change handshake between Device A and Device B (sketched below): • 1. Device A requests that frame transmission be inhibited • 2. Device B acknowledges that frame transmission is inhibited • 3. Both devices change link speed and establish the new data rate (~1 ms) • 4. Both devices allow frame transmission to resume
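A minimal sketch of that exchange as a toy state machine; the class, message names, and the in-line timing note are illustrative assumptions used only to show the ordering of the steps:

```python
# Illustrative model of the speed-change handshake on this slide (assumed API).

from enum import Enum, auto


class State(Enum):
    ACTIVE = auto()      # frame transmission allowed
    INHIBITED = auto()   # frame transmission paused for the speed change
    RETRAINING = auto()  # PHY establishing the new data rate (~1 ms)


class Device:
    def __init__(self, name: str):
        self.name = name
        self.state = State.ACTIVE
        self.rate_mbps = 100

    def request_speed_change(self, peer: "Device", new_rate_mbps: int) -> None:
        """Device A asks its link partner to inhibit frame transmission, then
        both ends change speed and resume."""
        self.state = State.INHIBITED            # 1. request inhibit
        peer.state = State.INHIBITED            # 2. partner acknowledges inhibit
        for dev in (self, peer):                # 3. both ends change link speed
            dev.state = State.RETRAINING
            dev.rate_mbps = new_rate_mbps       #    (~1 ms to establish the rate)
        for dev in (self, peer):                # 4. frame transmission allowed again
            dev.state = State.ACTIVE


a, b = Device("A"), Device("B")
a.request_speed_change(b, new_rate_mbps=10)
print(a.rate_mbps, b.rate_mbps, a.state, b.state)   # 10 10 State.ACTIVE State.ACTIVE
```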

  17. The CFI – possible solution • Snapshot of a typical Ethernet link with simulated RPS • Mean packet delay with RPS is 0.67 ms • Mean packet delay without RPS is 0.12 ms • Simulation parameters: utilization-threshold policy; switching time 1 ms; data rates 100 and 10 Mb/s; low/high queue thresholds 0 KB and 32 KB
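A back-of-the-envelope check of where the extra delay comes from, assuming full-size 1500-byte frames (my assumption; the 0.67 ms figure itself comes from the full simulation, not from this arithmetic):

```python
# Illustrative arithmetic only: serialization times at the two rates, plus the
# 1 ms rate-switch penalty taken from the simulation parameters above.

FRAME_BITS = 1500 * 8      # full-size Ethernet frame payload (assumption)
SWITCH_TIME_MS = 1.0       # switching time used in the simulation

tx_100_ms = FRAME_BITS / 100e6 * 1e3   # 0.12 ms per frame at 100 Mb/s
tx_10_ms = FRAME_BITS / 10e6 * 1e3     # 1.20 ms per frame at 10 Mb/s

print(f"service time at 100 Mb/s: {tx_100_ms:.2f} ms")
print(f"service time at 10 Mb/s:  {tx_10_ms:.2f} ms")
print(f"rate-switch penalty:      {SWITCH_TIME_MS:.2f} ms")
# Frames that arrive while the link sits at 10 Mb/s see the slower service time,
# and the first burst also waits out the ~1 ms switch, which is consistent with
# the simulated mean delay rising from roughly 0.12 ms to 0.67 ms while the link
# spends most of its time idling at the low-power rate.
```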

  18. The CFI – Benefits of EEE • 1 Gb/s • Most NICs and most energy to be saved • Substantial benefits for homes and offices • Battery life benefit for notebooks • 10 Gb/s (copper) • Reduces power burden in data centers • Reduces cooling burden in data centers • May increase switch/router port capacity • Generally… • Provides real economic benefit through energy savings

  19. The CFI – potential savings identified • Assume 100% adoption (U.S. only) • Residential • PCs, network equipment, other • 1.73 to 2.60 TWh/year • $139 to $208 million/year • Commercial (office) • PCs, switches, printers, etc. • 1.47 to 2.21 TWh/year • $118 to $177 million/year • Data centers • Servers, storage, switches, routers, etc. • 0.53 to 1.05 TWh/year • $42 to $84 million/year • Total: $298 to $469 million/year • These figures do not include savings from cooling/power infrastructure

  20. EEE - progress • November Plenary Meeting (CFI) • 33 supporters • 15 from EA member companies • 73 attendees • Passed 65 – 2 – 6 • Many EA members in the room • January Interim Meeting • Met for almost 2 days • 27-30 people attended per day • most were EA members • Heard several presentations • Update on control policies and impact to higher layer protocols • Open questions to the study group regarding scope

  21. EEE - progress • January Interim Meeting • Presentations (continued) • “Strawman” proposal for 5 criteria and objectives • Update on candidate protocols • Impact on clause 28 (auto-negotiation) • Defined some terms • Took several straw polls • People took action items • What’s next?

  22. EEE - what’s next? • Preparing for March • Several people have offered to present • More simulations on control policy • Technical feasibility of Rapid PHY selection • Alternatives to Rapid PHY selection • Work related to Clause 28 (auto-negotiation) • A look at RPS for backplanes

  23. EEE - what’s next? • Preparing for March • What we still need • Traces from networks in different markets • Server network utilization • Home network utilization • Consumer electronics network utilization • How to communicate with upper layers? • Don’t want a network melt-down because of RPS • Economic feasibility of building EEE PHYs • Further study on broad market potential • Ultimately preparing for PAR • Where does the Ethernet Alliance fit in this?

  24. EEE/EA – Technology Incubation • How did EEE get this far? • ALR Whitepaper posted on EA site • CFI Team and supporters • Mostly EA members • First study group meeting contributors and participants • Mostly EA members • EA Mission • The mission is to promote industry awareness, acceptance, and advancement of technology and products based on both existing and emerging IEEE 802 Ethernet standards and their management.

  25. EEE/EA – Technology Incubation • Marketing Activities • Proactive cohesive messaging to external target audiences • Press • Most recently interviewed with CMP/Information Week to promote EEE • Arranged by the EA • Planning to form an EEE technical subcommittee • When the time is right • Bottom line • The EA has been there to help us be successful • Membership has its benefits

  26. EEE/EA – summary • EEE is a classic case of technology incubation by the EA • EA support helped to keep EEE moving in the right direction • CFI preparation • EA marketing support is raising industry awareness • Press • The Ethernet Alliance will be there for us • When we need a place for consensus building among members • When the time comes for testing and demonstrating interoperability

  27. Thanks! • Questions or comments? • mjbennett@ieee.org
