Energy-efficiency issues in Distributed Cyber-Physical Systems (PowerPoint presentation, 67 slides)

    Presentation Transcript
    1. Energy-efficiency issues in Distributed Cyber-Physical Systems Tridib Mukherjee, IMPACT Lab, Arizona State University

    2. Energy Management Principles
    • Reduce Wastage of Energy
    • Holistic Resource Management
    • Decentralized and Localized Algorithms
    • Energy Scavenging
    • Model-based Design

    3. Energy Efficiency in Data Centers

    4. Typical layout of a datacenter [Figure: raised-floor layout showing heat recirculation between server outlets and inlets, with the resulting temperature distribution] How can resource management decisions be aware of their impact on the temperature distribution, cooling demand, and computing energy requirement?

    5. Coordinated Management Architecture [Figure: layered architecture for synergistic, model-driven, proactive data center management; a Management Plane (workload manager, cooling manager, platform power manager), a Modeling Plane (workload, power, thermal, and cooling models), a Data Collection Plane (server resource utilization, power consumption, and temperature information from ambient sensor networks and workload traces), a Network Plane (Ethernet), and a Physical Plane (server racks and CRAC units on the raised floor of the data center)] T. Mukherjee, A. Banerjee, G. Varsamopoulos, and S. K. S. Gupta, "Spatio-temporal Thermal-Aware Job Scheduling to Minimize Energy Consumption in Virtualized Heterogeneous Data Centers", Elsevier Computer Networks (ComNet), Vol. 53, Issue 17, pp. 2888-2904, December 2009.

    6. Thermal-aware Job Scheduling Problem PROBLEM: Given a set of incoming jobs, find a job schedule (i.e., job start times) and placement (i.e., server assignment) that minimize the total data center energy consumption (cooling energy, governed by the supply temperature upper bound, plus computing energy and job migration overhead) subject to meeting job deadlines; this requires 3D (job x server x time) decision-making. Constraints:
    • Capacity constraint: servers assigned no more than servers available
    • Server requirement: required number of servers assigned to each job
    • Deadline constraint: job finish time no later than its deadline
    • Arrival constraint: job start time no earlier than its submission time
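The four constraints above can be sketched as a feasibility check. This is an illustrative sketch, not the paper's formulation: the job and schedule dictionaries, field names, and discrete time steps are all assumptions made for the example.

```python
# Hypothetical feasibility check for the slide's four scheduling
# constraints. Job fields and the discrete time axis are assumptions.

def schedule_is_feasible(jobs, schedule, num_servers):
    """jobs:     {job_id: dict(arrival, deadline, runtime, servers_needed)}
    schedule: {job_id: dict(start, servers)}  # servers = list of server ids
    """
    busy = {}  # (server, time) -> job occupying that server at that step
    for jid, job in jobs.items():
        start = schedule[jid]["start"]
        servers = schedule[jid]["servers"]
        # Arrival constraint: a job cannot start before it is submitted.
        if start < job["arrival"]:
            return False
        # Deadline constraint: the job must finish by its deadline.
        if start + job["runtime"] > job["deadline"]:
            return False
        # Server requirement: enough servers assigned to the job.
        if len(servers) < job["servers_needed"]:
            return False
        # Capacity constraint: no server runs two jobs at once.
        for s in servers:
            if s >= num_servers:
                return False
            for t in range(start, start + job["runtime"]):
                if (s, t) in busy:
                    return False
                busy[(s, t)] = jid
    return True
```

A schedule passing this check is merely feasible; the optimization objective (cooling plus computing energy) still has to be evaluated on top of it.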

    7. Thermal-aware Job Scheduling Algorithms
    • SCINT: a heuristic (genetic-algorithm) solution. Takes a feasible solution and performs mutations for a set number of iterations; spreads the jobs over time while meeting deadlines. Offline in nature, requiring the job backlog information; takes hours to run.
    • EDF-LRH: tries to mimic the behavior of SCINT by spreading jobs using the Earliest Deadline First (EDF) scheduling approach, placing jobs on the servers contributing the Lowest Recirculated Heat (LRH). Online in nature, maintaining EDF job queues as jobs arrive; takes milliseconds to run.
    • FCFS: does not use a conventional temporal scheduling approach, but applies the same thermal-aware placement (LRH) for energy savings. Online in nature; takes milliseconds to run.
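The EDF-LRH combination described above can be sketched in a few lines. This is a one-shot illustration (no time axis, no deadlines missed), and the per-server heat-recirculation scores are illustrative inputs, not values from the paper.

```python
import heapq

# Minimal sketch of the EDF-LRH idea from the slide: dequeue jobs in
# Earliest-Deadline-First order and place them on the servers that
# contribute the Lowest Recirculated Heat. Recirculation scores are
# illustrative assumptions.

def edf_lrh(jobs, recirculation):
    """jobs: list of (deadline, job_id, servers_needed) tuples;
    recirculation: {server_id: heat-recirculation contribution}.
    Returns {job_id: [server ids]}."""
    # Rank servers once: lowest recirculated heat first.
    ranked = sorted(recirculation, key=recirculation.get)
    heapq.heapify(jobs)  # EDF queue, keyed on the deadline field
    placement, next_free = {}, 0
    while jobs:
        _, jid, need = heapq.heappop(jobs)  # earliest deadline first
        placement[jid] = ranked[next_free:next_free + need]
        next_free += need
    return placement
```

A real scheduler would also release servers when jobs finish and re-run the placement as the queue evolves; the sketch only shows the ordering and ranking that give EDF-LRH its name.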

    8. Power Consumption Over Time Jobs get spread over time (i.e. peak utilization is reduced) by SCINT & EDF-LRH

    9. Total Energy Consumption • SCINT saves up to 60% of energy consumption. • EDF-LRH mimics the behavior of SCINT, especially at low average data center utilization.

    10. Integrating Cooling Behavior in Choice of Servers (Spatial Scheduling) [Figure: three scenarios; each server is labeled with the CRAC thermostat setting it would require (18°C, 20°C, or 22°C)]
    • Non-thermal-aware job management and cooling management: Job1 and Job2 may land on hot servers, so the thermostat stays at 18°C (worst-case setting).
    • Independent thermal-aware job management and cooling management: placement avoids the hottest servers, but the thermostat is still fixed at 18°C (worst-case setting).
    • Coordinated thermal-aware job management and cooling management: placement favors servers that tolerate a higher setting, and the thermostat is raised dynamically to 20°C.

    11. Functional Architecture for Coordinated Job and Cooling Management [Figure: incoming jobs pass through the temporal job scheduler into a scheduled job queue; the spatial job scheduler uses the Highest Thermostat Setting (HTS) server ranking from the HTS algorithm to place jobs on the computing servers, determines the thermostat requirement, and dynamically sets the CRAC thermostat between T_low and T_high; CRAC unit image courtesy www.liebert.com] A. Banerjee, T. Mukherjee, G. Varsamopoulos, and S. K. S. Gupta, "Cooling-Aware and Thermal-Aware Workload Placement for Green HPC Data Centers", International Green Computing Conference (IGCC), Chicago, IL, August 2010. A. Banerjee, T. Mukherjee, G. Varsamopoulos, and S. K. S. Gupta, "Integrating Cooling Awareness with Thermal Aware Workload Placement for HPC Data Centers", Elsevier Computer Networks (ComNet), Special Issue on Virtualized Data Centers, accepted (2011).
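The HTS idea above can be sketched as a ranking plus a thermostat decision. The per-server "required thermostat" values are illustrative assumptions standing in for the model-derived requirements the slide describes.

```python
# Sketch of the Highest Thermostat Setting (HTS) ranking from the
# slide: each server is annotated with the highest CRAC thermostat
# setting that keeps its inlet below the redline if it runs a job;
# placing jobs on servers that tolerate the highest setting lets the
# CRAC run warmer, and hence more efficiently. Values are assumptions.

def hts_rank(required_thermostat):
    """required_thermostat: {server_id: max tolerable thermostat (°C)}.
    Servers tolerating a higher setting are preferred."""
    return sorted(required_thermostat, key=required_thermostat.get,
                  reverse=True)

def thermostat_setting(required_thermostat, used_servers):
    # The CRAC must satisfy the most demanding (lowest) requirement
    # among the servers actually in use.
    return min(required_thermostat[s] for s in used_servers)
```

With this split, the spatial scheduler consumes `hts_rank(...)` and the cooling manager applies `thermostat_setting(...)` to whatever subset of servers ends up busy.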

    12. Important Results [Figure: energy savings (Joules) vs. utilization (5%, 40%, 80%) on a 10 x 10 data center, comparing EDF-HTS, EDF-LRH, and FCFS variants, with and without idle chassis turned off]
    • EDF-HTS achieves up to 16% energy savings over EDF-LRH
    • Changing the CRAC mode can take time to take effect in the room
    • The delay may be too long, since the redline temperature can be reached meanwhile
    • The choice of algorithm therefore depends on the maximum energy savings achievable under the delay constraint

    13. Preliminary Software Architecture [Figure: presentation, scheduling, and control layers accessing data from the chassis-level sensors on the datacenter servers]

    14. Modularized Implementation of Thermal Awareness in Job Scheduling T. Mukherjee, Q. Tang, C. Ziesman, S. K. S. Gupta, and P. Cayton, "Software Architecture for Dynamic Thermal Management in Datacenters", International Conference on Communication System Software and Middleware (COMSWARE), Bangalore, India, January 2007.

    15. Energy Efficiency in Ad Hoc Networks

    16. Mobile Ad hoc Networks (MANETs) [Figure: multi-hop routes generated among nodes A, B, C, D; links are formed and broken with mobility]
    • Network model: mobile nodes (PDAs, laptops, etc.); multi-hop routes between nodes; no fixed infrastructure
    • Applications: battlefield operations, disaster relief, personal area networking
    • Network characteristics: dynamic topology; constrained resources (battery power)

    17. Routing in MANETs
    • Proactive: periodically maintains routes between every mobile node pair; predefined routes available; low latency; low scalability.
    • Reactive: routes are NOT maintained; a route is established only when there is data to transmit; no predefined routes; high latency; high scalability.
    • Hybrid: the network is divided into small zones, with proactive routing within a zone and reactive routing between zones; balances proactive and reactive; scalable; latency higher than proactive.

    18. High Energy Overhead in Maintenance Operations Reduces Applicability (Low Scalability) Overhead of proactive route maintenance, with E = energy consumption per bit transmitted, N = number of nodes, β = beacon interval, φ = route broadcast interval:
    • Periodic beacon messages for link-state maintenance (average beacon message size scales with logN): E x N x logN / β per unit time
    • Periodic route update broadcast: E x N² x logN / φ per unit time
    • Triggered route update broadcast with each link change: E x N² x logN per triggered update
    Goal: reduce maintenance operations and find the optimum β and φ that minimize the energy overhead.
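The overhead terms above can be combined into one expression. This is a direct transcription of the slide's three terms; the numeric inputs and the `triggered_per_sec` rate (link changes per unit time, driven by mobility) are illustrative assumptions.

```python
from math import log2

# Sketch of the slide's maintenance-overhead model: E is the energy per
# transmitted bit, N the number of nodes, beta the beacon interval, and
# phi the periodic route-broadcast interval. The base-2 logarithm and
# all numeric values are assumptions for illustration.

def maintenance_power(E, N, beta, phi, triggered_per_sec=0.0):
    beacon = E * N * log2(N) / beta                      # periodic beacons
    periodic = E * N**2 * log2(N) / phi                  # periodic broadcasts
    triggered = E * N**2 * log2(N) * triggered_per_sec   # per link change
    return beacon + periodic + triggered
```

Note that the beacon and periodic terms fall as β and φ grow, so the intervals cannot simply be made arbitrarily large; the packet-delivery-ratio (PDR) constraint discussed later bounds them from above.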

    19. Proactive Protocol Classification
    • PP+B: employs only beacons (BFST, SS-SPST, etc.)
    • PP+BP: employs beacons and periodic updates (FSR, IARP, etc.)
    • PP+BT: employs beacons and triggered updates (WRP, OLSR, etc.)
    • PP+BTP: employs beacons, periodic, and triggered updates (DSDV, TBRPF, etc.)
    Research Goals:
    • Develop a PP+B type of protocol maintaining energy-efficient routes
    • Use self-stabilization from distributed computing
    • Improve the Self-Stabilizing Shortest Path Spanning Tree (SS-SPST) for energy efficiency
    • Build an analytical model for determining the optimum β and φ for different proactive protocols

    20. Self-stabilization in Distributed Computing Self-stabilizing distributed systems:
    • Guarantee convergence from any invalid state to a valid state through local actions in distributed nodes.
    • Ensure closure: the system remains in the valid state until a fault occurs.
    They can therefore adapt to the topological changes and node failures of MANETs; the question is whether this is feasible for routing in MANETs. Here it is applied to multicasting in MANETs.

    21. Self-stabilizing Multicast for MANETs: Self-Stabilizing Shortest Path Spanning Tree (SS-SPST) [Figure: source-based multicast tree converging after a topological change, based on local actions]
    • Maintains a source-based multicast tree.
    • Actions are based on local information in the nodes and their neighbors.
    • Proactive neighbor monitoring through periodic beacon messages.
    • Neighbor check at each round (with at least one beacon reception from all neighbors).
    • Executes actions only when the neighborhood changes.
    • Problem: energy efficiency is not considered.

    22. Energy Aware Self-Stabilizing Protocol (SS-SPST-E) [Figure: node X selecting between potential parents A and B, with AdditionalCost(B → X) = T_B + R and AdditionalCost(A → X) = T_A + 2R; a loop is detected through E, and F is not yet in the tree]
    • Actions at each node (parent selection): identify potential parents; estimate the additional cost after joining each potential parent; select the parent with minimum additional cost; update the distance to the root.
    • Action triggers: parent disconnection; parent's additional cost no longer minimum; change in the parent's distance to the root. An action executes when any trigger is on.
    • Selecting the parent with minimum additional cost locally yields minimum overall cost.
    • Tree validity: the tree remains connected with no loops.
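The parent-selection action can be sketched as a local minimization. The cost form T_p + k·R follows the slide's AdditionalCost(p → x) expressions; the concrete transmission costs, hop multipliers, and R value are illustrative assumptions.

```python
# Sketch of the SS-SPST-E parent-selection action from the slide: among
# the potential parents of a node, pick the one whose joining incurs the
# minimum additional cost, following the slide's
# AdditionalCost(p -> x) = T_p + k*R form. All numbers are assumptions.

def select_parent(potential_parents, T, link_hops, R):
    """potential_parents: candidate parent ids;
    T: {node: transmission cost T_p};
    link_hops: {node: multiplier k in the k*R term};
    R: per-link cost."""
    def additional_cost(p):
        return T[p] + link_hops[p] * R
    # Local action: each node minimizes only over its own neighborhood.
    return min(potential_parents, key=additional_cost)
```

Mirroring the slide's example for node E (potential parents D and F, with AdditionalCost(F → E) = T_F + 2R and AdditionalCost(D → E) = T_D + 3R), the node with the smaller total wins.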

    23. SS-SPST-E Execution [Figure: source S with nodes A, B, C, D, E, F, G, H and per-link costs]
    • Initially there is no multicast tree: each node's parent is NULL, its hop distance from the root is infinity, its cost is Emax, and no node has potential parents.
    • First round: the source (root) stabilizes; its hop distance from itself is 0 and it incurs no additional cost.
    • Second round: the root's neighbors stabilize; their hop distance is 1 and their parent is the root. Potential parent for A, B, C, D, F = {S}, with AdditionalCost(S → {A, B, C, D}) = T_S + 4R and AdditionalCost(S → F) = T_S + 5R.
    • And so on: potential parents for E = {D, F}, with AdditionalCost(D → E) = T_D + 3R and AdditionalCost(F → E) = T_F + 2R; potential parents for F = {S, C}, with AdditionalCost(C → F) = T_C + 3R.
    • Tolerance to topological changes. Convergence: from any invalid state, the total energy cost of the graph decreases after every round until all nodes are stabilized (proof by induction on the round number). Closure: once all nodes are stabilized, the system stays stabilized until further faults occur.

    24. Simulation Results

    25. Proactive Protocol Classification (recap of slide 19: the PP+B, PP+BP, PP+BT, and PP+BTP classes and the research goals)

    26. Optimum Tuning of Route Maintenance in ad-hoc networks

    27. Optimizations for Different Proactive Protocols
    • PP+B (beacons only: BFST, SS-SPST, etc.) and PP+BT (beacons and triggered updates: WRP, OLSR, etc.): a single tunable variable remains; equating the PDR constraint gives the result.
    • PP+BP (beacons and periodic updates: FSR, IARP, etc.): setting the first derivative to zero yields a quadratic equation.
    • PP+BTP (beacons, periodic, and triggered updates: DSDV, TBRPF, etc.): setting the first derivative to zero yields a quartic equation.

    28. Effect of Optimization on DSDV • Balances energy efficiency and reliability • Incorporates reactivity to traffic intensity in proactive protocols • Increases protocol scalability

    29. Application-aware Adaptive Optimization Sub-layer

    30. Energy Efficiency in Body Sensor Networks

    31. Typical BAN Workload [Figure: sensor CPU utilization over time, showing alternating sensing and transmission phases forming a sleep cycle, a daily security phase, and frequency throttling during the security phase]
    • The Ayushman health monitoring application is considered as the workload. Ayushman has three phases of operation:
    • Sensing phase: sensing of physiological values (plethysmogram signals) from the sensors and storing them in local memory.
    • Transmission phase: sending the stored data to the base station in a single burst.
    • Security phase: network-wide key agreement for secure inter-sensor communication using the Physiological-value-based Key Agreement scheme (PKA); this phase occurs once a day.
    • The sensing and transmission phases alternate, forming a sleep cycle (the processor can sleep during the sensing phase and is active during the transmission phase), which enables sleep scheduling.
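The sleep-cycle structure above admits a simple duty-cycle energy estimate. This is an illustrative sketch; the phase durations and power draws are assumptions, not Ayushman measurements.

```python
# Hypothetical duty-cycle energy estimate for the slide's BAN workload:
# the CPU sleeps during the sensing phase and is active during the
# transmission burst. Phase lengths (seconds) and power levels (watts)
# are illustrative assumptions.

def cycle_energy(sense_s, tx_s, p_sleep, p_active):
    """Energy (joules) consumed over one sensing + transmission cycle."""
    return sense_s * p_sleep + tx_s * p_active

def average_power(sense_s, tx_s, p_sleep, p_active):
    """Mean power draw, which governs how long scavenged energy lasts."""
    return cycle_energy(sense_s, tx_s, p_sleep, p_active) / (sense_s + tx_s)
```

Lengthening the sensing (sleep) phase relative to the burst drives the average power toward `p_sleep`, which is exactly why the sleep-cycle structure matters for sustainability on scavenged energy.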

    32. Sustainability Analysis Results (Atom)
    • Four energy scavenging sources were considered: body heat, ambulation, respiration, and sunlight.
    • The total energy obtained from a combination of scavenging sources can be higher, so a higher number of nodes can be sustained.
    J. A. Paradiso and T. Starner, "Energy scavenging for mobile and wireless electronics", IEEE Pervasive Computing, 4(1):18-27, January-March 2005.

    33. Vision for BAN Design Architecture [Figure: an Analysis & Design Phase (requirements analysis, design) feeds a Modeling Phase (thermal interaction model, node power model, workload model, available energy model), which draws on a Profiling Phase (scavenged energy, node power consumption, physical properties, node temperature); a Management Plane applies MAC-level radio sleep scheduling, processor-level power management, and in-network processing to the Physical Plane]

    34. Research Directions in Energy-efficient Systems
    • Holistic management algorithms: awareness of scavenging and energy storage, of the impact on green operation and sustainability, and of the workload; sustainability metrics; sustainability under real-time requirements.
    • Novel systems and platforms: Internet-of-things, experimental mobile cloud, energy-proportional platforms; benchmark development and performance analysis; design-space exploration in the application domain and evaluation of platforms in the design space to identify gaps.
    • Model-based design: specification language, code optimization, design methodology, safety, tool development.
    • Safe and sustainable operations: equipment longevity, reducing the sustainability footprint.

    35. Questions

    36. Backup Slides

    37. Introduction - Motivation [Figures: projected electricity use of data centers, 2007 to 2011; historical energy use and future energy use projection at the current efficiency trend (source: EPA); typical data center energy end use (source: Department of Energy)]
    • The emergence of cloud-based services caused large growth in data centers, with a high magnitude of data center energy consumption.
    • Internet users' growth in the world from 2000-2009: 400% [http://www.internetworldstats.com/stats.htm]
    • Data center energy consumption grew 20-30% annually in 2006 and 2007 [Uptime Institute research]
    • Addressing energy saving for data centers: thermal and cooling awareness to improve energy consumption.

    38. Spatial job management (job placement) issues [Figure: Coefficient of Performance curve vs. CRAC supply temperature (source: HP)]
    • Temporal job scheduling determines the peak computing resource utilization, leaving room for thermal-aware task placement.
    • Task placement determines the temperature distribution, and the temperature distribution determines the equipment's peak air inlet temperature.
    • The peak air inlet temperature determines the upper bound on the CRAC temperature setting, and the CRAC temperature setting determines its efficiency (Coefficient of Performance): the lower the peak inlet temperature, the higher the CRAC efficiency.
    • Bottom line: there is a task schedule and placement that minimizes the total (cooling + computing) energy consumption. Find it!

    39. Energy Management Approaches [Figure: proactive and reactive solutions arranged along a software dimension (application, middleware, O/S, firmware) and a physical dimension (IC, case/chassis, room)]
    • Application: thermal-aware and cooling-aware data center job scheduling
    • Middleware: thermal-aware VM management, CPU load balancing
    • O/S: dynamic voltage scaling, dynamic frequency scaling, fan speed scaling
    • Firmware: circuitry redundancy

    40. HPC resource management model [Figure: job arrival and execution over time at the ASU HPC data center, with a time index every 5 seconds; the scheduler drives a load dispatcher and on/off control over servers 1..N]
    • Parameters: heat recirculation contribution, computing capabilities of machines, computing power efficiency, number of requests.
    • Job flow: each job arrives with a job id, job data, required number of servers, expected execution times, and server preferences.
    • The Spatio-temporal Job Scheduling and Power Management module decides when, and on which servers, to assign the jobs so that they complete within the expected execution times.

    41. Resource Management in Internet data centers [Figure: HTTP requests per second over time, 1998 FIFA World Cup, with a time index every 5 seconds; the tiers drive a load dispatcher and on/off control over servers 1..N]
    • Parameters: heat recirculation contribution, computing capabilities of machines, computing power efficiency, number of requests.
    • Server Provisioning Tier (epochs): decides how many servers are required for an epoch.
    • Workload Distribution Tier (slots): decides across which servers to distribute the workload so that server utilization does not go beyond a threshold, in order to meet SLAs.
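The two tiers above can be sketched as two small functions. The per-server capacity, threshold, and even-split dispatching are illustrative assumptions; the actual tiers would also weigh heat recirculation and power efficiency when choosing which servers to activate.

```python
from math import ceil

# Sketch of the slide's two-tier Internet data-center manager: the
# provisioning tier sizes the active pool per epoch, and the
# distribution tier spreads requests so no server exceeds a utilization
# threshold (to meet SLAs). Capacity numbers are assumptions.

def servers_needed(requests_per_sec, per_server_capacity, threshold):
    """Provisioning tier: each server may only be loaded up to
    `threshold` (a fraction) of its request capacity."""
    return ceil(requests_per_sec / (per_server_capacity * threshold))

def distribute(requests_per_sec, active_servers, per_server_capacity,
               threshold):
    """Distribution tier: even split across the active pool, checked
    against the SLA threshold."""
    share = requests_per_sec / active_servers
    assert share <= per_server_capacity * threshold, "SLA threshold exceeded"
    return [share] * active_servers
```

For example, 900 requests/s against servers that handle 200 requests/s each, capped at 60% utilization, provisions 8 servers, and the even split then sits safely below the cap.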

    42. Temporal Job Management Issues in HPC data centers
    • Peak utilization must be reduced to leave enough room for thermal-aware job placement; this trades off with resource utilization.
    • Job execution times are usually overestimated during submission.
    • Jobs can be spread over time to reduce peak utilization; this trades off with throughput and turn-around time.

    43. Power Model [Figure: system power P vs. CPU utilization U, rising linearly from P_idle = b to P_peak]
    • Power consumption is mainly affected by the CPU utilization.
    • Power consumption is linear in the CPU utilization: P = aU + b.
    T. Mukherjee, G. Varsamopoulos, S. K. S. Gupta, and S. Rungta, "Measurement-based Power Profiling of Data Center Equipment", Workshop on Green Computing (in conjunction with CLUSTER 2007), Austin, USA, September 2007.
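The linear model P = aU + b can be fitted from measured (utilization, power) samples with ordinary least squares. The sample numbers below are illustrative, not the paper's measurements.

```python
# Least-squares fit of the slide's linear server power model P = a*U + b,
# where U is CPU utilization, b the idle power, and a the
# utilization-dependent slope. Sample values are assumptions.

def fit_linear_power(samples):
    """samples: list of (utilization, power) pairs; returns (a, b)."""
    n = len(samples)
    su = sum(u for u, _ in samples)
    sp = sum(p for _, p in samples)
    suu = sum(u * u for u, _ in samples)
    sup = sum(u * p for u, p in samples)
    # Standard closed-form least-squares slope and intercept.
    a = (n * sup - su * sp) / (n * suu - su * su)
    b = (sp - a * su) / n
    return a, b

def power(u, a, b):
    """Predicted system power at utilization u (0..1)."""
    return a * u + b
```

With two samples, say 100 W idle and 250 W at full load, the fit recovers a = 150 and b = 100, and intermediate utilizations interpolate linearly.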

    44. Linear Thermal Model
    • Heat recirculation coefficients: analytical, matrix-based.
    • Properties of the model: granularity at the air inlets; assumes steadiness of air flow.
    • With power vector P = aU + b, the inlet temperatures are Tin = Tsup + D x P, where D is the heat distribution matrix and Tsup the supplied air temperature.
    • Requiring Max(Tin) <= Tred gives Tsup <= Tred - Max(D x P).
    Q. Tang, T. Mukherjee, S. K. S. Gupta, and P. Cayton, "Sensor-based Fast Thermal Evaluation Model for Energy-efficient High-performance Datacenters", International Conference on Intelligent Sensing and Information Processing (ICISIP 2006), December 2006.
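The linear thermal model and the derived supply-temperature bound can be computed directly. The D matrix, power vector, and redline value below are illustrative assumptions, not coefficients from the paper.

```python
# Sketch of the slide's linear heat-recirculation model: inlet
# temperatures are Tin = Tsup + D x P, where D is the heat-distribution
# matrix and P the per-server power vector. D, P, and Tred below are
# illustrative assumptions.

def inlet_temps(Tsup, D, P):
    """Inlet temperature of each server: supplied air plus the
    recirculated heat contribution D x P."""
    n = len(P)
    return [Tsup + sum(D[i][j] * P[j] for j in range(n)) for i in range(n)]

def max_safe_supply_temp(Tred, D, P):
    """From Max(Tin) <= Tred: Tsup <= Tred - Max(D x P)."""
    n = len(P)
    rise = max(sum(D[i][j] * P[j] for j in range(n)) for i in range(n))
    return Tred - rise
```

This makes the coupling explicit: lowering the worst-case entry of D x P (by thermal-aware placement) raises the permissible supply temperature, which is what the cooling-aware schedulers exploit.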

    45. HPC Workload Model & Conventional Job Scheduling

    46. Temporal Scheduling of Workload: Balancing Utilization Over Time

    47. Choice of Servers (Spatial Scheduling) based on Temperature [Figure: with non-thermal-aware job management, Job1 and Job2 may be placed on a hot server; with thermal-aware job management, they are placed away from it]

    48. Thermal issues in Data Centers (Courtesy: Intel Labs)
    • Heat recirculation: hot air from the equipment air outlets is fed back to the equipment air inlets.
    • Hot spots: an effect of heat recirculation; areas in the data center with alarmingly high temperature.
    • Consequence: cooling has to be set very low to keep all inlet temperatures in the safe operating range.
    • Solution: place jobs so as to minimize heat recirculation.

    49. Energy Consumption
    • Total power = computing power + cooling power: Ptot = P + Pcooling.
    • Cooling power depends on the computing power and the COP: Ptot = P + P/COP(Tsup) = P + P/COP(Tred - max(D x P)).
    • Energy consumption is the total power multiplied by the observed period of time: E = Ptot x time.
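The accounting above can be computed once a COP curve is chosen. The slide only references an HP-sourced COP curve (slide 38); the quadratic form used below, COP(T) = 0.0068 T² + 0.0008 T + 0.458, is the curve commonly cited for an HP chilled-water CRAC unit in this line of work, and should be treated as an assumption here.

```python
# Sketch of the slide's total-energy accounting: cooling power is the
# computing power divided by the CRAC's Coefficient of Performance,
# which improves with a higher supply temperature. The quadratic COP
# curve is an assumption (the HP chilled-water CRAC form often used
# with this model), with T in degrees Celsius.

def cop(tsup):
    return 0.0068 * tsup**2 + 0.0008 * tsup + 0.458

def total_energy(P_compute, tsup, seconds):
    """E = Ptot x time, with Ptot = P + P / COP(Tsup)."""
    p_total = P_compute + P_compute / cop(tsup)
    return p_total * seconds
```

The curve makes the incentive concrete: raising Tsup raises COP, so the same computing load buys a smaller cooling term, which is why thermal-aware placement (which permits a higher Tsup) reduces total energy.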

    50. Instrumentation [Figure: on-site set-up with a DualCom power meter between the 208 V power supply and the chassis, and remote power meter reading over the network via SNMP control] T. Mukherjee, G. Varsamopoulos, S. K. S. Gupta, and S. Rungta, "Measurement-based Power Profiling of Data Center Equipment", First International Workshop on Green Computing (in conjunction with CLUSTER 2007), Austin, USA, September 2007.