
Presentation Transcript


  1. Agenda • Motivations • Energy-Aware Resource Allocation Framework • Process, Infrastructure, and Control Layers • Preliminary Results • Conclusions and Future Work

  2. Energy Management in Service Centers • Energy consumption accounts for 2% of CO2 emissions • By 2012 energy costs will be 40% of TCO • Related costs: cooling, UPS, … • QoS guarantees and workload variability • Dynamic resource management

  3. Motivations • Current work in sustainable and energy-aware computing suggests providing services with a trade-off between performance and energy consumption • Service center energy-efficiency efforts: server consolidation, server virtualization • Storage remains a gaping hole in the enterprise service center: the same principles that govern server energy savings should be applied to the storage sub-system as well

  4. Our work • Develop novel energy-aware resource allocation mechanisms and policies for SOA and business process-based applications via an interdisciplinary approach • Goal: provide services with QoS guarantees while minimizing the energy consumption of the computing infrastructure

  5. Active Energy Aware Framework

  6. Active Energy Aware Framework

  7. Process Layer • Manages business process end-user applications • In advanced SOA systems, complex applications are described as abstract business processes which are executed by invoking a number of available Web services • End users can specify different preferences and constraints, and service selection can be performed by dynamically identifying the best set of services available at run time • Web service components are characterized by QoS profiles and energy cost • Maximization of the QoS for the end user
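
As a concrete illustration of the per-service information involved, the sketch below shows a hypothetical QoS/energy profile for a candidate Web service and a simple weighted utility used to rank candidates. The field names, weights, and figures are invented for illustration and are not the framework's actual schema.

```python
# Hypothetical sketch: a QoS/energy profile for a candidate Web service and a
# simple utility score trading QoS attributes against energy cost.
from dataclasses import dataclass

@dataclass
class ServiceProfile:
    name: str
    response_time: float   # seconds (lower is better)
    availability: float    # probability in [0, 1] (higher is better)
    price: float           # monetary cost per invocation
    energy_cost: float     # Joules per invocation (assumed metric)

def utility(p: ServiceProfile, w_rt=0.4, w_av=0.3, w_pr=0.2, w_en=0.1) -> float:
    """Weighted utility; negative-polarity attributes enter with a minus sign."""
    return (w_av * p.availability
            - w_rt * p.response_time
            - w_pr * p.price
            - w_en * p.energy_cost)

candidates = [
    ServiceProfile("PaymentA", 0.30, 0.999, 0.05, 2.0),
    ServiceProfile("PaymentB", 0.12, 0.990, 0.09, 3.5),
]
print(max(candidates, key=utility).name)   # highest-utility candidate
```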

  8. Process Layer • Web service selection results in an optimization problem whose goal is to optimize a single process instance • Performance issues are usually not considered and energy consumption has always been neglected • QoS optimization does not analyze the process efficiency in terms of accesses and management of business objects • Data deduplication techniques can be applied in order to identify and merge different copies of the same object • Green IT calls for a new approach to data management
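
The following minimal sketch illustrates the duplicate-identification idea at the business-object level: objects with identical canonicalised content get the same fingerprint and are merged into a single representative copy. Canonicalisation via sorted JSON is an assumption made here purely for illustration.

```python
# Minimal sketch of object-level duplicate detection and merging.
import hashlib
import json

def fingerprint(obj: dict) -> str:
    """Hash a canonical serialisation so equal objects share one fingerprint."""
    canonical = json.dumps(obj, sort_keys=True).encode("utf-8")
    return hashlib.sha256(canonical).hexdigest()

def merge_duplicates(objects: list) -> dict:
    """Return one representative business object per distinct fingerprint."""
    unique = {}
    for obj in objects:
        unique.setdefault(fingerprint(obj), obj)
    return unique

orders = [{"id": 1, "item": "disk"}, {"item": "disk", "id": 1}, {"id": 2, "item": "cpu"}]
print(len(merge_duplicates(orders)))   # 2 distinct objects remain
```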

  9. Process Layer • In [1], we have proposed an optimization technique for QoS maximization based on mixed integer linear programming • The approach was demonstrated to be efficient under stringent constraints and for large process instances • In current work we are extending the solution to explicitly include energy issues and object replica management in the QoS evaluation
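
A hedged sketch of such a mixed integer linear program, written with the PuLP library, is given below: one candidate is selected per abstract task so that aggregate QoS is maximised under response-time and energy budgets. The tasks, candidates, and bounds are invented; the formulation in [1] is considerably richer.

```python
# Illustrative MILP for service selection (data and budgets are invented).
import pulp

# (name, qos, response time in s, energy in J) per candidate
tasks = {
    "pay":  [("PayA", 5.0, 0.30, 2.0), ("PayB", 7.0, 0.12, 3.5)],
    "ship": [("ShipA", 4.0, 0.50, 1.0), ("ShipB", 6.0, 0.20, 2.5)],
}
RT_BUDGET, ENERGY_BUDGET = 0.6, 5.0   # assumed end-to-end budgets

prob = pulp.LpProblem("service_selection", pulp.LpMaximize)
x = {(t, c[0]): pulp.LpVariable(f"x_{t}_{c[0]}", cat="Binary")
     for t, cands in tasks.items() for c in cands}

# Objective: total QoS of the selected services
prob += pulp.lpSum(c[1] * x[t, c[0]] for t, cands in tasks.items() for c in cands)

# Exactly one candidate per abstract task
for t, cands in tasks.items():
    prob += pulp.lpSum(x[t, c[0]] for c in cands) == 1

# Response-time and energy budgets for a sequential process
prob += pulp.lpSum(c[2] * x[t, c[0]] for t, cands in tasks.items() for c in cands) <= RT_BUDGET
prob += pulp.lpSum(c[3] * x[t, c[0]] for t, cands in tasks.items() for c in cands) <= ENERGY_BUDGET

prob.solve(pulp.PULP_CBC_CMD(msg=False))
print([key for key, var in x.items() if var.value() == 1])
```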

  10. Infrastructure Layer • Focuses on workload variations and on the trade-off between the performance of Web service components and energy consumption • Web service components invoked by business processes are mapped to multi-tier server applications which are currently executed by independent Virtual Machines • Each VM is usually dedicated to serve a single application

  11. Infrastructure Layer • Autonomic self-managing techniques are currently implemented by network controllers which can establish: • Application placement: the set of applications (VMs) executed by each server • Load balancing: the request volumes at various servers • Capacity allocation: the capacity devoted to the execution of each application (VM) at each server • Server provisioning: decide to turn on or off servers depending on the system load • Frequency scaling: reduce the frequency of operation of servers • Goal: maximize the SLA profits (including revenues and penalties), while balancing the cost of using resources (including energy and air conditioning)
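
To make the objective concrete, the sketch below scores a candidate configuration under assumed models: a linear power-versus-utilisation curve per server, M/M/1 response times per VM, and profit computed as SLA revenue minus penalties minus energy cost. All constants are illustrative placeholders, not measured values from the framework.

```python
# A minimal sketch, under assumed models, of scoring one infrastructure
# configuration: SLA profit = revenue - response-time penalties - energy cost.
P_IDLE, P_MAX = 120.0, 300.0                 # Watts at idle / full utilisation
ENERGY_PRICE = 0.15 / 3.6e6                  # $/Joule (i.e. 0.15 $/kWh)
REVENUE, PENALTY, RT_SLA = 1e-4, 5e-4, 0.5   # $/req, $/violating req, seconds

def server_power(utilisation: float, on: bool) -> float:
    """Linear power model for a powered-on server; zero when switched off."""
    return (P_IDLE + (P_MAX - P_IDLE) * utilisation) if on else 0.0

def vm_response_time(arrival_rate: float, capacity: float) -> float:
    """M/M/1 approximation; capacity is requests/s granted to the VM."""
    return float("inf") if arrival_rate >= capacity else 1.0 / (capacity - arrival_rate)

def profit_per_second(placement):
    """placement: list of servers, each a list of (arrival_rate, capacity) VMs."""
    total = 0.0
    for vms in placement:
        util = min(1.0, sum(lam / cap for lam, cap in vms)) if vms else 0.0
        total -= server_power(util, on=bool(vms)) * ENERGY_PRICE
        for lam, cap in vms:
            rt = vm_response_time(lam, cap)
            total += lam * (REVENUE if rt <= RT_SLA else -PENALTY)
    return total

print(profit_per_second([[(40.0, 60.0), (10.0, 20.0)], []]))   # second server off
```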

  12. Infrastructure Layer • In [2] we have designed resource allocation techniques for the management of multi-tier virtualized systems • Allocation policies provide a joint solution to the server provisioning, frequency scaling, VM placement, load balancing, and capacity allocation problems • The joint problem has been formalized as a mixed integer nonlinear programming problem • The problem is NP-hard and the inclusion of energy costs in the objective function makes its solution very challenging • Heuristic approach based on local search (sketched below)
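
The local-search idea can be sketched as follows: starting from a feasible VM placement, single-VM migration moves are tried and kept whenever they improve the objective. The objective used here (prefer fewer powered-on servers and a lower peak load) is only a stand-in for the SLA-profit function formalized in [2].

```python
# Hedged sketch of a local-search heuristic over VM placements.
import random

def objective(placement):
    """Toy stand-in objective: fewer powered-on servers, lower peak load."""
    on_servers = sum(1 for vms in placement if vms)
    peak_load = max((sum(vms) for vms in placement if vms), default=0.0)
    return -(10.0 * on_servers + peak_load)

def local_search(placement, iterations=1000, seed=0):
    rng = random.Random(seed)
    best, best_val = [list(s) for s in placement], objective(placement)
    for _ in range(iterations):
        cand = [list(s) for s in best]
        src = rng.randrange(len(cand))
        if not cand[src]:
            continue
        dst = rng.randrange(len(cand))
        vm = cand[src].pop(rng.randrange(len(cand[src])))
        cand[dst].append(vm)                 # single-VM migration move
        val = objective(cand)
        if val > best_val:                   # keep only improving moves
            best, best_val = cand, val
    return best

# Three servers with VM CPU loads; the search consolidates onto fewer servers.
print(local_search([[0.4, 0.3], [0.2], [0.1]]))
```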

  13. Infrastructure Layer • Energy efficiency in storage can be achieved by adopting data deduplication at this layer as well • The basic idea is to store only data changes on storage devices, while redundant data is replaced with a pointer to the unique data copy • Data deduplication can also be applied for archiving purposes, focusing on high-level application requirements • Loss of information can be avoided by detecting and preserving important objects • Data quality techniques are used to identify the single relevant copy to be preserved
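
A minimal sketch of the pointer-based idea: each written block is hashed, and if identical content is already stored only a pointer to the existing copy is recorded. A real deduplicating store would add chunking, reference counting, and persistence; the class and method names below are hypothetical.

```python
# Minimal in-memory sketch of block-level deduplication.
import hashlib

class DedupStore:
    def __init__(self):
        self.blocks = {}   # content hash -> unique block content
        self.index = {}    # logical block id -> content hash (the "pointer")

    def write(self, block_id: str, data: bytes) -> bool:
        """Return True if new physical data was stored, False if deduplicated."""
        digest = hashlib.sha256(data).hexdigest()
        is_new = digest not in self.blocks
        if is_new:
            self.blocks[digest] = data
        self.index[block_id] = digest        # redundant data becomes a pointer
        return is_new

    def read(self, block_id: str) -> bytes:
        return self.blocks[self.index[block_id]]

store = DedupStore()
store.write("vm1/blk0", b"same payload")
print(store.write("vm2/blk7", b"same payload"))   # False: only a pointer is added
```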

  14. Control layer • The Infrastructure and Control layers are differentiated by their time scales: • Server provisioning and VM placement decisions are taken approximately every half hour • The load balancing, capacity allocation, and frequency scaling problems imply a relatively low computation overhead • Infrastructure-layer models assume the overall system is at steady state and cannot accurately capture system transients
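
Purely as an architectural illustration of the two time scales, the sketch below runs a slow loop (placement and provisioning, roughly every half hour) alongside a fast loop (load balancing, capacity allocation, frequency scaling, every minute); the callbacks are placeholders, not the actual managers.

```python
# Illustrative two-timescale manager loop; intervals follow the slide above.
import sched
import time

SLOW_PERIOD, FAST_PERIOD = 30 * 60, 60   # seconds

def slow_loop(scheduler):
    print("recompute VM placement and server provisioning")
    scheduler.enter(SLOW_PERIOD, 1, slow_loop, (scheduler,))

def fast_loop(scheduler):
    print("adjust load balancing, capacity shares, and CPU frequency")
    scheduler.enter(FAST_PERIOD, 2, fast_loop, (scheduler,))

if __name__ == "__main__":
    s = sched.scheduler(time.monotonic, time.sleep)
    s.enter(0, 1, slow_loop, (s,))
    s.enter(0, 2, fast_loop, (s,))
    s.run()   # runs indefinitely; stop with Ctrl+C
```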

  15. Control layer • Aims at tackling workload variations and adjusting the system configuration within a very short time frame (e.g., every minute) • Adoption of dynamical models which can accurately represent system transients under varying workload conditions, and genuine control-theoretic approaches for the design of server controllers • The control layer is viewed as a feedback loop, where the SLA objectives are translated into set-points for the response time of the servers and tracking performance is traded off against energy savings
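
The feedback-loop view can be sketched as follows: the SLA objective becomes a response-time set-point, and a simple integral controller adjusts the CPU share of a server so that the measured response time tracks it, shrinking the share (and hence energy use) when there is slack. The plant model, gain, and constants below are toy assumptions, not the controllers designed in our work.

```python
# Hedged sketch of a feedback loop tracking a response-time set-point.
RT_SETPOINT = 0.2      # seconds, derived from the SLA
GAIN = 0.5             # integral gain (tuning assumption)

def plant_response_time(cpu_share: float, arrival_rate: float) -> float:
    """Toy M/M/1-like plant: capacity scales linearly with the CPU share."""
    capacity = 100.0 * cpu_share                      # requests/s at full share
    return 10.0 if arrival_rate >= capacity else 1.0 / (capacity - arrival_rate)

cpu_share = 0.2
for step in range(20):
    arrival_rate = 12.0 if step < 10 else 18.0        # workload variation
    rt = plant_response_time(cpu_share, arrival_rate)
    error = rt - RT_SETPOINT
    # Grow the share when response time exceeds the set-point, shrink it when
    # there is slack (saving energy); clamp to a feasible share.
    cpu_share = min(1.0, max(0.05, cpu_share + GAIN * error))
    print(f"k={step:2d} rt={rt:.3f}s share={cpu_share:.2f}")
```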

  16. Control layer • In [14] we identified a control-oriented dynamic model of an application server based on the Linear Parameter Varying (LPV) framework • LPV models are capable of capturing system behavior at a very fine-grained time resolution, with an accuracy suitable for control purposes
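
For intuition, the sketch below simulates a first-order discrete-time LPV model whose coefficients are scheduled on a measured workload parameter. The scheduling functions and constants are invented for illustration; the model in [14] is identified from measured data.

```python
# Toy first-order LPV model: y(k+1) = a(p(k)) * y(k) + b(p(k)) * u(k),
# with coefficients scheduled on the measured request rate p.
def a(p):  # pole scheduled on the workload parameter p (requests/s)
    return 0.9 - 0.004 * p

def b(p):  # input gain scheduled on p
    return 0.02 + 0.001 * p

y = 0.1                              # output: response time (s)
for k in range(15):
    p = 20.0 if k < 8 else 60.0      # measured workload (scheduling parameter)
    u = 0.5                          # input: allocated CPU share
    y = a(p) * y + b(p) * u          # linear update with parameter-varying gains
    print(f"k={k:2d} p={p:.0f} y={y:.3f}")
```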

  17. Need for an Integrated Approach

  18. Infrastructure Layer Preliminary Results

  19. Infrastructure Layer Preliminary Results

  20. Control Layer Preliminary Results

  21. Conclusions and Future Work • The climate debate and sustainable-growth concerns over energy use will drive green computing up the Services research agenda • We have provided solutions able to determine the QoS/energy trade-off at the individual layers of our framework • Ongoing work is focusing on the analysis of the different time scales and the interrelations which characterize the resource managers working at the different layers • Exploit information from the lower layers to quantitatively estimate the energy consumption required for the execution of business processes and component Web services

  22. References • [1] D. Ardagna and B. Pernici. Adaptive Service Composition in Flexible Processes. IEEE Transactions on Software Engineering, 33(6):369–384, June 2007 • [2] D. Ardagna, M. Trubian, and L. Zhang. Energy-Aware Autonomic Resource Allocation in Multi-tier Virtualized Environments. Politecnico di Milano, Dipartimento di Elettronica e Informazione Technical report number 2008.13, July 2008 • [3] D. Ardagna, M. Trubian, and L. Zhang. SLA based resource allocation policies in autonomic environments. Journal of Parallel and Distributed Computing, 67(3):259–270, 2007 • [14] M. Tanelli, D. Ardagna, and M. Lovera. LPV model identification for power management of web services. In IEEE Multi-conference on Systems and Control, 2008
