
Performance Evaluation of a Green Scheduling Algorithm for Energy Savings in Cloud Computing



Presentation Transcript


  1. Performance Evaluation of a Green Scheduling Algorithm for Energy Savings in Cloud Computing Truong Vinh Truong Duy; Sato, Y.; Inoguchi, Y.; 2010 IEEE International Symposium on Parallel & Distributed Processing, Workshops and PhD Forum (IPDPSW)

  2. Outline • Introduction • Understanding power consumption • The Neural Predictor • The Green Scheduling Algorithm • Experimental Evaluation • Performance Evaluation • Conclusion • Reference

  3. Introduction Research shows that running a single 300-watt server for a year can cost about $338 and, more importantly, can emit as much as 1,300 kg of CO2, not to mention the cooling equipment [2]. In this paper, we aim to design, implement, and evaluate a Green Scheduling Algorithm that integrates a neural network predictor to optimize server power consumption in Cloud computing environments by shutting down unused servers.
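For orientation, the cost figure is consistent with a back-of-the-envelope calculation; in the short Python sketch below, the electricity price of $0.13/kWh is our assumption and does not come from the slides.

# Back-of-the-envelope check of the yearly cost of a 300 W server running 24/7.
# The $0.13/kWh electricity price is an assumed value, not taken from the slides.
power_kw = 0.300
hours_per_year = 24 * 365                 # 8760 h
energy_kwh = power_kw * hours_per_year    # 2628 kWh
print(round(energy_kwh * 0.13))           # ~$342, in line with the ~$338 cited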

  4. Introduction The algorithm first estimates the dynamic workload required on the servers. Unnecessary servers are then turned off in order to minimize the number of running servers, thus minimizing energy use at the point of consumption and providing benefits at all other levels.

  5. Understanding power consumption Figure 1. CPU utilization and power consumption.

  6. Understanding power consumption Figure 2. State transition of the Linux machine.

  7. Understanding power consumption Figure 3. State transition of the Windows machine.

  8. System Model Figure 4. The system model.

  9. System Model (cont.) A request from a Cloud user is processed in the following steps:
  • Datacenters register their information with the CIS Registry.
  • A Cloud user/DCBroker queries the CIS Registry for the datacenters’ information.
  • The CIS Registry responds by sending a list of available datacenters to the user.
  • The user requests processing elements through virtual machine creation.
  • The list of available virtual machines is sent back to serve requests from end users to the services hosted by the user.
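To make the message sequence concrete, the following is a minimal Python sketch of the discovery-and-provisioning flow above; the class and method names are hypothetical illustrations and not the actual CloudSim API.

# Minimal sketch of the datacenter discovery / VM creation flow described above.
# Class and method names are hypothetical illustrations, not the CloudSim API.

class CISRegistry:
    def __init__(self):
        self.datacenters = []

    def register(self, datacenter):            # step 1: datacenters register
        self.datacenters.append(datacenter)

    def query(self):                           # steps 2-3: broker queries, registry responds
        return list(self.datacenters)

class Datacenter:
    def __init__(self, name, hosts):
        self.name, self.hosts = name, hosts

    def create_vm(self, vm_spec):              # step 4: VM creation on a processing element
        return f"vm({vm_spec})@{self.name}"

class DCBroker:
    def __init__(self, registry):
        self.registry = registry

    def provision(self, vm_specs):
        vms = []
        for spec, dc in zip(vm_specs, self.registry.query()):
            vms.append(dc.create_vm(spec))
        return vms                             # step 5: VM list returned to serve end users

registry = CISRegistry()
registry.register(Datacenter("dc0", hosts=4))
registry.register(Datacenter("dc1", hosts=8))
print(DCBroker(registry).provision(["small", "large"]))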

  10. The Neural Predictor Figure 5. A three-layer network predictor.

  11. The Neural Predictor The output of each node is computed as Oc = h(wc,1·xc,1 + wc,2·xc,2 + … + wc,n·xc,n + bc), where Oc is the output of the current node, n is the number of nodes in the previous layer, xc,i is an input to the current node from the previous layer, wc,i is the weight modifying the corresponding connection from xc,i, and bc is the bias.

  12. The Neural Predictor In addition, h(x) is either a sigmoid activation function for hidden layer nodes, or a linear activation function for the output layer nodes.
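As a concrete illustration of such a predictor, here is a minimal NumPy sketch with a sigmoid hidden layer and a linear output node; the layer sizes, weight initialization, and input window are illustrative assumptions, not the configuration used in the paper.

import numpy as np

# Minimal sketch of a three-layer predictor: sigmoid hidden layer, linear output.
# Layer sizes and random weights are illustrative only, not the paper's settings.

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

class ThreeLayerPredictor:
    def __init__(self, n_inputs, n_hidden, rng=np.random.default_rng(0)):
        self.w_hidden = rng.normal(scale=0.1, size=(n_hidden, n_inputs))
        self.b_hidden = np.zeros(n_hidden)
        self.w_out = rng.normal(scale=0.1, size=n_hidden)
        self.b_out = 0.0

    def predict(self, history):
        # Each node computes Oc = h(sum_i wc,i * xc,i + bc)
        hidden = sigmoid(self.w_hidden @ history + self.b_hidden)   # sigmoid hidden layer
        return self.w_out @ hidden + self.b_out                     # linear output node

# Predict the next load value from the last 4 observed load values.
predictor = ThreeLayerPredictor(n_inputs=4, n_hidden=8)
recent_load = np.array([0.42, 0.45, 0.50, 0.48])
print(predictor.predict(recent_load))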

  13. The Green Scheduling Algorithm Figure 6. Pseudo-code of the algorithm.
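The actual pseudo-code is contained in the figure; as a rough sketch of the idea summarized on slide 4 (predict load, keep just enough servers on, power off the rest), the following illustrative Python version may help. The capacity value, safety margin, and function names are our assumptions, not the authors' algorithm as given in Figure 6.

import math

# Rough illustrative sketch of the green scheduling idea: predict load, keep only
# the servers needed, power the rest off. Names and thresholds are assumptions,
# not the authors' actual pseudo-code from Figure 6.

SERVER_CAPACITY = 100.0   # requests/s one server can handle (assumed)
SAFETY_MARGIN = 1.2       # headroom so predicted spikes do not overload servers

def schedule(servers, predicted_load):
    """Keep just enough servers ON for the predicted load; turn off the rest."""
    needed = max(1, math.ceil(predicted_load * SAFETY_MARGIN / SERVER_CAPACITY))
    for i, server in enumerate(servers):
        server["state"] = "on" if i < needed else "off"
    return needed

servers = [{"id": i, "state": "on"} for i in range(8)]
print(schedule(servers, predicted_load=250.0))   # -> 3 servers stay on
print([s["state"] for s in servers])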

  14. Experimental Evaluation Figure 7. The modified communication flow.

  15. Performance Evaluation Figure 8. The NASA and ClarkNet load traces.

  16. Performance Evaluation TABLE 1. Simulation results on NASA with the best of each case displayed in boldface

  17. Conclusion This paper has presented a Green Scheduling Algorithm that makes use of a neural-network-based predictor for energy savings in Cloud computing. The predictor is used to forecast future load demand based on collected historical demand.

  18. Reference
[1] M. Armbrust et al., “Above the Clouds: A Berkeley View of Cloud Computing,” Technical Report No. UCB/EECS-2009-28, University of California at Berkeley, 2009.
[2] R. Bianchini and R. Rajamony, “Power and energy management for server systems,” IEEE Computer, vol. 37, no. 11, pp. 68–74, 2004.
[3] EPA Datacenter Report to Congress, http://www.energystar.gov/ia/partners/prod_development/downloads/EPA_Datacenter_Report_Congress_Final1.pdf.
[4] Microsoft Environment – The Green Grid Consortium, http://www.microsoft.com/environment/our_commitment/articles/green_grid.aspx.

  19. Thank you!
