
Presentation Transcript


  1. Ensieea Rizwani. An energy-efficient management mechanism for large-scale server clusters. By: Zhenghua Xue, Dong, Ma, Fan, Mei

  2. Most data centers, including the University at Buffalo’s Center for Computational Research (CCR), keep their resources running 24 hours a day, 365 days a year, regardless of workload or utilization. This increases power consumption and lowers resource utilization. Energy-efficient data centers matter because they contribute substantially both financially and technically.

  3. Outline • Introduction • Overview of the Architecture • Power Conservation Mechanism • Adaptive Pool Mechanism • Simulation and Measurement • Conclusion • Future Work

  4. Power equipment, cooling equipment, and electricity together represent a significant portion of a data center’s cost. • Any guesses for the percentage? • This cost is up to 63 percent of the total cost of ownership of the physical IT infrastructure.

  5. How to make data centers more cost efficient? • At the hardware component level, a general approach is to reduce the power consumed by components not currently in use. Some examples are: • Placing the CPU in a “halted” state when there are no runnable tasks • Turning off the hard drive motor or memory device after some period of inactivity • Resizing the cache by powering down unused cache lines

  6. Approach taken by this article: • This paper proposes an adaptive pool based resource management (APRM) mechanism to provide computing capacity on demand. • APRM saves power by terminating some of the idle nodes and guarantees QoS by reserving other idle nodes. • By obtaining load information from the management system, APRM can predict the load amount.

  7. Management System of HPC • The management system of an HPC cluster consists of two subsystems: • Job management subsystem • Resource management subsystem

  8. Overview of an extensible cluster management architecture

  9. Job Management System • Job Controller • Executing entity that dispatches jobs and controls their lifetime by starting, suspending, or canceling them. • Job Supervisor • Responsible for monitoring job status and reporting that information to the queue manager. • Queue Manager • Queues the jobs in the waiting queue • Updates the queue upon receiving information from the job supervisor • Makes job-scheduling decisions in accordance with the scheduling algorithm and available resources • Informs the job controller to execute jobs
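
As a rough illustration of how these three components might interact, the sketch below is a minimal Python reading of the slide; the class, its methods, and the collaborator interfaces are assumptions for illustration, not code from the paper.

  from collections import deque

  class QueueManager:
      """Hypothetical sketch of the queue manager described on slide 9."""

      def __init__(self, job_controller, scheduler, resources):
          self.waiting = deque()            # jobs waiting to be scheduled
          self.job_controller = job_controller
          self.scheduler = scheduler        # pluggable scheduling algorithm
          self.resources = resources        # view of currently available nodes

      def submit(self, job):
          # Queue the job in the waiting queue.
          self.waiting.append(job)

      def on_job_report(self, report):
          # Update state upon information from the job supervisor
          # (e.g., a job finished and its nodes are free again).
          self.resources.update(report)

      def schedule(self):
          # Pick waiting jobs according to the scheduling algorithm and
          # available resources, then inform the job controller to execute them.
          for job in self.scheduler.select(self.waiting, self.resources):
              self.waiting.remove(job)
              self.job_controller.start(job)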

  10. Resource Manager • Executor • Dedicated to executing the instructions • Resource Monitor • Concentrates on monitoring and collecting the status information of resources • Statistics Analyzer • Auxiliary component supporting automatic and intelligent resource management • Policy Decisioner • Maintains a collection of policies that are triggered by predefined events • The energy-efficient resource management method is kept in the policy decisioner
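
The policy decisioner can be pictured as a table of event-triggered policies. The minimal sketch below is an assumption about how such a component could be wired; the event name and callback are placeholders, not APIs from the paper.

  class PolicyDecisioner:
      """Hypothetical sketch: policies are callbacks keyed by event name."""

      def __init__(self):
          self.policies = {}                # event name -> list of policy callbacks

      def register(self, event, policy):
          self.policies.setdefault(event, []).append(policy)

      def on_event(self, event, context):
          # Trigger every policy registered for this predefined event.
          for policy in self.policies.get(event, []):
              policy(context)

  # Example: the energy-efficient management policy reacts to idle-node events
  # by asking the executor to shut the node down (names are placeholders).
  decisioner = PolicyDecisioner()
  decisioner.register("node_idle_timeout",
                      lambda ctx: ctx["executor"].shutdown(ctx["node"]))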

  11. Demand fluctuations: • Many studies have shown that demand for high performance scientific computing varies with time. Job arrivals are expected to have cycles at three levels: • Daily (daily working hours are the peak hours) • Weekly (weekends have the lowest job arrivals) • Yearly ()

  12. Server States

  13. Power Model of Servers • Busy, Idle, Shutdown • Upon completion of all the jobs on a computing node, its power state transitions from busy to idle. • Once a new job arrives at a computing node, the power state transitions from idle to busy. • If a computing node stays idle for a long time, it is terminated and the power state transitions from idle to shutdown. • When the workload becomes heavy, additional computing capacity is needed; some computing nodes are woken up to take part, and their state transitions from shutdown to idle and then to busy.
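
The three power states and the transitions above form a small state machine. The sketch below is an illustrative encoding of that slide in Python (the event names are assumptions), not code from the paper.

  from enum import Enum

  class PowerState(Enum):
      BUSY = "busy"
      IDLE = "idle"
      SHUTDOWN = "shutdown"

  # Allowed transitions, as described on slide 13.
  TRANSITIONS = {
      (PowerState.BUSY, "all_jobs_done"):  PowerState.IDLE,      # busy -> idle
      (PowerState.IDLE, "job_arrived"):    PowerState.BUSY,      # idle -> busy
      (PowerState.IDLE, "idle_too_long"):  PowerState.SHUTDOWN,  # idle -> shutdown
      (PowerState.SHUTDOWN, "wake_up"):    PowerState.IDLE,      # shutdown -> idle, then to busy
  }

  def next_state(state, event):
      # Return the next power state, or stay in the current state if the event does not apply.
      return TRANSITIONS.get((state, event), state)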

  14. Adaptive Pool Mechanism • A resource pool is a collection of computing nodes offering shared access to computing capacity, and the automation and virtualization capabilities of a resource pool promise a lower cost of ownership.

  15. Mechanism of APRM • corePoolSize: the number of nodes to keep in the pool; it is the sum of the numbers of working nodes and idle nodes. • maxPoolSize: the maximum number of nodes allowed in the pool; it equals the total number of nodes in the cluster. • maxIdleNodes: the maximum number of idle nodes to keep in the pool. • keepAliveTime: when the number of idle nodes in the pool is greater than maxIdleNodes, this is the maximum time that excess computing nodes will wait for new jobs before terminating.
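
The four working parameters map naturally onto a small configuration object. The sketch below keeps the parameter names from the slide; the dataclass itself is an assumption for illustration.

  from dataclasses import dataclass

  @dataclass
  class APRMConfig:
      core_pool_size: int     # corePoolSize: working nodes + idle nodes kept in the pool
      max_pool_size: int      # maxPoolSize: total number of nodes in the cluster
      max_idle_nodes: int     # maxIdleNodes: maximum number of idle nodes to keep
      keep_alive_time: float  # keepAliveTime: how long an excess idle node waits before terminating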

  16. Termination Conditions • The idle time of an idle node exceeds keepAliveTime. • The first condition prevents a computing node from frequently terminating and launching when the computing demand fluctuates in short cycles.

  17. Termination Conditions • The number of idle nodes in the pool is larger than maxIdleNodes. • The second condition aims at shutting down needless computing nodes to save power.

  18. Termination Conditions • If more than one idle node simultaneously meets the two conditions above, nodes with longer runtime are terminated first. • The third condition balances the utilization of nodes. After some idle nodes are terminated, the number of idle nodes in the pool stays at maxIdleNodes.
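
Putting the three conditions together, a termination pass might look like the sketch below. It is a reading of slides 16 to 18 under stated assumptions (Node objects with idle_time and runtime attributes, and the APRMConfig sketch from slide 15), not the paper's implementation: only nodes idle longer than keepAliveTime are eligible, nodes are only terminated while the idle count exceeds maxIdleNodes, and among eligible nodes the longest-running are terminated first.

  def nodes_to_terminate(idle_nodes, config):
      """Return the idle nodes APRM would terminate, per the three conditions above.

      idle_nodes: list of node objects with .idle_time and .runtime attributes (assumed)
      config:     an APRMConfig (see the sketch after slide 15)
      """
      excess = len(idle_nodes) - config.max_idle_nodes
      if excess <= 0:
          return []                                  # condition 2: keep at least maxIdleNodes idle

      # Condition 1: only nodes idle longer than keepAliveTime are eligible.
      eligible = [n for n in idle_nodes if n.idle_time > config.keep_alive_time]

      # Condition 3: among eligible nodes, longer runtime terminates first.
      eligible.sort(key=lambda n: n.runtime, reverse=True)

      # Terminate at most `excess` nodes so maxIdleNodes idle nodes remain.
      return eligible[:excess]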

  19. APRM APRM saves power by terminating some of the idle nodes and guarantees QoS by reserving other idle nodes, keeping their number at maxIdleNodes. The working parameter maxIdleNodes plays an important role in APRM. If it is set too high, computing capacity is over-provisioned. If it is set too low, the reserved idle nodes may be insufficient for newly arriving jobs, and spare nodes must be woken up to take part in computing, adding a start-up delay.

  20. Power-saving metric: the ratio of the run time of all the computing nodes with APRM to that without APRM, which can be denoted by the formula below.
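
The formula itself did not survive the transcript. Reconstructed from the definition above (the notation is assumed, not the paper's), it is simply:

  % Power-saving metric: total node run time with APRM over total node run time without APRM.
  % T_i^{APRM} and T_i are the run times of node i with and without APRM; N is the number of nodes.
  \[
    R_{\text{power}} \;=\; \frac{\sum_{i=1}^{N} T_i^{\mathrm{APRM}}}{\sum_{i=1}^{N} T_i}
  \]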

  21. Average job response time: the time between job arrival and completion, averaged over all jobs.

  22. Average job slowdown: the ratio of the response time of a job to the time it requires on a dedicated system, averaged over all jobs.

  23. Average shutdown frequency: a metric measuring whether computing nodes terminate and launch too frequently.
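
In the same spirit, the metrics of slides 21 to 23 can be written as follows; this is a reconstruction from the prose descriptions with assumed notation, not the paper's own formulas.

  % J = number of jobs; for job j: a_j = arrival time, c_j = completion time,
  % d_j = run time on a dedicated system. N = number of nodes; S_i = number of
  % shutdowns of node i over the measurement period.
  \[
    \bar{T}_{\text{resp}} = \frac{1}{J}\sum_{j=1}^{J} (c_j - a_j), \qquad
    \overline{\text{slowdown}} = \frac{1}{J}\sum_{j=1}^{J} \frac{c_j - a_j}{d_j}, \qquad
    \bar{F}_{\text{shutdown}} = \frac{1}{N}\sum_{i=1}^{N} S_i
  \]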

  24. Simulation Model • Workload generator • Job scheduler • Resource manager
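
A minimal event-loop skeleton wiring these three components together might look like the sketch below; every name in it is a placeholder, since the slides do not show the paper's simulator code.

  def run_simulation(workload_generator, job_scheduler, resource_manager, horizon):
      """Hypothetical simulation loop: generate jobs, schedule them, manage nodes."""
      clock = 0
      while clock < horizon:
          for job in workload_generator.jobs_arriving_at(clock):  # workload generator
              job_scheduler.submit(job)                           # job scheduler queues it
          job_scheduler.dispatch(resource_manager, clock)         # place jobs on nodes
          resource_manager.apply_aprm_policy(clock)               # terminate / wake nodes
          clock += 1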

  25. Summary • The difference in average job response time is at most 1.8701 minutes, and the difference in average job slowdown is at most 0.0085. This suggests APRM has little impact on QoS while providing significant power savings. • Future Work: • Studying more workload traces • Developing better predictive methods

  26. Thank You
