Performance Comparison of Dynamic Voltage Scaling Algorithms for Hard Real-Time Systems

Presentation Transcript


  1. Performance Comparison of Dynamic Voltage Scaling Algorithms for Hard Real-Time Systems Real-Time and Embedded Technology and Applications Symposium, 2002. Proceedings. Eighth IEEE, pp. 219-228 Speaker: Yeu-Shian Lin E-Mail: g946331@oz.nthu.edu.tw

  2. Outline • Introduction • Classification of DVS algorithms • Intra-task DVS • Inter-task DVS • Target algorithms • Simulation environment • Experimental results • Conclusions

  3. Introduction • In recent years, many dynamic voltage scaling (DVS) algorithms have been proposed. • This work quantitatively evaluates many recent DVS algorithms under a unified DVS simulation environment called SimDVS. • The performance comparison focuses on preemptive hard real-time systems in which periodic real-time tasks are scheduled.

  4. Classification of DVS algorithms • Intra-task DVS (IntraDVS) algorithms • Adjust the voltage within an individual task boundary • The slack times are used for the current task • Inter-task DVS (InterDVS) algorithms • Determine the voltage on a task-by-task basis at each scheduling point • The slack times are used for the tasks that follow

  5. Intra-task DVS algorithm design factors • Worst case execution time (WCET) • Worst case execution path (WCEP) • IntraDVS algorithms are classified into two types depending on: • How slack times are estimated • How speeds are adjusted

  6. IntraDVS: Path-based method • The voltage and clock speed are determined based on a predicted reference execution path, such as WCEP. • When the actual execution deviates from the predicted reference execution path, the clock speed is adjusted. • Program locations for possible speed scaling are identified using static program analysis or execution time profiling.
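
As a concrete illustration of the path-based idea, the Python sketch below picks the lowest speed that still finishes the remaining worst-case cycles along the reference path by the deadline, so a branch that drops work immediately lowers the required speed. It is a hedged sketch only; names such as remaining_wcec_cycles are assumptions, not SimDVS or paper code.

```python
# Illustrative sketch only: path-based intra-task speed selection.
# `remaining_wcec_cycles` is the worst-case cycle count left on the
# predicted reference path (e.g., the WCEP) at the current program point.

def path_based_speed(remaining_wcec_cycles, deadline, now, f_max):
    """Lowest clock speed that still finishes the remaining worst-case
    work on the reference path by the deadline."""
    time_left = deadline - now
    if time_left <= 0:
        return f_max
    return min(f_max, remaining_wcec_cycles / time_left)

# Before a branch: 4M worst-case cycles remain, 8 ms to the deadline.
print(path_based_speed(4e6, deadline=10e-3, now=2e-3, f_max=1e9))  # 5.0e8 Hz
# After taking a shorter-than-reference path: only 1M cycles remain.
print(path_based_speed(1e6, deadline=10e-3, now=2e-3, f_max=1e9))  # 1.25e8 Hz
```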

  7. IntraDVS: Stochastic method • Start the execution at a low speed and accelerate the execution later. • If the probability density function of execution times of a task is known a priori, the optimal speed schedule can be computed. • Unlike the path-based IntraDVS, the stochastic IntraDVS may not utilize all the potential slack times.
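
The two-phase split below is only a hedged sketch of the "start slow, accelerate" idea; the actual stochastic schedule is derived from the full execution-time distribution, and the names (likely_cycles, wcec) are illustrative assumptions.

```python
# Hedged sketch of "start slow, accelerate later": run the cycles most
# instances actually need at a low speed, reserving enough time to finish
# the rare worst-case tail at full speed before the deadline.

def two_phase_schedule(wcec, likely_cycles, deadline, f_max):
    tail_time = (wcec - likely_cycles) / f_max      # time kept for the worst-case tail
    f_low = likely_cycles / (deadline - tail_time)  # slow first phase
    return min(f_low, f_max), f_max

f_low, f_high = two_phase_schedule(wcec=5e6, likely_cycles=2e6,
                                   deadline=10e-3, f_max=1e9)
print(f_low, f_high)  # ~2.86e8 Hz for the common case, 1e9 Hz for the tail
```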

  8. Inter-task DVS algorithm design factors • InterDVS algorithms exploit the “run-calculate-assign-run” strategy: • Run the current task • Calculate the maximum allowable execution time for the next task • Assign the supply voltage for the next task • Run the next task • A generic InterDVS algorithm consists of two parts: • Slack estimation • Slack distribution • Slack times come from two sources: • Static slack times • Dynamic slack times
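
A hedged Python skeleton of the run-calculate-assign-run loop; estimate_slack and the task dictionaries are placeholders standing in for a concrete algorithm's slack estimation and scheduling policy, not SimDVS interfaces.

```python
# Skeleton of the generic InterDVS "run-calculate-assign-run" loop.
# `estimate_slack` stands in for a concrete slack-estimation method.

def run(task, speed):
    print(f"running {task['name']} at {speed:.2e} Hz")

def inter_dvs_loop(ready_queue, estimate_slack, f_max):
    while ready_queue:
        task = ready_queue.pop(0)                           # next task (priority order)
        slack = estimate_slack(task)                        # calculate available slack
        allowed = task["wcet"] + slack                      # maximum allowable execution time
        speed = min(f_max, f_max * task["wcet"] / allowed)  # assign speed/voltage
        run(task, speed)                                    # run the next task

inter_dvs_loop([{"name": "T1", "wcet": 2e-3}, {"name": "T2", "wcet": 1e-3}],
               estimate_slack=lambda t: 1e-3, f_max=1e9)
```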

  9. InterDVS: Static slack estimation • Maximum constant speed • Worst case processor utilization (WCPU) • If the WCPU U of a given task set is lower than 1.0 under the maximum speed fmax, the task set can be scheduled with a new maximum speed f’max = U * fmax.
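
A small sketch of the maximum-constant-speed rule under EDF; the task tuples are illustrative assumptions.

```python
# Maximum constant speed (static slack): under EDF, if the worst-case
# processor utilization U at f_max is below 1.0, the task set stays
# schedulable at the scaled speed U * f_max.

def max_constant_speed(tasks, f_max):
    """tasks: iterable of (wcet_at_fmax, period) pairs."""
    u = sum(wcet / period for wcet, period in tasks)
    return u * f_max if u < 1.0 else f_max

print(max_constant_speed([(2e-3, 10e-3), (3e-3, 20e-3)], f_max=1e9))  # U = 0.35 -> 3.5e8 Hz
```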

  10. InterDVS: Dynamic slack estimation (I) • Stretching to NTA • NTA: the arrival time of the next task. • Assume that the current task t is scheduled at time T. • If NTA of t is later than (T+WCET(t)), task t can be executed at a lower speed so that its execution completes exactly at the NTA.
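
A minimal sketch of stretching-to-NTA under the assumptions stated on the slide; variable names are illustrative, not the paper's notation.

```python
# Sketch of stretching-to-NTA: if the next task arrival (NTA) falls after
# T + WCET(t), slow the current task so it completes exactly at the NTA.

def stretch_to_nta(wcet, t_now, nta, f_max):
    available = nta - t_now
    if available <= wcet:          # no slack up to the NTA
        return f_max
    return f_max * wcet / available

print(stretch_to_nta(wcet=3e-3, t_now=0.0, nta=5e-3, f_max=1e9))  # 6.0e8 Hz
```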

  11. Figure 1. Examples of Stretching-to-NTA

  12. InterDVS: Dynamic slack estimation (II) • Priority-based slack stealing • When a higher-priority task completes its execution earlier than its WCET, the following lower-priority tasks can use the slack time. • Utilization updating • Recalculate the expected WCPU using the actual execution time of completed task instances. • The main merit of the method is its simple implementation.
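
A hedged sketch of utilization updating; the task dictionaries and the 'actual' field are assumptions used only for illustration.

```python
# Sketch of utilization updating: recompute the expected utilization using
# actual execution times of instances that already completed in the current
# period and WCETs for the rest; the next task can then run at u * f_max.

def updated_utilization(tasks):
    """tasks: dicts with 'period', 'wcet', and optional 'actual' (actual
    execution time of the completed instance)."""
    return sum(t.get("actual", t["wcet"]) / t["period"] for t in tasks)

u = updated_utilization([{"period": 10e-3, "wcet": 3e-3, "actual": 1e-3},
                         {"period": 20e-3, "wcet": 4e-3}])
print(u)  # 0.3
```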

  13. InterDVS: Slack distribution methods • Most InterDVS algorithms have adopted a greedy approach, where all the slack times are given to the next activated task.

  14. Table 1. Classification of DVS techniques

  15. Target algorithms

  16. Simulation environment (I) • SimDVS was designed to achieve the following goals: • Support both IntraDVS and InterDVS algorithms • Integrate different DVS algorithms easily • Support different task workloads, variations in execution path taken, and different task set configurations easily • Support different variable-voltage processors easily

  17. Simulation environment (II)

  18. Experimental results (I) • Performance evaluation of InterDVS algorithms • Number of tasks in a task set • Worst case processor utilization of task set • Machine specification • Speed bound

  19. Figure 3. Impact of the number of tasks • lppsEDF, lppsRM, and ccRM, which use only the stretching-to-NTA technique, do not improve significantly as the number of tasks increases.

  20. Figure 4. Impact of WCPU and the number of scaling levels • The energy consumption increases as a linear function of the WCPU of a task set. • The energy consumption increases as the number of scaling levels decreases.

  21. Figure 5. Impact of speed bound • For the aggressive InterDVS algorithms, the energy efficiency is highest when the speed bound factor is set to the ACPU.

  22. Figure 6. Impact of speed bound • ACPU = 0.55 × WCPU • There is substantial room for improvement in developing more energy-efficient RM InterDVS algorithms.

  23. Experimental results (II) • Performance evaluation of Intra-Task DVS algorithms • Path-based Method: intraShin • Stochastic Method: intraGruian • Slack ratio is defined as the ratio of WCET to the assigned execution time • Figure 7 shows the relative energy consumption ratio of intraGruian over intraShin.

  24. Figure 7. Energy consumption ratio of intraShin and intraGruian • intraShin works better than intraGruian when the distribution of actual execution times is significantly different from the assumed distribution.

  25. Experimental results (III) • Performance evaluation of hybrid methods • H1 and H3 are close to the pure InterDVS approach • H2 is close to the pure IntraDVS approach

  26. Figure 8. Energy efficiency of HybridDVS algorithms • HybridDVS algorithms are shown to reduce the energy consumption by 5~20% over that of the pure DVS algorithms.

  27. Conclusions • Existing EDF InterDVS algorithms such as AGR, laEDF, and lpSHE are close to optimal; their power consumption is only 9~12% worse than the theoretical lower bound. • RM InterDVS algorithms still show a significant gap from the theoretical lower bound. • A HybridDVS algorithm can be better than a pure IntraDVS algorithm or a pure InterDVS algorithm.
