
Feedback Control Real-Time Scheduling: Framework, Modeling, and Algorithms



  1. Feedback Control Real-Time Scheduling: Framework, Modeling, and Algorithms Chenyang Lu, John A. Stankovic, Gang Tao, Sang H. Son Presented by Josh Carl

  2. Overview • Motivation and Introduction • Architecture • Performance Specification and Metrics • Control Theory Based Design Methodology • Modeling the Controlled Real-Time System • Design of FCS Algorithms • Experiments

  3. Motivation and Intro • Static vs. Dynamic Scheduling • Static: requires a priori knowledge of the task set and its time constraints • Example: Rate Monotonic (RM) • Dynamic: resource-sufficient vs. resource-insufficient cases • Examples: Earliest Deadline First (EDF), admission-control-based schemes • RM and EDF are open loop • Good in predictable environments with accurate models • Bad in unpredictable environments

  4. Soft Real-Time Systems • New soft real-time applications • Open and unpredictable environments • Examples: online trading, e-commerce, agile manufacturing • Resource requirements and arrival rates are not known in advance, yet the system must still provide performance guarantees. • EDF and RM fail miserably in these applications

  5. Enter Feedback Control RT Scheduling • Feedback Control Systems • Scheduling Framework • Architecture • Performance specifications • Design methodology • “In contrast to ad hoc approaches that rely on laborious design/tuning/testing iterations, FCS enables system designers to systematically design adaptive real-time systems with established analytical methods to achieve desired performance guarantees in unpredictable environments.”

  6. Task Model • QoS Levels: • Each QoS level j (0 ≤ j ≤ N-1); a higher QoS level means more CPU and more value • Di[j]: the relative deadline • EEi[j]: the estimated execution time • AEi[j]: the (actual) execution time • Vi[j]: the value task Ti contributes if it is completed at QoS level j before its deadline Di[j]. • For periodic tasks: • Pi[j]: the invocation period • Bi[j]: the estimated CPU utilization Bi[j] = EEi[j] / Pi[j] • Ai[j]: the (actual) CPU utilization Ai[j] = AEi[j] / Pi[j] • For aperiodic tasks: • EIi[j]: the estimated inter-arrival time between subsequent invocations • AIi[j]: the average inter-arrival time, which is unknown to the scheduler • Bi[j]: the estimated CPU utilization Bi[j] = EEi[j] / EIi[j] • Ai[j]: the (actual) CPU utilization Ai[j] = AEi[j] / AIi[j] • “A key feature of our task model is that it characterizes systems in unpredictable environments where task’s actual CPU utilization is time varying and unknown to the scheduler.”
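The per-level parameters above can be sketched as a small data structure; the class and field names here are illustrative assumptions, not identifiers from the paper:

```python
from dataclasses import dataclass
from typing import List

@dataclass
class QoSLevel:
    deadline: float   # Di[j]: relative deadline
    est_exec: float   # EEi[j]: estimated execution time
    value: float      # Vi[j]: value if the task completes by its deadline
    period: float     # Pi[j] for periodic tasks, EIi[j] for aperiodic

    def est_utilization(self) -> float:
        # Bi[j] = EEi[j] / Pi[j] (periodic) or EEi[j] / EIi[j] (aperiodic)
        return self.est_exec / self.period

@dataclass
class Task:
    levels: List[QoSLevel]  # index j is the QoS level; higher j = more CPU and value

# A task with two QoS levels: the higher level consumes more CPU, yields more value
t = Task([QoSLevel(1.0, 0.2, 1.0, 1.0),
          QoSLevel(1.0, 0.4, 2.0, 1.0)])
```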

  7. Control Variables • Controlled Variables • Performance metrics controlled by the scheduler. • Defined over the window ((k-1)W, kW), where W = sampling period and k = sampling instant • Miss Ratio: M(k) = deadline misses / (completed + aborted tasks) • Utilization: U(k) = % of CPU busy time in the window • Value: V(k) • Performance References: Ms and Us • Manipulated Variables: what the scheduler can change • Total estimated utilization B(k) = Σi Bi[li(k)], where li = QoS level of task Ti • U(k) and B(k) differ (actual vs. estimated): U(k) is bounded at 100%, B(k) is not.
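A minimal sketch of how these variables could be computed over one sampling window; the function names and argument shapes are assumptions for illustration:

```python
def miss_ratio(missed, completed, aborted):
    # M(k) = deadline misses / (completed + aborted) tasks in ((k-1)W, kW)
    finished = completed + aborted
    return missed / finished if finished else 0.0

def utilization(busy_time, window):
    # U(k) = fraction of CPU busy time in the window; bounded at 100%
    return min(busy_time / window, 1.0)

def est_total_utilization(per_task_b, levels):
    # B(k) = sum over tasks i of Bi[li(k)]; unlike U(k), NOT bounded at 100%
    return sum(b[l] for b, l in zip(per_task_b, levels))
```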

  8. FCS Architecture

  9. Control Loop • Monitor: Measures the controlled variables (M(k) and/or U(k)) and sends the data to the controller. • Controller: Compares the measured values to their references and computes the control input DB(k), the change in total estimated utilization, accordingly. • QoS Actuator: Changes the total estimated requested utilization at each sampling instant k according to the control input by adjusting QoS levels. • Basic Scheduler: EDF or Rate/Deadline Monotonic.
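One iteration of this loop might be sketched as follows; the monitor, controller, and actuator callables are stand-ins (assumptions), not the paper's code:

```python
def control_loop_step(monitor, controller, actuator, reference):
    measured = monitor()           # Monitor: sample M(k) and/or U(k)
    error = reference - measured   # e.g. EU(k) = Us - U(k)
    delta_b = controller(error)    # Controller: compute control input DB(k)
    actuator(delta_b)              # QoS Actuator: adjust task QoS levels
    # the Basic Scheduler (EDF or RM/DM) then dispatches until the next sample
    return delta_b

# e.g. a proportional controller with gain 0.5, reference Us = 0.7,
# measuring U(k) = 0.9: the loop asks to shrink B(k) by about 0.1
db = control_loop_step(lambda: 0.9, lambda e: 0.5 * e, lambda d: None, 0.7)
```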

  10. Performance • Average metrics alone (average miss ratio and average utilization) are insufficient; transient behavior matters. • Stability: “Miss ratio M(k) and utilization U(k) are always bounded for bounded references” – we don’t want to sit at 100%. • Transient-state response: • Overshoot: Mo = (Mmax - Ms)/Ms, Uo = (Umax - Us)/Us • Settling Time: Ts • Steady-state Error: EsM or EsU = the difference between the average value in steady state and its corresponding reference. • Sensitivity: Sp = robustness of the system with regard to workload or system variations. • Load profiles: step load, ramp load
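These transient metrics can be computed from a sampled trace; a simplified sketch (the paper's formal definitions are stricter about when steady state begins):

```python
def overshoot(trace, ref):
    # e.g. Mo = (Mmax - Ms) / Ms for a miss-ratio trace and reference Ms
    return (max(trace) - ref) / ref

def steady_state_error(trace, ref, steady_from):
    # Es = |average value in steady state - reference|
    steady = trace[steady_from:]
    return abs(sum(steady) / len(steady) - ref)
```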

  11. Design Methodology • “A system designer can systematically design an adaptive resource scheduler to satisfy the system’s performance specifications with established analytical methods in control theory.” • 1. Specify the desired dynamic behavior with transient and steady state performance metrics. • 2. Establish a dynamic model of the real-time system for the purpose of performance control. • 3. Based on 1 and 2 apply established mathematical techniques of feedback control theory to design FCS algorithms that analytically guarantee the specified behavior.

  12. Modeling • Utilization • Misses • GA = worst-case utilization ratio, GM = miss ratio factor, DB = change in total estimated requested utilization, A(k) = total (actual) requested utilization, Ath(k) = utilization threshold • “Property 1: At any instant of time, at least one of the controlled variables (U(k) and M(k)) does not saturate in a real-time system.”
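A heavily simplified sketch of how these quantities relate: actual load scales the estimate by GA, U(k) saturates at 100%, and misses appear only once A(k) exceeds the threshold, so (Property 1) the two never saturate at the same time. The linear miss-ratio term using GM is my own approximation, not the paper's exact model:

```python
def model(B_k, GA, GM, A_th):
    A_k = GA * B_k                      # actual requested utilization
    U_k = min(A_k, 1.0)                 # measured utilization, capped at 100%
    M_k = max(GM * (A_k - A_th), 0.0)   # miss ratio: zero below the threshold
    return U_k, M_k

# Below threshold: utilization tracks the load, no misses.
# Above threshold: utilization saturates, misses grow with the excess.
```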

  13. Design (abbreviated) • At each sampling instant k, the Controller computes a control input DB(k), the change in total estimated requested utilization based on an error ratio. • Error ratios (E(k)): • EM(k)=Ms-M(k) • EU(k)=Us-U(k) • Control Input: DB(k)=KPE(k), KP=tunable parameter. • Controller Goal: (1) guaranteed stability, (2) zero steady state error, (3) zero sensitivity to workload variations, and (4) satisfactory settling time and overshoot.
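The proportional controller above takes only a few lines; the class shape and sample gain are illustrative assumptions (the paper derives KP analytically):

```python
class PController:
    def __init__(self, kp, ref):
        self.kp = kp    # KP: tunable gain (derived analytically in the paper)
        self.ref = ref  # performance reference, e.g. Us or Ms

    def control_input(self, measured):
        error = self.ref - measured   # E(k), e.g. EU(k) = Us - U(k)
        return self.kp * error        # DB(k) = KP * E(k)

ctl = PController(kp=0.5, ref=0.7)  # illustrative values
db = ctl.control_input(0.9)         # measured above reference -> negative DB(k)
```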

  14. Derivations and z-transforms later… • “In summary, given the system parameters, the worst-case utilization ratio GA, and the miss ratio factor GM, we can directly derive the control parameter KP…to guarantee a set of performance profiles including stability, zero steady state error, and a satisfactory range of transient performance.”

  15. Three Algorithms • FC-U: Feedback Utilization Control • Periodically samples the utilization, computes a change in total estimated utilization, and assigns new QoS levels. • “FC-U guarantees that the miss ratio M(k)=0 in steady state if its reference Us ≤ Ath.” • Achieves excellent performance (M(k)=0) in steady state if the utilization reference is correct. • FC-M: Feedback Miss Ratio Control • Uses a miss ratio control loop to control the miss ratio directly. • Does not depend on any knowledge of the utilization bound. • It can always achieve a low miss ratio, and is therefore more robust in the face of utilization-threshold variations. • FC-UM: Integrated Utilization/Miss Ratio Control • Best of both worlds, but more complicated. • Uses the most conservative of the two control inputs.
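FC-UM's combination rule can be sketched as taking the smaller (more conservative) of the two loops' control inputs; the names and gains here are illustrative assumptions:

```python
def fc_um_input(kp_u, kp_m, u_ref, m_ref, u_k, m_k):
    db_u = kp_u * (u_ref - u_k)  # utilization loop's control input
    db_m = kp_m * (m_ref - m_k)  # miss-ratio loop's control input
    return min(db_u, db_m)       # most conservative (smallest) wins

# The utilization loop alone would grow B(k) (U below reference), but
# deadlines are being missed, so the miss-ratio loop's negative input wins.
db = fc_um_input(0.5, 1.0, u_ref=0.7, m_ref=0.0, u_k=0.6, m_k=0.1)
```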

  16. Experiments • Run on a simulator called FECSIM. • Two task sets • Tasks have some built-in randomness; each task has 3 QoS levels. • QoS Actuator: Highest-Value-Density-First • Sampling window: 0.5 sec.

  17. Profiling • Run simulator in open-loop.

  18. Performance Reference Settings and Experiment A • System settings • Experiment A: arrival overload • Step load SL(0, 150%) • Ga’ = 2 (execution-time factor) – the average execution time is twice the estimate.

  19. Experiment A: FC-U Results

  20. Experiment A: FC-M

  21. Experiment A: FC-UM

  22. Experiment A: Open Loop

  23. Experiment B: Arrival/Internal Overload • Same as experiment A, plus the average execution times vary every 100 seconds. • First and second change: 57.5% increase for every task. • Last change: 75% decrease for every task. Underload situation. • Shortened settling time by manually setting B(0)=80% (estimated requested utilization).

  24. Experiment B: FC-UM

  25. Experiment B: Open Loop

  26. Final Metrics

  27. Conclusions • FCS algorithms can provide: • Stability under arrival overload and internal overload. • System miss ratio and utilization stay close to their corresponding performance references. • Satisfactory settling time and low overshoot in the transient state.
