
Real-Time Systems, COSC-4301-01, Lecture 2



  1. Real-Time Systems, COSC-4301-01, Lecture 2 Stefan Andrei COSC-4301-01, Lecture 2

  2. Reminder of last lecture Introduction to Real-Time Systems: Chapter 1 of [Cheng; 2002]; Chapter 1 of [Kopetz; 1997]; Chapter 1 of [Stankovic, Spuri, Ramamritham, Buttazzo; 1998] COSC-4301-01, Lecture 2

  3. Overview of this lecture • Real-Time Scheduling and Schedulability Analysis • Schedulability test • Schedulable utilization • Optimal scheduler • Determining computation time • Uniprocessor scheduling • Scheduling preemptable and independent tasks • Fixed-priority schedulers: RM, DM • Dynamic-priority schedulers: EDF, LL COSC-4301-01, Lecture 2

  4. Scheduling • General definition (uniprocessor, multiprocessor, distributed system): • Given a set of (computer) tasks (a.k.a., processes), scheduling determines when to execute which task, i.e., the execution order of these tasks. • Example: T1 = ‘take the children to school’, T2 = ‘attend classes’, T3 = ‘do the shopping’, all done by the same person (uniprocessor). • For multiprocessors and distributed systems, scheduling also determines an assignment of these tasks to specific processors. • Example: T1 and T3 are done by processor P1 (e.g., one spouse), and T2 is done by processor P2 (multiprocessor). COSC-4301-01, Lecture 2

  5. Scheduling (cont) • Is a central activity of a computer system, usually performed by the operating system. • Is also necessary in many non-computer systems (e.g., assembly lines). • In non-real-time systems, the typical goal of scheduling is to maximize average throughput (number of tasks completed per unit time) and/or to minimize average waiting time of the tasks. • In the case of real-time scheduling, the goal is to meet the deadline of every task by ensuring each task can complete execution by its specified deadline. • The deadline is obtained from the environmental constraints imposed by the application. COSC-4301-01, Lecture 2

  6. Scheduling analysis • Is to determine whether a specific set of tasks satisfying certain constraints can be successfully scheduled (completing execution of every task by its specified deadline) using a specific scheduler. • Schedulability test: validates whether a given application can satisfy its specified deadlines when scheduled according to a specific scheduling algorithm. COSC-4301-01, Lecture 2

  7. Schedulability test • Is often done at compile time, before the computer system and its tasks start their execution. • If the test can be performed efficiently, then it can be done at run-time as an on-line test. • Types of schedulers: • Compile-time (a.k.a., static) schedulers • Run-time (a.k.a., on-line or dynamic) schedulers. • Schedulable utilization: is the maximum utilization allowed for a set of tasks that will guarantee a feasible scheduling of this task set. COSC-4301-01, Lecture 2

  8. Hard and soft real-time systems • Hard real-time system: • Requires that every task or task instance completes its execution by its specified deadline; • Failure to do so even for a single task or task instance may lead to catastrophic consequences. • Soft real-time system: • Allows some tasks or task instances to miss their deadlines, but a task or task instance that misses its deadline may be less useful or valuable to the system. COSC-4301-01, Lecture 2

  9. Optimal scheduler • Is one which may fail to meet the deadline of a task only if no other scheduler can meet its deadline. • Examples include uniprocessor earliest-deadline-first (EDF) and least-laxity-first (LLF) schedulers for independent tasks with no synchronization constraints and no resource requirements. COSC-4301-01, Lecture 2

  10. Real-time scheduling. Task’s characterization • Characterization of a task: • c: computation time (a.k.a., WCET – Worst Case Execution Time) • S: start time (a.k.a., release or ready time) • d: deadline (i.e., relative to the start time) • p: period (a.k.a., minimum separation) • Hence, a task T can be denoted as (S, c, d, p) • Some characterizations also include: • D: absolute deadline (a.k.a., wall clock time deadline). Usually D = S + d. • Thus, a task T can be denoted as (S, c, d, D, p) COSC-4301-01, Lecture 2
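As a concrete illustration (not part of the original slides), the (S, c, d, p) characterization might be represented in Python as a small data class; the field names mirror the lecture's notation and the example task is made up.

```python
from dataclasses import dataclass

@dataclass
class Task:
    """A periodic task characterized as (S, c, d, p), following the lecture's notation."""
    S: float  # start (release / ready) time
    c: float  # computation time (WCET)
    d: float  # relative deadline
    p: float  # period (or minimum separation for a sporadic task)

    @property
    def D(self) -> float:
        """Absolute (wall clock) deadline of the first instance: D = S + d."""
        return self.S + self.d

# Hypothetical task: released at time 0, needs 2 time units, deadline and period of 5.
t1 = Task(S=0, c=2, d=5, p=5)
print(t1.D)  # 5
```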

  11. Types of tasks • Single-instance task: executes only once. • Periodic task: has many instances or iterations, and there is a fixed period between two consecutive releases of the same task. • Example: a periodic task may perform signal processing of a radar scan once every 2 seconds, so the period of this task is 2 seconds. COSC-4301-01, Lecture 2

  12. Types of tasks (cont) • Sporadic task: has zero or more instances, and there is a minimum separation between two consecutive releases of the same task. • Example: a sporadic task may perform emergency maneuvers of an airplane when the emergency button is pressed, but there is a minimum separation of 20 seconds between two emergency requests. • An aperiodic task is a sporadic task with either a soft deadline or no deadline. COSC-4301-01, Lecture 2

  13. Determining S, d, p, and c • The application and the environment in which the application is embedded are main factors determining S, d, and p. • c, the computation time of a task, is dependent on its source code, object code, execution architecture, memory management policies, and actual number of page faults and I/O. • c is not just an upper bound on the execution time of task code without interruption, but it has to include the time of the central processing unit (CPU) for handling page faults, I/O requests, etc. COSC-4301-01, Lecture 2

  14. Determining c • Determining c accurately is crucial to successfully scheduling a task in a real-time system. • An overly pessimistic estimate of c results in wasted CPU time, whereas an under-approximation results in missed deadlines. • First solution: test the task and use the largest value of c observed during these tests. • Disadvantage: the largest value seen during testing may not be the largest that occurs in the working system. COSC-4301-01, Lecture 2

  15. Determining c • Second solution: analyzing the source code (an advantage is that these methods are safe). • Disadvantage: using an overly simplified model of the CPU may result in over-approximating c. • Third solution: running the tasks in systems with several levels of memory components (e.g., cache, main memory). • Disadvantage: there are restrictions in their models and thus the proposed analysis techniques cannot be applied in systems not satisfying their constraints. COSC-4301-01, Lecture 2

  16. Determining c • Fourth solution: use a probability approach to model WCET of a process. • Main idea: model the distribution of c and use it to compute a confidence level for any given c. • Advantage: in soft real-time systems, if the designer wants a confidence of 99% on the estimation for WCET, he/she can determine which WCET to use from the probability approach. • Disadvantage: is not recommended in hard real-time systems as WCET is just an approximation. COSC-4301-01, Lecture 2
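A minimal sketch of the probability-based approach (illustrative only, not from the slides): collect measured execution times and take a high percentile as the WCET estimate, here 99% as in the slide's example; the sample values are invented.

```python
import statistics

# Hypothetical execution times (in ms) measured while testing the task.
samples = [4.1, 4.3, 4.2, 4.8, 4.5, 4.4, 5.1, 4.6, 4.7, 4.9]

# statistics.quantiles(..., n=100) returns the 1st..99th percentile cut points;
# the last one serves as a 99%-confidence WCET estimate.
wcet_99 = statistics.quantiles(samples, n=100)[-1]
print(f"WCET estimate at 99% confidence: {wcet_99:.2f} ms")
```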

  17. Scheduling • Uniprocessor scheduling • Multiprocessor scheduling COSC-4301-01, Lecture 2

  18. Uniprocessor scheduling. Basic Assumptions • Compile-time/static (i.e., priorities of tasks do not change during run-time); • Preemptable tasks (i.e., tasks can be interrupted and resumed later); • No precedence constraints (i.e., all tasks are independent); • Negligible context-switching time; • Periodic tasks. COSC-4301-01, Lecture 2

  19. Rate-Monotonic (RM) scheduling strategy • Is a fixed and static priority scheduling strategy (Rate-Monotonic Scheduler – RMS). • A task’s priority is determined by its fixed period: the shorter the period, the higher the priority. • At any time instant, RMS executes the instance of the ready task with the shortest period. • More formally, task Ji has a higher priority than task Jk if and only if pi < pk. COSC-4301-01, Lecture 2

  20. RM-scheduling strategy. Example • J1: S1 = 0, c1 = 2, p1 = d1 = 5 • J2: S2 = 1, c2 = 1, p2 = d2 = 4 • J3: S3 = 2, c3 = 2, p3 = d3 = 20 • Figure 3.1 from [Cheng; 2005], page 45 COSC-4301-01, Lecture 2
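To make the priority rule concrete, here is a minimal sketch (Python, not from the lecture) of a discrete-time preemptive RM simulation of the task set above; it only shows which task occupies each unit slot and does not check for deadline misses.

```python
# Each task: (name, S, c, p); RM priority = shorter period; d = p is assumed.
tasks = [("J1", 0, 2, 5), ("J2", 1, 1, 4), ("J3", 2, 2, 20)]

def rm_schedule(tasks, horizon):
    """Simulate preemptive rate-monotonic scheduling in unit time slots."""
    remaining = [0] * len(tasks)  # work left on the current instance of each task
    timeline = []
    for t in range(horizon):
        # Release a new instance at S, S + p, S + 2p, ...
        # (a new release overwrites unfinished work -- acceptable for this short example).
        for i, (_, S, c, p) in enumerate(tasks):
            if t >= S and (t - S) % p == 0:
                remaining[i] = c
        # Run the ready task with the shortest period (highest RM priority).
        ready = [i for i in range(len(tasks)) if remaining[i] > 0]
        if ready:
            run = min(ready, key=lambda i: tasks[i][3])
            remaining[run] -= 1
            timeline.append((t, tasks[run][0]))
        else:
            timeline.append((t, "idle"))
    return timeline

print(rm_schedule(tasks, 10))
# J1 runs at t=0, is preempted by J2 at t=1, resumes at t=2, then J3 runs at t=3..4, ...
```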

  21. RM-scheduling strategy. Considerations • U = c1/p1 + … + cn/pn • If a task set is RM-schedulable, then U ≤ 1. • So, if U > 1, then the task set is not RM-schedulable. • In general, the RM-scheduling strategy is not optimal because there exist schedulable task sets that are not RM-schedulable. • There is a special class of periodic task sets for which RM scheduling is optimal: those whose periods pi are exact multiples of each other. COSC-4301-01, Lecture 2

  22. RM-schedulability test 1 • Given a set of n independent, preemptable, and periodic tasks on a uniprocessor such that di ≥ pi and the periods pi are exact multiples of each other, let U be the total utilization of this task set (U = c1/p1 + … + cn/pn). • The task set is RM-schedulable if and only if U ≤ 1. COSC-4301-01, Lecture 2
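A minimal sketch of Test 1 (not from the slides), assuming each task is given as a (c, p) pair with di ≥ pi; the harmonic-period precondition is checked explicitly.

```python
def rm_test1(tasks):
    """RM-Schedulability Test 1: for periods that are exact multiples of each
    other, the task set is RM-schedulable if and only if U <= 1.
    tasks: list of (c, p) pairs; relative deadlines are assumed >= periods."""
    periods = sorted(p for _, p in tasks)
    harmonic = all(periods[i + 1] % periods[i] == 0 for i in range(len(periods) - 1))
    if not harmonic:
        raise ValueError("Test 1 does not apply: periods are not exact multiples of each other")
    return sum(c / p for c, p in tasks) <= 1

# The example on the next slide: c/p = 1/4, 1/2, 2/8, so U = 1 and the set is RM-schedulable.
print(rm_test1([(1, 4), (1, 2), (2, 8)]))  # True
```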

  23. RM-schedulability test 1. Example • J1: S1 = 0, c1 = 1, d1 = p1 = 4 • J2: S2 = 0, c2 = 1, d2 = p2 = 2 • J3: S3 = 0, c3 = 2, d3 = p3 = 8 • The pi are exact multiples of each other (p2 < p1 < p3, p1 = 2p2, p3 = 4p2). • U = c1/p1 + c2/p2 + c3/p3 = 1/4 + 1/2 + 2/8 = 1. • By applying RM-Schedulability Test 1, this task set is RM-schedulable. COSC-4301-01, Lecture 2

  24. RM-schedulability test 2 • Liu and Layland’s sufficient condition (1973): • Given a set of n independent, preemptable, and periodic tasks on a uniprocessor, let U be the total utilization of this task set. The task set is schedulable if U ≤ n(2^(1/n) – 1). • Remarks: • For n = 2, n(2^(1/n) – 1) equals 0.8284… • For n = 3, n(2^(1/n) – 1) equals 0.7797… • lim (n → ∞) n(2^(1/n) – 1) = ln 2 ≈ 0.6931… COSC-4301-01, Lecture 2
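A small sketch of Test 2 (illustrative, not from the slides), again assuming tasks as (c, p) pairs:

```python
def rm_test2(tasks):
    """Liu & Layland sufficient condition: the set is RM-schedulable
    if U <= n * (2**(1/n) - 1). Returns (U, bound, test_passes)."""
    n = len(tasks)
    U = sum(c / p for c, p in tasks)
    bound = n * (2 ** (1 / n) - 1)
    return U, bound, U <= bound

# The example on the next slide: U = 0.75 <= 0.7797..., so the test succeeds.
print(rm_test2([(1, 4), (1, 5), (3, 10)]))
```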

  25. RM-schedulability test 2. Example • J1: S1 = 0, c1 = 1, p1 = 4 • J2: S2 = 0, c2 = 1, p2 = 5 • J3: S3 = 0, c3 = 3, p3 = 10 • U = c1/p1 + c2/p2 + c3/p3 = 1/4 + 1/5 + 3/10 = 0.75 • Since U = 0.75 ≤ 3(2^(1/3) – 1) ≈ 0.7797, by RM-Schedulability Test 2 this task set is RM-schedulable. COSC-4301-01, Lecture 2

  26. RM-schedulability test 2. Example • J1: S1 = 0, c1 = 1, p1 = 4 • J2: S2 = 0, c2 = 1, p2 = 3 • J3: S3 = 0, c3 = 2, p3 = 6 • U = c1/p1 + c2/p2 + c3/p3 = 1/4 + 1/3 + 2/6 = 0.91666… • RM-Schedulability Test 1 cannot be applied since the periods are not exact multiples of each other. • RM-Schedulability Test 2 cannot be applied since U > 3(2^(1/3) – 1) ≈ 0.7797. • However, this task set is RM-schedulable… COSC-4301-01, Lecture 2

  27. RM-schedulability test 2. Example (cont) • J1: S1 = 0, c1 = 1, p1 = 4 • J2: S2 = 0, c2 = 1, p2 = 3 • J3: S3 = 0, c3 = 2, p3 = 6 • [Timing diagram omitted: RM schedule of J1, J2, J3 over t = 0 to 12; all deadlines are met.] COSC-4301-01, Lecture 2

  28. RM-schedulability test 3 • Hence, it makes sense to look for stronger analytical conditions that ensure schedulability without running the task set. • The necessary and sufficient condition requires checking inequalities that depend on a time t. • Let J1, …, Ji be sorted in increasing order of their periods. • Let wi(t) = Σk=1..i ck · ⌈t / pk⌉, for 0 < t ≤ pi • Task Ji is RM-schedulable if and only if wi(t) ≤ t for some time instant t of the form t = k·pj, j = 1, …, i, k = 1, …, ⌊pi/pj⌋ • If di ≠ pi, we replace pi by min(di, pi) above. COSC-4301-01, Lecture 2
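A compact sketch of Test 3 in Python (not from the lecture), assuming d = p and tasks given as (c, p) pairs; it checks wi(t) ≤ t at the scheduling points t = k·pj for each task in period order.

```python
from math import ceil, floor

def rm_test3(tasks):
    """Exact (necessary and sufficient) RM test under the lecture's assumptions.
    tasks: list of (c, p) pairs with d = p; returns True iff every task is RM-schedulable."""
    tasks = sorted(tasks, key=lambda cp: cp[1])  # increasing period = decreasing priority
    for i in range(len(tasks)):
        # Scheduling points: t = k * p_j for j = 0..i, k = 1..floor(p_i / p_j).
        points = sorted({k * tasks[j][1]
                         for j in range(i + 1)
                         for k in range(1, floor(tasks[i][1] / tasks[j][1]) + 1)})
        # w_i(t) = sum of c_k * ceil(t / p_k) over the i+1 highest-priority tasks.
        if not any(sum(c * ceil(t / p) for c, p in tasks[:i + 1]) <= t for t in points):
            return False  # this task misses its deadline under RM
    return True
```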

  29. RM-schedulability test 3. First Example • We sort the jobs according to their period: • J1: S1 = 0, c1 = 1, p1 = 3 • J2: S2 = 0, c2 = 1, p2 = 3 • J3: S3 = 0, c3 = 1, p3 = 3 • U = 1/3 + 1/3 + 1/3 = 1, so Test 2 cannot be applied. • We need to check the following inequalities: • w1(3) ≤ 3, • w2(3) ≤ 3, • w3(3) ≤ 3. COSC-4301-01, Lecture 2

  30. RM-schedulability test 3. First Example • w1(3) = 1 • w2(3) = 2 • w3(3) = 3 • Since for every i there is a t such that wi(t) ≤ t, this task set is RM-schedulable according to Schedulability Test 3. COSC-4301-01, Lecture 2

  31. RM-schedulability. Second Example • Let us consider the following preemptable task set T = {J1, J2, J3}: J1: S1 = 0, c1 = 1, p1 = d1 = 4 J2: S2 = 1, c2 = 2, p2 = d2 = 6 J3: S3 = 3, c3 = 4, p3 = d3 = 10 • Compute the utilization rate. • Analyze whether T is feasible using the RM-scheduling method. In the affirmative case, provide a schedule. COSC-4301-01, Lecture 2

  32. RM-schedulability. Second Example • The utilization rate: • U = c1/p1 + c2/p2 + c3/p3 = 1/4 + 2/6 + 4/10 = 59/60 ≈ 0.98 • Schedulability Test 1 • For this test, U ≤ 1 holds. • But the task periods are not exact multiples of each other, so Schedulability Test 1 cannot be applied. COSC-4301-01, Lecture 2

  33. RM-schedulability. Second Example • Schedulability test 2 • For this test, we check whether the utilization rate U is less than or equal to n(2^(1/n) – 1). • Here n = 3, so n(2^(1/n) – 1) = 3(2^(1/3) – 1) = 0.7797… • Since U > n(2^(1/n) – 1), Schedulability Test 2 cannot be applied (as 0.98… > 0.7797…). COSC-4301-01, Lecture 2

  34. RM-schedulability. Second Example • For Schedulability test 3, we start with the job J1, which has the smallest period. • For J1: • i = 1, j = 1, k = 1 • Thus t = k * pj = 1 * 4 = 4 • w1(4) = c1 · ⌈4 / p1⌉ = 1 • Now, J1 is RM-schedulable if and only if w1(4) ≤ 4, i.e., 1 ≤ 4, which holds. • Therefore, J1 is RM-schedulable. COSC-4301-01, Lecture 2

  35. RM-schedulability. Second Example • For J2: • i = 2; j ∈ {1, 2}, k = 1. Thus t = k * pj where k = 1. • For j = 1: t = 1 * p1 = 4. • For j = 2: t = 1 * p2 = 6. • Hence, we get the values t ∈ {4, 6}. • Substituting the respective values of t and j, we get the following conditions (at least one must hold): • c1 + c2 ≤ 4, that is, 3 ≤ 4, or • 2c1 + c2 ≤ 6, that is, 4 ≤ 6. • (In fact, both of these conditions hold.) • So, J2 is RM-schedulable together with J1. COSC-4301-01, Lecture 2

  36. RM-schedulability. Second Example • For J3: • i = 3; j ∈ {1, 2, 3}, k ∈ {1, 2}, and t = k * pj. • For k = 1, j ∈ {1, 2, 3}, we have t ∈ {4, 6, 10}. • For k = 2, j ∈ {1, 2, 3}, we have t ∈ {8, 12, 20}. • Hence t ∈ {4, 6, 8, 10, 12, 20}. (Strictly, only the scheduling points t ≤ p3 = 10 need to be checked; the extra points at 12 and 20 do not change the conclusion.) • Substituting the values of t and j, we get: • c1 + c2 + c3 ≤ 4, that is, 7 ≤ 4, or • 2c1 + c2 + c3 ≤ 6, that is, 8 ≤ 6, or • 2c1 + 2c2 + c3 ≤ 8, that is, 10 ≤ 8, or • 3c1 + 2c2 + c3 ≤ 10, that is, 11 ≤ 10, or • 3c1 + 2c2 + 2c3 ≤ 12, that is, 15 ≤ 12, or • 5c1 + 4c2 + 2c3 ≤ 20, that is, 21 ≤ 20. • Since none of these conditions holds, J3 is not RM-schedulable together with J1 and J2. COSC-4301-01, Lecture 2
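Running the Test 3 sketch shown earlier on this task set (again as hypothetical (c, p) pairs) reproduces the conclusion:

```python
print(rm_test3([(1, 4), (2, 6), (4, 10)]))  # False: J3 cannot be scheduled with J1 and J2
print(rm_test3([(1, 4), (2, 6)]))           # True:  J1 and J2 alone are RM-schedulable
```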

  37. The RM scheduling attempt • [Timing diagram omitted: attempted RM schedule of J1, J2, J3 over t = 0 to 53.] • It can be seen that J3 misses its deadline at t = 53. COSC-4301-01, Lecture 2

  38. Dynamic-priority schedulers • Earliest Deadline First (EDF) • Least Laxity First (LLF) • Reminder: An optimal scheduler is one which may fail to meet the deadline of a task only if no other scheduler can meet its deadline. • Both of the above strategies are optimal. COSC-4301-01, Lecture 2

  39. Earliest Deadline First (EDF or ED) • Executes at every instant the ready task with the earliest absolute deadline first (i.e., D). • Reminder: D = S + d. • If two or more tasks have the same deadline, EDF randomly selects one to execute next. • EDF is a dynamic-priority scheduler since task priorities may change at run-time depending on the nearness of their absolute deadlines. COSC-4301-01, Lecture 2

  40. EDF terminology • [Krishna & Shin, 1997] call EDF a deadline-monotonic (DM) scheduling algorithm; • [Liu, 2000] defines the DM algorithm as a fixed-priority scheduler that assigns higher priorities to tasks with shorter relative deadlines. • Following [Cheng, 2002], we use EDF or DM to refer to the dynamic-priority scheduling algorithm. COSC-4301-01, Lecture 2

  41. Example. FIFO scheduler fails • Four single-instance tasks (for simplicity): • J1: S1 = 0, c1 = 4, D1 = 15 • J2: S2 = 0, c2 = 3, D2 = 12 • J3: S3 = 2, c3 = 5, D3 = 9 • J4: S4 = 5, c4 = 2, D4 = 8 • A FIFO (First-In-First-Out) scheduler (often used in non-real-time operating systems) executes the tasks in the order of their arrivals and their deadlines are not considered. COSC-4301-01, Lecture 2

  42. Example. FIFO scheduler fails • The FIFO (First-In-First-Out) scheduler gives an infeasible schedule (J3 misses its deadline – D3 = 9; also, J4 misses its deadline – D4 = 8). • [Timing diagram omitted: FIFO schedule over t = 0 to 15 with the tasks executed in order of arrival.] COSC-4301-01, Lecture 2

  43. Example. EDF scheduler works • At time 0, since D1 > D2, J2 has a higher priority than J1; • At time 2, since D3 < D2, J2 is preempted and J3 begins execution; • At time 5, since D4 < D3, J3 is preempted and J4 begins execution, and so forth. • [Timing diagram omitted: EDF schedule over t = 0 to 15; all deadlines are met.] COSC-4301-01, Lecture 2
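A minimal EDF sketch (Python, not from the slides) that reproduces this behavior for the four single-instance tasks of the example; the tuple layout is an assumption made for illustration.

```python
# Single-instance tasks: (name, S, c, D) with absolute deadline D.
tasks = [("J1", 0, 4, 15), ("J2", 0, 3, 12), ("J3", 2, 5, 9), ("J4", 5, 2, 8)]

def edf_schedule(tasks, horizon):
    """Discrete-time preemptive EDF: in each unit slot, run the released,
    unfinished task with the earliest absolute deadline."""
    remaining = {name: c for name, _, c, _ in tasks}
    timeline = []
    for t in range(horizon):
        ready = [(D, name) for name, S, _, D in tasks if S <= t and remaining[name] > 0]
        if ready:
            _, name = min(ready)  # earliest absolute deadline first
            remaining[name] -= 1
            timeline.append((t, name))
    return timeline

print(edf_schedule(tasks, 15))
# J2 runs first, J3 preempts it at t=2, J4 preempts J3 at t=5, and all deadlines are met.
```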

  44. EDF algorithm is optimal • Theorem [Dertouzos; 1974]. Given a set S of independent (no resource contention or precedence constraints) and preemptable tasks with arbitrary start times and deadlines on a uniprocessor, the EDF algorithm yields a feasible schedule for S if and only if S has feasible schedules. • [Dertouzos; 1974] M.L. Dertouzos, Control Robotics: the Procedural Control of Physical Processes, Information Processing 74, North-Holland Publishing Company, 1974 COSC-4301-01, Lecture 2

  45. Least-Laxity-First (LL or LLF) • Like EDF, the LL strategy is another run-time optimal scheduler. • It is also known as Minimum-Laxity-First (MLF) or Least-Slack-Time-First (LST) algorithm. • Let c(i) denote the remaining computation time of a task at time i. • Let d(i) denote the deadline of a task relative to the current time i. • The laxity (or slack) of a task at time i is d(i)-c(i). • The laxity is the maximum time a task can delay execution without missing its deadline in the future. • The LL scheduler executes at every instant the ready task with the smallest laxity. COSC-4301-01, Lecture 2
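A tiny sketch of the LL selection rule (illustrative; the helper name and tuple layout are not from the lecture):

```python
def ll_pick(ready, now):
    """Least-laxity-first choice: ready is a list of
    (name, remaining_c, absolute_deadline) tuples; laxity = (D - now) - remaining_c."""
    return min(ready, key=lambda task: (task[2] - now) - task[1])

# The example on the next slide at i = 0: laxities are 4, 4, 9, so J1 (or J2) is chosen.
print(ll_pick([("J1", 2, 6), ("J2", 3, 7), ("J3", 2, 11)], now=0))
# After J1 runs for one unit, at i = 1 the laxities are 4, 3, 8, so J2 is chosen.
print(ll_pick([("J1", 1, 6), ("J2", 3, 7), ("J3", 2, 11)], now=1))
```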

  46. Example: LL scheduling • J1: S1 = 0, c1 = 2, d1 = 6 = p1 • J2: S2 = 0, c2 = 3, d2 = 7 = p2 • J3: S3 = 0, c3 = 2, d3 = 11 = p3 • At i=0: • c1(0)=2, d1(0)=6-i=6, l1(0)=6-2=4 • c2(0)=3, d2(0)=7-i=7, l2(0)=7-3=4 • c3(0)=2, d3(0)=11-i=11, l3(0)=11-2=9 • The LL scheduler chooses a task with minimum laxity (J1 and J2 tie; say it chooses J1). • At i=1: • c1(1)=1, d1(1)=6-i=5, l1(1)=5-1=4 • c2(1)=3, d2(1)=7-i=6, l2(1)=6-3=3 • c3(1)=2, d3(1)=11-i=10, l3(1)=10-2=8 • The LL scheduler chooses J2 since l2(1) is less than l1(1) and l3(1). COSC-4301-01, Lecture 2

  47. LL is optimal • [Mok; 1983] The Least Laxity First Algorithm is optimal for preemptable and independent tasks. • However, the LL algorithm has the disadvantage of a potentially very large number of preemptions, and it is no longer optimal if preemption is not allowed [George et al.; 1996]. • [Mok; 1983] A.K. Mok, Fundamental Design Problems of Distributed Systems for the Hard Real-Time Environment, Ph.D. Dissertation, MIT, 1983. • [George et al.; 1996] L. George, N. Rivierre, M. Spuri, Preemptive and Non-Preemptive Real-Time Uni-Processor Scheduling, Rapport de Recherche RR-2966, INRIA, France, 1996. COSC-4301-01, Lecture 2

  48. Schedulability Test 4 • Let ci denote the computation time of task Ji. • For a set of n periodic preemptable tasks such that the relative deadline di of each task is equal to or greater than its respective period pi (di ≥ pi), a necessary and sufficient condition for feasible EDF and LL scheduling of this task set on a uniprocessor is: U = c1/p1 + … + cn/pn ≤ 1. • Remark: for a task set containing some tasks whose relative deadlines di are less than their periods, no easy schedulability test exists with a necessary and sufficient condition. COSC-4301-01, Lecture 2

  49. Schedulability test 5 • A sufficient condition for feasible EDF and LL scheduling of a set of independent, preemptable, and periodic tasks on a uniprocessor is c1/min(d1,p1)+ … + cn/min(dn,pn) ≤ 1. • Remarks: • The term ci/min(di,pi) is the density of task Ji. • If the deadline and the period of each task are equal (di = pi), then Schedulability Test 5 is the same as Schedulability Test 4. COSC-4301-01, Lecture 2
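A one-line sketch of Tests 4 and 5 (illustrative, assuming tasks as (c, d, p) triples):

```python
def edf_density_test(tasks):
    """Schedulability Test 5 (sufficient): sum of densities c / min(d, p) <= 1.
    When d >= p for every task, min(d, p) = p and this is exactly Test 4's
    condition U <= 1, which is then both necessary and sufficient."""
    return sum(c / min(d, p) for c, d, p in tasks) <= 1

# Example 1 on the next slide (d = p for all tasks): U = 218/231 < 1, so feasible.
print(edf_density_test([(2, 6, 6), (3, 7, 7), (2, 11, 11)]))  # True
```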

  50. Example 1 • J1: S1 = 0, c1 = 2, d1 = 6 = p1 • J2: S2 = 0, c2 = 3, d2 = 7 = p2 • J3: S3 = 0, c3 = 2, d3 = 11 = p3 • U = 2/6 + 3/7 + 2/11 = 218/231 < 1 and di = pi for every i. • Hence, this task set is feasible (according to Schedulability Test 4). COSC-4301-01, Lecture 2
