
Resource augmentation and on-line scheduling on multiprocessors





  1. Resource augmentation and on-line scheduling on multiprocessors Phillips, Stein, Torng, and Wein. Optimal time-critical scheduling via resource augmentation. STOC (1997). Algorithmica (to appear).

  2. Background: on-line algorithms • Optimization problems: given a problem instance I, algorithm A obtains a value val_A(I) -- the goal is to maximize this value • On-line algorithms vs. an optimal off-line/clairvoyant algorithm (OPT) • Competitive ratio of on-line algorithm A: min over all I of ( val_A(I) / val_OPT(I) ) • Goal: design an on-line algorithm with the largest competitive ratio
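To make the ratio definition concrete, here is a minimal sketch in Python. The toy problem, the instances, and the "drop the largest job" on-line rule are all invented for illustration (none of this is from the paper); it simply evaluates val_A(I)/val_OPT(I) on a few instances and takes the minimum.

```python
# Hypothetical illustration of the competitive-ratio definition:
# ratio(A) = min over all instances I of val_A(I) / val_OPT(I).
# The instances and value functions below are made up for illustration only.

def competitive_ratio(instances, val_online, val_opt):
    """Empirical competitive ratio of an on-line algorithm over a finite set of instances."""
    return min(val_online(I) / val_opt(I) for I in instances)

# Toy maximization problem: each instance is a list of job "values".
instances = [
    [3, 1, 2],
    [5, 5],
    [1, 1, 1, 1],
]

val_opt = sum                            # the clairvoyant algorithm collects everything
val_online = lambda I: sum(I) - max(I)   # assumed on-line algorithm that loses the largest job

print(competitive_ratio(instances, val_online, val_opt))  # 0.5, achieved on the worst instance
```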

  3. Background: hard-real-time scheduling • The on-line problem: • Instance I = {J1, J2, ..., Jn} of jobs • Each job Jj = (rj, pj, dj) • arrives at instant rj • needs to execute for pj units... • by a deadline at instant dj • Job Jj is revealed at instant rj • Difficult to formulate as an optimization problem -- all deadlines must be met! • In uniprocessor systems, we dodged this issue • EDF/LL are optimal algorithms (always meet all deadlines) • EDF/LL are on-line algorithms... • ... with competitive ratio one
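As a concrete illustration of EDF meeting every deadline on a feasible uniprocessor instance, here is a toy discrete-time simulator; the three-job instance is assumed for illustration, and this is my own sketch rather than code from the paper.

```python
# Toy discrete-time EDF simulator for one processor (illustration only).
# Each job is (r, p, d): release time, execution requirement, deadline.

def edf_uniprocessor(jobs):
    remaining = [p for (_, p, _) in jobs]
    horizon = max(d for (_, _, d) in jobs)
    t = 0
    while t < horizon:
        # Among released, unfinished jobs, run the one with the earliest deadline for one unit.
        ready = [i for i, (r, _, _) in enumerate(jobs) if r <= t and remaining[i] > 0]
        if ready:
            i = min(ready, key=lambda i: jobs[i][2])
            remaining[i] -= 1
        t += 1
        # Deadline check: any job whose deadline has passed must be finished.
        for i, (_, _, d) in enumerate(jobs):
            if d <= t and remaining[i] > 0:
                return False
    return True

# A feasible instance: J1 = (0, 2, 4), J2 = (0, 2, 2), J3 = (3, 1, 5).
print(edf_uniprocessor([(0, 2, 4), (0, 2, 2), (3, 1, 5)]))  # True
```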

  4. Hard-real-time scheduling: multiprocessors • No optimal (in the EDF/LL sense) on-line algorithm exists • Must still meet all deadlines... So, give the on-line algorithm extra resources (more/faster processors) • This paper asks: how many extra resources do EDF/LL need in order to meet all deadlines for sets of jobs known to be feasible on m processors? • The answers: • EDF/LL meet all deadlines if processors are (2 - 1/m) times as fast • No on-line algorithm can meet all deadlines if processors are < 1.2 times as fast • EDF cannot always meet all deadlines if processors are (2 - 1/m - ε) times as fast, for any ε > 0
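To get a feel for how the (2 - 1/m) bound behaves, a quick numeric check (simple arithmetic on the formula above, not an additional result from the paper):

```python
# Speed-up needed by EDF/LL under the (2 - 1/m) bound, for a few processor counts.
for m in (1, 2, 4, 8, 16):
    print(m, 2 - 1/m)   # 1.0, 1.5, 1.75, 1.875, 1.9375
```

Note that m = 1 recovers the uniprocessor case (no extra speed needed), and the bound approaches 2 as m grows.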

  5. Why we care • Our (RTS) task systems: • usually pre-specified (e.g., periodic tasks/ sporadic tasks) • “on-line”ness usually not an issue • exception: overload scheduling (later) • We’ll do feasibility analysis (does a schedule exist?) • If feasible, we’ll use the results in this paper • choose an algorithm (usually, EDF) • overallocate resources as mandated by these results • sleep well, knowing that the system performs as expected • Why choose feasibility analysis (versus schedulability analysis with chosen algorithm)? • provably competitive performance translates to approximation guarantees

  6. Model and definitions Instance I = {J1, J2, ..., Jn} of jobs Each job Jj = (rj, pj, dj) • arrives at instant rj • needs to execute for pj units... • by a deadline at instant dj If I is feasible on m processors, an s-speed on-line algorithm will meet all deadlines on m processors each s times as fast (Thus, EDF is a (2 - 1/m)-speed algorithm)

  7. Digression: An example of how we’d use these results

  8. Scheduling periodic tasks - taxonomy Periodic task system τ = {τ1, τ2, ..., τn}; τi = (Ti, Ci) • Priority axis: task-level static (RM) / job-level static (EDF) / dynamic (LL, Pfair) • Migration axis: task-level fixed / job-level fixed / migratory • Approaches: bin-packing + LL (no advantage), bin-packing + EDF, Baker/Oh (RTS98), Andersson/Jonsson, Pfair scheduling

  9. Remember this? (last class) RM-US(1/4) • all tasks τi with Ci/Ti > 1/4 get the highest priority • for the remaining tasks, rate-monotonic priorities Lemma: Any task system satisfying [ (SUM j : τj ∈ τ : Cj/Tj) ≤ m/4 ] and [ (ALL j : τj ∈ τ : Cj/Tj) ≤ 1/4 ] is successfully scheduled using RM-US(1/4) Theorem: Any task system satisfying [ (SUM j : τj ∈ τ : Cj/Tj) ≤ m/4 ] is successfully scheduled using RM-US(1/4)
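A minimal sketch of the RM-US(1/4) priority assignment; the (C, T) tuple representation and the example task set are my own, purely illustrative.

```python
# RM-US(1/4) priority-assignment sketch (illustrative representation, not from the slides).
# A task is (C, T): worst-case execution time and period; earlier in the returned order = higher priority.

def rm_us_priorities(tasks, threshold=0.25):
    """Task-level static priority order: heavy tasks first, then rate-monotonic for the rest."""
    heavy = [i for i, (C, T) in enumerate(tasks) if C / T > threshold]
    light = [i for i, (C, T) in enumerate(tasks) if C / T <= threshold]
    light.sort(key=lambda i: tasks[i][1])   # rate-monotonic: shorter period first
    return heavy + light

# Example: task 1 is "heavy" (utilization 0.5) and outranks both light tasks.
tasks = [(1, 10), (5, 10), (2, 12)]
print(rm_us_priorities(tasks))              # [1, 0, 2]
```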

  10. A new (job-level static priority) scheduling algorithm EDF-US(1/2): • If Ci/Ti ≤ 0.5, then jobs of τi get EDF priority • If Ci/Ti > 0.5, then jobs of τi get highest priority • (EDF implementation: set deadline to -∞) Lemma: Any task system satisfying [ (SUM j : τj ∈ τ : Cj/Tj) ≤ m/2 ] and [ (ALL j : τj ∈ τ : Cj/Tj) ≤ 1/2 ] is successfully scheduled using EDF-US(1/2) Theorem: Any task system satisfying [ (SUM j : τj ∈ τ : Cj/Tj) ≤ m/2 ] is successfully scheduled using EDF-US(1/2)
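The corresponding per-job priority rule, again as an assumed sketch (the (C, T) tuple format mirrors the RM-US sketch above; the "deadline = -∞" encoding follows the slide):

```python
import math

# EDF-US(1/2) priority sketch (illustrative data structures, not from the paper).
# A task is (C, T); a smaller returned key means higher priority.

def edf_us_priority(task, absolute_deadline):
    """EDF priority key for a job of `task`."""
    C, T = task
    if C / T > 0.5:
        return -math.inf          # heavy tasks: highest priority ("deadline -infinity")
    return absolute_deadline      # light tasks: ordinary EDF priority

# Example: a job of the heavy task (3, 4) always beats a job of the light task (1, 10).
print(edf_us_priority((3, 4), 4) < edf_us_priority((1, 10), 10))  # True
```

Light tasks compete on their ordinary EDF deadlines; heavy tasks always win, which is exactly what the "set deadline to -∞" trick achieves in an EDF implementation.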

  11. Scheduling periodic tasks w/ migration • Priority axis: task-level static / job-level static / dynamic • Migration axis: task-level fixed / job-level fixed / migratory • Approaches: bin-packing + LL (no advantage), bin-packing + EDF, Baker/Oh (RTS98), Andersson/Jonsson, Pfair scheduling • Guaranteed utilization bounds: RM-US(1/4) -- 25%, EDF-US(1/2) -- 50%, Pfair -- 100%

  12. Back to the results in this paper...(faster processors)

  13. The big insight Definitions: • A(j,t) denotes the amount of execution of job j by algorithm A until time t • A(I,t) = [ SUM : j ∈ I : A(j,t) ] The crucial question: Let A be any "busy" (work-conserving) scheduling algorithm executing on m processors of speed s ≥ 1. What is the smallest s such that, at all times t, A(I,t) ≥ A'(I,t) for any other algorithm A' executing on m speed-1 processors? Lemma 2.6: s turns out to be (2 - 1/m) Use Lemma 2.6, and an individual algorithm's scheduling rules, to draw conclusions regarding these algorithms
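As a very partial empirical illustration of the inequality in Lemma 2.6, the sketch below compares the cumulative work processed by a busy schedule on m speed-(2 - 1/m) processors against one particular speed-1 schedule on an assumed instance. The lemma asserts the inequality against every speed-1 schedule, which a single simulation cannot show; this only makes the quantity A(I,t) concrete.

```python
# Compare cumulative processed work A(I, t) of two schedules on an assumed instance
# (illustration of the Lemma 2.6 quantity, not a proof).

def total_work_busy(jobs, m, speed, horizon):
    """Cumulative work processed after each unit step by a greedy/busy schedule."""
    remaining = [p for (_, p, _) in jobs]
    done, history = 0.0, []
    for t in range(horizon):
        ready = [i for i, (r, _, _) in enumerate(jobs) if r <= t and remaining[i] > 0]
        for i in ready[:m]:                    # busy: never idle a processor while work exists
            work = min(speed, remaining[i])
            remaining[i] -= work
            done += work
        history.append(done)
    return history

m, horizon = 2, 6
jobs = [(0, 3, 5), (0, 3, 5), (1, 2, 4)]       # assumed instance of (r, p, d) jobs
fast = total_work_busy(jobs, m, 2 - 1/m, horizon)
slow = total_work_busy(jobs, m, 1.0, horizon)  # stand-in for one speed-1 schedule A'
print(all(f >= s - 1e-9 for f, s in zip(fast, slow)))  # True for this instance
```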

  14. The oh-so-important Lemma 2.6 Lemma: Let I be an input instance and t ≥ 0 any time instant. For any busy algorithm A using (2 - 1/m)-speed machines, A(I,t) ≥ A'(I,t) for any algorithm A' using 1-speed machines. Proof: by contradiction. Suppose there are time instants at which this is not true. Let S = { i | there exists a t such that A(I,t) < A'(I,t) and A(i,t) < A'(i,t) }. Let j be the job with the earliest release time rj in S. Let t0 be the earliest time instant at which A(I,t0) < A'(I,t0) -- Eq (1) -- and A(j,t0) < A'(j,t0) -- Eq (2)

  15. EDF is a (2 - 1/m)-speed algorithm Instance I = {J1, J2, ..., Jn}; job Jj = (rj, pj, dj); I is feasible on m procs Wlog, assume that di ≤ di+1 for all i Let Ik = {J1, J2, ..., Jk} Proof: induction on k Base: EDF on m (2 - 1/m)-speed procs meets all deadlines for I1, ..., Im IH: EDF on m (2 - 1/m)-speed procs meets all deadlines for I1, ..., Ik We're considering Ik+1. • Let Qk+1 ⊆ Ik+1 denote the jobs in Ik+1 with deadlines at dk+1 • (Ik+1 \ Qk+1) is Iq for some q ≤ k • By IH, EDF on m (2 - 1/m)-speed procs meets all deadlines for Iq • By definition of EDF, EDF(Ik+1) is identical to EDF(Iq) on jobs of Iq -- thus, all deadlines in Iq are met in EDF(Ik+1) • By Lemma 2.6, EDF(Ik+1, dk+1) ≥ OPT(Ik+1, dk+1) • Since OPT meets all deadlines at dk+1, so must EDF on m (2 - 1/m)-speed procs
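As a small sanity check of the theorem (not of the proof), here is a toy global-EDF simulator on m speed-s processors; the four-job instance, which is feasible on m = 2 unit-speed processors, is assumed for illustration.

```python
# Toy global-EDF simulator on m speed-s processors (illustration only).

def global_edf_meets_deadlines(jobs, m, speed):
    remaining = [p for (_, p, _) in jobs]
    horizon = max(d for (_, _, d) in jobs)
    for t in range(horizon):
        ready = sorted(
            (i for i, (r, _, _) in enumerate(jobs) if r <= t and remaining[i] > 0),
            key=lambda i: jobs[i][2],          # earliest deadline first
        )
        for i in ready[:m]:                    # at most m jobs run in parallel
            remaining[i] = max(0.0, remaining[i] - speed)
        for i, (_, _, d) in enumerate(jobs):   # every passed deadline must be met
            if d <= t + 1 and remaining[i] > 1e-9:
                return False
    return True

# An instance feasible on m = 2 unit-speed processors (assumed for illustration).
jobs = [(0, 2, 2), (0, 2, 2), (0, 2, 4), (2, 2, 4)]
m = 2
print(global_edf_meets_deadlines(jobs, m, speed=2 - 1/m))  # True, as the theorem guarantees
print(global_edf_meets_deadlines(jobs, m, speed=1.0))      # may also be True here; the
                                                           # (2 - 1/m) bound is about worst cases
```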
