Introductory seminar on research cis5935 fall 2008
Download
1 / 50

Introductory Seminar on Research CIS5935 Fall 2008 - PowerPoint PPT Presentation


  • 109 Views
  • Uploaded on

Introductory Seminar on Research CIS5935 Fall 2008. Ted Baker. Outline. Introduction to myself My past research My current research areas Technical talk: on RT MP EDF Scheduling The problem The new results The basis for the analysis Why a better result might be possible.

loader
I am the owner, or an agent authorized to act on behalf of the owner, of the copyrighted work described.
capcha
Download Presentation

PowerPoint Slideshow about ' Introductory Seminar on Research CIS5935 Fall 2008' - irina


An Image/Link below is provided (as is) to download presentation

Download Policy: Content on the Website is provided to you AS IS for your information and personal use and may not be sold / licensed / shared on other websites without getting consent from its author.While downloading, if for some reason you are not able to download a presentation, the publisher may have deleted the file from their server.


- - - - - - - - - - - - - - - - - - - - - - - - - - E N D - - - - - - - - - - - - - - - - - - - - - - - - - -
Presentation Transcript
Introductory seminar on research cis5935 fall 2008

Introductory Seminar on ResearchCIS5935 Fall 2008

Ted Baker


Outline
Outline

  • Introduction to myself

    • My past research

    • My current research areas

  • Technical talk: on RT MP EDF Scheduling

    • The problem

    • The new results

    • The basis for the analysis

    • Why a better result might be possible


Past research
Past Research

  • Relative computability

    • Relativizations of the P=NP? question (1975-1979)

  • Algorithms

    • N-dim pattern matching (1978)

    • extended LR parsing (1981)

  • Compilers & PL implementation

    • Ada compiler and runtime systems (1979-1998)

  • Real-time runtime systems, multi-threading

    • FSU Pthreads & other RT OS projects (1985-1998)

  • Real-time scheduling & synch.

    • Stack Resource Protocol (1991)

    • Deadline Sporadic Server (1995)

  • RT Software standards

    • POSIX, Ada (1987-1999)


  • Recent current research
    Recent/Current Research

    • Multiprocessor real-time scheduling (1998-…)

      • how to guarantee deadlines for task systems scheduled on multiprocessors?

        with M. Cirinei & M. Bertogna (Pisa), N. Fisher & S. Baruah (UNC)

    • Real-time device drivers (2006-…)

      • how to support schedulability analysis with an operating system?

      • how to get predictable I/O response times?

        with A. Wang & Mark Stanovich (FSU)


    A real time scheduling problem
    A Real-Time Scheduling Problem

    Will a set of independent sporadic tasks miss any deadlines if scheduled using a global preemptive Earliest-Deadline-First (EDF) policy on a set of identical multiprocessors?


    Background terminology
    Background & Terminology

    • job = schedulable unit of computation, with

      • arrival time

      • worst-case execution time (WCET)

      • deadline

    • task = sequence of jobs

    • task system = set of tasks

    • independent tasks:

      can be scheduled without consideration of interactions, precedence, coordination, etc.


    Sporadic task i
    Sporadic Task i

    • Ti = minimum inter-arrival time

    • Ci = worst-case execution time

    • Di = relative deadline

    job completes

    deadline

    job released

    next release

    scheduling window


    Multiprocessor scheduling
    Multiprocessor Scheduling

    • m identical processors (vs. uniform/hetero.)

    • shared memory (vs. distributed)

    • preemptive (vs. non-preemptive)

    • on-line (vs. off-line)

    • EDF

      • earlier deadline  higher priority

    • global (vs. partitioned)

      • single queue

      • tasks can migrate between processors


    Questions
    Questions

    • Is a given system schedulable by global-EDF?

    • How good is global-EDF at finding a schedule?

      • How does it compare to optimal?


    Schedulability testing
    Schedulability Testing

    Global-EDF schedulability for sporadic task systems can be decided by brute-force state-space enumeration (in exponential time) [Baker, OPODIS 2007]

    but we don’t have any practical algorithm.

    We do have several practical sufficient conditions.


    Sufficient conditions for global edf
    Sufficient Conditions for Global EDF

    • Varying degrees of complexity and accuracy

    • Examples:

      • Goossens, Funk, Baruah: density test (2003)

      • Baker: analysis of -busy interval (2003)

      • Bertogna, Cirinei: iterative slack time estimation (2007)

  • Difficult to compare quality, except by experimentation

  • All tests are very conservative


  • Density test for global edf
    Density Test for Global EDF

    Sporadic task system  is schedulable

    on m unit-capacity processors if

    where


    A more precise load metric
    A more precise load metric

    maximum demand of jobs of i that arrive in and have deadlines within

    any interval of length t

    maximum fraction of processor demanded by jobs of i that arrive in

    and have deadlines within any time interval


    Rationale for dbf
    Rationale for DBF

    single processor analysis uses maximal busy interval,

    which has no “carried in” jobs.


    Load based test theorem 3
    Load-based test: Theorem 3

    Sporadic task system t is global-EDF schedulable on m unit-capacity processors if

    where


    Optimality
    Optimality

    • There is no optimal on-line global scheduling algorithm for sporadic tasks [Fisher, 2007]

      • global EDF is not optimal

      • so we can’t compare to an optimal on-line algorithm

      • but we can compare it to an optimal clairvoyant scheduler


    Speed up factors used in competitive analysis
    Speed-up Factors, used in Competitive Analysis

    A scheduling algorithm has a processor speedup factor f ≥ 1 if

    for any task system  that is feasible on a given multiprocessor platform

    the algorithm schedules  to meet all deadlines on a platform in which each processor is faster by a factor f.


    Edf job scheduling speedup
    EDF Job Scheduling Speedup

    Any set of independent jobs that can be scheduled to meet all deadlines on m unit-speed processors will meet all deadlines if scheduled using Global EDF on m processors of speed 2 - 1/m.

    [Phillips et al., 1997]

    But how do we tell whether a sporadic task system is feasible?


    Sporadic edf speed up
    Sporadic EDF Speed-up

    If t is feasible on m processors of speed x then it will be correctly identified as global-EDF schedulable on m unit-capacity processors by Theorem 3 if


    Corollary 2
    Corollary 2

    The processor speedup bound for the global-EDF schedulability test of Theorem 3 is bounded above by


    Interpretation
    Interpretation

    The processor speed-up of

    compensates for both

    • non-optimality of global EDF

    • pessimism of our schedulability test

      There is no penalty for allowing post-period deadlines in the analysis (Makes sense, but not borne out by prior analyses, e.g., of partitioned EDF)


    Steps of analysis
    Steps of Analysis

    • lower bound m on load to miss deadline

    • lower bound on length of m-busy window

    • downward closure of m-busy window

    • upper bound on carried-in work per task

    • upper bound on per-task contribution to load, in terms of DBF

    • upper bound on DBF, in terms of density

    • upper bound on number of tasks with carry-in

    • sufficient condition for schedulability

    • derivation of speed-up result


    problem job arrives

    first misseddeadline

    other jobs execute

    problem job executes

    Consider the first “problem job”, that misses its deadline.

    What must be true for this to happen?


    Details of the first step

    Details of the First Step

    What is a lower bound on the load needed to miss a deadline?


    problem job arrives

    first missed deadline

    previous job

    of problem task

    problem job ready

    The problem job is not ready to execute until the preceding job

    of the same task completes.


    first missed deadline

    problem window

    previous job

    of problem task

    problem job ready

    Restrict consideration to the “problem window”

    during which the problem job is eligible to execute.


    problem window

    other tasks execute

    problem task executes

    The ability of the problem job to complete within the problem window

    depends on its own execution time and interference from jobs of other tasks.


    problem window

    carried-in jobs

    deadline > td

    • The interfering jobs are of two kinds:

    • local jobs: arrive in the window and have deadlines in the window

    • carried-in jobs: arrive before the window and have deadlines in the window


    other tasks interfere

    problem task executes

    Interference only occurs when all processors are busy executing

    jobs of other tasks.


    other tasks interfere

    problem task executes

    Therefore, we can get a lower bound on the necessary interfering

    demand by considering only “blocks” of interference.


    other tasks interfere

    problem task executes

    The total amount of block interference is not affected by

    where it occurs within the window.


    other tasks interfere

    problem task executes

    The total demand with deadline  td includes the problem

    problem job and the interference.

    processors busy executing jobs

    with deadline  problem job


    approximation of interference (blocks)

    by demand (formless)

    average

    competing workload

    in [ta,td)

    processors busy executing other jobs

    with deadline  problem job

    From this, we can find the average workload with deadline  td

    that is needed to cause a missed deadline.


    previous deadline of problem task

    problem job arrives

    previous job

    of problem task

    The minimum inter-arrival time and the deadline give us

    a lower bound on the length of the problem window.


    The WCET of the problem job and the number of processors

    allow us to find a lower bound on the average competing workload.


    What we have shown
    What we have shown

    There can be no missed deadline unless there is a

    “-busy” problem window.


    The rest of the analysis
    The Rest of the Analysis

    • [lower bound m on load to miss deadline]

    • lower bound on length of m-busy window

    • downward closure of m-busy window

    • upper bound on carried-in work per task

    • upper bound on per-task contribution to load, in terms of DBF

    • upper bound on DBF, in terms of density

    • upper bound on number of tasks with carry-in

    • sufficient condition for schedulability

    • derivation of speed-up result


    Key elements of the rest of the analysis
    Key Elements of the Rest of the Analysis

    # tasks with carried-in jobs  m-1

    shows carried-in load  max

    Observe length of -busy interval ≥ min(Dk,Tk)

    covers case Dk>Tk

    • Derive speed-up bounds


    previous deadline of problem task

    problem job arrives

    previous job

    of problem task

    Observe length of -busy interval ≥ min(Dk,Tk)

    This covers both case Dk≤TkandDk>Tk


    To minimize the contributions of carried-in jobs, we can extend

    the problem window downward until the competing load falls below .

    maximal -busy interval


    maximal extend-busy interval

    at most

    carried-in jobs

    Observe # tasks with carried-in jobs  m-1

    Use this to show carried-in load  max


    Summary
    Summary extend

    • New speed-up bound for global EDF on sporadic tasks with arbitrary deadlines

    • Based on bounding number of tasks with carried-in jobs

    • Tighter analysis may be possible in future work


    Where analysis might be tighter
    Where analysis might be tighter extend

    • approximation of interference (blocks) by demand (formless)

    • bounding i by max

      (only considering one value of )

    • bounding DBF(i, i +) by (i +)max(t)

    • double-counting work of carry-in tasks


    bounding extendDBF(i, i +) by (i +)max(t)

    contribution of i


    double-counting internal load from tasks with carried-in jobs

    carry-in cases

    non-carry-in cases


    Some other fundamental questions
    Some Other Fundamental Questions jobs

    • Is the underlying MP model realistic?

    • Can reasonably accurate WCET’s be found for MP systems? (How do we deal with memory and L2 cache interference effects?)

    • What is the preemption cost?

    • What is the task migration cost?

    • What is the best way to implement it?


    The end

    The End jobs

    questions?


    maximal jobs-busy interval

    at most

    carried-in jobs


    maximal jobs-busy interval


    maximal jobs-busy interval


    ad