
MC2: Mixed Criticality on Multicore

Jim Anderson

University of North Carolina at Chapel Hill

Driving Problem: Joint Work with Northrop Grumman Corp.
  • Next-generation UAVs.
    • Pilot functions performed using “AI-type” software.
    • Causes certification difficulties.
    • Workload is dynamic and computationally intensive.
    • Suggests the use of multicore.
    • Our focus: What should the OS look like?
      • Particular emphasis: Real-time constraints.
        • Implies that we care about real-time OSs (RTOSs).

Image source: http://www.as.northropgrumman.com/products/nucasx47b/assets/lgm_UCAS_3_0911.jpg

Multicore in Avionics: The Good

[Figure: a multicore chip with cores 1 through M, per-core L1 caches, and a shared L2 cache.]

  • Enables computationally-intensive workloads to be supported.
  • Enables SWAP reductions.
    • SWAP = size, weight, and power.
  • Multicore is now the standard platform.


Multicore in Avionics: The Bad & Ugly

[Figure: the same multicore chip; the shared caches and buses are the source of cross-core interference.]

  • Interactions across cores through shared hardware (caches, buses, etc.) are difficult to predict.
  • Results in very conservative execution-time estimates (and thus wasted processing resources).

One Approach: Accept this fact. Use resulting slack for less critical computing.


“Multicore processors are big slack generators.”

What is “Less Critical”?

[Figure: a scale from “more conservative design” at the highest criticality level down to “less conservative design” at the lowest.]

  • Consider, for example, the criticality levels of DO-178B:
    • Level A: Catastrophic.
      • Failure may cause a crash.
    • Level B: Hazardous.
      • Failure has a large negative impact on safety or performance…
    • Level C: Major.
      • Failure is significant, but…
    • Level D: Minor.
      • Failure is noticeable, but…
    • Level E: No Effect.
      • Failure has no impact.
Starting Assumptions
  • Modest core count (e.g., 2-8).
    • Quad-core in avionics would be a tremendous innovation.
  • Modest number of criticality levels (e.g., 2-5).
    • 2 may be too constraining; many more than that isn't practically interesting.
    • These levels may not necessarily match DO-178B.
    • Perhaps 3 levels?
      • Safety Critical: Operations required for stable flight.
      • Mission Critical: Responses typically performed by pilots.
      • Background Work: E.g., planning operations.
  • Statically prioritize criticality levels.
    • Schemes that dynamically mix levels may be problematic in practice.
      • Note: Such mixing is done in much theoretical work on mixed-criticality scheduling.
    • For example, support for resource sharing and implementation overheads may be concerns.
    • Also, practitioners tend to favor simple resource sharing schemes.
Outline
  • Background.
    • Real-time scheduling basics.
    • Real-time multiprocessor scheduling.
    • Mixed-criticality scheduling.
  • MC2 design.
    • Scheduling and schedulability guarantees.
  • MC2 implementation.
    • OS concerns.
  • Future research directions.
Basic Task Model Assumed
  • Set τ = {T = (T.e,T.p)} of periodic tasks.
    • T.e = T’s worst-case per-job execution cost.
    • T.p = T’s period & relative deadline.
    • T.u = T.e/T.p = T’s utilization.

[Figure: example schedule of T = (2,5) and U = (9,15) on one core over [0,30], with job releases and deadlines marked; T's utilization is 2/5.]

This is an example of an earliest-deadline-first (EDF) schedule.

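A minimal sketch in Python (an assumption added to this transcript, not from the talk) of the task model and of an EDF schedule, using the example tasks T = (2,5) and U = (9,15); names follow the slide's notation.

    from dataclasses import dataclass

    @dataclass
    class Task:
        name: str
        e: int                      # T.e: worst-case per-job execution cost
        p: int                      # T.p: period and relative deadline

        @property
        def u(self):                # T.u = T.e / T.p
            return self.e / self.p

    def edf(tasks, horizon):
        """Simulate preemptive EDF on one core, one time unit at a time."""
        remaining = {t.name: 0 for t in tasks}   # unfinished work of the current job
        deadline = {t.name: 0 for t in tasks}    # absolute deadline of the current job
        timeline = []
        for now in range(horizon):
            for t in tasks:
                if now % t.p == 0:               # periodic job release
                    remaining[t.name] = t.e
                    deadline[t.name] = now + t.p
            ready = [t for t in tasks if remaining[t.name] > 0]
            if ready:
                run = min(ready, key=lambda t: deadline[t.name])   # earliest deadline first
                remaining[run.name] -= 1
                timeline.append(run.name)
            else:
                timeline.append(".")
        return "".join(timeline)

    tasks = [Task("T", 2, 5), Task("U", 9, 15)]
    print(sum(t.u for t in tasks))   # 0.4 + 0.6 = 1.0, so EDF meets every deadline on one core
    print(edf(tasks, 30))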

Multiprocessor Real-Time Scheduling

Two Approaches:
  • Partitioning. Steps:
    • Assign tasks to processors (bin packing).
    • Schedule tasks on each processor using uniprocessor algorithms.
  • Global Scheduling. Important differences:
    • One task queue.
    • Tasks may migrate among the processors.

Today, we'll mainly consider partitioned earliest-deadline-first (P-EDF) and global earliest-deadline-first (G-EDF) scheduling…
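A minimal sketch (an assumption, not the talk's algorithm) of the partitioning step: first-fit-decreasing assignment of tasks to processors by utilization, with the uniprocessor EDF bound as the bin capacity.

    def partition_first_fit_decreasing(utils, m):
        """utils: task utilizations; m: processor count.
        Returns per-processor lists of task indices, or None if some task
        cannot be placed without pushing a processor's utilization above 1."""
        load = [0.0] * m
        bins = [[] for _ in range(m)]
        for i in sorted(range(len(utils)), key=lambda j: utils[j], reverse=True):
            for cpu in range(m):
                if load[cpu] + utils[i] <= 1.0:      # uniprocessor EDF capacity
                    load[cpu] += utils[i]
                    bins[cpu].append(i)
                    break
            else:
                return None                          # bin-packing failure (utilization loss)
        return bins

    # The example used on a later slide: three tasks of utilization 2/3 on two processors.
    print(partition_first_fit_decreasing([2/3, 2/3, 2/3], 2))   # None: one processor would overload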
Hard vs. Soft Real-Time
  • HRT: No deadline is missed.
    • We assume all HRT tasks commence execution at time 0 and periods are harmonic (smaller periods divide larger ones).
  • SRT: Deadline tardiness is bounded.
    • We allow SRT tasks to be non-harmonic and also sporadic (T’s job releases are at least T.p time units apart).
    • A wide variety of global scheduling algorithms are capable of ensuring bounded tardiness with no utilization loss.
Scheduling vs. Schedulability: What’s “Utilization Loss”?

  • W.r.t. scheduling, we actually care about two kinds of algorithms:
    • Scheduling algorithm (of course).
      • Example: Earliest-deadline-first (EDF): Jobs with earlier deadlines have higher priority.
    • Schedulability test.

[Figure: a task system τ is fed to a schedulability test for EDF; “yes” means no timing requirement will be violated if τ is scheduled with EDF, “no” means a timing requirement will (or may) be violated…]

Utilization loss occurs when a test requires utilizations to be restricted to get a “yes” answer.
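A minimal sketch (assumption) of a schedulability test in this sense: for implicit-deadline periodic tasks on one core, EDF's exact test is total utilization at most 1, so the test answers "yes" with no utilization loss.

    def edf_test_uniprocessor(tasks):
        """tasks: list of (e, p) pairs; 'yes' iff total utilization <= 1."""
        return sum(e / p for (e, p) in tasks) <= 1.0

    print(edf_test_uniprocessor([(2, 5), (9, 15)]))   # True: 0.4 + 0.6 = 1.0
    print(edf_test_uniprocessor([(2, 3), (2, 3)]))    # False: 4/3 > 1 on one core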

P-EDF: Utilization Loss for Both HRT and SRT

Example: Partitioning three tasks with parameters (2,3) on two processors will overload one processor.

In terms of bin-packing: [Figure: three tasks, each of utilization 2/3, cannot be packed into two unit-capacity processor “bins”.]

  • Under partitioning & most global algorithms, overall utilization must be capped to avoid deadline misses.
    • Due to connections to bin-packing.
  • Exception: Global “Pfair” algorithms do not require caps.
    • Such algorithms schedule jobs one quantum at a time.
      • May therefore preempt and migrate jobs frequently.
      • Perhaps less of a concern on a multicore platform.
  • Under most global algorithms, if utilization is not capped, deadline tardiness is bounded.
    • Sufficient for soft real-time systems.


G-EDF: HRT Loss, SRT No Loss

Previous example scheduled under G-EDF… a deadline miss occurs. (Note: no “bin-packing” here.)


[Figure: T1 = (2,3), T2 = (2,3), and T3 = (2,3) scheduled under G-EDF on two processors over [0,30]; the deadline miss is marked, but tardiness remains bounded.]
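A minimal sketch (assumption) replaying this example under global EDF on two cores; it reports the largest observed tardiness per task, which stays bounded even though deadlines are missed.

    def gedf_tardiness(tasks, m, horizon):
        """tasks: list of (e, p); m: core count.  Simulate preemptive G-EDF and
        return the maximum observed tardiness of each task's completed jobs."""
        jobs = []                                   # each job: [abs_deadline, remaining, task_id]
        tardiness = [0] * len(tasks)
        for now in range(horizon):
            for i, (e, p) in enumerate(tasks):
                if now % p == 0:
                    jobs.append([now + p, e, i])
            jobs.sort()                             # earliest absolute deadline first
            running, used = [], set()
            for job in jobs:                        # pick at most m jobs, one core each,
                if len(running) == m:               # and never two jobs of the same task
                    break
                if job[2] not in used:
                    running.append(job)
                    used.add(job[2])
            for job in running:
                job[1] -= 1
            for d, rem, i in jobs:
                if rem == 0:                        # job completed at time now + 1
                    tardiness[i] = max(tardiness[i], (now + 1) - d)
            jobs = [j for j in jobs if j[1] > 0]
        return tardiness

    # T1 = T2 = T3 = (2,3) on two cores: HRT deadlines are missed,
    # but tardiness is bounded (here, at most 1 time unit).
    print(gedf_tardiness([(2, 3), (2, 3), (2, 3)], 2, 300))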

Mixed-Criticality Scheduling: Proposed by Vestal [2007]
  • In reality, task execution times may vary widely.
  • Vestal noted that different design and analysis methods are used at different criticality levels.
    • Greater pessimism at higher levels.
  • He suggested reflecting such differences in schedulability analysis.

[Figure: a spectrum of execution-time estimates, from the average case, through the measured worst case, to the proven worst case.]

Mixed-Criticality Scheduling: Proposed by Vestal [2007]
  • Mixed-Criticality Task Model: Each task has an execution cost specified at each criticality level.
    • Costs at higher levels are (typically) larger.
    • Example:

T.eA = 20, T.eB = 12, T.eC = 5, …

    • Rationale: Will use more pessimistic analysis at high levels, more optimistic at low levels.
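The per-level costs above can be captured directly in a small task record; a minimal sketch (assumption; the period of 100 below is just an illustrative value), in the slide's notation.

    from dataclasses import dataclass

    @dataclass
    class MCTask:
        p: int       # T.p: period = relative deadline
        e: dict      # per-level execution costs, e.g. {"A": 20, "B": 12, "C": 5}

        def u(self, level):
            """T's utilization as seen by the level-`level` analysis."""
            return self.e[level] / self.p

    T = MCTask(p=100, e={"A": 20, "B": 12, "C": 5})
    print(T.u("A"), T.u("B"), T.u("C"))   # 0.2, 0.12, 0.05: more pessimism at higher levels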
Mixed-Criticality Scheduling: Proposed by Vestal [2007]

Some “weirdness” here: Not just one system anymore, but several: the level-A system, the level-B system, …

  • Mixed-Criticality Task Model: Each task has an execution cost specified at each criticality level.
    • Costs at higher levels are (typically) larger.
  • The task system is correct at level X iff all level-X tasks meet their timing requirements assuming all tasks have level-X execution costs.
Outline
  • Background.
    • Real-time scheduling basics.
    • Real-time multiprocessor scheduling.
    • Mixed-criticality scheduling.
  • MC2 design.
    • Scheduling and schedulability guarantees.
  • MC2 implementation.
    • OS concerns.
  • Future research directions.
Recall These Assumptions…
  • Modest core count (e.g., 2-8).
  • Modest number of criticality levels (e.g., 2-5).
  • Statically prioritize criticality levels.
MC2: Mixed-Criticality on Multicore (Our Proposed Mixed-Criticality Architecture, Joint with NGC)
  • We assume four criticality levels, A-D.
    • Originally, we assumed five, like in DO-178B.
  • We statically prioritize higher levels over lower ones.
  • We assume:
    • Levels A & B require HRT guarantees.
    • Level C requires SRT guarantees.
    • Level D is non-RT.
Level-A Scheduling: “Absolute, Under All Circumstances, HRT”
  • Partitioned scheduling is used, with a cyclic executive on each core.
    • Cyclic executive: offline-computed dispatching table.
    • Now the de facto standard in avionics.
    • Preferred by certification authorities.
    • But somewhat painful from a software-development perspective.
  • Require:
    • Total per-core level-A utilization ≤ 1 (assuming level-A execution costs).
    • Sufficient to ensure that tables can be constructed (no level-A deadline is missed).
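A minimal sketch (assumption, not the talk's tool) of building a level-A dispatching table offline for one core: simulate a simple rate-monotonic rule over one hyperperiod and record the resulting slices. The parameters are the level-A tasks T1 = (5,10) and T2 = (10,20) used in a later example.

    from math import lcm

    def build_ce_table(tasks):
        """tasks: list of (name, e, p) with harmonic periods.
        Returns (start, end, name) slices over one hyperperiod."""
        H = lcm(*(p for _, _, p in tasks))            # level-A hyperperiod
        remaining, table = {}, []
        for t in range(H):
            for name, e, p in tasks:
                if t % p == 0:
                    remaining[name] = e               # job release
            ready = [(p, name) for name, e, p in tasks if remaining.get(name, 0) > 0]
            if not ready:
                continue                              # idle instant: no table entry
            _, run = min(ready)                       # smaller period first (rate-monotonic)
            remaining[run] -= 1
            if table and table[-1][2] == run and table[-1][1] == t:
                table[-1] = (table[-1][0], t + 1, run)   # extend the current slice
            else:
                table.append((t, t + 1, run))
        return table

    print(build_ce_table([("T1", 5, 10), ("T2", 10, 20)]))
    # [(0, 5, 'T1'), (5, 10, 'T2'), (10, 15, 'T1'), (15, 20, 'T2')]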
So Far…

[Figure: four cores with criticality levels in decreasing static-priority order; Level A runs a cyclic executive (CE) on each core; Levels B, C, and D sit below it, with their schedulers not yet assigned.]

Level-B Scheduling: “HRT Unless Something Really Unexpected Happens”
  • Use P-EDF scheduling.
    • Prior work @ UNC has shown that P-EDF excels at HRT scheduling.
      • Harmonic periods ⇒ can instead use partitioned rate-monotonic (P-RM).
    • Eases the software-development pain of CEs.
  • Require:
    • Total per-core level-A+B utilization ≤ 1 (assuming level-B execution costs). (*)
    • Each level-B period is a multiple of the level-A hyperperiod (= LCM of level-A periods). (**)
      • More on this later.
    • (*) and (**) ⇒ if no level-A+B task ever exceeds its level-B execution cost, then no level-B task will miss a deadline.
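A minimal sketch (assumption) of the two checks (*) and (**) for one core; tasks are given as (level, level-B cost, period). The parameters below come from the example task system later in the talk, which fails check (**).

    from math import lcm

    def level_b_ok(core_tasks):
        """core_tasks: the level-A and level-B tasks assigned to this core,
        each as (level, e_B, p) with its level-B execution cost."""
        if sum(e_b / p for (_, e_b, p) in core_tasks) > 1.0:        # (*)
            return False
        hyper_a = lcm(*(p for (lvl, _, p) in core_tasks if lvl == "A"))
        return all(p % hyper_a == 0                                  # (**)
                   for (lvl, _, p) in core_tasks if lvl == "B")

    # Level-A tasks T1, T2 (with level-B costs) and level-B tasks T3, T4:
    print(level_b_ok([("A", 3, 10), ("A", 6, 20), ("B", 2, 10), ("B", 8, 40)]))
    # False: T3's period (10) is not a multiple of the level-A hyperperiod (20)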
So Far…

[Figure: as before, with Level B now running EDF/RM on each core, below the per-core level-A cyclic executives; Levels C and D sit below.]

Level-C Scheduling: “SRT”
  • Use G-EDF scheduling.
    • Prior work @ UNC has shown that G-EDF excels at SRT scheduling.
  • Require:
    • Total level-A+B+C utilization ≤ M = number of cores (assuming level-C execution costs).
    • Theorem [Prior work @ UNC]: G-EDF ensures bounded deadline tardiness, even on restricted-capacity processors, if there is no over-utilization and certain task utilization restrictions hold.
    • These restrictions aren’t very limiting if level-A+B utilization (assuming level-C execution costs) is not large.
    • Bottom Line: If no level-A+B+C task ever exceeds its level-C execution cost, can ensure bounded tardiness at level C.
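A minimal sketch (assumption; the task values are illustrative) of the level-C utilization check: the whole task set, with level-C costs, must not over-utilize the M cores. The additional per-task utilization restrictions from the theorem are not modeled here.

    def level_c_ok(tasks, m):
        """tasks: every level-A, -B, and -C task as (e_C, p); m: core count."""
        return sum(e_c / p for (e_c, p) in tasks) <= m

    print(level_c_ok([(1, 10), (2, 20), (3, 10), (6, 20), (4, 10)], 2))   # True: total utilization 1.2 <= 2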
So Far…

[Figure: as before, with Level C now scheduled by G-EDF across all cores, below the per-core level-A cyclic executives and level-B EDF/RM schedulers; Level D sits at the bottom.]

Level-D Scheduling: “Best Effort”
  • Ready level-D tasks run whenever there is no pending level-A+B+C work on some processor.
  • Seems suitable for potentially long-running jobs that don’t require RT guarantees.
Full Architecture

[Figure: four cores with statically decreasing priority across levels: Level A, a cyclic executive (CE) per core; Level B, EDF/RM per core; Level C, G-EDF across all cores; Level D, best effort. A dispatching sketch follows below.]
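A minimal sketch (assumption; the job names are hypothetical) of the per-core dispatching rule the figure implies: serve the highest level with pending work; level A follows its table, level B has a per-core EDF queue, level C a single global EDF queue, level D is best effort.

    from dataclasses import dataclass
    from typing import List, Optional

    @dataclass
    class Job:
        name: str
        deadline: float

    def pick_next(ce_slot: Optional[str],
                  level_b_ready: List[Job],
                  level_c_ready: List[Job],
                  level_d_ready: List[Job]) -> Optional[str]:
        """ce_slot: task named by this core's level-A table at the current
        instant, or None if the table is idle at this instant."""
        if ce_slot is not None:                                    # Level A: cyclic executive
            return ce_slot
        if level_b_ready:                                          # Level B: per-core EDF
            return min(level_b_ready, key=lambda j: j.deadline).name
        if level_c_ready:                                          # Level C: global EDF queue
            return min(level_c_ready, key=lambda j: j.deadline).name
        if level_d_ready:                                          # Level D: best effort
            return level_d_ready[0].name
        return None                                                # idle

    print(pick_next(None, [], [Job("planner", 25), Job("video", 40)], [Job("logger", 0)]))   # planner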

Remaining Issues
  • Why the level-B period restriction?
  • Budgeting.
  • Slack shifting.
Example Task System (All Tasks on the Same Processor)

[The task-parameter table is not in the transcript; the parameters used in the following slides are T1 = (5,10) and T2 = (10,20) at level A, and T1 = (3,10), T2 = (6,20), T3 = (2,10), T4 = (8,40) at level B.]

  • T3's period violates our level-B period assumption: 10 is not a multiple of the level-A hyperperiod of 20.
  • Each level-A task has a different execution cost (and hence utilization) per level.
  • Execution costs (and hence utilizations) are larger at higher levels.

“Level-A System”

[Figure: the level-A schedule of T1 = (5,10) and T2 = (10,20) over [0,40], with job releases and deadlines marked.]

Higher levels have statically higher priority. Thus, level-A tasks are oblivious of other tasks.

“Level-B System”

[Figure: the level-B schedule of T1 = (3,10), T2 = (6,20), T3 = (2,10), and T4 = (8,40) over [0,40], with job releases and deadlines marked.]

T3 misses some deadlines. This is because our period constraint for level B is violated. We can satisfy this constraint by “slicing” each job of T2 into two “subjobs”…

“Level-B System”

[Figure: the same system with T2's jobs sliced into (3,10) subjobs: T1 = (3,10), T2 = (3,10), T3 = (2,10), T4 = (8,40) over [0,40].]

All deadlines are now met. (A slicing sketch follows below.)
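A minimal sketch (assumption) of the job slicing above: T2 = (6,20) is split into two (3,10) subjobs, after which (in this example) the level-A hyperperiod shrinks so that every level-B period is a multiple of it.

    def slice_task(e, p, factor):
        """Split an (e, p) task into `factor` equal back-to-back subjobs."""
        assert e % factor == 0 and p % factor == 0, "this simple sketch needs even splits"
        return (e // factor, p // factor)

    print(slice_task(6, 20, 2))   # (3, 10): T2 = (6,20) becomes the (3,10) subjobs shown above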

Budgeting
  • MC2 optionally supports budget enforcement.
    • The OS treats a level-X task's level-X execution time as a budget that cannot be exceeded.
    • Pro: Makes “job slicing” easier.
    • Con: Can make the system less resilient in dealing with breaches of execution-time assumptions.
      • More on this later.

[Figure: Jobs 1-3 shown against their provisioned execution times, scheduled without and with budget enforcement (BE) over [0,40].]
Slack Shifting
  • Provisioned execution times at higher criticality levels may be very pessimistic.
    • At level A, WCETs (worst-case execution times) may be much higher than actual execution times.
  • Thus, higher levels produce “slack” that can be consumed by lower levels.
    • Could lessen tardiness at level C…
    • … or increase throughput at level D.
Slack Shifting (Cont’d)
  • Slack Shifting Rule:
    • When a level-X job under-runs its level-X execution time, it becomes a “ghost job.”
    • When a ghost job is scheduled, its processor time is donated to lower levels.
  • Such donations do not invalidate correctness properties.
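A minimal sketch (assumption) of the slack-shifting rule: when a job under-runs its provisioned level-X execution time, the unused part becomes ghost time donated to lower levels.

    def run_with_slack_shifting(provisioned, actual):
        """Returns (time consumed by the job itself, ghost time donated downward)."""
        used = min(actual, provisioned)
        ghost = max(provisioned - actual, 0)     # the under-run becomes a "ghost job"
        return used, ghost

    used, donated = run_with_slack_shifting(provisioned=10, actual=3)
    print(used, donated)   # 3 used by the level-B job, 7 donated to levels C and D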
A (Very) Simple Example

[Figure: a level-B job completes before its provisioned level-B execution time; the remainder is donated to level C, whose work finishes earlier.]

Without Slack Shifting

[Figure: the same scenario without slack shifting; the provisioned level-B execution time (which, remember, could be huge) is not reclaimed, and the level-C work finishes later.]

Spare Capacity Reclamation Summary
  • MC2 does this in two ways:
    • At design time: This occurs when we validate level-X RT guarantees.
      • This is done assuming execution costs “appropriate” for level X.
    • At run time: MC2 uses slack shifting to re-allocate available slack.
      • This exploits the fact that actual execution costs (at whatever level) may be less than provisioned costs.
Outline
  • Background.
    • Real-time scheduling basics.
    • Real-time multiprocessor scheduling.
    • Mixed-criticality scheduling.
  • MC2 design.
    • Scheduling and schedulability guarantees.
  • MC2 implementation.
    • OS concerns.
  • Future research directions.
LITMUS^RT Implementation of MC2
  • Real-time Linux extension originally developed at UNC.
    • (Multiprocessor) scheduling and synchronization algorithms implemented as “plug-in” components.
    • Developed as a kernel patch (currently based on Linux 3.10.5).
    • Code is available at http://www.litmus-rt.org/.

Linux Testbed for Multiprocessor Scheduling in Real-Time systems

MC2 LITMUS^RT Plugin
  • Plugin is container-based.
    • One level-A and -B container per core; one level-C and -D container for the whole system.
    • Each container schedules its constituent tasks; Linux schedules level-D tasks.
  • In developing an MC2 plugin, we sought to address two questions:
    • How to best implement MC2 so that high levels are shielded from OS overheads caused by low levels?
    • How robust is MC2? That is, even though the theory views MC2 as four different systems (level A, B, C, and D), does the “almost” level-C (or -B) system behave reasonably?
Implementation Issues
  • Higher criticality levels can be adversely impacted by the following:
    • Lock delays in the OS caused by low levels.
      • Solution: Re-factor kernel data structures and use fine-grained locking and wait-free techniques.
    • Interrupts caused by low levels.
      • Solution: Direct all interrupts to one processor.
    • Timer events caused by low levels.
      • Solution: Enable timer events that are “close together” to be merged and serviced in criticality order.
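A toy sketch (assumption, not the kernel code) of the third technique: timer events that fire close together are merged into one batch and serviced in criticality order.

    def merge_and_order(events, window):
        """events: (fire_time, criticality, action) tuples, criticality 'A' (highest) to 'D'.
        Returns batches of events; within a batch, higher criticality is serviced first."""
        events = sorted(events)                        # by firing time
        batches, current = [], []
        for ev in events:
            if current and ev[0] - current[0][0] > window:
                batches.append(current)
                current = []
            current.append(ev)
        if current:
            batches.append(current)
        return [sorted(batch, key=lambda e: e[1]) for batch in batches]

    print(merge_and_order([(1000, "C", "tick"), (1002, "A", "release"), (1050, "B", "release")], window=5))
    # [[(1002, 'A', 'release'), (1000, 'C', 'tick')], [(1050, 'B', 'release')]]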
These Techniques Matter
  • To show this, we experimented with “avionics-like” synthetic task systems.
    • Levels A and B consume 20% of the available capacity on average.
    • Levels A and B have harmonic periods.
    • All periods range over [10,100]ms.
    • Number of tasks varied.
  • Assessed scheduling and release overhead on a six-core system.
Example Graph: Worst-Case Release Overhead

[Figures: worst-case level-A release overhead vs. task count, shown with no techniques enabled and with all techniques enabled.]

The techniques reduced level-A release overhead for the 80-task microbenchmark from about 27 μs to about 10 μs.

General Conclusions
  • The proposed techniques lower both worst-case and average-case overheads.
    • Worst-case improvement benefits levels A and B.
    • Average-case improvement benefits level C.
      • Assuming an average- or near-average-case provisioning is suitable for level C.
  • Note: In schedulability analysis, overheads are dealt with by inflating execution times. Some overheads result in multiple inflations.
Robustness Experiments
  • Again, used synthetic, “avionics-like” task systems.
  • Provisioning:
    • T.eC: Observed average case.
    • T.eB: Observed worst case ≈ 10·T.eC.
    • T.eA: Twice level B.
  • Configured actual task execution times using a beta distribution with
    • mean = T.eC and
    • standard dev. s.t. P[exec ≥ T.eB] ≤ 0.01.
Perturbing the System
  • Beta-distribution values range over (0,1) and must be scaled to produce execution times.
  • In the “expected” system, a beta value of
    • 0.05 corresponds to a level-C cost,
    • 0.5 corresponds to a level-B cost, and
    • 1.0 corresponds to a level-A cost.
  • We created “unexpected” systems by
    • allowing some tasks (either 10% or 50%) to be aberrant, and
    • having aberrant tasks use a higher beta mean.
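A minimal sketch (assumption) of drawing actual execution times from a scaled beta distribution with its mean at the level-C cost; the shape parameter `a` is fixed here rather than derived from the P[exec ≥ T.eB] ≤ 0.01 condition on the previous slide.

    import random

    def sample_exec_time(level_a_cost, mean_beta=0.05, a=2.0):
        """Beta values in (0,1) are scaled so 1.0 maps to the level-A cost,
        0.5 to the level-B cost, and 0.05 (the mean here) to the level-C cost."""
        b = a * (1.0 - mean_beta) / mean_beta     # choose b so the beta mean a/(a+b) = mean_beta
        return random.betavariate(a, b) * level_a_cost

    random.seed(1)
    samples = [sample_exec_time(level_a_cost=20.0) for _ in range(10000)]
    print(sum(samples) / len(samples))   # close to 1.0, i.e., the level-C cost (0.05 * 20)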
Example Graphs: Avg. Rel. Response Time, 10% Aberrant

[Figures: average relative response time (relative response time = response time / period), for the “expected” system (on average, all costs = level-C costs) and for systems in which aberrant tasks execute, on average, 2, 10 (level-B costs), or 20 (level-A costs) times as long as expected; results are shown for levels B and C, each with and without budget enforcement, including a zoomed-in view of level C.]

General Conclusions
  • MC2 seems fairly robust when reality doesn’t match expectations.
  • Turning off budget enforcement seems helpful.
    • But, certification authorities probably like budget enforcement.
Outline
  • Background.
    • Real-time scheduling basics.
    • Real-time multiprocessor scheduling.
    • Mixed-criticality scheduling.
  • MC2 design.
    • Scheduling and schedulability guarantees.
  • MC2 implementation.
    • OS concerns.
  • Future research directions.
Other Work
  • Ongoing:
    • Add shared cache management.
      • An initial prototype was discussed at ECRTS’13.
  • Future:
    • Add support for synchronization.
    • Add support for dynamic adaptations at level C.
    • Study other combinations of per-level schedulers.
MC2 Papers (All papers available at http://www.cs.unc.edu/~anderson/papers.html)
  • J. Anderson, S. Baruah, and B. Brandenburg, “Multicore Operating-System Support for Mixed Criticality,” Proc. of the Workshop on Mixed Criticality: Roadmap to Evolving UAV Certification, 2009.
    • A “precursor” paper that discusses some of the design decisions underlying MC2.
  • M. Mollison, J. Erickson, J. Anderson, S. Baruah, and J. Scoredos, “Mixed Criticality Real-Time Scheduling for Multicore Systems,” Proc. of the 7th IEEE International Conf. on Embedded Software and Systems, 2010.
    • Focus is on schedulability, i.e., how to check timing constraints at each level and “shift” slack.
  • J. Herman, C. Kenna, M. Mollison, J. Anderson, and D. Johnson, “RTOS Support for Multicore Mixed-Criticality Systems,” Proc. of the 18th IEEE Real-Time and Embedded Technology and Applications Symp., 2012.
    • Focus is on RTOS design, i.e., how to reduce the impact of RTOS-related overheads on high-criticality tasks due to low-criticality tasks.
  • B. Ward, J. Herman, C. Kenna, and J. Anderson, “Making Shared Caches More Predictable on Multicore Platforms,” Proc. of the 25th Euromicro Conference on Real-Time Systems, 2013.
    • Adds shared cache management to a two-level variant of MC2.
Mixed-Criticality Work by Others
  • For a good, recent survey on mixed-criticality work by others, I suggest:
    • “Mixed Criticality Systems - A Review,” by Alan Burns and Rob Davis, University of York, U.K., July 2013.
    • Available at: http://www-users.cs.york.ac.uk/~burns/review.pdf.
Thanks!
  • Questions?