
Predictable Integration of Safety-Critical Software on COTS-based Embedded Systems


Presentation Transcript


  1. Predictable Integration of Safety-Critical Software on COTS-based Embedded Systems Marco Caccamo University of Illinois at Urbana-Champaign

  2. Outline • Motivation • Memory-centric scheduling theory • PRedictable Execution Model (PREM) • Multicore memory-centric scheduling • Single Core Equivalence (SCE) • Memory bandwidth isolation (MemGuard) • Cache space management (Colored Lockdown)

  3. Real-Time Applications • Resource-intensive real-time applications • Real-time sensor fusion and object tracking, multimedia processing (*), real-time data analytics (**) (*) ARM, QoS for High-Performance and Power-Efficient HD Multimedia, 2010 (**) Intel, The Growing Importance of Big Data and Real-Time Analytics, 2012

  4. Modern System-on-Chip (SoC) • More cores • Freescale P4080 has 8 cores • More hardware sharing • Shared memory hierarchy (LLC, MC, DRAM) • Shared I/O channels • More performance, less energy, less cost. But, isolation? Multicore chips offer high performance, but predictability and safety are a serious concern.

  5. SoC: challenges for RT safety-critical systems • In a multicore chip, memory controllers, last level cache, memory, on-chip network and I/O channels are globally shared by cores. Unless a globally shared resource is over-provisioned, it must be partitioned/reserved/scheduled. Otherwise: • Complexity concerns: the schedulability analysis, testing and temporal certification of software running on one core will also depend on tasks running on other cores • Safety concerns: a change of software in one core could cause the tasks on other cores to miss their deadlines. This is unacceptable for safety-critical systems!

  6. Problem: Shared Memory Hierarchy in Multicore • Problem: shared hardware resources, and the OS has little control • [Figure: four cores running Apps 1-4 share the Last Level Cache (space sharing), the Memory Controller (access contention) and DRAM (shared-bank contention)] • The on-chip network is also a shared resource, but it is currently overprovisioned in embedded systems (e.g., Freescale P4080). Hence, I will not discuss the on-chip network, since it is still far from being a bottleneck in any of our existing studies.

  7. The Need for Engineering Solutions Solutions: • a new clean-slate real-time scheduling approach called memory-centric scheduling theory • the idea is to effectively co-schedule core activity, memory usage, and the I/O channels • memory accesses are scheduled at a high level (coarse granularity) • it uses the Predictable Execution Model (PREM, RTAS'11) • an engineering solution, named Single Core Equivalence (SCE): • control cores' memory bandwidth usage (MemGuard) • manage cache space in a predictable manner (Colored Lockdown) • use a DRAM bank-aware memory allocator (RTAS'14, Univ. of Kansas) These are all software-based solutions that require minimal or no modification to the HW.

  8. Outline • Motivation • Memory-centric scheduling theory • PRedictable Execution Model (PREM) • Multicore memory-centric scheduling • Single Core Equivalence (SCE) • Memory bandwidth isolation (MemGuard) • Cache space management (Colored Lockdown)

  9. A Motivating Experiment on Single Core: Task and Peripherals contending for memory • Experiment on an Intel platform at typical embedded-system speed. • PCI-X 133 MHz, 64 bit, fully loaded by a traffic-generator peripheral. • The task suffers continuous cache misses. • Up to 44% WCET increase.

  10. Predictable Execution Model (PREM single core) • (The idea) The execution of a task can be divided into a memory-intensive phase (with cache prefetching) and a local computation phase (with cache hits) • (The benefit) High-level co-scheduling can be enforced among all active components of a COTS system → contention for accessing shared resources is implicitly resolved by the high-level co-scheduler without relying on low-level arbiters R. Pellizzoni, E. Betti, S. Bak, G. Yao, J. Criswell, M. Caccamo, R. Kegley, "A Predictable Execution Model for COTS-based Embedded Systems", Proceedings of the 17th IEEE Real-Time and Embedded Technology and Applications Symposium (RTAS), Chicago, USA, April 2011.
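To make the phase split concrete, here is a minimal C sketch of what one PREM-style interval could look like: a memory phase that walks the working set once to bring it into the private cache, followed by a computation phase that touches only cache-resident data. The buffer, sizes and function names are illustrative assumptions, not the output of the actual PREM toolchain.

```c
/* Hedged sketch of a PREM-style predictable interval; names and sizes
 * are illustrative, not taken from the RTAS'11 implementation. */
#include <stddef.h>
#include <stdint.h>

#define CACHE_LINE 64

/* Memory phase: walk the working set once so every line is brought into
 * the (private) cache before computation starts. */
static void memory_phase(volatile uint8_t *buf, size_t len)
{
    for (size_t i = 0; i < len; i += CACHE_LINE)
        (void)buf[i];               /* one access per cache line */
}

/* Computation phase: operates only on data that is now cache-resident,
 * so it issues no main-memory requests. */
static uint64_t computation_phase(const uint8_t *buf, size_t len)
{
    uint64_t acc = 0;
    for (size_t i = 0; i < len; i++)
        acc += buf[i];
    return acc;
}

/* One predictable interval = memory phase followed by computation phase. */
uint64_t prem_interval(uint8_t *working_set, size_t len)
{
    memory_phase(working_set, len);
    return computation_phase(working_set, len);
}
```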

  11. Real-Time Bridge: a prototype • Xilinx TEMAC 1 Gb/s Ethernet card (integrated on FPGA). • Optimized virtual driver implementation with no software packet copy (PowerPC running Linux). • Full VHDL HW code and SW implementation available.

  12. Multicore memory-centric scheduling • It uses the PREM task model: each task is composed of a sequence of intervals, each including a memory phase followed by a computation phase. • It enforces a coarse-grain TDMA schedule for granting memory access to each core. • Each core can be analyzed in isolation as if tasks were running on a "single-core equivalent" platform. G. Yao, R. Pellizzoni, S. Bak, E. Betti, and M. Caccamo, "Memory-centric scheduling for multicore hard real-time systems", Real-Time Systems Journal, Vol. 48, No. 6, pp. 681-715, November 2012.

  13. Two cores example: TDMA slot of core 1 • [Figure: schedule of jobs J1, J2, J3 over time 0-12, distinguishing memory phases from computation phases] • With a coarse-grained TDMA, tasks on one core can perform memory accesses only when the TDMA slot is granted → core isolation

  14. Memory-centric scheduling: three rules (see the sketch below) • Assumption: fixed-priority, partitioned scheduling • Rule 1: enforce a coarse-grain TDMA schedule among the cores for granting access to main memory • Rule 2: raise the scheduling priority of memory phases over computation phases when the TDMA memory slot is granted • Rule 3: memory phases are non-preemptive
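The three rules can be read as a per-core dispatching decision. The following C sketch encodes them under simplifying assumptions (a fixed TDMA slot rotation, a toy job representation, and no modeling of a job stalled outside its memory slot); it illustrates the rules, not the implementation from the journal paper.

```c
/* Hedged sketch of the three rules as a per-core dispatching decision.
 * Slot rotation, job representation and time base are assumptions made
 * for illustration only. */
#include <stdbool.h>
#include <stdint.h>

#define NUM_CORES 2
#define SLOT_LEN  4             /* TDMA slot length, in time units */

enum phase { MEM_PHASE, COMP_PHASE };

struct job {
    int        prio;            /* fixed priority, lower value = higher */
    enum phase phase;           /* phase the job would execute next */
    bool       in_mem_phase;    /* memory phase already in progress? */
};

/* Rule 1: core 'core' may issue memory requests only during its TDMA slot. */
static bool tdma_slot_granted(uint64_t now, int core)
{
    return (now / SLOT_LEN) % NUM_CORES == (uint64_t)core;
}

/* Pick the job to run among 'n' ready jobs on one core. */
struct job *pick_job(struct job *ready[], int n, uint64_t now, int core)
{
    struct job *best = NULL;
    bool slot = tdma_slot_granted(now, core);

    for (int i = 0; i < n; i++) {
        struct job *j = ready[i];
        /* Rule 3: a memory phase in progress is non-preemptive. */
        if (j->in_mem_phase)
            return j;
        /* Rule 2: during the slot, memory phases outrank computation. */
        bool j_mem = slot && j->phase == MEM_PHASE;
        bool b_mem = best && slot && best->phase == MEM_PHASE;
        if (!best ||
            (j_mem && !b_mem) ||
            (j_mem == b_mem && j->prio < best->prio))
            best = j;
    }
    return best;
}
```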

  15. Raise priority of mem. phases during TDMA slot • [Figure: the two-core schedule (jobs J1, J2, J3, time 0-12) redrawn with memory phases prioritized over computation phases during the granted TDMA slot]

  16. Make memory phases non-preemptive • [Figure: the schedule before and after applying Rule 3 (jobs J1, J2, J3, time 0-12), with memory phases executing non-preemptively]

  17. Summary of two cores example • Rule 1 – TDMA memory schedule • Rule 2 – prioritize memory phases during a TDMA memory slot • Rule 3 – memory phases are non-preemptive

  18. Schedulability of synthetic tasks • [Figure: schedulability ratio as a function of memory utilization and core utilization] • In an 8-core, 10-task system, the memory-centric scheduling bound is superior to the contention-based scheduling bound.

  19. Schedulability of synthetic tasks • [Figure: schedulability ratio vs. memory utilization and core utilization, with the contour line at the 50% schedulable level (ratio = 0.5) highlighted]

  20. Outline • Motivation • Memory-centric scheduling theory • PRedictable Execution Model (PREM) • Multicore memory-centric scheduling • Single Core Equivalence (SCE) • Memory bandwidth isolation (MemGuard) • Cache space management (Colored Lockdown)

  21. Single Core Equivalence (SCE) • Goal of Single Core Equivalence: • allow industry to reuse each core of a multicore chip as if it were a core in a conventional single-core chip • allow the reuse of not only software but also the development, schedulability analysis and certification process as is • Technology that implements Single Core Equivalence (SCE): • control cores' memory bandwidth usage (MemGuard) • manage cache space in a predictable manner (Colored Lockdown) • use a DRAM bank-aware memory allocator (RTAS'14, Univ. of Kansas) See: http://rtsl-edge.cs.illinois.edu/SCE/

  22. Memory Interference • Key observations: • Memory bandwidth (variable) != CPU bandwidth (constant) • Memory controller → queuing/access delay is unpredictable • [Figure: foreground slowdown ratio on an Intel Core2 (two cores with private L2 and shared memory) when 470.lbm runs in the background; measured memory bandwidth varies from 2.1 GB/s down to 1.4 GB/s]

  23. Memory Access Pattern • Memory access patterns vary over time • Static resource reservation is inefficient • [Figure: LLC misses over time (ms) for two benchmarks]

  24. Memory Bandwidth Isolation • MemGuard provides an OS mechanism to enforce memory bandwidth reservation for each core H. Yun, G. Yao, R. Pellizzoni, M. Caccamo, L. Sha, "MemGuard: Memory Bandwidth Reservation System for Efficient Performance Isolation in Multi-core Platforms", IEEE RTAS, April 2013.

  25. MemGuard • Characteristics • Memory bandwidth reservation system • Memory bandwidth: guaranteed + best-effort • Prediction-based dynamic reclaiming for efficient utilization of guaranteed bandwidth • Maximize throughput by utilizing best-effort bandwidth whenever possible • Goal • Minimum memory performance guarantee • A dedicated (slower) memory system for each core in multi-core systems

  26. Memory Bandwidth Reservation • Idea • Control interference by regulating per-core memory traffic • OS monitors and enforces each core's memory bandwidth usage • Using a per-core HW performance counter (PMC) and the scheduler • [Figure: per-core budget over one regulation period; tasks are dequeued (throttled) when the budget is exhausted and enqueued again at replenishment, with core activity split between computation and memory fetches]
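As a rough user-space approximation of this mechanism, the sketch below counts a thread's cache misses with a hardware performance counter and stalls it once a per-period budget is exhausted. The real MemGuard is a kernel module that regulates each core; the bandwidth value, the use of LLC misses as a traffic proxy, and the placement of the budget check are assumptions made for illustration.

```c
/* Hedged user-space approximation of PMC-based bandwidth regulation.
 * All constants are illustrative; this is not the MemGuard kernel code. */
#define _GNU_SOURCE
#include <linux/perf_event.h>
#include <sys/syscall.h>
#include <unistd.h>
#include <stdint.h>
#include <stdio.h>
#include <string.h>
#include <time.h>

#define PERIOD_NS   1000000ULL              /* 1 ms replenishment period  */
#define LINE_BYTES  64ULL
#define BUDGET_BPS  (200ULL * 1024 * 1024)  /* assumed 200 MB/s reservation */
#define BUDGET_MISSES (BUDGET_BPS * PERIOD_NS / 1000000000ULL / LINE_BYTES)

static int open_miss_counter(void)
{
    struct perf_event_attr attr;
    memset(&attr, 0, sizeof(attr));
    attr.type = PERF_TYPE_HARDWARE;
    attr.size = sizeof(attr);
    attr.config = PERF_COUNT_HW_CACHE_MISSES;  /* proxy for memory traffic */
    /* monitor the calling thread on any CPU */
    return syscall(SYS_perf_event_open, &attr, 0, -1, -1, 0);
}

int main(void)
{
    int fd = open_miss_counter();
    if (fd < 0) { perror("perf_event_open"); return 1; }

    uint64_t start = 0, now = 0;
    struct timespec next = { 0, PERIOD_NS };

    for (;;) {
        if (read(fd, &start, sizeof(start)) < 0) break;
        /* ... run one chunk of the application's work here ... */
        if (read(fd, &now, sizeof(now)) < 0) break;
        if (now - start >= BUDGET_MISSES)
            nanosleep(&next, NULL);  /* budget exhausted: stall until the
                                        next replenishment */
    }
    return 0;
}
```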

  27. Guaranteed Bandwidth: rmin • Definition • Minimum memory transfer rate • when requests are back-logged in the DRAM controller • worst-case access pattern: same bank & row miss • Example (PC6400-DDR2*) • Peak B/W: 6.4GB/s • Measured minimum B/W: 1.2GB/s (*) PC6400-DDR2 with 5-5-5 (RAS-CAS-CL latency setting)

  28. Memory Bandwidth Reservation • System-wide reservation rule: Σ_{i=1..m} Bi ≤ rmin, where m is the number of cores • MemGuard approximates a dedicated (ideal) memory subsystem for each core • bandwidth: Bi (bytes/sec) • latency: 1/Bi (sec/byte) • current tick granularity: 1 msec (replenishment period)
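A small worked example of the reservation rule, and of how a per-core bandwidth Bi translates into a per-period budget of cache-line fetches; the 4-core bandwidth split below is an assumed configuration, not one taken from the paper.

```c
/* Hedged worked example: check the system-wide rule and convert each
 * core's reservation into a per-period budget of cache-line fetches. */
#include <assert.h>
#include <stdio.h>

int main(void)
{
    const double r_min  = 1.2e9;                          /* guaranteed B/W (B/s) */
    const double B[4]   = { 0.6e9, 0.2e9, 0.2e9, 0.2e9 }; /* assumed per-core split */
    const double period = 1e-3;                           /* 1 ms regulation tick  */
    const double line   = 64.0;                           /* cache-line size (B)   */

    double sum = 0;
    for (int i = 0; i < 4; i++)
        sum += B[i];
    assert(sum <= r_min);                /* system-wide rule: sum of Bi <= rmin */

    for (int i = 0; i < 4; i++)
        printf("core %d: %.0f line fills per period\n", i, B[i] * period / line);
    return 0;
}
```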

  29. Memory Bandwidth Reclaim • Key objective • Utilize guaranteed bandwidth efficiently • Regulator • Predicts memory usage based on history • Donates surplus to the reclaim manager at the beginning of every period • When remaining budget (assigned – donated) is depleted, tries to reclaim from the reclaim manager • Reclaim manager • Collects the surplus from all cores • Grants reclaimed bandwidth to individual cores on demand
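A compact sketch of this donate/reclaim cycle follows; the simple averaging predictor, the field names, and the single-threaded global pool are assumptions made to keep the illustration short (a real implementation must synchronize the pool across cores).

```c
/* Hedged sketch of prediction-based reclaim; not the paper's exact algorithm. */
#include <stdint.h>

struct core_budget {
    uint64_t assigned;    /* reserved budget per period                 */
    uint64_t predicted;   /* predicted usage for the coming period      */
    uint64_t remaining;   /* budget left in the current period          */
    uint64_t used_last;   /* usage measured in the previous period      */
};

static uint64_t global_surplus;   /* reclaim manager's shared pool */

/* At the start of every period: donate the part of the assigned budget
 * the core is not predicted to need. */
void period_begin(struct core_budget *c)
{
    c->predicted = (c->predicted + c->used_last) / 2;   /* simple average */
    uint64_t keep = c->predicted < c->assigned ? c->predicted : c->assigned;
    global_surplus += c->assigned - keep;
    c->remaining = keep;
}

/* When the kept budget runs out, try to pull extra budget back from the
 * reclaim manager before throttling the core. */
uint64_t on_budget_depleted(struct core_budget *c, uint64_t chunk)
{
    uint64_t grant = chunk < global_surplus ? chunk : global_surplus;
    global_surplus -= grant;
    c->remaining += grant;
    return grant;      /* 0 means the core must be throttled */
}
```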

  30. Hard/Soft Reservation on MemGuard • Hard reservation (w/o reclaiming) • Guarantees memory bandwidth Bi regardless of other cores • Selectively applicable on a per-core basis • Soft reservation (w/ reclaiming) • Does not guarantee the reserved bandwidth: error cases can occur due to misprediction • The error rate is small (shown in the evaluation) • Best-effort bandwidth • After all cores have used their given budgets, and before the next period begins, MemGuard broadcasts to all cores that they may continue to execute

  31. Evaluation Platform • Intel Core2Quad 8400, 4 MB L2 cache, PC6400 DDR2 DRAM • Modified Linux kernel 3.6.0 + MemGuard kernel module • https://github.com/heechul/memguard/wiki/MemGuard • Used all 29 benchmarks from SPEC2006 plus synthetic benchmarks • [Figure: Intel Core2Quad topology; cores 0-3 with private L1-I/L1-D, two shared L2 caches (one per core pair), system bus, DRAM]

  32. Isolation Effect of Reservation • Sum of the b/w reservations up to rmin (1.2 GB/s) → isolation • 1.0 GB/s (foreground, Core 0) + 0.2 GB/s (lbm, Core 2) = rmin → isolation holds • [Figure: foreground performance normalized to solo IPC@1.0 GB/s, while the lbm reservation on Core 2 is varied from 0.2 to 2.0 GB/s]

  33. Effects of Reclaiming and Spare Sharing • Guarantee foreground (SPEC@1.0GB/s) • Improve throughput of background (lbm@0.2GB/s): 368%

  34. Outline • Motivation • Memory-centric scheduling theory • PRedictable Execution Model (PREM) • Multicore memory-centric scheduling • Single Core Equivalence (SCE) • Memory bandwidth isolation (MemGuard) • Cache space management (Colored Lockdown)

  35. LVL3 Cache & Storage Interference • Inter-core interference • The biggest issue w.r.t. software certification • Fetches by one core might evict cache blocks owned by another core • Hard to analyze! • Inter-task/inter-partition interference • Intra-task interference • Both are present in single-core systems too; intra-task interference is mainly a result of cache self-eviction.

  36. Inter-Core Interference: Options • Private cache • This is often not the case: the majority of COTS multicore platforms share the last level cache among cores • Cache-Way Partitioning • Easy to apply, but inflexible • Reducing the number of ways per core can greatly increase cache conflicts • Colored Lockdown • Our proposed approach • Use coloring to solve cache conflicts • Fine-grained assignment of cache resources (page granularity, 4 Kbytes) • Use cache locking instructions to lock "hot" pages of rt critical tasks → locked pages cannot be evicted from cache R. Mancuso, R. Dudko, E. Betti, M. Cesati, M. Caccamo, R. Pellizzoni, "Real-Time Cache Management Framework for Multi-core Architectures", IEEE RTAS, Philadelphia, USA, April 2013.

  37. How Coloring Works • The position of a cache block inside the cache depends on the value of the index bits within the physical address. • Key idea: the OS decides the physical memory mapping of a task's virtual memory pages → manipulate the indices to map different pages into non-overlapping sets of cache lines (colors)
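For illustration, a page's color can be computed from its physical frame number as in the sketch below; the cache geometry (1 MB, 16-way, 64-byte lines, 4 KB pages, giving 16 colors) is an assumed example. The OS applies the idea by choosing physical frames whose color matches the partition assigned to the task.

```c
/* Hedged sketch of page-color computation from a physical address.
 * The cache geometry below is an assumption chosen for illustration. */
#include <stdint.h>

#define CACHE_SIZE    (1 << 20)   /* 1 MB last-level cache */
#define WAYS          16
#define LINE_SIZE     64
#define PAGE_SIZE     4096

#define NUM_SETS      (CACHE_SIZE / (WAYS * LINE_SIZE))   /* 1024 sets      */
#define SETS_PER_PAGE (PAGE_SIZE / LINE_SIZE)             /* 64 sets/page   */
#define NUM_COLORS    (NUM_SETS / SETS_PER_PAGE)          /* 16 colors      */

/* The color is given by the index bits of the physical address that lie
 * above the page offset: same color -> same group of cache sets. */
static inline unsigned page_color(uint64_t phys_addr)
{
    return (phys_addr / PAGE_SIZE) % NUM_COLORS;
}
```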

  39. How Coloring Works • You can think of a set-associative cache as a two-dimensional array: 32 ways (columns) by 16 colors (rows) • [Figure: cache drawn as a 16-color by 32-way grid]

  40. How Coloring Works • You can think of a set-associative cache as an array… • Using only cache-way partitioning, you are restricted to assigning cache blocks by columns. • Note: assigning one way turns it into a direct-mapped cache!

  41. How Coloring + Locking Works • You can think of the cache as an array… • Combining coloring and locking, you can assign cache blocks to arbitrary positions, independently of the replacement policy

  42. Colored Lockdown Final goal • Aimed model - suffer cache misses in hot memory regions only once: • During the startup phase, prefetch & lock the hot memory regions • Sharp improvement in terms of inter-core isolation and schedulability • [Figure: timeline of tasks T1 (CPU1) and T2 (CPU2); hot regions are fetched during startup, so later memory accesses during execution hit in the locked cache]
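The startup phase could look like the following user-space sketch, which pins the profiled hot regions in RAM and walks them once so they are cache-resident before real-time execution begins. The struct layout is hypothetical, and the step that actually locks the lines in the shared cache relies on platform-specific lockdown support in the kernel (as in the RTAS'13 framework) and is not shown here.

```c
/* Hedged sketch of the startup (prefetch) phase only; the cache-lockdown
 * step itself is platform specific and omitted. */
#include <stdint.h>
#include <stddef.h>
#include <sys/mman.h>

#define CACHE_LINE 64

struct hot_region { void *addr; size_t len; };   /* taken from the profile */

int prefetch_hot_regions(const struct hot_region *r, int n)
{
    for (int i = 0; i < n; i++) {
        /* keep the pages in physical memory (no page faults later) */
        if (mlock(r[i].addr, r[i].len) != 0)
            return -1;
        /* memory phase: touch every cache line once to warm the cache */
        volatile const uint8_t *p = r[i].addr;
        for (size_t off = 0; off < r[i].len; off += CACHE_LINE)
            (void)p[off];
    }
    return 0;
}
```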

  43. Detecting Hot Regions • In the general case, the size of the cache is not enough to keep the working set of all running rt critical tasks. • For each rt critical task, we can identify some high-usage virtual memory regions, called hot memory regions. Such regions can be identified through profiling. • Critical tasks do NOT color dynamically linked libraries. Dynamic memory allocation is allowed only during the startup phase.

  44. Detecting Hot Regions • The final memory profile lists, for each hot page, its section index "#" and page offset, together with a page ranking (A, B, C, ...), e.g. 1 + 0x0002, 1 + 0x0004, 25 + 0x0000, 1 + 0x0001, 25 + 0x0003, ... • It can be fed into the kernel to perform selective Colored Lockdown • How many pages should be locked per process? • → Task WCET reduction as a function of locked pages has an approximately convex shape; convex optimization can be used to allocate cache among rt critical tasks (see the sketch below)
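Because each task's WCET curve exhibits diminishing returns, a simple greedy allocation that repeatedly gives the next page to the task with the largest WCET reduction is one way to realize this optimization; the two-task profile table below is hypothetical data used only to make the loop concrete.

```c
/* Hedged sketch of greedy cache-page allocation under diminishing returns;
 * the wcet[][] table is hypothetical profiling data, not measured results. */
#include <stdio.h>

#define NTASKS   2
#define MAXPAGES 4

/* wcet[t][k] = profiled WCET (us) of task t with k hot pages locked */
static const double wcet[NTASKS][MAXPAGES + 1] = {
    { 100,  70,  55,  48,  45 },
    { 200, 150, 130, 122, 120 },
};

int main(void)
{
    int alloc[NTASKS] = { 0 };
    int budget = 4;                     /* total colored pages available */

    while (budget-- > 0) {
        int best = -1;
        double best_gain = 0;
        for (int t = 0; t < NTASKS; t++) {
            if (alloc[t] == MAXPAGES) continue;
            double gain = wcet[t][alloc[t]] - wcet[t][alloc[t] + 1];
            if (gain > best_gain) { best_gain = gain; best = t; }
        }
        if (best < 0) break;            /* no task benefits any further */
        alloc[best]++;
    }
    for (int t = 0; t < NTASKS; t++)
        printf("task %d: %d locked pages, WCET %.0f us\n",
               t, alloc[t], wcet[t][alloc[t]]);
    return 0;
}
```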

  45. EEMBC Results • EEMBC Automotive benchmarks • Benchmarks converted into periodic tasks • Each task has a 30 ms period • ARM-based platform • 1 GHz Dual-core Cortex-A9 CPU • 1 MB L2 cache + private L1 (disabled) • Tasks observed on Core 0 • Each plotted sample summarizes the execution of 100 jobs • Interference generated with synthetic tasks on Core 1

  46. EEMBC Results • Angle to time conversion benchmark (a2time) • Baseline performance reached when 4 hot pages are locked (81% of accesses caught)
