
From Simulink to Lustre to TTA: a layered approach for distributed embedded applications



  1. From Simulink to Lustre to TTA: a layered approach for distributed embedded applications Stavros Tripakis, VERIMAG. Joint work with: Paul Caspi, Adrian Curic, Aude Maignan, Christos Sofronis

  2. Problem and approach • How to develop embedded software? • Safely: safety-critical applications. • Efficiently: development cost, time-to-market. • “Model-based” approach: • High-level design models. • Analysis techniques: catch bugs early. • Synthesis techniques: correct-by-construction implementations.

  3. Our view: a development process in three layers, supported by: • Design: models. • Implement: programming languages. • Execute: OS, middleware, HW architecture. Automation is key! Semantic preservation!

  4. Our work: Simulink/Stateflow → Lustre → TTA • European IST projects: “NEXT TTA” (2002-2004) and “RISE” (2003-2005). • Automotive applications: Audi.

  5. A Simulink model (parts)

  6. Time Triggered Architecture (TTA) • (Figure: the TTA architecture.) • Time-triggered: • Processors synchronize their clocks. • Static TDMA non-preemptive scheduling for tasks running on processors and messages transmitted on the bus. • Fault-tolerance services.

  7. Why these choices? • Simulink/Stateflow: de-facto standard in automotive. • Lustre: formal semantics, analysis tools, C code generators. • TTA: close to synchronous semantics; Audi likes it.

  8. The development process (Simulink/Stateflow → Lustre → TTA) • Design/simulate controller in Simulink/Stateflow. • Translate it to Lustre. • Verify the Lustre program (transparent). • Distribute the Lustre program on TTA. • Generate C code, compile and run.

  9. Plan of talk • Translating Simulink to Lustre. • Distributing Lustre on TTA. • Tool-chain and case studies.

  10. Plan of talk • Translating Simulink to Lustre. • Distributing Lustre on TTA. • Tool-chain and case studies.

  11. Translating Simulink to Lustre • We only translate discrete-time Simulink: the controller to be implemented. • Goal: preserve semantics of Simulink. What semantics?

  12. Simulink semantics • Informal: described in documentation. • Defined by the simulator. • Multiple different semantics: user options.

  13. Translation goal • The input/output semantics of the generated Lustre program must equal the input/output behaviour of the original Simulink model as given by the Mathworks simulator, • assuming a fixed set of user-defined options.

  14. From Simulink to Lustre • A glance into Lustre • Translation: • Type inference • Clock inference • Hierarchical block-by-block translation

  15. A glance into Lustre • A Lustre program models an I/O automaton (inputs, outputs, memory/state, and a step function computing the transition). • Implementing a Lustre program: • Read inputs; • Compute next state and outputs; • Write outputs; • Update state; • Repeat at every “trigger” (external event).

  16. A glance into Lustre • A simple Lustre program: • No inputs. • Output: x. • State: pre(x) (previous value of x).

node Counter() returns (x: int);
let
  x = 0 -> pre(x) + 1;
tel
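The semantics of the Counter node above can be mimicked by a small Python generator (an illustration of the `0 -> pre(x) + 1` equation, not generated code):

```python
from itertools import islice

def counter():
    # x = 0 -> pre(x) + 1: the output is 0 at the first instant,
    # then the previous value plus one at every later instant.
    x = 0          # initial value (the "0 ->" branch)
    while True:
        yield x
        x = x + 1  # pre(x) + 1 at the next instant

first = list(islice(counter(), 5))  # values at the first five instants
```

Running this gives the stream 0, 1, 2, 3, 4, … — one value per trigger of the step function.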

  17. A glance into Lustre • Multi-clocked (e.g., multi-periodic) systems:

x = 0 -> pre(x) + 2;
b = true -> not pre(b);
y = x when b;

Here clock(x) = basic clock, and clock(y) = b.
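The multi-clocked example can be simulated in a few lines of Python: `y = x when b` keeps only the instants at which b is true (a sketch of the stream semantics, not of the compiler):

```python
n = 6
x = [2 * i for i in range(n)]            # x = 0 -> pre(x) + 2 : 0, 2, 4, ...
b = [i % 2 == 0 for i in range(n)]       # b = true -> not pre(b) : T, F, T, ...
y = [xi for xi, bi in zip(x, b) if bi]   # y = x when b : x sampled on clock b
```

So y is the slower stream 0, 4, 8, … defined only at the instants where b holds.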

  18. Simulink versus Lustre • (Figure: a block diagram with blocks A, B, C and signals s, x, v, y, u, z, w.) • Both data-flow style. • Both hierarchical. • Graphical versus textual. • Different type mechanisms: mandatory/explicit in Lustre, not in Simulink. • Different timing mechanisms: implicit logical clocks in Lustre; sample times and triggers in Simulink.

  19. Translation steps • Type inference • Clock inference • Hierarchical block-by-block translation

  20. Translation steps • Type inference • Clock inference • Hierarchical block-by-block translation

  21. Simulink types • Types are not mandatory in Simulink. • Available types: double, single, int32, int16, int8, …, boolean. • By default signals are “double”. • Basic block type signatures (table omitted).

  22. Simulink type inference • Fix-point computation on a type lattice (⊥ at the bottom; double, single, int8, …, boolean in between; error at the top). • E.g., an adder with input signals x, y and output z:

Fix-point equations:
tx = sup(double, ty, tz)
ty = sup(double, tx, tz)
tz = sup(double, tx, ty)

Least fix-point: tx = ty = tz = double

  23. Simulink type inference • Fix-point computation on a lattice. • E.g., the same adder, with x declared int8:

Fix-point equations:
tx = int8
ty = sup(double, tx, tz)
tz = sup(double, tx, ty)

Least fix-point: tx = ty = tz = int8

  24. Simulink type inference • Fix-point computation on a lattice. • E.g., an adder (inputs x, y, output z) whose output also feeds a “not” block with output w:

Fix-point equations:
tx = sup(double, ty, tz)
ty = sup(double, tx, tz)
tz = sup(double, tx, ty, boolean, tw)
tw = sup(boolean, tz)

  25. Simulink type inference • Fix-point computation on a lattice. • Same example as before:

Fix-point equations:
tx = sup(double, ty, tz)
ty = sup(double, tx, tz)
tz = sup(double, tx, ty, boolean, tw)
tw = sup(boolean, tz)

Least fix-point: tz = error

  26. The overall algorithm • Generate fix-point equations. • Find least fix-point. • If error, reject model. • Otherwise, map Simulink types to Lustre types: • double, single: real • int32, int16, int8, … : int • boolean: bool
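The overall algorithm can be sketched in Python. The lattice below is an illustrative assumption (⊥ ⊑ double ⊑ single ⊑ int8, ⊥ ⊑ boolean, with error on top); it is chosen so that sup(double, int8) = int8 and sup(double, boolean) = error, matching the slides' examples:

```python
UP = {  # up-set of each type: all elements greater than or equal to it
    "bot":     {"bot", "double", "single", "int8", "boolean", "error"},
    "double":  {"double", "single", "int8", "error"},
    "single":  {"single", "int8", "error"},
    "int8":    {"int8", "error"},
    "boolean": {"boolean", "error"},
    "error":   {"error"},
}

def sup(types):
    # least upper bound: the lowest element above all arguments
    common = set.intersection(*(UP[t] for t in types))
    return max(common, key=lambda u: len(UP[u]))

def infer(eqs):
    # eqs: variable -> list of constants/variables appearing under sup(...)
    env = {v: "bot" for v in eqs}
    changed = True
    while changed:  # iterate from bottom up to the least fix-point
        changed = False
        for v, terms in eqs.items():
            new = sup([env.get(t, t) for t in terms])
            if new != env[v]:
                env[v], changed = new, True
    return env

# Adder with untyped signals (slide 22): everything becomes double.
all_double = infer({"x": ["double", "y", "z"],
                    "y": ["double", "x", "z"],
                    "z": ["double", "x", "y"]})

# Adder feeding a "not" block (slide 25): boolean and double have no
# common type below error, so the model is rejected.
clash = infer({"x": ["double", "y", "z"],
               "y": ["double", "x", "z"],
               "z": ["double", "x", "y", "boolean", "w"],
               "w": ["boolean", "z"]})
```

The iteration terminates because the lattice is finite and `env` only moves upward.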

  27. Translation steps • Type inference • Clock inference • Hierarchical block-by-block translation

  28. Time in Lustre • One mechanism (clocks) + one rule: • Cannot combine signals of different clocks:

x = 0 -> pre(x) + 2;
b = true -> not pre(b);
y = x when b;
z = x + y;   -- compiler error: x and y have different clocks

  29. Time in Simulink • Simulink has two timing mechanisms: • Sample times: (period, phase). • Can be set in blocks: in-ports, UD, ZOH, DTF, … • Defines when the output of a block is updated. • Can be inherited from inputs or parent system. • Triggers: • Set in subsystems. • Defines when the subsystem is “active” (outputs updated). • The sample times of all children blocks are inherited. • Simulink triggers = Lustre clocks.

  30. Time in Simulink • Greatest-common-divisor (GCD) rule: a block fed with inputs of different rates gets their GCD as its rate (e.g., inputs x at 2 ms and y at 3 ms give an output z at 1 ms). • Other timing rules, e.g.: • Insert a unit delay when passing from a “slow” block to a “fast” block.

  31. Overview of clock inference algorithm • Infer the sample time of every Simulink signal. • Check Simulink’s timing rules. • Create Lustre clocks for Simulink sample times and triggers. • Basic clock: GCD of all sample times, e.g., 1 ms. • Other clocks: multiples of the basic clock; e.g., a 2 ms clock is the boolean stream true, false, true, false, … on the 1 ms basic clock.

  32. Sample time inference • Basic idea: same as type inference. • Poset with pairs (period, phase). • No error element. • p1 ⊑ p2 if p1 is a “multiple” of p2. • Complex definition of “multiple” because of phase. • Sup is “GCD”. • Although the poset is infinite, termination is guaranteed: • We can remain within a finite part of the poset (the set of all sample times in the Simulink model, closed under GCD).
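The clock-inference arithmetic can be sketched in Python. Phases are ignored here, which is a simplifying assumption (the slides note that phases make the “multiple” relation more complex):

```python
from math import gcd
from functools import reduce

def basic_period(periods_ms):
    # The basic clock is the GCD of all sample periods in the model.
    return reduce(gcd, periods_ms)

def clock_on_base(period_ms, base_ms, n):
    # A slower clock expressed as a boolean stream on the basic clock:
    # true once every period/base ticks (e.g., 2 ms on a 1 ms base:
    # true, false, true, false, ...).
    step = period_ms // base_ms
    return [i % step == 0 for i in range(n)]
```

For example, sample times of 2 ms and 3 ms give a 1 ms basic clock, matching the GCD rule of slide 30.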

  33. Translation steps • Type inference • Clock inference • Hierarchical block-by-block translation

  34. Hierarchical translation • A Simulink model can be seen as a tree: • root system, subsystems, basic blocks (leaves). • A simple block (+, gain, …) is translated to a basic Lustre operator (+, *, …). • Complex blocks (transfer functions, …) are translated into Lustre nodes. • Subsystems are translated into Lustre nodes.

  35. Bottom-up translation • Simulink model: blocks A, B, C with signals s, x, v, y, u, z, w (figure). • Resulting Lustre program:

node A(x, y) returns (s); …
node B(s, u) returns (v); …
node C(z) returns (u, w); …
node Root(x, y, z) returns (v, w);
var s, u;
let
  s = A(x, y);
  v = B(s, u);
  (u, w) = C(z);
tel
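The bottom-up scheme can be mimicked by a post-order walk over the subsystem tree that emits one Lustre node declaration per system. The tree encoding and signatures below are illustrative assumptions mirroring the A/B/C example:

```python
def translate(system, out):
    # Post-order: translate children (subsystems) first, then emit the
    # node declaration for this system, so callees precede callers.
    for child in system.get("children", []):
        translate(child, out)
    params = ", ".join(system["inputs"])
    rets = ", ".join(system["outputs"])
    out.append(f"node {system['name']}({params}) returns ({rets});")

root = {"name": "Root", "inputs": ["x", "y", "z"], "outputs": ["v", "w"],
        "children": [
            {"name": "A", "inputs": ["x", "y"], "outputs": ["s"]},
            {"name": "B", "inputs": ["s", "u"], "outputs": ["v"]},
            {"name": "C", "inputs": ["z"], "outputs": ["u", "w"]},
        ]}
decls = []
translate(root, decls)
```

The nodes for A, B, and C are emitted before the node for Root, as in the slide.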

  36. Plan of talk • Translating Simulink to Lustre. • Distributing Lustre on TTA. • Tool-chain and case studies.

  37. Distributing Lustre on TTA • A resource allocation problem: • computation is not free • communication is not free • First, a description problem: • express available/required resources • use annotations (“pragmas”) to do this • Then, the distribution problem: • map Lustre code to TTA tasks and messages • schedule the tasks on processors and messages on the bus

  38. Distributing Lustre on TTA • Extend Lustre with annotations. • Decompose a Lustre program into tasks. • Schedule the tasks.

  39. Distributing Lustre on TTA • Extend Lustre with annotations. • Decompose a Lustre program into tasks. • Schedule the tasks.

  40. Annotations: code distribution y = A(x); (location = P1) • Meaning: • The execution of Lustre node A is done at TTA processor P1. • The output y is produced at P1. • If the input x is produced elsewhere, it must be transmitted to P1.

  41. Annotations: timing assumptions exec-time(A) in [10,20] • Meaning: • BCET(A) = 10 and WCET(A) = 20. • A is a Lustre node; the execution time is given for the C code generated from A (assumed to be executed atomically). • Different execution times for A can be given in case A is run on different processors or called by different nodes (e.g., with different inputs). • Note: we rely on external tools for execution-time analysis.

  42. Annotations: timing requirements date(y) - date(x) ≤ 10 • Meaning: • The delay from availability of x until availability of y must be at most 10 time units (a deadline). • Availability: • input variables are available when they are read, • internal variables when they are computed, • output variables when they are written.

  43. Distributing Lustre on TTA • Extend Lustre with annotations. • Decompose a Lustre program into tasks. • Schedule the tasks.

  44. Decomposing Lustre into tasks Call graph and dependencies of a Lustre program (figure): node B calls B1 and B2; B2 depends on results from B1.

  45. Decomposing Lustre into tasks Call graph and dependencies of a Lustre program: Should the entire node B be one task?

  46. Decomposing Lustre into tasks Call graph and dependencies of a Lustre program: Or should there be two tasks B1 and B2 ?

  47. Decomposing Lustre into tasks • Two extremes: • One task per TTA processor: too coarse, perhaps no feasible schedule (pre-emption not allowed). • One task for every Lustre operator: too fine, scheduling too costly (too many tasks). • Approach: • Start with coarse partition. • Split when necessary (no feasible schedule), based on feedback from the scheduler. • Feedback: heuristics.

  48. Distributing Lustre on TTA • Extend Lustre with annotations. • Decompose a Lustre program into tasks. • Schedule the tasks.

  49. Scheduling • Schedule tasks on each processor. • Schedule messages on the bus. • Static TDMA schedules (both for bus and processors). • No pre-emption (the problem is known to be NP-hard). • Algorithm: • Branch-and-bound to fix the order of tasks/messages. • Solve a linear program at the leaves to find start times. • Ensures deadlines are met for all possible execution times.
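The branch-and-bound idea can be sketched for a single processor. This is an illustration, not the paper's algorithm: branching enumerates task orders, the bound prunes orders whose partial completion already exceeds the deadline, and the LP step at the leaves is replaced by a greedy earliest-start computation (a simplifying assumption, valid here because there are no release times):

```python
from itertools import permutations

def schedule(tasks, precedes, deadline):
    # tasks: {name: wcet}; precedes: set of (a, b), a must finish before b.
    names = list(tasks)

    def try_order(order):
        done, t, starts = set(), 0, {}
        for task in order:
            # Precedence: every predecessor of task must already be done.
            if any(a not in done for (a, b) in precedes if b == task):
                return None
            starts[task] = t
            t += tasks[task]          # non-preemptive: run to completion
            if t > deadline:          # bound: prune once the deadline is blown
                return None
            done.add(task)
        return starts

    for order in permutations(names):  # branch: candidate total orders
        starts = try_order(order)
        if starts is not None:
            return starts
    return None                        # no feasible static schedule
```

For example, tasks T1 (WCET 2), T2 (WCET 3), T3 (WCET 1) with T1 before T2 fit a deadline of 6 but not of 5.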

  50. Scheduling (Figure: branch-and-bound tree. Branching fixes orderings such as T1→T4, T3→T5; T3→T4 versus T4→T3; T1→T2. Branches violating necessary conditions are pruned as infeasible; at each leaf, where a total order is fixed, an LP computes the start times.)
