
TTP and FlexRay



  1. TTP and FlexRay

  2. Time Triggered Protocols • Global time by fault-tolerant clock synchronisation • Exact time point of a certain message is known (determinism) • Real-time capable, for safety-critical systems • Each node gets a time slot in the transmission loop where only it can send a message + No arbitration necessary + No address field – Less flexible
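The determinism claimed above can be made concrete: with a static TDMA schedule, the exact send time of every message is computable in advance. A minimal sketch, with invented node names and a hypothetical slot length:

```python
# Hypothetical illustration of TDMA determinism: given a static schedule,
# the exact transmission time of any node's message is known in advance.
# Node names and the slot length are invented for this sketch.

SLOT_LEN_US = 200          # assumed fixed slot length in microseconds
SCHEDULE = ["brake", "engine", "steer", "gateway"]  # one TDMA round

def send_time_us(node: str, round_no: int) -> int:
    """Start time of `node`'s slot in the given round (microseconds
    since cluster start). No arbitration: the slot index alone decides."""
    slot = SCHEDULE.index(node)
    round_len = SLOT_LEN_US * len(SCHEDULE)
    return round_no * round_len + slot * SLOT_LEN_US

# "steer" always transmits at the same offset within every round:
print(send_time_us("steer", 0))   # 400
print(send_time_us("steer", 3))   # 2800
```

Because the slot index alone determines access, no address field or arbitration is needed; the trade-off is that the schedule is fixed, hence the reduced flexibility noted above.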

  3. Requirements • General • Higher bandwidth • Fault tolerance • Deterministic data transmission with guaranteed latency and minimal jitter • Support for distributed systems • Unification of bus systems within vehicles • Composability • Automotive • Configurable synchronous and asynchronous transmission • Support of scalable redundancy • Prompt error detection and error reporting • Fault containment at the level of the physical layer • Media access without arbitration • Support for fiber-optic and electrical physical layers • Flexibility, expandability and easy configuration in automotive applications

  4. Requirements (Priorities) • TTP/C: 1. Security, 2. Composability, 3. Flexibility • FlexRay: 1. Flexibility, 2. Composability, 3. Security

  5. System Structure • The CNI (implemented as a dual-ported RAM) is the interface between the application layer and the protocol layer of a TTP node. • The TTP/C protocol runs on the TTP/C communication controller. • Applications run on the host subsystem.

  6. Topology • Bus and star topologies for TTP/C and FlexRay [diagrams]

  7. Nodes • TTP and FlexRay node structures [diagrams]

  8. Message Transmission • Both protocols use a TDMA strategy and protect the communication channel with a Bus Guardian • TTP: max data field length = 236 B; MEDL, TDMA round, cluster cycle; event channel on top of TTP – a specified number of bytes in each message is reserved; an event-triggered protocol can be implemented at a higher level; the CNI continues to be defined in the temporal domain; error correction possible; asynchronous traffic protected by the Bus Guardian • FlexRay: max data field length = 12 B; schedule determined at runtime; event channel in parallel – two recurring intervals (synchronous for high priority & asynchronous for low priority); asynchronous messages controlled by the Byteflight “minislotting” protocol

  9. Frames • TTP and FlexRay frame formats [diagrams]

  10. Fault Hypothesis (introduction) • Fault mode + no. of faults + fault arrival rate • Level 1 and Level 2 faults • FCUs and the partitioning strategy • Faults can affect time, value and space • Hybrid fault model – manifest, symmetric and asymmetric faults • Faults – active or passive • Self-checking pairs • Fail silence • Slightly-Off-Specification (SOS) faults • Reconfiguration and reconfiguration rate • Never Give Up (NGU) strategy

  11. Fault Hypothesis (TTP/C) • Fault modes: • Arbitrary active faults in controllers and the hub of TTA-star • Arbitrary passive faults in the guardians and buses of TTA-bus • Spatial proximity faults that take out nodes and a hub in TTA-star • Maximum faults: TTA adopts a single-fault hypothesis. In more detail, the fault hypothesis of TTA assumes the following numbers of faults. • For TTA-bus: in each node either the controller or the bus guardian may fail (but not both). One of the buses may fail. To retain single fault tolerance, at least four controllers and their bus guardians must be nonfaulty, and both buses must be nonfaulty. Provided at least one bus is nonfaulty, the system may be able to continue operation with fewer nonfaulty components. • For TTA-star: to retain single fault tolerance, at least four controllers and both hubs must be nonfaulty. Provided at least one hub is nonfaulty, the system may be able to continue operation with fewer nonfaulty components. • Fault arrival rate: At most one fault every two rounds

  12. Fault Hypothesis (FlexRay) – inferences • A node consisting of a microcontroller host, a communication controller, and two bus guardians will be fabricated on a single chip. It appears that all four components will use separate clock oscillators. • Fault modes: • Asymmetric (and presumably, therefore, also arbitrary) faults in controllers for the purposes of clock synchronization • Fault modes for other services and components are not described • Spatial proximity faults may take out nodes and an entire hub • Maximum faults: • It appears that a single-fault hypothesis is intended: in each node, at most one bus guardian, or the controller, may be faulty. At most one of the interconnects may be faulty. • For clock synchronization, fewer than a third of the nodes may be faulty. • Fault arrival rate: not described.

  13. Clock Synchronisation • Throughput of the bus depends on the tightness of the bus schedule, which depends on the quality of global clock synchronisation; that in turn depends on the quality of the local oscillators and the synchronisation algorithm • Two classes of synchronisation algorithm • Averaging (e.g. Welch-Lynch) • “fault-tolerant midpoint”: assume n clocks and that the maximum number of simultaneous faults to be tolerated is t (3t < n); the fault-tolerant midpoint is the average of the (t+1)-st and (n−t)-th readings when arranged from smallest to largest • Event based (e.g. Srikanth-Toueg) • Both averaging and event-based algorithms require at least 3a + 1 nodes to tolerate a arbitrary faults
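The fault-tolerant midpoint described above is short enough to state directly in code. A minimal sketch, treating clock readings as plain numbers:

```python
def ft_midpoint(readings, t):
    """Welch-Lynch fault-tolerant midpoint: sort the n clock readings,
    discard the t smallest and t largest, and average the extremes of
    what remains -- i.e. the (t+1)-st and (n-t)-th smallest values.
    Requires 3t < n."""
    n = len(readings)
    assert 3 * t < n, "need more than 3t clocks to tolerate t faults"
    s = sorted(readings)
    return (s[t] + s[n - t - 1]) / 2

# Nine clocks, two of them (0 and 1000) wildly faulty; t = 2:
print(ft_midpoint([0, 98, 99, 100, 100, 101, 102, 103, 1000], 2))  # 100.5
```

Discarding the t extremes at each end means up to t arbitrary readings cannot drag the result outside the range spanned by the nonfaulty clocks, which is the property the averaging class relies on.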

  14. Clock Synchronisation (TTP/C) • Welch-Lynch algorithm for t = 1 • Does not use dedicated wires; exploits the fact that communication is time-triggered by a global schedule • TTP nodes that have accurate clocks are marked with the SYF (synchronisation frame) flag in the MEDL, and the times of these nodes are used for synchronisation • Four registers per node are used to maintain the most recent accurate clock-difference readings • When the current slot has the synchronization field (CS) set in the MEDL, each node runs the synchronization algorithm using the four clock readings stored in its queue (the largest and smallest are discarded) • As the TTP algorithm is designed to tolerate one arbitrary (Byzantine) fault in every TDMA round, there must be at least four slots in every TDMA round with the SYF flag set • The group membership service is used to exclude nodes with very faulty clocks

  15. Clock Synchronisation (FlexRay) • Welch-Lynch algorithm • No membership service • No mechanism for detecting faulty nodes • No reconfiguration to exclude them • To tolerate two arbitrary faults (t = 2), at least: • seven nodes (3t + 1) • five disjoint communication paths or three broadcast channels (2t + 1 and t + 1, respectively)

  16. Bus Guardian • A separate FCU that has an independent copy of the schedule and knowledge of the global time • Mediates message transmission through an interface to an interconnect • Prevents the ‘babbling idiot’ problem • TTP/C: guardians share power supply and physical space with the controller; synchronised by a start-of-round signal from the controller; in TTA-star the BG is moved to the hub • FlexRay: two guardians per node, hence greater cost
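The guardian's mediation role reduces to a simple time-window check: with its own copy of the schedule, it enables the transmit path only during its node's slot. A minimal sketch, with illustrative slot numbers and timing:

```python
# Sketch of a bus guardian's gating logic, assuming it holds an
# independent copy of the schedule. A "babbling idiot" controller that
# tries to send outside its slot is simply cut off from the bus.
# Slot length, slot index, and round size are invented for this sketch.

SLOT_LEN_US = 200          # assumed slot length in microseconds
SLOTS_PER_ROUND = 4        # assumed slots per TDMA round
MY_SLOT = 2                # slot index assigned to this node

def guardian_allows(now_us: int) -> bool:
    """True iff global time `now_us` falls inside this node's slot."""
    slot = (now_us // SLOT_LEN_US) % SLOTS_PER_ROUND
    return slot == MY_SLOT

print(guardian_allows(450))   # inside slot 2 -> True
print(guardian_allows(700))   # inside slot 3 -> False
```

Because the guardian's schedule copy and clock are independent of the controller, a single faulty controller cannot disturb other nodes' slots, which is why the guardian must be a separate FCU.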

  17. Startup and Restart • Failure of the system must be detected by the bus • Restart must be automatic and fast • Restart is initiated when an interface detects no activity on any bus line for some interval – the interface will then send the ‘wake-up’ signal • Components that detect faults in themselves, or are notified of a fault, perform local restart and self-test.

  18. Startup and Restart • TTP/C: uses I-frames and the C-state data in them; a node that does not receive one transmits one itself on any one bus; problem of colliding restarts; problem of bad restarts • FlexRay: difficult to implement with an incomplete schedule; difficult to initialise the Welch-Lynch algorithm if faults are present at startup and there is no clique avoidance; self-stabilising algorithms based on randomization?

  19. Services • The basic purpose of these architectures is to build reliable distributed applications • Basic services • clock synchronization • time-triggered activation • reliable message delivery • Fault-tolerant replication • Approximate agreement • Exact agreement • the problem of distributing data consistently in the presence of faults is variously called interactive consistency • Agreement: all nonfaulty receivers obtain the same message. • Validity: if the transmitter is nonfaulty, then nonfaulty receivers obtain the message actually sent.

  20. Services • Implementing interactive consistency • State machine approach (majority voting) • Master/shadow • Compensation • Group membership service • Each node maintains a private membership list • Agreement: the membership lists of all nonfaulty nodes are the same. • Validity: the membership lists of all nonfaulty nodes contain all nonfaulty nodes and at most one faulty node. • “Clique avoidance” – maintain agreement, sacrifice validity
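The voting step at the heart of the state-machine approach can be sketched in a few lines. This is a minimal illustration of majority voting over replicated values, not any protocol's actual implementation:

```python
from collections import Counter

def majority_vote(copies):
    """Return the value held by a strict majority of the replicated
    copies a receiver obtained, or None if no value has a majority.
    With 2f + 1 replicas, up to f faulty copies are outvoted."""
    value, count = Counter(copies).most_common(1)[0]
    return value if count > len(copies) // 2 else None

print(majority_vote([42, 42, 7]))   # 42   (one faulty replica outvoted)
print(majority_vote([1, 2, 3]))     # None (no majority)
```

Voting of this kind gives exact agreement on replicated state; the `None` case is where a membership or clique-avoidance service becomes relevant, since the receivers must still converge on a common view.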

  21. Services • TTP/C: membership service = clique avoidance + implicit acknowledgement • FlexRay: only clock synchronisation and reliable message delivery
