
Embedded Systems: Testing




Presentation Transcript


  1. Embedded Systems: Testing

  2. Combinational Logic

  3. Main issues in testing embedded systems hardware:
The world is ANALOG, not digital; even when designing combinational logic, we need to take analog effects into account.
Software doesn't change, but hardware does:
--different manufacturers of the same component
--different batches from the same manufacturer
--environmental effects
--aging
--noise
Main areas of concern:
--signal levels ("0", "1": min, typical, and max values; fan-in, fan-out)
--timing: rise and fall times; propagation delay; hazards and race conditions
--how to deal with the effects of unwanted resistance, capacitance, and inductance

  4. Testing combinational circuits (fig_02_39)
Fault: an unsatisfactory condition or state; can be constant, intermittent, or transient; can be static or dynamic
Error: static; inherent in the system; most can be caught
Failure: an undesired dynamic event occurring at a specific time, typically random; often results from breakage or aging; failures cannot all be designed away
Physical faults: single-fault model
Logical faults:
--Structural: from interconnections
--Functional: within a component

  5. Structural faults (fig_02_39)
Stuck-at model (a single-fault model): s-a-1 or s-a-0; may occur because the circuit is open or shorted

  6. Testing combinational circuits: an s-a-0 fault (fig_02_39)
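The stuck-at idea can be sketched in software. This minimal fault simulation (illustrative, not from the slides) compares a good copy of the circuit y = (a AND b) OR c with a copy whose b input is stuck at 0, then searches for a test vector that distinguishes them:

```python
# Sketch (not from the slides): modeling an s-a-0 fault in a tiny
# combinational circuit, y = (a AND b) OR c.

def good_circuit(a, b, c):
    return (a & b) | c

def faulty_circuit(a, b, c):
    b = 0              # s-a-0 fault on input b
    return (a & b) | c

def detects(vec):
    # A test vector detects the fault iff the two outputs differ.
    return good_circuit(*vec) != faulty_circuit(*vec)

# To expose "b s-a-0" we must set a=1 (so b propagates through the AND)
# and c=0 (so the OR does not mask the difference), then drive b=1.
vectors = [(a, b, c) for a in (0, 1) for b in (0, 1) for c in (0, 1)]
detecting = [v for v in vectors if detects(v)]
print(detecting)   # only (1, 1, 0) exposes the fault
```

Exhaustive search works here, but for real circuits test-generation algorithms pick a small detecting set instead of trying all 2^n vectors.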

  7. Modeling an s-a-0 fault (fig_02_40)

  8. An s-a-1 fault (fig_02_41)

  9. Modeling an s-a-1 fault (fig_02_42)

  10. Open-circuit fault; appears as an s-a-0 fault (fig_02_43)

  11. Functional faults (figs_02_49, 02_50)
Example: hazards and race conditions
Two possible methods:
--Method A: consider devices to be delay-free and add a spike generator
--Method B: add delay elements on the paths
As frequencies increase, eliminating hazards through good design becomes even more important

  12. The examples above refer to physical faults or performance faults. These are less common in FPGAs. Our main concern here is with SPECIFICATION and DESIGN/IMPLEMENTATION faults; these are similar to software problems. The main source of error is "human error", and software testing strategies are applicable.

  13. Testing for: Storage Elements; Finite State Machines; Sequential Logic

  14. Johnson counter (2-bit): a shift register with feedback to its input; often used in embedded applications. The states form a Gray code, so they can be decoded using combinational logic without race conditions or hazards (figs_03_30, 03_31, 03_32, 03_33)

  15. 3-stage Johnson counter (fig_03_34):
--output is a Gray sequence: no decoding spikes
--not all 2^n (here 2^3 = 8) states are legal; the period is 2n (here 2*3 = 6)
--unused states are illegal; the circuit must be prevented from ever entering them
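A quick simulation makes the legal cycle and the illegal states concrete. This sketch (illustrative, not from the slides) shifts right each clock and feeds the inverted last-stage output back to the first stage:

```python
# Sketch: a 3-stage Johnson counter. q is a tuple of FF outputs (Q0, Q1, Q2);
# each clock shifts right and feeds back the complement of the last stage.

def step(q):
    return (1 - q[-1],) + q[:-1]

state = (0, 0, 0)
legal = []
for _ in range(6):          # period is 2n = 6
    legal.append(state)
    state = step(state)
print(legal)                # Gray sequence; consecutive states differ in 1 bit
print(state)                # back at (0, 0, 0) after 6 clocks

# The two unused states (0,1,0) and (1,0,1) form their own loop -- once
# entered, the counter never rejoins the legal cycle, which is why the
# design must lock these states out.
print(step((0, 1, 0)))      # -> (1, 0, 1)
```

Running it shows the 6-state Gray cycle and demonstrates why the 2 unused states are dangerous: they cycle among themselves forever.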

  16. Making actual working circuits: must consider
--timing in latches and flip-flops
--clock distribution
--how to test sequential circuits (with n flip-flops there are potentially 2^n states, a large number; access to individual flip-flops for testing must also be carefully planned)

  17. Timing in latches and flip-flops (figs_03_36, 03_37)
Setup time: how long must inputs be present and stable before the gate or clock changes state?
Hold time: how long must inputs remain stable after the gate or clock has changed state?
Metastable oscillations can occur if the timing is not correct.
The figures show setup and hold times for a gated latch enabled by a logical 1 on the gate.
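The setup/hold rule can be stated as a simple predicate. A minimal sketch (the function name and the 16 ns / 5 ns values are illustrative, not from a datasheet): the data input must not change in the window from t_su before the active clock edge to t_h after it.

```python
# Sketch: checking a flip-flop's setup/hold window.

def timing_violation(data_change_t, clock_edge_t, t_su, t_h):
    """True if the data input changed inside the forbidden window
    [edge - t_su, edge + t_h] -- a metastability risk."""
    return (clock_edge_t - t_su) <= data_change_t <= (clock_edge_t + t_h)

# Illustrative values: t_su = 16 ns, t_h = 5 ns, active edge at t = 100 ns.
print(timing_violation(80, 100, 16, 5))   # False: change 20 ns before the edge
print(timing_violation(90, 100, 16, 5))   # True: violates setup
print(timing_violation(103, 100, 16, 5))  # True: violates hold
```

A static-timing-analysis tool applies essentially this check to every FF in the design.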

  18. Example: positive-edge-triggered FF; times are measured at the 50% point of each signal (fig_03_38)

  19. Propagation delay: minimum, typical, and maximum values, measured with respect to the causative clock edge. For a latch, the delay when the gate is enabled must also be specified (figs_03_39, 03_40)

  20. Timing margins: example: increasing the frequency of the 2-stage Johnson counter. The output from either FF is 00110011.... Assume tPDLH = 5-16 ns, tPDHL = 7-18 ns, tsu = 16 ns (figs_03_41, 03_42)

  21. Case 1: L-to-H transition of QA
Clock period = tPDLH + tsu + slack (slack >= 0), so the period must be at least tPDLH + tsu
With tPDLH at its minimum (5 ns), Fmax = 1/[(5 + 16) * 10^-9 s] ~ 48 MHz; at its maximum (16 ns), Fmax ~ 31.3 MHz
Case 2: H-to-L transition: similar calculations with tPDHL give Fmax = 43.5 MHz or 29.4 MHz
Conclusion: Fmax cannot exceed 29.4 MHz if correct behavior is to be guaranteed
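The slide's arithmetic is easy to reproduce: the clock period must cover the flip-flop's propagation delay plus the next stage's setup time, so Fmax = 1/(tPD + tsu), and the slowest (maximum-delay) case governs.

```python
# Reworking slide 21's numbers for the 2-stage Johnson counter.

t_su = 16e-9                   # setup time, 16 ns
t_pdlh = (5e-9, 16e-9)         # L-to-H propagation delay (min, max)
t_pdhl = (7e-9, 18e-9)         # H-to-L propagation delay (min, max)

def fmax(t_pd):
    """Maximum clock frequency for a given propagation delay."""
    return 1.0 / (t_pd + t_su)

for t_pd in t_pdlh + t_pdhl:
    print(f"tPD = {t_pd * 1e9:.0f} ns -> Fmax = {fmax(t_pd) / 1e6:.1f} MHz")

# The worst case (tPDHL max = 18 ns) governs the safe clock rate.
worst_fmax = min(fmax(t) for t in t_pdlh + t_pdhl)
print(f"safe Fmax = {worst_fmax / 1e6:.1f} MHz")   # ~29.4 MHz
```

This is why the conclusion uses 29.4 MHz: correctness must hold even for the slowest specified part.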

  22. Clocks and clock distribution:
--frequency and frequency range
--rise times and fall times
--stability
--precision

  23. Clocks and clock distribution (fig_03_43): for a frequency lower than the input, the divider circuit above can be used; for a higher frequency, a phase-locked loop
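The divider half of this can be modeled in a few lines. A sketch (illustrative, not the slide's circuit): a binary counter driven by the input clock, where counter bit s toggles at f_in / 2^(s+1).

```python
# Sketch: a divide-by-2**n clock divider modeled as a binary counter.

def divided_clock(n_edges, stage):
    """Level of counter bit `stage` after each input rising edge."""
    return [((i + 1) >> stage) & 1 for i in range(n_edges)]

print(divided_clock(8, 0))   # half rate:    [1, 0, 1, 0, 1, 0, 1, 0]
print(divided_clock(8, 1))   # quarter rate: [0, 1, 1, 0, 0, 1, 1, 0]
```

Each added stage halves the frequency again; a PLL is needed to go the other way.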

  24. Selecting a portion of the clock: rate multiplier (fig_03_44)

  25. Note: delays can accumulate (fig_03_46)

  26. Clock design and distribution (fig_03_47):
--need precision
--need to decide on the number of phases
--distribution: need to be careful about delays
Example: H-tree with buffers

  27. Testing: the scan path is the basic tool (fig_03_48)

  28. Testing FSMs (fig_03_56)
Real-world FSMs are often only weakly connected, i.e., we cannot get from any state Si to any state Sj (though we could if we treated the transition diagram as an UNDIRECTED graph)
Strongly connected: we can get from any initial state to any state Sj; a sequence of inputs that permits this is called a transfer sequence
Homing sequence: produces a unique destination state after it is applied
Inputs: Itest = Ihoming + Itransfer
Finding a fault requires a distinguishing sequence
Example: strongly connected vs. weakly connected machines
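For a strongly connected machine, a transfer sequence is just a shortest path in the transition graph, so breadth-first search finds one. The 3-state machine below is hypothetical (it is not the machine in fig_03_56), chosen only to show the idea:

```python
# Sketch: finding a transfer sequence (input sequence driving the FSM
# from one state to another) by BFS over a transition table.
from collections import deque

# next_state[(state, input)] -> state; a small illustrative machine
next_state = {
    ('A', 0): 'B', ('A', 1): 'A',
    ('B', 0): 'C', ('B', 1): 'A',
    ('C', 0): 'C', ('C', 1): 'B',
}

def transfer_sequence(start, goal):
    frontier = deque([(start, [])])
    seen = {start}
    while frontier:
        state, seq = frontier.popleft()
        if state == goal:
            return seq
        for inp in (0, 1):
            nxt = next_state[(state, inp)]
            if nxt not in seen:
                seen.add(nxt)
                frontier.append((nxt, seq + [inp]))
    return None   # goal unreachable: the machine is not strongly connected

print(transfer_sequence('A', 'C'))   # [0, 0]
```

If BFS returns None for some pair of states, the machine is only weakly connected, which is exactly the practical obstacle the slide describes.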

  29. Basic testing setup (fig_03_57)

  30. fig_03_58 fig_03_58

  31. Example: a machine specified by the table below; its successor tree (fig_03_59)

  32. Example: a machine to recognize the sequence 1010 (fig_03_63)
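One way to realize such a recognizer (a sketch; the state encoding below is mine, not necessarily the one in fig_03_63): let the state be the length of the longest prefix of "1010" matched so far, with overlapping matches allowed.

```python
# Sketch: a 5-state recognizer for the pattern 1010 (overlap allowed).
# State = length of the longest prefix of "1010" matched so far; state 4
# means the full pattern was just seen.
delta = {
    (0, '0'): 0, (0, '1'): 1,
    (1, '0'): 2, (1, '1'): 1,
    (2, '0'): 0, (2, '1'): 3,
    (3, '0'): 4, (3, '1'): 1,
    (4, '0'): 0, (4, '1'): 3,   # after "1010", a '1' re-matches "101"
}

def recognize(bits):
    """Output 1 on each input that completes an occurrence of 1010."""
    state, out = 0, []
    for b in bits:
        state = delta[(state, b)]
        out.append(1 if state == 4 else 0)
    return out

print(recognize("1010101"))   # -> [0, 0, 0, 1, 0, 1, 0]
```

Note the fallback transitions (e.g. state 4 on '1' goes to state 3, not 1): they are what make overlapping occurrences like "101010" both count.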

  33. Scan path (fig_03_65)
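The scan-path mechanism itself is simple to model. In test mode the flip-flops are reconfigured into one shift register, so any internal state can be serially loaded (and later read out) one bit per test clock. A sketch with a hypothetical 4-FF chain:

```python
# Sketch: the scan-path idea -- in test mode, the FFs form a shift register.

def scan_shift(chain, serial_in):
    """Shift one bit into the scan chain.
    Returns (new_chain, serial_out); serial_out is the bit that fell off
    the end, which the tester observes."""
    return [serial_in] + chain[:-1], chain[-1]

# Serially load a 4-FF chain with the test state [1, 0, 1, 1]:
chain = [0, 0, 0, 0]
for bit in [1, 1, 0, 1]:   # the last bit shifted in ends up in FF0
    chain, _ = scan_shift(chain, bit)
print(chain)               # [1, 0, 1, 1] -- desired state is now loaded
```

After loading, one functional clock is applied, and the resulting state is shifted back out the same way for comparison against the expected response. This turns the 2^n-state sequential testing problem into a combinational one.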

  34. Standardized boundary-scan architecture; the architecture and the unit under test (fig_03_66)

  35. Testing:
• general requirements
• DFT
• multilevel testing: system, black-box, and white-box tests

  36. Testing--general requirements:
• thorough
• ongoing
• DEVELOPED WITH DESIGN (DFT--design for test); note: this implies that several LEVELS of testing will be carried out
• efficient

  37. Good, bad, and successful tests
• good test: has a high probability of finding an error
• "bad" test: not likely to find anything new
• successful test: finds a new error

  38. The most effective testing is done by an "independent" third party. Question: what does this imply about your team's testing strategy for the quarter project?

  39. How thoroughly can we test?
Example: a VLSI chip with 200 inputs and 2000 flip-flops (one-bit memory cells)
How many exhaustive tests are needed? What is the overall time to test at 1 test/msec? At 1 test/nsec?
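Working the slide's numbers: exhaustive testing needs one pattern per combination of input and state bits, i.e. 2^(200 + 2000) = 2^2200 patterns. The count is so large that the arithmetic is best done in log10 form:

```python
# Slide 39's exhaustive-test arithmetic, done in log10 to avoid overflow.
import math

n_bits = 200 + 2000                        # inputs + flip-flop state bits
log10_patterns = n_bits * math.log10(2)    # 2**2200 ~ 10**662 patterns
print(f"patterns ~ 10**{log10_patterns:.0f}")

SECONDS_PER_YEAR = 3600 * 24 * 365
for rate_hz, label in [(1e3, "1 test/msec"), (1e9, "1 test/nsec")]:
    log10_years = log10_patterns - math.log10(rate_hz * SECONDS_PER_YEAR)
    print(f"{label}: ~10**{log10_years:.0f} years")
```

Both rates give on the order of 10^650 years, dwarfing the age of the universe (~1.4 * 10^10 years); moving from milliseconds to nanoseconds per test barely dents the exponent. This is the motivation for DFT, scan paths, and non-exhaustive test strategies.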

  40. Design for Testability (DFT)--what makes a component "testable"?
• operability: few bugs, incremental test
• observability: you can see the results of the test
• controllability: you can control the state + inputs to test
• decomposability: you can decompose into smaller problems and test each separately
• simplicity: you choose the "simplest solution that will work"
• stability: the same test will give the same results each time
• understandability: you understand the component, its inputs, and its outputs

  41. Testing strategies:
verification--the functions are correctly implemented
validation--we are implementing the correct functions (according to the requirements)

  42. A general design/testing strategy can be described as a "spiral":
requirements -> design -> code
Each development stage pairs with a test level: requirements & specs / system tests; design / module and integration tests (black box); implementation / unit tests (white box)
The spiral runs from START (requirements, specs / system tests) through design / module tests, design / integration tests, and implement / unit tests, to END
When is testing complete? One model is the "logarithmic Poisson model":
f(t) = (1/p) ln(I0*p*t + 1)
where f(t) = cumulative expected failures at time t, I0 = failures per time unit at the beginning of testing, and p = the reduction rate in failure intensity
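The Poisson model is easy to evaluate directly. A sketch of the formula above with illustrative parameter values (I0 = 10 failures per day, p = 0.05; neither number is from the slides):

```python
# Sketch: the logarithmic Poisson reliability-growth model,
# f(t) = (1/p) * ln(I0 * p * t + 1).
import math

def cumulative_failures(t, I0, p):
    """Expected cumulative failures after t time units of testing."""
    return (1.0 / p) * math.log(I0 * p * t + 1.0)

# Illustrative: I0 = 10 failures/day at the start, reduction rate p = 0.05.
for t in (1, 10, 100):
    print(t, round(cumulative_failures(t, 10, 0.05), 1))
```

The logarithm captures the diminishing returns of testing: each new failure is harder to find than the last, so the curve flattens and "testing complete" becomes a cost/benefit decision rather than a point where failures reach zero.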

  43. Types of testing:
• white box--"internals" (also called "glass box")
• black box--modules and their "interfaces" (also called "behavioral")
• system--"functionality" (can be based on specs, use cases)
• (application-specific)

  44. Steps in a good test strategy:
• quantified requirements
• explicit test objectives
• clear user requirements
• use "rapid-cycle testing"
• build self-testing software
• filter errors by technical reviews
• also review test cases and strategy formally
• continually improve the testing process

  45. Black-box testing: general guidelines:
• test BOUNDARIES
• test output as well
• choose "orthogonal" cases if possible
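Boundary testing in practice means probing just below, at, and just above each edge of the valid input range. A sketch against a hypothetical function under test (an 8-bit saturating increment; the function and its range are purely illustrative):

```python
# Sketch: black-box boundary testing of a hypothetical 8-bit
# saturating increment with valid input range 0..255.

def sat_inc8(x):
    """Increment, clamped to the unsigned 8-bit range 0..255."""
    return max(0, min(x + 1, 255))

# Probe one step outside, at, and one step inside each boundary.
boundary_inputs = [-1, 0, 1, 254, 255, 256]
results = {x: sat_inc8(x) for x in boundary_inputs}
print(results)   # the interesting behavior is at the edges, e.g. 255 -> 255
```

Off-by-one and overflow bugs cluster at exactly these points, which is why boundaries get tested before "typical" mid-range values.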
