
Lecture 16: Verification (Testing)



1. ECE 412: Microcomputer Laboratory
Lecture 16: Verification (Testing)

2. Review Questions
• What are the features of a spiral design methodology?
  • Concurrent HW & SW development
  • Parallel verification and synthesis
  • Floorplanning and place-and-route included in the synthesis process
  • Modules developed only if a predesigned hard or soft macro is not available
  • Planned iteration throughout until convergence
• What are the three major advantages of a good abstraction?
  • Encapsulates a useful amount of complexity
  • Occurs at a “natural” point in the design
  • Promotes re-use

3. Objectives
• Provide insights for testing your MPs, projects, and other systems that you design
• Give an overview of the different techniques and strategies used for design verification
• Here, we use “verification” and “testing” interchangeably
  • “Testing” often refers to the checks carried out on post-fabrication silicon parts; here we use it to mean running verification tests on the design before fabrication
• Introduce some general knowledge about testing and testability

4. Verification
• Verification/testing is one of the most expensive parts of chip development
  • Verification accounts for > 50% of design effort for many chips
  • Debug time after fabrication has enormous opportunity cost
  • Shipping defective parts can sink a company
• Example: Intel FDIV bug
  • Logic error not caught until > 1M units shipped
  • Recall cost $450M (!!!)

5. Verification Concept
• Verification is the process of designing and implementing the design-validation environment and strategy. It entails deciding:
  • What modules to test
  • What specification to use
  • What functionality to test
  • What type of testing to perform (dynamic, static, random)
• In industry, verification is usually carried out by a separate team of engineers who are not involved in the design process
  • Resource expensive: two or more verification engineers for every design engineer
• A verification plan is written and approved before its implementation starts

6. Integration of Verification and Design
• Design needs to take verification into account
  • A design is not complete until exhaustive (or nearly exhaustive) testing is complete
  • Verification is a parallel process that starts almost at the same time as design
  • The design might need to be modified to allow exhaustive testing
• Good design paradigms that enable good testing:
  • Well-defined module functionality and interfaces
  • Object-oriented design
  • Design respects abstraction barriers
  • Modules treated as units that provide a particular set of functions, not units that have a particular implementation
• Ideally, test individual modules before testing the entire design
  • This is not always feasible (e.g., for complex pipelined designs with inputs fed back from later pipeline stages)

7. Types of Testing
• Dynamic vs. static
  • i.e., simulation vs. formal methods
• Deterministic vs. random
  • Pre-specified vs. randomly generated sequences of inputs
• Waveform/log checking vs. self-checking
  • Non-automatic vs. automatic testing
• Black-box vs. glass-box testing
  • Considering only the specification vs. considering the implementation when designing tests
• Unit vs. integration testing
  • Module vs. system testing

8. Dynamic vs. Static
• Dynamic testing provides test input vectors and uses an event-based simulator to examine behavior and output values (see the sketch below)
  • The test is only as good as the input vectors used
  • For complex designs with wide interfaces, exhaustive testing of all input combinations may not be practical or feasible due to the total simulation time required
  • Handles big designs more easily, as it only needs a set of inputs and the design
  • Tools: ModelSim (Mentor), NC-Sim (Cadence), etc.
• Static testing uses formal methods to prove the correctness of a design
  • An abstract model (a set of properties) of the module is written and compared against the design for all possible signal combinations
  • If a discrepancy is found, an error is signaled
  • The test is only as good as the provided abstract model
  • Hard to use for an entire design; usually used to verify smaller modules
  • Tools: Formality (Synopsys), Axiom, ALDEC, …
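
To make dynamic testing concrete, here is a minimal event-driven testbench sketch in SystemVerilog. The 4-bit adder add4 and its ports are hypothetical; the point is that 9 input bits (512 vectors) are trivially exhaustible, while a wide datapath is not.

    // Minimal dynamic-testing sketch: exhaustively simulate a small module.
    // add4 is a hypothetical 4-bit adder under test.
    module tb_add4;
      logic [3:0] a, b;
      logic       cin;
      logic [4:0] sum;

      add4 dut (.a(a), .b(b), .cin(cin), .sum(sum));  // device under test

      initial begin
        for (int i = 0; i < 512; i++) begin
          {a, b, cin} = i[8:0];     // apply the next input vector
          #1;                       // let combinational logic settle
          if (sum !== a + b + cin)  // compare against a golden model
            $error("mismatch: a=%0d b=%0d cin=%0d sum=%0d", a, b, cin, sum);
        end
        $display("exhaustive test finished");
        $finish;
      end
    endmodule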

9. Deterministic vs. Random
• Deterministic testing uses a user-specified sequence of test vectors
  • The verification engineer builds a testbench that drives the design inputs with a specific sequence of test vectors
• Random testing uses a seed to pseudo-randomly generate a sequence of test vectors (see the sketch below)
  • The verification engineer builds a testbench that uses a random seed to drive a pseudo-random sequence of test vectors into the design inputs
  • The sequence of test vectors has to be constrained to represent feasible input sequences
  • The seed is logged, so that a failing test can be regenerated and debugged
• Random testing is used when deterministic testing cannot cover all possible input combinations
  • It helps find bugs in cases the verification engineers did not consider, or in cases that do not seem sensible in normal operation but can happen in special circumstances (e.g., for processor verification: not meaningful but valid instruction sequences)
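
A minimal constrained-random stimulus sketch in SystemVerilog (the transaction class, its fields, and the constraint are hypothetical):

    // Constrained-random stimulus sketch (hypothetical ALU transaction).
    class AluOp;
      rand bit [7:0] a, b;
      rand bit [1:0] opcode;
      // Constrain the stream to feasible inputs, as described above:
      constraint legal_op { opcode inside {2'b00, 2'b01, 2'b10}; }  // 2'b11 unused
    endclass

    module tb_random;
      initial begin
        AluOp tr = new();
        // The seed is supplied on the simulator command line (tool-dependent,
        // e.g. -svseed <n>) and logged so the exact sequence can be replayed.
        repeat (1000) begin
          if (!tr.randomize()) $fatal(1, "randomize() failed");
          // ... drive tr.a, tr.b, tr.opcode into the DUT and check outputs ...
        end
        $finish;
      end
    endmodule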

10. Waveform/Log Checking vs. Self-Checking
• Waveform/log checking is useful for very small designs and short simulation periods
  • Usually used by the designer to check the functionality of the design
  • Not very useful for the verification engineer
• Self-checking, as the name implies, refers to the methods used to automate verification:
  • Assertions: the designer can embed useful assertions that check specific properties (in contrast with formal properties, assertions are only checked if the part of the design they belong to is exercised by the input vectors)
    • VHDL has assertion statements built into the language
    • In Verilog, users hand-craft assertions (SystemVerilog has also introduced assertions; see the sketch below)
  • The testbench designer can add checks on the outputs, or even on internal states of the design, that fire if an unexpected value is generated
• Regressions usually consist of self-checking tests that do not generate waveforms. Testing is thus significantly faster, and only the failing tests need to be examined at the waveform level by engineers.
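
For instance, a concurrent SystemVerilog assertion can guard a FIFO-like interface (module and signal names here are hypothetical). Unlike a formal property, it fires only on cycles the simulation actually reaches:

    // Embedded-assertion sketch (hypothetical FIFO-style signals).
    module fifo_checks (input logic clk, rst_n, push, pop, full, empty);
      // Never push into a full FIFO
      assert property (@(posedge clk) disable iff (!rst_n) full |-> !push)
        else $error("push while full at time %0t", $time);
      // Never pop from an empty FIFO
      assert property (@(posedge clk) disable iff (!rst_n) empty |-> !pop)
        else $error("pop while empty at time %0t", $time);
    endmodule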

11. Black-Box vs. Glass-Box
• Black-box testing checks whether a module/system meets its interface (timing and value) specification
  • The key to good black-box testing is the ability to forget how a module is implemented
  • Implementing a module often creates assumptions about how it is supposed to work, which may differ from the specification
  • That is why, in industry, verification is done by separate teams
• Glass-box testing takes the implementation into account
  • The goal is to use knowledge of how something is implemented to identify ways in which it might break
  • Exhaustive testing is rarely possible; knowing the implementation helps focus test-pattern generation
  • Ex: if you know how an adder is implemented, you can devise tests that show whether or not the overflow output is always set correctly (see the sketch below)
  • Ex: memory leaks in software implementations
  • These tests have to be written by the module designer or by someone familiar with the implementation
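
Continuing the adder example, a glass-box test that knows the implementation is (say) a ripple-carry adder can aim directed vectors at the carry chain and the overflow flag. The 8-bit signed adder add8 and its ports are hypothetical:

    // Glass-box sketch: vectors chosen from implementation knowledge.
    module tb_overflow;
      logic signed [7:0] a, b, sum;
      logic              ovf;

      add8 dut (.a(a), .b(b), .sum(sum), .ovf(ovf));

      task check(input logic signed [7:0] x, y, input logic exp);
        a = x; b = y; #1;
        if (ovf !== exp) $error("ovf wrong for %0d + %0d: got %b", x, y, ovf);
      endtask

      initial begin
        check(8'sd127, 8'sd1,  1'b1);  // max positive + 1: must overflow
        check(8'sh80,  8'shFF, 1'b1);  // -128 + (-1): must overflow
        check(8'sd127, 8'sh80, 1'b0);  // 127 + (-128) = -1: no overflow
        check(8'sd85,  8'sd85, 1'b1);  // 01010101 + 01010101 stresses carries
        $finish;
      end
    endmodule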

12. Unit vs. Integration
• Unit testing checks each module in isolation before the modules are integrated
  • It is often easier to make sure small pieces of a system work correctly than the entire thing at once
    • Smaller test vectors
    • Easier to debug
  • Unit testing is also good for catching “rare” bugs, by using formal methods to test all possible inputs
    • No masking effects across modules
• Integration testing validates inter-module communication and ensures interface compatibility
  • Can be thought of as unit testing at the next level of the abstraction hierarchy
  • If you’ve done your unit testing right, you can assume the modules work correctly, so any bugs that show up are in how the modules interface with each other
  • Interactions can also trigger subtle bugs within modules that module-level testing did not expose

13. Automating Testing
• Computers are much less easily bored than people; it’s worth the time to automate your tests
• One really good idea is to create a regression suite: a set of tests that runs automatically and generates a pass/fail result (see the sketch below)
  • This is a great way to prevent team members from checking in code that then breaks everything
  • Large design projects do daily builds and regression tests
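
One common convention (a sketch, not a mandated format) is for every test to end with a single machine-readable verdict line, which a nightly regression script can grep and tally without anyone opening waveforms:

    // Regression-friendly reporting sketch: one grep-able verdict per test.
    module tb_regression_style;
      int errors = 0;

      // ... drive stimulus; self-checks increment errors on each mismatch ...

      final begin  // runs once when simulation ends
        if (errors == 0) $display("TEST PASSED: tb_regression_style");
        else $display("TEST FAILED: tb_regression_style (%0d errors)", errors);
      end
    endmodule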

14. Black- and Glass-Box Testing
• You want to use both types of testing on any system
• Black-box testing is good at identifying cases where what the designer implemented isn’t what the specification asked for
  • Also a good way to do basic functionality testing
• Glass-box testing is good at catching subtle implementation errors:
  • “Fencepost” bugs at the edges of a data range
  • Initialization errors
  • Bounds-checking and out-of-range problems

15. Manufacturing Test
• A speck of dust on a wafer is sufficient to kill a chip
• The yield of any chip is < 100%
  • Chips must be tested after manufacturing, before delivery to customers, so that only good parts ship
• Manufacturing testers are very expensive
  • Minimize time on the tester
  • Select test vectors carefully

16. Testing Your Chips
• If you don’t have a multimillion-dollar tester:
  • Build a breadboard with LEDs and switches
  • Hook up a logic analyzer and pattern generator
  • Or use a low-cost functional chip tester

17. Test Board and TestosterICs
• Test board
  • ZIF (zero-insertion-force) socket
  • Analog circuitry interface
  • Logic analyzer connector
  • Programmable power supplies
• TestosterICs functional chip tester
  • Can be used in a university environment
  • Reads IRSIM test vectors, applies them to the chip, and reports assertion failures

18. Stuck-At Faults
• How does a chip fail?
  • Failures are usually shorts between two conductors or opens in a conductor
  • This can cause very complicated behavior
• A simpler model: stuck-at
  • Assume all failures cause nodes to be “stuck at” 0 or 1, i.e., shorted to GND or VDD
  • Not quite true, but works well in practice (a fault-injection sketch follows)
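
A stuck-at fault can be mimicked in simulation by forcing a node; this is one way to check whether a candidate test vector actually detects the fault. The hierarchical path tb_top.dut.n1 below is hypothetical:

    // Fault-injection sketch: model node n1 stuck-at-0 with force/release.
    module fault_inject;
      initial begin
        force tb_top.dut.n1 = 1'b0;  // node shorted to GND (stuck-at-0)
        // ... apply candidate vectors; any output that differs from the
        //     fault-free run means the vector detects this fault ...
        release tb_top.dut.n1;       // remove the injected fault
      end
    endmodule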

19. Observability & Controllability
• Observability: the ease of observing a node by watching external output pins of the chip
• Controllability: the ease of forcing a node to 0 or 1 by driving input pins of the chip
• Combinational logic is usually easy to observe and control
• Finite state machines can be very difficult, requiring many cycles to enter a desired state
  • Especially if the state transition diagram is not known to the test engineer

20. Test Pattern Generation
• A manufacturing test would ideally check every node in the circuit to prove it is not stuck
• Apply the smallest sequence of test vectors necessary to prove each node is not stuck
• Good observability and controllability reduce the number of test vectors required for a manufacturing test
  • Reduces the cost of testing
  • Motivates design-for-test

21. Test Example
Vectors that detect a stuck-at-1 (SA1) or stuck-at-0 (SA0) fault on each node:

  Node   SA1      SA0
  A3     {0110}   {1110}
  A2     {1010}   {1110}
  A1     {0100}   {0110}
  A0     {0110}   {0111}
  n1     {1110}   {0110}
  n2     {0110}   {0100}
  n3     {0101}   {0110}
  Y      {0110}   {1110}

• Minimum set: {0100, 0101, 0110, 0111, 1010, 1110}

22. Design for Test
• Design the chip to increase observability and controllability
• If each register could be observed and controlled, the test problem reduces to testing the combinational logic between registers
• Better yet, logic blocks could enter a test mode in which they generate test patterns and report the results automatically

23. Scan
• Convert each flip-flop to a scan register
  • Costs only one extra multiplexer (see the sketch below)
• Normal mode: flip-flops behave as usual
• Scan mode: flip-flops behave as a shift register
  • Contents of the flops can be scanned out and new values scanned in
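
The “one extra multiplexer” is visible directly in RTL; here is a sketch of a scan flip-flop (port names are conventional but hypothetical here):

    // Scan flip-flop sketch: a mux in front of an ordinary D flip-flop.
    // scan_en = 0: normal mode (capture d)
    // scan_en = 1: flops form a shift register via scan_in -> q -> next scan_in
    module scan_ff (
      input  logic clk,
      input  logic scan_en,
      input  logic scan_in,  // from the previous flop's q in the scan chain
      input  logic d,        // functional data
      output logic q         // also serves as scan_out to the next flop
    );
      always_ff @(posedge clk)
        q <= scan_en ? scan_in : d;  // the single extra multiplexer
    endmodule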

24. Built-In Self-Test
• Built-in self-test (BIST) lets blocks test themselves (see the sketch below)
  • Generate pseudo-random inputs to the combinational logic
  • Combine the outputs into a syndrome
  • With high probability, the block is fault-free if it produces the expected syndrome
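
A minimal BIST sketch: a maximal-length LFSR generates the pseudo-random patterns, and a MISR compresses the block’s responses into a signature. The 8-bit width and feedback taps are one standard choice, used here for illustration:

    // BIST building blocks (sketch). The LFSR drives the combinational
    // logic under test; the MISR folds its outputs into a syndrome.
    module lfsr8 (input logic clk, rst_n, output logic [7:0] q);
      always_ff @(posedge clk or negedge rst_n)
        if (!rst_n) q <= 8'h01;  // any nonzero seed
        else        q <= {q[6:0], q[7] ^ q[5] ^ q[4] ^ q[3]};  // x^8+x^6+x^5+x^4+1
    endmodule

    module misr8 (input  logic clk, rst_n,
                  input  logic [7:0] d,     // outputs observed from the block
                  output logic [7:0] sig);  // accumulated signature
      always_ff @(posedge clk or negedge rst_n)
        if (!rst_n) sig <= '0;
        else        sig <= {sig[6:0], sig[7] ^ sig[5] ^ sig[4] ^ sig[3]} ^ d;
    endmodule
    // After N cycles, compare sig with the signature from a known-good run;
    // a mismatch indicates a fault (with a small aliasing probability).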

25. Conclusions
• Think about testing from the beginning
  • Simulate/verify as you go
  • Plan for test after fabrication (easy for FPGAs!)
• BIST occupies extra chip area and adds delay overhead, but its benefit in reduced test cost is usually larger
• ATPG (automatic test pattern generation) is still an active research topic
• “If you don’t test it, it won’t work! (Guaranteed)”

26. Next Time
• No class on 3/18
• Quiz 2: 4/1
