Reliable Verification Using Symbolic Simulation with Scalar Values


Presentation Transcript


  1. Reliable Verification Using Symbolic Simulation with Scalar Values Chris Wilson and David L. Dill Computer Systems Laboratory Stanford University June, 2000

  2. Verification Bottleneck
  [chart: bug rate vs. time — directed testing finds the many “easy” bugs early; random testing finds the fewer “hard” bugs; a long “purgatory” precedes tapeout]

  3. Current Approach
  [chart: bug rate vs. time — directed testing, random testing, model checking, emulation, and semi-formal methods each cover part of the curve]

  4. Our Approach
  [chart: bug rate vs. time — symbolic simulation]
  • Key issue: Reliability!

  5. Reliability
  • Definition:
    • Always gives some coverage when resource limits are encountered.
    • Gives coverage proportional to effort.
  • Ease of use:
    • Predictable coverage.
    • Useful feedback.
    • Easy to debug.

  6. Efficiency
  • Efficiency = Coverage / Unit Effort
  • Coverage:
    • Specified functionality.
    • “Input space”.
  • Effort:
    • Manpower.
    • Computer resource usage.
    • Schedule.

  7. Reliability vs. Efficiency
  [chart: % of bugs found, reliability vs. efficiency — model checking, random testing, emulation, and directed testing plotted]

  8. Goal
  • Have the reliability and ease of use of directed testing…
  • AND efficiency equal to or greater than that of random testing.

  9. Reliability vs. Efficiency
  [chart: the same reliability-vs.-efficiency plot as slide 7 — model checking, random testing, emulation, directed testing — with the target area marked: high reliability and high efficiency]

  10. Symbolic Simulation
  • Symbolic test = directed test with symbolic values.
  [diagram: a DUT driven by a symbolic test — inputs such as ‘request’, ‘req valid’, ‘address’, and ‘datain’ carry symbolic values <a1,a2,a3,a4>, <c1,c2,c3>, <d1,d2,d3,d4> for “read”/“write” transactions; the checker asserts datain = dataout and reports pass/fail]

  11. Symbolic Simulation
  • Efficiency: one symbolic test <=> many directed tests.
  • Ease of use: short tests => easy to write and debug.
  • Blow up? BDDs are too unpredictable. How to prevent blow-up?

  12. Quasi-symbolic simulation
  • Symbolic simulation externally, scalar values internally.
    • A simulation run requires constant memory.
  • Key ideas:
    • Don’t compute an exact value unless necessary — large designs have many don’t-cares.
    • Trade time for memory: multiple runs generate exact values.
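The "scalar values internally" idea can be sketched as a ternary simulator in which every net holds one of {0, 1, X} during a run, so memory stays constant per net. This is an illustrative sketch, not the paper's implementation; the three-gate circuit in `run` and the `X` string marker are assumptions made up for the example.

```python
# Ternary scalar values: each net holds 0, 1, or X ("unknown") during one
# simulation run, so a run needs only constant memory per net.
X = "X"  # hypothetical marker for an unknown (symbolic) value

def t_and(a, b):
    """Ternary AND: known-0 on either input forces 0; otherwise X if unknown."""
    if a == 0 or b == 0:
        return 0
    if a == 1 and b == 1:
        return 1
    return X

def t_not(a):
    """Ternary NOT: X stays X, constants flip."""
    return X if a == X else 1 - a

def t_or(a, b):
    """Ternary OR via De Morgan."""
    return t_not(t_and(t_not(a), t_not(b)))

def run(a, b):
    """One simulation run of a hypothetical 3-gate circuit: (a AND b) OR (NOT b).
    Symbolic inputs are simply left as X; a later run would case-split on them."""
    return t_or(t_and(a, b), t_not(b))

print(run(1, X))  # unknown input -> unknown output X
print(run(1, 1))  # -> 1
print(run(0, 0))  # -> 1
```

Because a run with X inputs costs the same memory as a plain directed-test run, exactness is recovered by re-running with case splits rather than by building BDDs.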

  13. Basic Algorithm
  [diagram: AND gates evaluated over the values {0, b, a, -a, c, X} — don’t-care inputs take the “traditional” X value; tracking symbolic variables through don’t-care logic gives a conservative approximation that still obeys the law of the excluded middle]
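The point of the conservative approximation is that plain ternary logic loses the law of the excluded middle (a AND -a evaluates to X rather than 0), while tracking the variable name for don't-care inputs recovers it. A minimal sketch, assuming a made-up tuple encoding for literals, not the paper's data structure:

```python
# Track don't-care inputs as named literals instead of a bare X, so that
# a AND (NOT a) simplifies to 0 (law of the excluded middle).

def lit(name, pos=True):
    """A symbolic literal: hypothetical (tag, variable name, polarity) encoding."""
    return ("lit", name, pos)

def neg(v):
    """Negate a constant or a literal."""
    if v in (0, 1):
        return 1 - v
    _, name, pos = v
    return ("lit", name, not pos)

def and_(a, b):
    """AND over constants and literals, falling back conservatively to X."""
    if a == 0 or b == 0:
        return 0
    if a == 1:
        return b
    if b == 1:
        return a
    if a == b:           # a & a = a
        return a
    if a == neg(b):      # a & -a = 0: excluded middle, lost by plain ternary X
        return 0
    return "X"           # conservative approximation: give up, answer X

a = lit("a")
print(and_(a, neg(a)))   # -> 0, where "traditional" X logic would say X
```

When the fallback X reaches an output that matters, the decision procedure of the following slides case-splits to resolve it.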

  14. Decision Procedure
  [diagram: an AND gate over inputs a and b produces X outputs, which the decision procedure must resolve]

  15. Davis-Putnam Algorithm
  • Tree search: Davis, Logemann, Loveland [DPLL62].
  [diagram: search tree — case split on a (a=0, a=1), then unit propagation and evaluation; branches that evaluate to X are split further on b (b=0, b=1) until each leaf evaluates to 0]
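The tree search above is the classic DPLL procedure: repeatedly apply unit propagation, then case-split on a variable and recurse on both branches. A minimal sketch over CNF clauses (lists of signed integer literals, a common convention this example assumes), not the paper's circuit-level variant:

```python
# Minimal DPLL (Davis, Logemann, Loveland): unit propagation + case splitting.
# Clauses are lists of nonzero ints; -v means NOT v.
def dpll(clauses):
    # Unit propagation: a one-literal clause forces that literal to be true.
    while True:
        units = [c[0] for c in clauses if len(c) == 1]
        if not units:
            break
        lit = units[0]
        # Satisfied clauses vanish; the opposite literal is removed elsewhere.
        clauses = [[l for l in c if l != -lit] for c in clauses if lit not in c]
    if not clauses:
        return True               # every clause satisfied
    if any(len(c) == 0 for c in clauses):
        return False              # empty clause: conflict on this branch
    v = abs(clauses[0][0])        # case split on some unassigned variable
    return dpll(clauses + [[v]]) or dpll(clauses + [[-v]])

# (a or b) AND (not a or b) AND (not b) is unsatisfiable:
print(dpll([[1, 2], [-1, 2], [-2]]))  # -> False
```

Slide 16's variable-selection heuristic would replace the naive `abs(clauses[0][0])` choice with a variable found by propagating relevance from the inputs.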

  16. Decision Procedure
  [diagram: case splits a=0 and a=1 each drive the AND-gate output to 0 — the test is unsatisfiable!]
  • Variable selection heuristic: pick a relevant variable by propagating from the inputs.

  17. Reactivity
  • Reactive test: test behavior depends on the circuit.
  • Most tests require reactivity — since the goal is to find all bugs, reactivity must be supported efficiently.

  18. Reactivity example
    Set ‘request’ = READ;
    Set ‘reqv’ = “1”;
    wait for ‘ack’;
    check that ‘data’ = expected_data;
    stop;

  19. Reactivity example
    Set ‘request’ = READ;
    Set ‘reqv’ = “1”;
    wait for ‘ack’;
    check that ‘data’ = expected_data;
    stop;
  • What if ‘ack’ = “X”?

  20. Wait Statement
  [diagram: when ‘ack’ evaluates to “X”, the wait statement forks a virtual thread — one branch assumes ‘ack’ == T and proceeds to “check that ‘data’ = expected_data; stop;”, the other assumes ‘ack’ == F and remains at “wait for ‘ack’ == “1”;”]
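The virtual-thread idea can be sketched as explicit forking of test states when the guard is unknown. The `"waiting"`/`"checking"` state names and the `step` function are hypothetical stand-ins for positions in the test program, invented for this example:

```python
# When 'ack' evaluates to X, the simulator cannot tell whether the wait
# completes this cycle, so it forks one virtual thread per outcome:
# one assumes ack==1 (proceed past the wait), one assumes ack==0 (keep waiting).
def step(threads, ack):
    """Advance every virtual thread one cycle. Each thread is a
    (state, assumptions) pair; assumptions record the case splits taken."""
    out = []
    for state, assumptions in threads:
        if state != "waiting":
            out.append((state, assumptions))      # not at a wait: unchanged
        elif ack == 1:
            out.append(("checking", assumptions))  # guard true: pass the wait
        elif ack == 0:
            out.append(("waiting", assumptions))   # guard false: keep waiting
        else:  # ack == "X": fork on both possible guard values
            out.append(("checking", assumptions + ["ack==1"]))
            out.append(("waiting", assumptions + ["ack==0"]))
    return out

threads = [("waiting", [])]
threads = step(threads, "X")  # cycle 1: ack unknown -> two virtual threads
print(threads)  # -> [('checking', ['ack==1']), ('waiting', ['ack==0'])]
```

Slide 21's "Cycle 2" is simply another `step` with ‘ack’ still X: the surviving waiting thread forks again, accumulating assumptions per cycle.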

  21. Cycle 2
  [diagram: in the next cycle the still-waiting thread again sees ‘ack’ = “X” and forks again — one branch proceeds to “check that ‘data’ = expected_data; stop;”, the other continues at “wait for ‘ack’;”]

  22. Stopping
  [diagram: the guard = “X” before “check that ‘data’ = expected_data; stop;” — stop, or not stop?]

  23. Stopping
  • Modify Davis-Putnam:
    • If the guard condition = “X” when stopped, prove that the test can really stop in this cycle.
    • Case split on the guard condition.
    • Case split on the fail/pass condition only if stop = “1”.

  24. Stopping
  • Modify Davis-Putnam:
    • If the guard condition = “X” when stopped, prove that the test can really stop in this cycle.
    • Case split on the guard condition — do not allow unit propagation here.
    • Case split on the fail/pass condition only if stop = “1” — here unit propagation is allowed.
  • Disallowing unit propagation on the guard is what keeps the method complete.

  25. Related Work
  • BDD-based symbolic simulation: STE [BryantSeger95], Innologic.
  • Sequential ATPG.
  • SAT/ATPG-based model checking: BMC [Biere99], [Boppana99].
  • Other SAT-based semi-formal methods: [Ganai99].

  26. Experiments
  • Show that quasi-symbolic simulation can find bugs.
  • Show that test-case bugs do not cause bottlenecks.
  • Demonstrate graceful degradation: good coverage even when the simulation time limit is hit.

  27. Experiment 1
  • Write and debug a test case for a “hard” bug.
    • 140K-gate industrial design.
    • Bug not found in simulation or bring-up!
  • Four possible results:
    • SAT — test case error.
    • TIMEOUT — test case error (device timeout).
    • UNSAT — no bug found.
    • BUG — bug found.

  28. Experiment 1
      Result     cases   evals   time (sec.)
      SAT           19     3.8          31.4
      TIMEOUT       22     1.6          49.0
      UNSAT          9    52.3         445.9
      BUG            1    78           863.0

  29. Experiment 2
  [chart: coverage over time — highest covered sub-node, with the point where the time limit is hit marked]

  30. Experiment 2
  [chart: maximum tree size vs. number of dependent variables in the test]

  31. Conclusions
  • Want to find all bugs faster — reliability is key.
  • Use quasi-symbolic simulation:
    • It has the efficiency of random testing…
    • and the reliability of directed testing.
  • Experiments show it can be used as a primary verification method.
