Benefits of Bounded Model Checking at an Industrial Setting
Presentation Transcript

  1. Benefits of Bounded Model Checking at an Industrial Setting F. Copty, L. Fix, R. Fraer, E. Giunchiglia*, G. Kamhi, A. Tacchella*, M. Y. Vardi** Intel Corp., Haifa, Israel; *Università di Genova, Genova, Italy; **Rice University, Houston (TX), USA

  2. Technical framework • Symbolic Model Checking (MC) • Over 10 years of successful application in formal verification of hardware and protocols • Traditionally based on reduced ordered Binary Decision Diagrams (BDDs) • Symbolic Bounded Model Checking (BMC) • Introduced recently, but shown to be extremely effective for falsification (bug hunting) • Based on propositional satisfiability (SAT) solvers

  3. Open points • Why is BMC effective? • Because the search is bounded, and/or... • ...because it uses SAT solvers instead of BDDs? • What is the impact of BMC on industrial-size verification test-cases? • Traditional measures: performance and capacity • A new perspective: productivity

  4. Our contribution • Apples-to-apples comparison • Expert’s tuning on both the BDD and SAT sides ⇒ optimal setting for SAT by tuning search heuristics • BDD-based BMC vs. SAT-based BMC ⇒ using SAT (rather than bounding) is a win • A new perspective of BMC on industrial test-cases • BMC performance and capacity ⇒ SAT capacity reaches far beyond BDDs • SAT-based BMC productivity ⇒ greater capacity + optimal setting = productivity boost

  5. Agenda • BMC techniques • Implementing BDD-based BMC • SAT-based BMC: algorithm, solver and strategies • Evaluating BMC at an industrial setting • BMC tools: Forecast (BDDs) and Thunder (SAT) • Measuring performance and capacity • In search of an optimal setting for Thunder and Forecast • Thunder vs. Forecast • Thunder capacity boost • Measuring productivity • Witnessed benefits of BMC

  6. [Figure: BFS traversal from the initial states toward the buggy states, yielding a counterexample trace]
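The BFS traversal sketched in the figure can be rendered with explicit state sets standing in for the symbolic (BDD) representation a real checker would use. The function names and the toy counter model below are illustrative assumptions, not the tools' actual interfaces:

```python
def bounded_bfs(init_states, successors, is_bad, bound):
    """Forward breadth-first traversal from the initial states, stopping at
    the given bound. Explicit Python sets stand in for BDDs."""
    frontier = set(init_states)
    reached = set(init_states)
    for step in range(bound + 1):
        if any(is_bad(s) for s in frontier):
            return step  # a buggy state is reachable in `step` steps
        new = {t for s in frontier for t in successors(s)} - reached
        reached |= new
        frontier = new
    return None  # no buggy state reachable within the bound

# Toy model: a counter modulo 4 starting at 0; state 3 is "buggy".
depth = bounded_bfs({0}, lambda s: {(s + 1) % 4}, lambda s: s == 3, bound=10)
```

The bounded variants on the next slide cut this loop off at the bound and, when the frontier (BDD) grows too large, split it into prioritized partitions instead of processing it whole.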

  7. From BDD-based MC to BMC Adapting state-of-the-art BDD techniques to BMC • Bounded prioritized traversal • When the BDD size reaches a certain threshold... • ... split the frontier into balanced partitions, and... • ... prioritize the partitions according to some criterion • Ensure bound is not exceeded • Bounded lazy traversal • Works backwards • Application of bounded cone of influence

  8. [Figure: SAT-based BMC loop: the model is unrolled up to the bound (k = 4) and fed to a SAT solver; a Sat answer yields a counterexample, an Unsat answer raises the question “Increase k?”]
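The loop in the figure can be sketched as follows. A brute-force enumerator plays the role of the SAT solver, and the 2-bit counter model is a made-up example; the real flow hands a CNF formula to an industrial solver:

```python
from itertools import product

def sat_solve(formula, n_vars):
    """Stand-in SAT solver: brute-force enumeration of assignments."""
    for v in product([0, 1], repeat=n_vars):
        if formula(v):
            return v
    return None

def bmc(init, trans, bad, state_bits, max_k):
    """For k = 0, 1, ... build the unrolled formula
    init(s0) and trans(s0,s1) and ... and trans(s_{k-1},s_k) and bad(s_k),
    then ask the SAT solver; Sat means a length-k counterexample exists."""
    for k in range(max_k + 1):
        n = state_bits * (k + 1)  # one copy of the state variables per step
        def unrolled(v, k=k):
            s = [v[i * state_bits:(i + 1) * state_bits] for i in range(k + 1)]
            return (init(s[0])
                    and all(trans(s[i], s[i + 1]) for i in range(k))
                    and bad(s[k]))
        model = sat_solve(unrolled, n)
        if model is not None:
            return k, model  # counterexample found at bound k
    return None  # no counterexample up to max_k (says nothing beyond it)

# Toy model: a 2-bit counter starting at 0; the "bug" is reaching value 3.
value = lambda s: 2 * s[0] + s[1]
init = lambda s: value(s) == 0
trans = lambda s, t: value(t) == (value(s) + 1) % 4
bad = lambda s: value(s) == 3
```

On this model the loop reports Unsat at bounds 0 through 2 and finds the counterexample at k = 3, mirroring the falsification/verification pair of instances used in the benchmarks later.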

  9. SAT solvers • Input: a propositional formula F( x1, ..., xn ) • Output: a valuation v = v1, ..., vn with vi ∈ {0,1} s.t. F( v1, ..., vn ) = 1 • A program that can answer the question “does there exist v s.t. F( v ) = 1?” is a SAT solver • Focus on solving SAT • By exploring the space of possible assignments • Using a sound and complete method • Stålmarck’s (patented) • Davis-Logemann-Loveland (DLL)

  10. DLL method [Figure: the solver cycles among the LA, LB and HR states until it reaches SAT or UNSAT] s = {F, v} is an object; next ∈ { SAT, UNSAT, LA, LB, HR } is a variable DLL-SOLVE(s) • next ← LA • repeat • case next of • LA : next ← LOOK-AHEAD(s) • LB : next ← LOOK-BACK(s) • HR : next ← HEURISTIC(s) • until next ∈ { SAT, UNSAT } • return next
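A runnable (if naive) rendering of the three ingredients of the DLL loop, assuming clauses as lists of signed integers (DIMACS-style literals): unit propagation is the look-ahead, chronological backtracking stands in for the look-back (SIMO's backjumping and learning are omitted here), and the branching rule is a trivial heuristic:

```python
def unit_propagate(clauses, assignment):
    """Look-ahead: repeatedly assign variables forced by unit clauses;
    return False on a conflict (a clause falsified under the assignment)."""
    changed = True
    while changed:
        changed = False
        for clause in clauses:
            if any(assignment.get(abs(l)) == (l > 0) for l in clause):
                continue  # clause already satisfied
            unassigned = [l for l in clause if abs(l) not in assignment]
            if not unassigned:
                return False  # conflict
            if len(unassigned) == 1:
                lit = unassigned[0]
                assignment[abs(lit)] = lit > 0
                changed = True
    return True

def dll_solve(clauses, assignment=None):
    """DLL search: returns a satisfying assignment (dict var -> bool) or None."""
    assignment = dict(assignment or {})
    if not unit_propagate(clauses, assignment):
        return None  # look-back: caller tries the other branch
    free = {abs(l) for c in clauses for l in c} - set(assignment)
    if not free:
        return assignment  # every clause satisfied
    var = min(free)  # heuristic: branch on the lowest-numbered free variable
    for value in (True, False):
        result = dll_solve(clauses, {**assignment, var: value})
        if result is not None:
            return result
    return None
```

For example, `dll_solve([[1, 2], [-1, 3], [-2, -3]])` returns a model, while `dll_solve([[1], [-1]])` returns None.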

  11. SIMO: a DLL-based SAT solver • Boolean Constraint Propagation (BCP) is the only Look-Ahead strategy • Non-chronological Look-Back • Backjumping (BJ): escapes trivially unsatisfiable subtrees • Learning: dynamically adds constraints to the formula • Search heuristics • Static: branching order is supplied by the user • Dynamic • Greedy heuristics: simplify as many clauses as possible • BCP-based: explore most constrained choices first • Independent (relevant) vs. dependent variables

  12. SIMO’s search heuristics

      Heuristic   Selection   Scoring     Propagation
      Moms        All         All         —
      Morel       Relevant    Relevant    —
      Unit        All         All         All
      Unirel      All         Relevant    All
      Unirel2     Relevant    Relevant    All
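As one concrete instance of these scoring rules, the classic MOMS idea (branch on the variable with Maximum Occurrences in clauses of Minimum Size) can be sketched as below; restricting the candidate variables to a "relevant" subset would give the Morel-style variants. This is an illustrative reconstruction, not SIMO's actual code:

```python
from collections import Counter

def moms_branch(clauses, assignment):
    """MOMS-style heuristic sketch: among clauses not yet satisfied, find the
    shortest ones and return the variable occurring in them most often."""
    open_clauses = []
    for clause in clauses:
        if any(assignment.get(abs(l)) == (l > 0) for l in clause):
            continue  # clause already satisfied: ignore it
        lits = [l for l in clause if abs(l) not in assignment]
        if lits:
            open_clauses.append(lits)
    if not open_clauses:
        return None  # nothing left to branch on
    min_len = min(len(c) for c in open_clauses)
    counts = Counter(abs(l) for c in open_clauses if len(c) == min_len for l in c)
    return counts.most_common(1)[0][0]
```

For instance, on `[[1, 2], [1, 3], [2, 3, 4]]` with an empty assignment the two binary clauses are the shortest, and variable 1 (occurring in both) is chosen.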

  13. Forecast: BDD-based (B)MC [Architecture: the property (ForSpec) goes through spec synthesis and the model (HDL) through RTL synthesis; together with user directives they feed the model-checking algorithms, which produce a proof or a counterexample via an interface to BDD engines (Intel’s BDD, CUDD, CAL, …)]

  14. Thunder: SAT-based BMC [Architecture: spec synthesis of the property (ForSpec) and RTL synthesis of the model (HDL) feed formula generation; Thunder then produces a proof or a counterexample via an interface to SAT engines (SIMO, Prover, SATO, GRASP)]

  15. Performance and capacity • Performance (what resources?) • CPU time • Memory consumption • Capacity (what model size?) • BDD technology typically tops out at around 400 state variables • SAT technology has subtle limitations depending on: • The kind of property being checked • The length of the counterexample

  16. Measuring performance • Benchmarks to measure performance are • Focusing on safety properties • Challenging for BDD-based model checking • In the capacity range of BDD-based model checking • In more detail • A total of 17 circuits coming from Intel’s internal selection, each with a known minimal counterexample length k • Using 2 formulas per circuit with the Thunder/SIMO flow • A satisfiable instance (falsification) at bound k, and • An unsatisfiable instance (verification) at bound k-1

  17. An optimal setting for Thunder • With BJ + learning enabled... • ... we tried different heuristics • Moms (M) and Morel (MR) • Unit (U), Unirel (UR) and Unirel2 (UR2) • SIMO admits a single optimal setting (UR2) • Faster on the instances solved by all the heuristics (16) • Solves all instances in less than 20 minutes of CPU time • Unirel2 is the default setting with the Thunder/SIMO flow

  18. Bounded traversal in Forecast • With automatically derived initial order • Bounded lazy (ABL) • Bounded prioritized (ABP) • Unbounded prioritized (AUP) ⇒ bounding does not yield consistent improvements! • With semi-automatically derived initial order • Bounded settings (SBL, SBP) • Unbounded prioritized (SUP) ⇒ bounding does not yield consistent improvements!

  19. An optimal setting for Forecast? • Default setting is AUP • Best approximates the notion of default setting in Thunder • AUP is the best among the A settings • Tuned setting (ST) • Semi-automatic initial order • Specific combinations of: • Unbounded traversal • Prioritized traversal • Lazy strategy • Partitioning the transition relation • No single optimal tuned setting for Forecast

  20. Thunder vs. Forecast • Forecast default AUP is worse than Thunder UR2 • Forecast tuned ST compares well with Thunder UR2 • Forecast ST time does not include: • Getting pruning directives • Finding a good initial order • Getting the best setting

  21. Measuring capacity • The capacity benchmark is derived from the performance benchmark • Getting rid of the pruning directives supplied by experienced users • Enlarging the size of the model beyond the scope of BDD-based MC • Unpruned models for this analysis… • …have thousands of sequential elements (up to 10k) • …are beyond the capacity of Forecast

  22. Thunder capacity boost

  23. Measuring productivity • Productivity decreases with user intervention • Need to reduce the model size • Need to find a good order on state variables • Need to find a good tool setting • No user intervention ⇒ no productivity penalty • Using the Thunder/SIMO BMC flow: • Dynamic search heuristic: no need for an initial order • Single optimal setting: Unirel2 (with BJ and learning) • Extended capacity: no manual pruning • Comparison with the Forecast BMC flow indicates that SAT (rather than bounding) is the key to better productivity

  24. Witnessed benefits of BMC • A single optimal setting found for Thunder using SIMO: Unirel2 with backjumping and learning • SAT (rather than bounding) turns out to be the key benefit when using BMC technology • A complete evaluation • Performance of tuned BDDs parallels SAT • Impressive capacity of SAT vs. BDDs • SAT wins from the productivity standpoint

  25. Useful links • The version of the paper with the correct numbers in the capacity benchmarks: www.cs.rice.edu/~vardi www.cs.rice.edu/~tac • More information about SIMO: www.cs.rice.edu/CS/Verification www.mrg.dist.unige.it/star