
SAT Based Abstraction/Refinement in Model-Checking


Presentation Transcript


  1. SAT Based Abstraction/Refinement in Model-Checking Ofer Strichman* Joint work with E. Clarke* A. Gupta* J. Kukula** *Carnegie Mellon University **Synopsys

  2. Model Checking: starting from the initial states, add reachable states until reaching a fixed-point.

  3. Model Checking: too many states to handle!

  4. Abstraction: the abstraction function h : S → S’ maps the concrete state space S to the abstract state space S’.

  5. Abstraction Function • Partition variables into visible (V) and invisible (I) variables. • The abstract model consists of the V variables; the I variables are made inputs. • The abstraction function maps each state to its projection over V.

  6. Abstraction Function: the concrete states over (x1, x2, x3, x4) with values 0000, 0001, 0010, 0011 all map under h to the abstract state (x1, x2) = (0, 0). Group concrete states with identical visible part into a single abstract state.
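A minimal Python sketch of this projection; the state encoding and visible-variable indices are illustrative, not from the talk:

```python
# States are tuples (x1, x2, x3, x4); the visible variables are x1, x2.
VISIBLE = (0, 1)  # indices of the visible variables

def h(state):
    """Abstraction function: project a concrete state onto the visible variables."""
    return tuple(state[i] for i in VISIBLE)

# The four concrete states below share the visible part (0, 0) ...
concrete = [(0, 0, 0, 0), (0, 0, 0, 1), (0, 0, 1, 0), (0, 0, 1, 1)]
# ... so they are all grouped into the single abstract state (0, 0).
assert {h(s) for s in concrete} == {(0, 0)}
```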

  7. Building Abstract Model: the concrete model has latches x1, x2, x3, x4 and inputs i1, i2; in the abstract model the invisible latches x3, x4 become inputs alongside i1, i2. M’ can be computed efficiently if M is in functional form, e.g. sequential circuits.
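A sketch of computing the abstract model by enumeration, assuming a hypothetical 4-latch circuit in functional form (the next-state functions are invented for illustration):

```python
from itertools import product

# Toy sequential circuit: next-state functions over state (x1, x2, x3, x4)
# and input i1. These functions are invented for illustration.
def next_state(x, i1):
    x1, x2, x3, x4 = x
    return (x2 ^ i1, x1 & x3, x4, x1 | x2)

# Abstract model over V = (x1, x2): keep the next-state functions of the
# visible latches, and treat the invisible latches (x3, x4) as free inputs.
def abstract_transitions():
    trans = set()
    for x1, x2 in product((0, 1), repeat=2):            # abstract current state
        for x3, x4, i1 in product((0, 1), repeat=3):    # invisible vars + inputs free
            nxt = next_state((x1, x2, x3, x4), i1)
            trans.add(((x1, x2), (nxt[0], nxt[1])))     # project successor onto V
    return trans
```

Because the circuit is in functional form, the abstract model is obtained just by dropping the invisible next-state functions; no explicit existential quantification over the full transition relation is needed.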

  8. Existential Abstraction: the abstract model has a transition from s’ to t’ iff some concrete state s with h(s) = s’ has a transition to some concrete state t with h(t) = t’.

  9. Model Checking the Abstract Model • Preservation Theorem: if the property holds in the abstract model, it holds in the concrete model. • The converse does not hold: a counterexample in the abstract model may be spurious.

  10. Current trends… (1/3): Localization (R. Kurshan, 80’s). Variables are partitioned into visible, invisible, and inputs; a frontier of visible variables is grown around the property P.

  11. Abstraction-Refinement Loop: (1) Abstract: from M, p, h compute M’, p. (2) Model-check M’, p: if it passes, report No Bug. (3) If it fails, check the counterexample on the concrete model: if it is real, report Bug; if it is spurious, refine h to h’ and repeat.

  12. Why spurious counterexample? The failure abstract state mixes deadend states (reachable along the counterexample, but with no continuation) with bad states (which allow the abstract transition, but are unreachable).

  13. Refinement • Problem: Deadend and Bad States are in the same abstract state. • Solution: Refine abstraction function. • The sets of Deadend and Bad states should be separated into different abstract states.

  14. Refinement: the refined abstraction function h’ induces a finer partition of the concrete states.

  15. Abstraction: the step of the loop that computes M’, p from M, p, and h.

  16. Checking the Counterexample: the step of the loop entered when model checking of M’, p fails.

  17. Checking the Counterexample • Counterexample: (c1, …, cm) • Each ci is an assignment to V. • Simulate the counterexample on the concrete model.

  18. Checking the Counterexample: the concrete traces corresponding to the counterexample are the satisfying assignments of I(s1) ∧ T(s1, s2) ∧ … ∧ T(sm-1, sm) ∧ (s1|V = c1) ∧ … ∧ (sm|V = cm), i.e. (initial state) ∧ (unrolled transition relation) ∧ (restriction of V to the counterexample). The counterexample is real iff this formula is satisfiable.
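A brute-force sketch of this check on a hypothetical concrete model. In the talk this is a SAT query on the unrolled formula; here a small explicit search over the invisible variables stands in, and the model and initial-state predicate are invented:

```python
from itertools import product

# Hypothetical concrete model: state (x1, x2, x3, x4); x1, x2 are visible.
def step(x):
    x1, x2, x3, x4 = x
    return (x1 ^ x3, x2 | x4, x4, x1)   # invented next-state functions

def initial(x):
    return x[:2] == (0, 0)              # assumption: initial states have x1 = x2 = 0

def is_real(cex):
    """Check whether some concrete trace matches the abstract counterexample,
    given as a list of assignments to the visible variables (x1, x2)."""
    frontier = set()
    for x3, x4 in product((0, 1), repeat=2):
        s = (*cex[0], x3, x4)
        if initial(s):
            frontier.add(s)
    for c in cex[1:]:
        # Keep successors whose projection onto the visible variables matches c.
        frontier = {t for t in (step(s) for s in frontier) if t[:2] == c}
        if not frontier:
            return False   # spurious: the previous frontier held the deadend states
    return True
```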

  19. Refinement: the step of the loop taken when the counterexample is spurious, producing a refined abstraction function h’.

  20. Deadend States: the concrete states in the failure abstract state that are reachable by simulating the prefix of the counterexample, but have no transition consistent with its next step.

  21. Bad States: the concrete states in the failure abstract state that do have a transition consistent with the next step of the counterexample, but are not reachable along its prefix.

  22. Refinement as Separation (example): deadend state d1 and bad states b1, b2 shown as bit-vectors over the visible variables V and the invisible variables I.

  23. Refinement as Separation • Refinement: find a subset U of I that separates all pairs of deadend and bad states, and make these variables visible. • Keep U small!

  24. Refinement as Separation • The state separation problem: • Input: sets D, B • Output: minimal U ⊆ I s.t. ∀d ∈ D, ∀b ∈ B, ∃u ∈ U. d(u) ≠ b(u) • The refinement h’ is obtained by adding U to V.

  25. Two separation methods • ILP-based separation • Minimal separating set. • Computationally expensive. • Decision Tree Learning based separation. • Not optimal. • Polynomial.

  26. Separation with ILP (Example)

  27. Separation with ILP • Minimize v1 + … + vn subject to one constraint per pair of states (d, b): the sum of vi over the variables on which d and b differ must be at least 1, with each vi ∈ {0, 1}. • vi = 1 iff vi is in the separating set.
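A sketch of the ILP formulation on toy data, with exhaustive enumeration standing in for an actual ILP solver such as LpSolve (the deadend/bad states below are invented):

```python
from itertools import combinations

# Deadend and bad states as bit-vectors over the invisible variables (toy data).
D = [(1, 0, 1, 0), (0, 1, 0, 1)]
B = [(1, 1, 1, 0), (0, 0, 0, 1)]
n = 4

# One constraint per pair (d, b): at least one variable on which they differ
# must be picked, i.e.  sum over {i : d[i] != b[i]} of v_i  >=  1.
constraints = [{i for i in range(n) if d[i] != b[i]} for d in D for b in B]

# Minimize sum v_i: solved here by brute-force enumeration of candidate sets
# (exponential, but fine for a toy instance; an ILP solver does this at scale).
def minimal_separating_set():
    for k in range(n + 1):
        for U in combinations(range(n), k):
            if all(c & set(U) for c in constraints):
                return set(U)
```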

  28. DB v1 0 1 {d1,b2} {d2,b1} v2 v4 0 1 0 1 B D D B b2 d1 d2 b1 Decision Tree learning (Example) D B Classification: Separating Set : {v1,v2,v4}

  29. Decision Tree Learning • Input: a set of examples; each example is an assignment of values to the attributes, together with a classification. • Output: a decision tree; each internal node is a test on an attribute, each leaf corresponds to a classification.

  30. Separation using Decision Tree Learning • Attributes: the invisible variables I • Classifications: ‘D’ and ‘B’ • Example set: Deadend ∪ Bad • Separating set: the variables on the internal nodes of the decision tree.
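A small ID3-style sketch of this idea on toy data (a real implementation would use an off-the-shelf learner; the examples and attribute indices are invented for illustration):

```python
import math

def entropy(examples):
    """Binary entropy of the D/B label distribution."""
    p = sum(1 for _, label in examples if label == 'D') / len(examples)
    return 0.0 if p in (0.0, 1.0) else -(p * math.log2(p) + (1 - p) * math.log2(1 - p))

def learn(examples, attrs, used):
    """ID3-style tree: returns a leaf label or (attr, {value: subtree}).
    Every tested attribute is recorded in `used` -- the separating set."""
    labels = {label for _, label in examples}
    if len(labels) == 1:
        return labels.pop()
    def gain(a):  # information gain of splitting on attribute a
        parts = [[e for e in examples if e[0][a] == v] for v in (0, 1)]
        return entropy(examples) - sum(
            len(p) / len(examples) * entropy(p) for p in parts if p)
    a = max(attrs, key=gain)
    used.add(a)
    rest = [x for x in attrs if x != a]
    return (a, {v: learn([e for e in examples if e[0][a] == v], rest, used)
                for v in (0, 1) if any(e[0][a] == v for e in examples)})

# Examples: deadend states labeled 'D', bad states labeled 'B' (toy data).
examples = [((1, 0, 1, 0), 'D'), ((0, 1, 0, 1), 'D'),
            ((1, 1, 1, 0), 'B'), ((0, 0, 0, 1), 'B')]
used = set()
tree = learn(examples, list(range(4)), used)
# `used` is the separating set: every (d, b) pair differs on a variable in it.
```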

  31. Refinement as Learning • For systems of realistic size: • It is not possible to generate D and B. • It is expensive to separate D and B. • Solution: • Sample D and B. • Infer separating variables from the samples. • The method is still complete: the counterexample will eventually be eliminated.

  32. Efficient Sampling • Let Sep(D, B) be the smallest separating set of D and B. • Q: Can we find it without deriving D and B? • A: Search for the smallest samples d ⊆ D, b ⊆ B such that Sep(d, b) = Sep(D, B).

  33. Efficient Sampling • Direct search towards samples that contain more information. • How? Find samples not separated by the current separating set (Sep).

  34. Rename all viBto vi’ Efficient Sampling • Recall: • D characterizes the deadend states • B characterizesthe bad states • D B is unsatisfiable • Samples that agree on the sep variables:

  35. Efficient Sampling: (1) Start with Sep = {} and d, b = {}. (2) Run the SAT solver on W(Sep). (3) If satisfiable, add the resulting samples to d and b, compute Sep := Sep(d, b), and repeat from (2). (4) If unsatisfiable, stop: Sep is the minimal separating set of D and B.
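A sketch of this loop on toy data, where a linear scan over explicit D and B stands in for the SAT query W(Sep), and brute-force enumeration stands in for the minimal-separating-set computation:

```python
from itertools import combinations

def min_sep(d, b, n):
    """Smallest U: every pair in d x b differs on some variable in U (brute force)."""
    for k in range(n + 1):
        for U in combinations(range(n), k):
            if all(any(x[i] != y[i] for i in U) for x in d for y in b):
                return set(U)

def sampling_loop(D, B, n):
    """The slide's loop; scanning D x B stands in for the SAT query W(Sep)."""
    sep, d, b = set(), [], []
    while True:
        # W(Sep): find a deadend/bad pair that agrees on all current Sep variables.
        pair = next(((x, y) for x in D for y in B
                     if all(x[i] == y[i] for i in sep)), None)
        if pair is None:            # "unsat": Sep already separates all of D and B
            return sep
        d.append(pair[0]); b.append(pair[1])   # add the new samples
        sep = min_sep(d, b, n)                 # recompute minimal Sep of the samples
```

Since d ⊆ D and b ⊆ B, the computed Sep can never exceed Sep(D, B) in size, and the loop only stops once Sep separates all of D and B, so it terminates with a separating set of minimal size.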

  36. The Tool: the separation engine (Sep) combines LpSolve for ILP, decision-tree learning, and the SAT solver Chaff; model checking (MC) is done with NuSMV / Cadence SMV.

  37. Results Property 1

  38. Results Property 2 Efficient Sampling together with Decision Tree Learning performs best. Machine Learning techniques are useful in computing good refinements.

  39. Current trends… (1/3): Localization (originally: R. Kurshan, 80’s; here: Barner, Geist, Gringauze, CAV’02) • Check the counterexample incrementally (‘layering’). • Find a small set of variables in Sf for which it is impossible to find an assignment consistent with the counterexample.

  40. Current trends… (2/3): Intel’s refinement heuristic (Glusman et al., 2002) • Generate all counterexamples. • Prioritize variables according to their consistency across the counterexamples.

  41. Current trends… (3/3) Abstraction/refinement with conflict analysis (Chauhan, Clarke, Kukula, Sapra, Veith, Wang, FMCAD 2002) • Simulate counterexample on concrete model with SAT • If the instance is unsatisfiable, analyze conflict • Make visible one of the variables in the clauses that lead to the conflict

  42. Current trends… (3/3) Remove clauses gradually, until instance becomes satisfiable. Choose invisible variables from the removed set.

  43. Current trends (3/3) Abstraction/refinement with conflict analysis (Chauhan, Clarke, Kukula, Sapra, Veith, Wang, FMCAD 2002)

  44. Future Work • Currently: Sometimes we find too many equally ‘good’ refinements to choose from. • We need more criteria for a good refinement (not just # latches). • Number of gates, number of clauses • Distance from property • Fan-in degree

  45. Future Work: currently we restart the whole loop with a refined transition relation T’.

  46. Future Work: a different approach is to restart from the previous state. • An abstraction/refinement backtrack algorithm. • Which intermediate BDDs should we save? • How can BDDs be altered rather than recomputed?

  47. The End

  48. Generating Samples: (1) Initialize the SAT solver with the formula characterizing the deadend (or bad) states. (2) Execute the SAT solver. (3) If satisfiable, the assignment is a sample; add a clause negating the assignment to I in the failure state, and repeat. (4) If unsatisfiable, or once enough samples have been collected, stop.
