
On Solving Presburger and Linear Arithmetic with SAT


Presentation Transcript


  1. On Solving Presburger and Linear Arithmetic with SAT. Ofer Strichman, Carnegie Mellon University

  2. Quantifier-free Presburger formulas • A Boolean combination of predicates of the form Σi ai·xi ≤ b, where the ai and b are integer constants and the xi are integer variables. • Disjunctive linear arithmetic • The same form, but the xi range over the rationals and the ai, b are rational constants. • The decision problem: decide whether such a formula is satisfiable.

  3. Some Known Techniques • Linear arithmetic (conjunctions only) • Interior point method (Khachiyan 1979, Karmarkar 1984) (P) • Simplex (Dantzig, 1949) (EXP) • Fourier-Motzkin elimination (2EXP) • Loop residue (Shostak 1984) (2EXP) • … Almost all theorem provers use Fourier-Motzkin elimination (PVS, ICS, SVC, IMPS, …)

  4. Fourier-Motzkin elimination - example. Elimination order: x1, x2, x3. (1) x1 – x2 ≤ 0 (2) x1 – x3 ≤ 0 (3) -x1 + 2x3 + x2 ≤ 0 (4) -x3 ≤ -1. Eliminating x1: (5) 2x3 ≤ 0 (from 1 and 3), (6) x2 + x3 ≤ 0 (from 2 and 3). Eliminating x2: no new constraints. Eliminating x3: (7) 0 ≤ -1 (from 4 and 5). Contradiction (the system is unsatisfiable)!

  5. Fourier-Motzkin elimination (1/2). The input is a system of conjoined linear inequalities: m constraints over n variables.

  6. Fourier-Motzkin elimination (2/2). Eliminating xn: • Sort the constraints by the sign of xn's coefficient: m1 constraints with ai,n > 0, m2 constraints with ai,n < 0, and the rest with ai,n = 0. • Generate a constraint from each pair taken from the first two sets. Each elimination step therefore adds a net of (m1·m2 – m1 – m2) constraints. A minimal sketch of this step is shown below.
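
As a concrete illustration of the elimination step just described, here is a minimal Python sketch (not code from the talk; the constraint representation and function names are my own). It runs the example of slide 4 and detects the contradiction.

```python
from itertools import product

# A constraint is (coeffs, b), meaning sum(coeffs[i] * x[i]) <= b.

def eliminate(constraints, j):
    """One Fourier-Motzkin step: eliminate variable x_j."""
    pos, neg, rest = [], [], []
    for coeffs, b in constraints:
        if coeffs[j] > 0:
            pos.append((coeffs, b))
        elif coeffs[j] < 0:
            neg.append((coeffs, b))
        else:
            rest.append((coeffs, b))
    # Each pair of an upper bound and a lower bound on x_j yields a new constraint.
    for (cp, bp), (cn, bn) in product(pos, neg):
        ap, an = cp[j], -cn[j]          # both positive
        combined = [an * p + ap * n for p, n in zip(cp, cn)]
        rest.append((combined, an * bp + ap * bn))
    return rest

def satisfiable(constraints, n):
    """Eliminate all variables; the system is unsatisfiable iff we derive 0 <= b with b < 0."""
    for j in range(n):
        constraints = eliminate(constraints, j)
    return all(b >= 0 for _, b in constraints)

# The example from slide 4:
# (1) x1 - x2 <= 0  (2) x1 - x3 <= 0  (3) -x1 + x2 + 2*x3 <= 0  (4) -x3 <= -1
system = [([1, -1, 0], 0), ([1, 0, -1], 0), ([-1, 1, 2], 0), ([0, 0, -1], -1)]
print(satisfiable(system, 3))   # False: the system is unsatisfiable
```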

  7. Complexity of Fourier-Motzkin • Worst-case complexity: doubly exponential in the number of variables. • So why is it so popular in verification? • Because it is efficient for small problems, and in verification most systems of inequalities are small. • In verification we typically solve a large number of small systems of linear inequalities. • The bottleneck: case splitting. • Q: Is there an alternative to case-splitting?

  8. Boolean Fourier-Motzkin (BFM) (1/2) • Normalize the formula: • Transform to NNF • Eliminate negations by reversing inequality signs. Example: ¬(x1 – x2 > 0) ∧ x1 – x3 ≤ 0 ∧ (¬(-x1 + 2x3 + x2 > 0) ∨ ¬(1 > x3)) becomes x1 – x2 ≤ 0 ∧ x1 – x3 ≤ 0 ∧ (-x1 + 2x3 + x2 ≤ 0 ∨ -x3 ≤ -1)
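
A minimal sketch of the negation-elimination step, assuming atoms are represented as (terms, op, bound) triples; the representation and the helper name push_negation are illustrative, not the talk's implementation.

```python
# Negating an inequality is done by reversing its sign:
# not(t > b) is t <= b, not(t <= b) is t > b, and so on.
# (Over the integers, not(t <= b) can further be tightened to t >= b + 1.)

NEGATE = {'<=': '>', '<': '>=', '>=': '<', '>': '<='}

def push_negation(atom):
    """Return an equivalent un-negated atom for 'not atom'."""
    terms, op, bound = atom
    return (terms, NEGATE[op], bound)

# Example (from the slide): not(x1 - x2 > 0) becomes x1 - x2 <= 0.
print(push_negation(({'x1': 1, 'x2': -1}, '>', 0)))   # ({'x1': 1, 'x2': -1}, '<=', 0)
```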

  9. Boolean Fourier-Motzkin (BFM) (2/2). φ: x1 – x2 ≤ 0 ∧ x1 – x3 ≤ 0 ∧ (-x1 + 2x3 + x2 ≤ 0 ∨ -x3 ≤ -1). 2. Encode each predicate with a Boolean variable: φ': e1 ∧ e2 ∧ (e3 ∨ e4). 3. Perform FM on the conjunction of all predicates and add the new constraints to φ': e.g. e1 (x1 – x2 ≤ 0) and e3 (-x1 + 2x3 + x2 ≤ 0) yield e5 (2x3 ≤ 0), so add e1 ∧ e3 → e5.

  10. e1e3e5 e5 2x3· 0 e6x2 + x3· 0 e2e3e6 False 0 · -1 e4e5false BFM: example e1x1 – x2· 0 e2x1 – x3· 0 e3 -x1 + 2x3 + x2· 0 e4 -x3· -1 e1  e2  (e3  e4) ’ is satisfiable

  11. Case splitting. φ: x1 < x2 – 3 ∧ (x2 < x3 – 1 ∨ x3 < x1 + 1). Case splitting yields two cases: x1 < x2 – 3 ∧ x2 < x3 – 1 (no new constraints) and x1 < x2 – 3 ∧ x3 < x1 + 1 (no new constraints). BFM, however, runs FM on x1 < x2 – 3 ∧ x2 < x3 – 1 ∧ x3 < x1 + 1, which does generate constraints. Problem: redundant constraints, generated from pairs of predicates that never appear together in any case.

  12. Solution: Conjunctions Matrices (1/3) • Let φd be the DNF representation of φ. • We only need to consider pairs of constraints that appear together in one of the clauses of φd. • Deriving φd is exponential. But – • Knowing whether a given set of constraints share a clause in φd is polynomial, using Conjunctions Matrices.

  13. Conjunctions Matrices (2/3) • Let φ be a formula in NNF, and let li and lj be two literals in φ. • The joining operand of li and lj is the lowest joint parent of li and lj in the parse tree of φ. • The Conjunctions Matrix M of φ has M[li, lj] = 1 if the joining operand of li and lj is a conjunction, and 0 if it is a disjunction. • Example: for φ: l0 ∨ (l1 ∧ (l2 ∨ l3)), M[l1, l2] = M[l1, l3] = 1 and all other (off-diagonal) entries are 0.

  14. Conjunctions Matrices (3/3) • Claim 1: A set of literals L = {l0, l1, …, ln} of φ share a clause in φd if and only if for all li, lj ∈ L, i ≠ j, M[li, lj] = 1. • We can therefore consider only pairs of constraints whose corresponding entry in M is equal to 1.
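
A small sketch of how a conjunctions matrix can be computed from the NNF parse tree, under the assumption that formulas are nested ('and', …)/('or', …) tuples over literal names; the representation and function name are my own, not the paper's.

```python
from itertools import combinations

# An NNF formula is a nested tuple ('and', f1, f2, ...) / ('or', f1, f2, ...);
# anything else is treated as a literal.

def conjunctions_matrix(formula):
    """M[(li, lj)] = 1 iff the joining operand (lowest joint parent in the
    parse tree) of literals li and lj is a conjunction."""
    M = {}

    def walk(f):
        if not (isinstance(f, tuple) and f and f[0] in ('and', 'or')):
            return [f]                                   # a literal
        child_lits = [walk(sub) for sub in f[1:]]
        # Literals coming from different children have this node as their joining operand.
        for lits_a, lits_b in combinations(child_lits, 2):
            for li in lits_a:
                for lj in lits_b:
                    M[(li, lj)] = M[(lj, li)] = 1 if f[0] == 'and' else 0
        return [l for lits in child_lits for l in lits]

    walk(formula)
    return M

# phi = l0 or (l1 and (l2 or l3)): only (l1,l2) and (l1,l3) can share a DNF clause.
phi = ('or', 'l0', ('and', 'l1', ('or', 'l2', 'l3')))
M = conjunctions_matrix(phi)
print(M[('l1', 'l2')], M[('l1', 'l3')], M[('l2', 'l3')], M[('l0', 'l1')])   # 1 1 0 0
```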

  15. BFM: example (continued). Encodings: e1: x1 – x2 ≤ 0, e2: x1 – x3 ≤ 0, e3: -x1 + 2x3 + x2 ≤ 0, e4: -x3 ≤ -1; φ': e1 ∧ e2 ∧ (e3 ∨ e4). In the conjunctions matrix of φ', every pair has entry 1 except (e3, e4). FM therefore combines e1 with e3 (adding e5: 2x3 ≤ 0 and the constraint e1 ∧ e3 → e5) and e2 with e3 (adding e6: x2 + x3 ≤ 0 and e2 ∧ e3 → e6). The matrix is extended to the derived predicates; in it, the entries for (e4, e5) and (e4, e6) are 0, so the constraint from e4 and e5 (the contradiction 0 ≤ -1 of the previous example) is never generated: a constraint saved.

  16. Complexity of the reduction • Let c1 denote the number of constraints generated with BFM combined with conjunctions matrices, and let c2 denote the total number of constraints generated with case-splitting. • Claim 2: c1 ≤ c2. • Claim 3: Typically, c1 << c2. The reason: in DNF, the same pair of constraints can appear many times; with BFM, it is only solved once. • Theoretically, there can still be a very large number of constraints.

  17. Complexity of solving the SAT instance • Claim 4: The complexity of solving the resulting SAT instance is bounded in terms of m, the number of predicates in φ. • Overall complexity: reduction + SAT solving. • The reason: all the clauses that we add are Horn clauses; therefore, for a given assignment to the original encoding of φ, all the added constraints are implied in linear time.
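
To see why the added clauses are easy for the SAT solver, note that each constraint of the form ei ∧ ej → ek (or → false) is a Horn clause (¬ei ∨ ¬ej ∨ ek). Below is a naive forward-chaining sketch with an illustrative representation; a watched-premise implementation would run in linear time, as the claim requires.

```python
def propagate(assignment, implications):
    """implications: list of (premises, conclusion); conclusion None means False.
    Returns the extended assignment, or None if a '-> False' rule fires.
    (Naive fixpoint loop; a real solver does this by unit propagation.)"""
    assignment = dict(assignment)
    changed = True
    while changed:
        changed = False
        for premises, conclusion in implications:
            if all(assignment.get(p) for p in premises):
                if conclusion is None:
                    return None                      # contradiction
                if not assignment.get(conclusion):
                    assignment[conclusion] = True    # derived predicate is forced
                    changed = True
    return assignment

# Example from the slides: e1&e3 -> e5, e2&e3 -> e6, e4&e5 -> False.
rules = [(('e1', 'e3'), 'e5'), (('e2', 'e3'), 'e6'), (('e4', 'e5'), None)]
print(propagate({'e1': True, 'e2': True, 'e3': True, 'e4': False}, rules))
# e5 and e6 are implied; no contradiction, so this Boolean assignment is feasible.
```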

  18. Experimental results (1/2) • Reduction time of '2-CNF style' random instances (table omitted). • With case-splitting, only the 10x10 instance could be solved (~600 sec.). • Solving the instances with Chaff takes a few seconds each.

  19. Experimental results (2/2) • Seven hardware designs with equalities and inequalities: all seven were solved with BFM in a few seconds; five were solved with ICS in a few seconds, and the other two could not be solved. On the other hand… • Standard ICS benchmarks (a conjunction of inequalities): some could not be solved with BFM, while ICS solves all of them in a few seconds. • The reason (?): ICS has a more efficient implementation of Fourier-Motzkin than PORTA.

  20. Some Known Techniques • Quantifier-free Presburger formulas • Branch and Bound • SUP-INF (Bledsoe 1974) • Omega Test (Pugh 1991) • …

  21. Quantifier-free Presburger formulas • The classical Fourier-Motzkin method finds real solutions. • Geometrically, a system of real inequalities defines a convex polyhedron. • Each elimination step projects the data to a lower dimension. • Geometrically, this means it finds the 'shadow' of the polyhedron.

  22. The Omega Test (1/3), Pugh (1993) • The shadow of constraints over integers is not convex. • Satisfiability of the real shadow does not imply satisfiability in the higher dimension. • A partial solution: consider only the areas above which the system is at least one unit 'thick'. This is the dark shadow. • If there is an integral point in the dark shadow, there is also an integral point above it.

  23. The Omega Test (2/3), Pugh (1993) • If there is no solution to the real shadow, φ is unsatisfiable. • If there is an integral solution to the dark shadow, φ is satisfiable. • Otherwise ('the omega nightmare'): check a small set of planes ('splinters').

  24. Output: C’ Ç 9 integer xn. S • C’ is the dark shadow (a formula without xn) • S contains the splinters The output formula does not contain xn The Omega test (3/3)Pugh (1993) In each elimination step: • Input: 9xn. C • xn is an integer variable • C is a conjunction of inequalities

  25. Boolean Omega Test • Normalize (eliminate all negations). • Encode each predicate with a Boolean variable, e.g. inequality #1 ∧ inequality #2 ∧ (inequality #3 ∨ inequality #4) becomes e1 ∧ e2 ∧ (e3 ∨ e4). • Solve the conjoined list of constraints with the Omega test, and add the new constraints to φ', e.g. e1 ∧ e2 → e3 ∨ e4.

  26. Related work A reduction to SAT is not the only way …

  27. The CVC approach (Stump, Barrett, Dill, CAV 2002) • Encode each predicate with a Boolean variable. • Solve the SAT instance. • Check whether the assignment to the encoded predicates is consistent (using e.g. Fourier-Motzkin). • If consistent – return SAT. • Otherwise – backtrack.
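
Schematically, this lazy loop looks as follows. This is only a sketch: sat_solve, theory_consistent and atom_of are hypothetical placeholders, not CVC's actual interface.

```python
def lazy_decide(clauses, n_vars, atom_of, sat_solve, theory_consistent):
    """Hypothetical sketch of the lazy encode-solve-check loop.
    sat_solve(clauses, n_vars)    -> satisfying assignment {var: bool} or None
    theory_consistent(predicates) -> True iff the conjunction of linear
                                     predicates is consistent (e.g. Fourier-Motzkin)
    atom_of(var, value)           -> the predicate (or its negation) that the
                                     Boolean variable 'var' stands for."""
    while True:
        model = sat_solve(clauses, n_vars)
        if model is None:
            return 'unsat'
        if theory_consistent([atom_of(v, model[v]) for v in model]):
            return 'sat'
        # Inconsistent: block this Boolean assignment and backtrack.
        # (A real implementation would add a smaller conflict clause.)
        clauses.append([-v if model[v] else v for v in model])
```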

  28. Difference Decision Diagrams (Møller, Lichtenberg, Andersen, Hulgaard, 1999) • Similar to OBDDs, but the nodes are 'separation predicates' (the figure shows a small diagram over predicates such as x1 – x3 < 0, x2 – x3 ≤ 0 and x2 – x1 < 0, with terminal nodes 1 and 0). • Each path is checked for consistency, using Bellman-Ford ('path-reduce'). • Worst case – an exponential number of such paths. • Can be easily adapted to disjunctive linear arithmetic.
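
The per-path consistency check amounts to negative-cycle detection over the difference constraints on the path. A minimal sketch (separation predicates are taken as non-strict x - y ≤ c here; strict predicates would need an extra strictness flag):

```python
def consistent(constraints):
    """constraints: list of (x, y, c) meaning x - y <= c.
    Consistent iff the constraint graph (edge y -> x with weight c)
    has no negative cycle: Bellman-Ford with an implicit zero-weight source."""
    nodes = {v for x, y, _ in constraints for v in (x, y)}
    dist = {v: 0 for v in nodes}
    for _ in range(len(nodes)):                 # |V| relaxation passes suffice
        for x, y, c in constraints:
            if dist[y] + c < dist[x]:
                dist[x] = dist[y] + c
    # Any further possible improvement means a negative cycle.
    return all(dist[y] + c >= dist[x] for x, y, c in constraints)

# One path over predicates like those in the figure, read non-strictly over the
# integers (x1 - x3 < 0 becomes x1 - x3 <= -1, etc.):
print(consistent([('x1', 'x3', -1), ('x2', 'x3', 0), ('x2', 'x1', -1)]))   # True
```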

  29. Finite domain instantiation • Disjunctive linear arithmetic and its sub-theories enjoy the 'small model property'. • A known sufficient domain for equality logic: 1..n (where n is the number of variables). • For this logic, it is possible to compute a significantly smaller domain for each variable (Pnueli et al., 1999). • The algorithm is a graph-based analysis of the formula structure. • Potentially, this can be extended to linear arithmetic.

  30. Reduction to SAT is not the only way… • Giving all variables the range 1..11 yields a state-space of 11^11. • Instead, analyze the connectivity of the variables (the figure shows a connectivity graph over x1, x2, y1, y2, g1, g2, u1, f1, f2, u2, z): x1, y1, x2, y2: {0-1}; u1, f1, f2, u2: {0-3}; g1, g2, z: {0-2}. State-space: ~10^5. • Further analysis will result in a state-space of 4. • Q: Can this approach be extended to Linear Arithmetic?
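
As a sketch of the idea, once sufficient per-variable ranges are known, satisfiability can be decided by enumerating their (much smaller) product. The formula, ranges and helper below are illustrative, not the design from the slide.

```python
from itertools import product

def finite_instantiation(ranges, formula):
    """ranges: {variable: iterable of values}; formula: assignment dict -> bool.
    Returns a satisfying assignment, or None if none exists within the ranges."""
    names = list(ranges)
    for values in product(*(ranges[n] for n in names)):
        assignment = dict(zip(names, values))
        if formula(assignment):
            return assignment
    return None

# Example: an equality-logic formula over three variables; the range 1..3
# suffices (1..n with n = number of variables).
phi = lambda v: v['x'] == v['y'] and v['y'] != v['z']
print(finite_instantiation({'x': range(1, 4), 'y': range(1, 4), 'z': range(1, 4)}, phi))
# e.g. {'x': 1, 'y': 1, 'z': 2}
```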
