
Using Problem Structure for Efficient Clause Learning


Presentation Transcript


  1. Using Problem Structure for Efficient Clause Learning. Ashish Sabharwal, Paul Beame, Henry Kautz. University of Washington, Seattle. April 23, 2003

  2. The SAT Approach. Input p ∈ D (p: instance, D: domain, e.g. graph problems, AI planning, model checking) → CNF encoding f → SAT solver, which decides f SAT or f UNSAT and thereby tells us whether p is good or bad

  3. Key Facts • Problem instances typically have structure: graphs, precedence relations, causes and effects • Translation to CNF flattens this structure • The best complete SAT solvers are DPLL-based clause learners: branch and backtrack • Critical: the variable order used for branching

  4. Natural Questions • Can we extract structure efficiently? In the translation to CNF itself, from the CNF formula, or from a higher-level description? • How can we exploit this auxiliary information? Tweak the SAT solver for each domain, or tweak it to use general "guidance"?

  5. Our Approach. Input p ∈ D → CNF encoding f plus a branching sequence (encode "structure" as a branching sequence) → SAT solver, which decides f SAT or f UNSAT and thereby tells us whether p is good or bad

  6. Related Work • Exploiting structure in the CNF formula: [GMT'02] dependent variables; [OGMS'02] LSAT (blocked/redundant clauses); [B'01] binary clauses; [AM'00] partition-based reasoning • Exploiting domain knowledge: [S'00] model checking; [KS'96] planning (cause vars / effect vars)

  7. Our Result, Informally • Structure can be efficiently retrieved from the high-level description (the pebbling graph) • A branching sequence as auxiliary information can be easily exploited • Given a pebbling graph G, we can efficiently generate a branching sequence B_G that dramatically improves the performance of the current best SAT solvers on f_G

  8. Preliminaries: CNF Formula. A conjunction of clauses: f = (x1 ∨ x2 ∨ ¬x9) ∧ (¬x3 ∨ x9) ∧ (¬x1 ∨ ¬x4 ∨ ¬x5 ∨ ¬x6)

  9. Preliminaries: DPLL
  DPLL(CNF formula f) {
    Simplify(f);
    if (conflict) return UNSAT;
    if (all vars assigned) { return SAT assignment; exit }
    pick an unassigned variable x;
    try DPLL(f | x=0), DPLL(f | x=1)
  }
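
For concreteness, here is a minimal DPLL sketch in Python; it is not the authors' implementation, just the splitting rule above with clauses written as lists of DIMACS-style integer literals (3 for x3, -3 for ¬x3).

```python
def simplify(clauses, lit):
    """Assign lit = true: drop satisfied clauses, shrink the rest.
    Returns None if an empty clause (a conflict) is produced."""
    out = []
    for c in clauses:
        if lit in c:
            continue                      # clause satisfied, drop it
        reduced = [l for l in c if l != -lit]
        if not reduced:
            return None                   # empty clause => conflict
        out.append(reduced)
    return out

def dpll(clauses):
    """Return True iff the clause set is satisfiable."""
    if clauses is None:
        return False                      # conflict
    if not clauses:
        return True                       # every clause satisfied
    x = abs(clauses[0][0])                # pick an unassigned variable
    return dpll(simplify(clauses, -x)) or dpll(simplify(clauses, x))

# The example formula from slide 8:
f = [[1, 2, -9], [-3, 9], [-1, -4, -5, -6]]
print(dpll(f))                            # True: f is satisfiable
```

Real solvers add unit propagation, conflict-driven learning, and careful data structures; this only shows the branch-and-backtrack skeleton that the rest of the talk modifies.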

  10. Prelim: Clause Learning • DPLL: change "if (conflict) return UNSAT" to "if (conflict) { learn conflict clause; return UNSAT }" • Example: x2 = 1, x3 = 0, x6 = 0 ⇒ conflict, so "learn" (¬x2 ∨ x3 ∨ x6)
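
As a toy illustration only (this is the naive "negate the whole assignment" scheme, not the 1UIP scheme zChaff actually uses), the learned clause for the example above is the negation of the partial assignment that caused the conflict:

```python
def learn_conflict_clause(assignment):
    """Negate a conflicting partial assignment, given as DIMACS-style
    literals (2 means x2 = 1, -3 means x3 = 0)."""
    return [-lit for lit in assignment]

# x2 = 1, x3 = 0, x6 = 0 leads to a conflict, so we learn (¬x2 ∨ x3 ∨ x6):
print(learn_conflict_clause([2, -3, -6]))   # [-2, 3, 6]
```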

  11. Prelim: Branching Sequence • B = (x1, x4, ¬x3, x1, ¬x8, ¬x2, ¬x4, x7, ¬x1, x2) • DPLL: change "pick unassigned var x" to "pick the next literal x from B and delete it from B; if x is already assigned, repeat" • How "good" B is depends on the backtracking process and the learning scheme • Note: a branching sequence is different from a "branching order"
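
A minimal sketch of that modified branching step, assuming two hypothetical helpers: `assigned` is the set of variables already assigned, and `default_pick` is whatever heuristic the solver would normally use once B is exhausted.

```python
def next_branch_literal(B, assigned, default_pick):
    """Pop literals from the branching sequence B until one mentions an
    unassigned variable; fall back to the solver's own heuristic when B
    runs out."""
    while B:
        lit = B.pop(0)
        if abs(lit) not in assigned:
            return lit              # branch on this literal first
    return default_pick()           # B exhausted: normal branching

# Example: the sequence from this slide, in DIMACS-style literals.
B = [1, 4, -3, 1, -8, -2, -4, 7, -1, 2]
print(next_branch_literal(B, assigned={1, 4}, default_pick=lambda: 5))  # -3
```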

  12. Prelim: Pebbling Formulas. f_G = Pebbling(G). Node E is pebbled if (e1 ∨ e2) = 1. Source axioms: A, B, C are pebbled. Pebbling axioms: A and B are pebbled ⇒ E is pebbled, … Target axioms: T is not pebbled. [Figure: a small pebbling DAG with sources A (a1 ∨ a2), B (b1 ∨ b2), C (c1 ∨ c2 ∨ c3), internal nodes E (e1 ∨ e2), F (f1), and target T (t1 ∨ t2).]

  13. Prelim: Pebbling Formulas • A pebbling graph can have multiple targets, unbounded fan-in, and large clause labels • Pebbling(G) is unsatisfiable • Removing any one clause from the subgraph of each target makes it satisfiable
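
A rough sketch of how Pebbling(G) can be built as a clause set, assuming for illustration that node labels are lists of positive integer variables; the hypothetical `pebbling_cnf` below is not the authors' generator, just a direct reading of the three axiom families.

```python
from itertools import product

def pebbling_cnf(preds, sources, targets, label):
    """preds: node -> list of predecessor nodes (absent for sources)
    label: node -> list of the variables labeling that node."""
    clauses = []
    # Source axioms: every source node is pebbled.
    for s in sources:
        clauses.append(list(label[s]))
    # Pebbling axioms: if all predecessors are pebbled, the node is pebbled.
    # In CNF: for each way of picking one variable per predecessor label,
    #   (¬u_i ∨ ¬w_j ∨ ... ∨ v_1 ∨ v_2 ∨ ...)
    for v, ps in preds.items():
        for choice in product(*(label[u] for u in ps)):
            clauses.append([-x for x in choice] + list(label[v]))
    # Target axioms: targets are not pebbled (one unit clause per variable).
    for t in targets:
        clauses.extend([[-x] for x in label[t]])
    return clauses

# Tiny example: sources A, B and a single target T with predecessors A, B.
label = {"A": [1, 2], "B": [3, 4], "T": [5, 6]}
cnf = pebbling_cnf({"T": ["A", "B"]}, ["A", "B"], ["T"], label)
# cnf is unsatisfiable; dropping any one clause makes it satisfiable,
# as slide 13 states.
```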

  14. Grid vs. Randomized Pebbling [Figure: a grid pebbling graph whose nodes carry two-variable labels such as (a1 ∨ a2), beside a randomized pebbling graph with varying fan-in and label sizes, from unit labels like b1 up to labels like (i1 ∨ i2 ∨ i3 ∨ i4).]

  15. Why Pebbling? • Practically useful: precedence relations in tasks, fault propagation in circuits, restricted planning problems • Theoretically interesting: used earlier for separating proof complexity classes, and "easy" to analyze • Hard for the current best SAT solvers such as zChaff, as shown by our experiments

  16. Our Result, Again • "Efficient" means Θ(|f_G|) time • zChaff: one of the current best SAT solvers • Given a pebbling graph G, we can efficiently generate a branching sequence B_G such that zChaff(f_G, B_G) is empirically exponentially faster than zChaff(f_G)

  17. The Algorithm • Input: pebbling graph G • Output: branching sequence B_G, with |B_G| = Θ(|f_G|), that works well with the 1UIP learning scheme and fast backtracking [f_G : CNF encoding of Pebbling(G)]

  18. The Algorithm: GenSeq(G) • Compute node heights • For each u ∈ {unit-clause-labeled nodes}, bottom up: add u to G.sources; GenSubseq(u) • For each t ∈ {targets}, bottom up: GenSubseq(t)

  19. The Algorithm: GenSubseq(v) // trivial wrapper • If |v.preds| > 0: GenSubseq(v, |v.preds|)

  20. The Algorithm: GenSubseq(v, i) • u = v.preds[i] // predecessors ordered by increasing height • if i = 1 (lowest predecessor): GenSubseq(u) if u is an unvisited non-source; return • otherwise (higher predecessor): output u.labels; GenSubseq(u) if u is an unvisitedHigh non-source; GenSubseq(v, i-1) // recurse on i-1; GenPattern(u, v, i-1) // repetitive pattern
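
Below is a structural sketch in Python that follows the pseudocode on slides 18-20. It is only a skeleton: the height computation, the exact placement of the visited / visitedHigh updates, and GenPattern are stubbed or guessed here, since their details are not spelled out in the slides.

```python
class Node:
    def __init__(self, labels, preds=(), is_source=False):
        self.labels = list(labels)     # label variables of this node
        self.preds = list(preds)       # predecessors, by increasing height
        self.is_source = is_source
        self.visited = False           # bookkeeping flags; when exactly they
        self.visited_high = False      # are set is a guess in this sketch

def gen_seq(unit_nodes, targets, emit):
    """Emit a branching sequence for f_G; emit(labels) records one step."""
    for u in unit_nodes:               # unit-clause-labeled nodes, bottom up
        u.is_source = True             # "add u to G.sources"
        gen_subseq(u, emit)
    for t in targets:                  # then each target, bottom up
        gen_subseq(t, emit)

def gen_subseq(v, emit, i=None):
    if i is None:                      # trivial wrapper
        if v.preds:
            gen_subseq(v, emit, len(v.preds))
        return
    u = v.preds[i - 1]
    if i == 1:                         # lowest predecessor
        if not u.visited and not u.is_source:
            u.visited = True
            gen_subseq(u, emit)
        return
    emit(u.labels)                     # higher predecessor: branch on it
    if not u.visited_high and not u.is_source:
        u.visited_high = True
        gen_subseq(u, emit)
    gen_subseq(v, emit, i - 1)         # recurse on the remaining predecessors
    gen_pattern(u, v, i - 1, emit)     # repetitive pattern (not shown here)

def gen_pattern(u, v, i, emit):
    pass  # placeholder: the slides do not give GenPattern's body
```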

  21. Results: Grid Pebbling • Pure DPLL: up to 60 variables • DPLL + branching seq: up to 60 variables • Clause learning (original zChaff): up to 4,000 variables • Clause learning + branching seq: up to 2,000,000 variables

  22. Results: Randomized Pebbling • Pure DPLL: up to 35 variables • DPLL + branching seq: up to 50 variables • Clause learning (original zChaff): up to 350 variables • Clause learning + branching seq: up to 1,000,000 variables

  23. Summary • A high-level problem description is useful: domain knowledge can help SAT solvers • A branching sequence is one good way to encode structure • Pebbling problems serve as a proof of concept: we can efficiently generate a good branching sequence, and using the structure improves performance dramatically

  24. Open Problems • Other domains? STRIPS planning problems (layered structure), bounded model checking, variable ordering strategies from BDDs? • Other ways of exploiting structure? A branching "order", or something to guide learning? • Domain-based tweaking of SAT algorithms
