
Backdoors in the Context of Learning (short paper)


Presentation Transcript


1. Backdoors in the Context of Learning (short paper)
Bistra Dilkina, Carla P. Gomes, Ashish Sabharwal, Cornell University
SAT-09 Conference, Swansea, U.K., June 30, 2009

2. SAT: Gap between theory & practice
• Boolean Satisfiability or SAT: given a Boolean formula F in conjunctive normal form, e.g. F = (a or b) and (¬a or ¬c or d) and (b or c), determine whether F is satisfiable
• NP-complete [note: a "worst-case" notion]
• Widely used in practice, e.g. in hardware & software verification, design automation, AI planning, …
• Large industrial benchmarks (10K+ vars) are solved within seconds by state-of-the-art complete/systematic SAT solvers; even 100K or 1M variables are not completely out of the question
• Good scaling behavior seems to defy "NP-completeness"!
• Real-world problems have tractable sub-structure (not quite Horn-SAT or 2-SAT…); "backdoors" help explain how solvers can get "smart" and solve very large instances
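
The example formula above is small enough to decide by brute force. The following sketch (our own illustration, not part of the talk) encodes F with signed integer literals (a=1, b=2, c=3, d=4) and simply enumerates all assignments; it only shows what "determine whether F is satisfiable" means, not how modern SAT solvers work.

```python
from itertools import product

# Variables: a=1, b=2, c=3, d=4; a positive int is the variable, a negative int its negation.
clauses = [{+1, +2}, {-1, -3, +4}, {+2, +3}]
variables = sorted({abs(lit) for clause in clauses for lit in clause})

def satisfies(assignment, clauses):
    """True iff every clause contains at least one literal made true by the assignment."""
    return all(any(assignment[abs(lit)] == (lit > 0) for lit in clause) for clause in clauses)

# Enumerate all 2^n assignments; fine for a toy formula, hopeless in general (SAT is NP-complete).
models = [dict(zip(variables, bits))
          for bits in product([False, True], repeat=len(variables))
          if satisfies(dict(zip(variables, bits)), clauses)]
print("F is", "satisfiable" if models else "unsatisfiable")
print("example model:", models[0] if models else None)
```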

3. Backdoors to Tractability: a notion to capture "hidden structure"
• Informally: a backdoor to a given problem is a subset of its variables such that, once these are assigned values, the remaining instance simplifies to a tractable class
• Formally: defined w.r.t. a poly-time "sub-solver" that handles tractable substructure of a problem instance, e.g. unit propagation, pure literal elimination, CP filtering, an LP solver, …
• Weak backdoors: for finding feasible solutions
• Strong backdoors: for finding feasible solutions or proving unsatisfiability
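
As a concrete illustration of such a poly-time sub-solver, here is a minimal unit propagation routine in the same signed-literal encoding as the sketch above; the function name and representation are our own, not the paper's.

```python
def unit_propagate(clauses, assignment):
    """Repeatedly assign forced literals; return ("CONFLICT", partial) or ("OK", extended)."""
    assignment = dict(assignment)          # do not mutate the caller's assignment
    changed = True
    while changed:
        changed = False
        for clause in clauses:
            if any(assignment.get(abs(lit)) == (lit > 0) for lit in clause):
                continue                                      # clause already satisfied
            unassigned = [lit for lit in clause if abs(lit) not in assignment]
            if not unassigned:
                return "CONFLICT", assignment                 # every literal is false
            if len(unassigned) == 1:                          # unit clause: forced value
                lit = unassigned[0]
                assignment[abs(lit)] = (lit > 0)
                changed = True
    return "OK", assignment
```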

4. Are backdoors small in practice?
• It is enough to branch on the backdoor variables to "solve" the formula, so branching heuristics need to be good on only a few variables
• The notion of backdoors has provided powerful insights, leading to techniques like randomization, restarts, and algorithm portfolios for SAT

5. This Talk: Motivation
• "Traditional" backdoors are defined for a basic tree-search procedure, such as pure DPLL
• They are oblivious to the now-standard (and essential) feature of learning during search, i.e., clause learning for DPLL
• Note: state-of-the-art SAT solvers rely heavily on clause learning, especially for industrial and crafted instances; it provably leads to shorter proofs for many unsatisfiable formulas and to significant speed-ups on satisfiable formulas as well
• Question: does clause learning allow for smaller backdoors when capturing hidden structure in SAT instances?

6. This Talk: Contribution
• An affirmative answer to the question above
• First, the notion of backdoors must be extended to clause-learning SAT solvers, taking "order-sensitivity" into account
• Theoretically, learning-sensitive backdoors for SAT solvers with clause learning ("CDCL solvers") can be exponentially smaller than traditional strong backdoors
• Initial empirical results suggest that, in practice, there are more learning-sensitive backdoors than traditional ones of a given size, and SAT solvers often find much smaller learning-sensitive backdoors than traditional ones

7. DPLL Search with Clause Learning
Input: CNF formula F. At every search node:
• branch by setting a variable to True or False; call the current partial variable assignment σ
• consider the simplified sub-formula F|σ
• apply a poly-time inference procedure (the "sub-solver" for SAT) to F|σ, e.g. unit propagation, pure literal test, failed literal test / "probing"
• Contradiction → learn a conflict clause
• Solution → declare satisfiable and exit
• Not solved → continue branching
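
A compact sketch of the search procedure described on this slide, reusing unit_propagate from the previous sketch as the sub-solver. To stay short it learns the negation of the whole conflicting assignment rather than a 1-UIP clause, and it branches in a fixed naive order; real clause-learning solvers (including Rsat) differ substantially, so treat this only as an outline of the control flow.

```python
def dpll_learn(clauses, assignment=None, learned=None):
    """Return a satisfying assignment, or None if the formula is unsatisfiable."""
    assignment = assignment or {}
    learned = learned if learned is not None else []           # shared across the whole search tree
    status, assignment = unit_propagate(clauses + learned, assignment)
    if status == "CONFLICT":
        # Learn a clause blocking the whole conflicting assignment (weaker than 1-UIP, but sound).
        learned.append({-v if val else +v for v, val in assignment.items()})
        return None
    free = [v for v in sorted({abs(lit) for c in clauses for lit in c}) if v not in assignment]
    if not free:
        return assignment                                      # every clause is satisfied
    var = free[0]                                              # naive branching heuristic
    for value in (True, False):                                # branch: var = True, then var = False
        result = dpll_learn(clauses, {**assignment, var: value}, learned)
        if result is not None:
            return result
    return None                                                # both branches refuted
```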

8. Backdoors and Search with Learning
[Diagram: a traditional backdoor search tree (branching on x, y, z, w) next to a search tree with clause learning; on the learning side, a contradiction on one branch yields a learned conflict clause that lets the sub-solver reach an early contradiction or infer a solution on later branches]
Search order matters!

9. "Traditional" Backdoors
Definition [Williams, Gomes, Selman '03]: a subset B of variables is a strong backdoor (for F w.r.t. a sub-solver S) if for every truth assignment σ to the variables in B, S "solves" F|σ, i.e., either finds a satisfying assignment for F or proves that F is unsatisfiable.
Issue: this notion is oblivious to "previously" learned clauses; the sub-solver must infer a contradiction on F|σ for every σ from scratch.
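
On small instances this definition can be checked directly by enumerating all 2^|B| assignments to B and running the sub-solver on each restriction. A hedged sketch, reusing unit_propagate and the toy clauses from the earlier sketches (is_strong_backdoor is our own name):

```python
from itertools import product

def is_strong_backdoor(clauses, B):
    """Check the definition directly: every assignment to B must let the sub-solver decide F."""
    for bits in product([False, True], repeat=len(B)):
        tau = dict(zip(B, bits))
        status, extended = unit_propagate(clauses, tau)
        if status == "CONFLICT":
            continue                    # sub-solver refuted F restricted by tau
        if all(any(extended.get(abs(lit)) == (lit > 0) for lit in c) for c in clauses):
            continue                    # every clause satisfied: sub-solver found a solution
        return False                    # sub-solver got stuck on this assignment
    return True

# e.g., for the toy formula from the first sketch, [1, 3] ({a, c}) is a strong backdoor:
# is_strong_backdoor(clauses, [1, 3])  -> True
```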

10. New: Learning-Sensitive Backdoors
Definition: a subset B of variables is a learning-sensitive backdoor (for F w.r.t. a sub-solver S) if there exists a search order s.t. a clause learning solver, branching only on the variables in B, in this search order, with S as the sub-solver at each leaf, "solves" F, i.e., either finds a satisfying assignment for F or proves that F is unsatisfiable.
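
For intuition only, here is an illustrative and deliberately simplified check of this definition on small instances: it fixes a static variable order and a static value order over B, carries learned clauses across branches, and asks whether some such order solves F. The paper's definition allows arbitrary (even dynamic) search orders and uses 1-UIP learning, so this is a restriction, not the paper's procedure; it reuses unit_propagate from the earlier sketch.

```python
from itertools import permutations, product

def solves_with_order(clauses, order, first_value):
    """Branch on `order` (a fixed variable sequence), trying first_value[v] first for each v,
    keeping every learned clause for later branches; report whether this order solves F."""
    learned = []

    def explore(idx, assignment):
        status, ext = unit_propagate(clauses + learned, assignment)
        if status == "CONFLICT":
            # Learn the negation of the conflicting assignment (a simple scheme, not 1-UIP).
            learned.append({-v if val else +v for v, val in ext.items()})
            return "REFUTED"
        if all(any(ext.get(abs(lit)) == (lit > 0) for lit in c) for c in clauses):
            return "SAT"                                 # this branch satisfies F: solved
        if idx == len(order):
            return "STUCK"                               # sub-solver cannot decide this leaf
        var = order[idx]
        if var in ext:                                   # already forced by propagation: skip it
            return explore(idx + 1, ext)
        for val in (first_value[var], not first_value[var]):
            outcome = explore(idx + 1, {**ext, var: val})
            if outcome != "REFUTED":
                return outcome                           # propagate SAT or STUCK upwards
        return "REFUTED"                                 # both child branches refuted

    return explore(0, {}) in ("SAT", "REFUTED")          # solved iff a model was found or F refuted

def is_learning_sensitive_backdoor(clauses, B):
    """B qualifies if SOME static variable order and value order makes the search succeed."""
    return any(solves_with_order(clauses, list(order), dict(zip(order, vals)))
               for order in permutations(B)
               for vals in product([True, False], repeat=len(B)))
```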

  11. Theoretical Results

12. Learning-Sensitive Backdoors Can Provably Be Much Smaller
Setup: the sub-solver is unit propagation, the clause learning scheme is 1-UIP, and the comparison is w.r.t. traditional strong backdoors.
Theorem 1: There are unsatisfiable SAT instances for which learning-sensitive backdoors are exponentially smaller than the smallest traditional strong backdoors.
Theorem 2: There are satisfiable SAT instances for which learning-sensitive backdoors are smaller than the smallest traditional strong backdoors.
(Rsat was used for the experiments.)

13. Proof Idea: Simple Example
{x} is a learning-sensitive backdoor of size 1: branch x=0 first, reach a contradiction (via p1, a, b), and learn the 1-UIP clause (q); then branch x=1, where the learned clause (q) together with p2 and r leads to a second contradiction.
[Diagram: implication graphs for the two branches]
With clause learning, branching on x in the right order suffices to prove unsatisfiability (x appears only in a "long" clause).

14. Proof Idea: Simple Example (continued)
• In contrast, without clause learning, one must branch on at least 2 variables in every proof of unsatisfiability, i.e., every "traditional" strong backdoor has size ≥ 2
• Why? Every variable, in at least one polarity, appears only in "long" clauses; e.g., p1, q, r, a do not appear in any 2-clauses
• Therefore, fixing such a variable to that value triggers no unit propagation and generates no empty clause
• Therefore, such a variable by itself cannot be a strong backdoor

15. Proof Idea: Exponential Separation
Construct an unsatisfiable formula F on n variables such that:
• certain long clauses must be used in every refutation (i.e., removing a long clause makes F satisfiable)
• many variables, in at least one polarity, appear only in such long clauses, which contain Θ(n) variables
• unit propagation / empty clause generation is thus controlled: without learning, one must branch on essentially all variables of the long clauses to derive a contradiction, so these variables must be part of every traditional backdoor set
• with learning: conflict clauses from previous branches on O(log n) "key variables" enable unit propagation in the long clauses

16. Order-Sensitivity of Backdoors
Corollary (follows from the proof of Theorem 1): There are unsatisfiable SAT instances for which learning-sensitive backdoors w.r.t. one value ordering are exponentially smaller than the smallest learning-sensitive backdoors w.r.t. another value ordering.

  17. Experimental evaluation

18. Learning-Sensitive Backdoors in Practice
Preliminary evaluation of smallest backdoor size, reporting the "best found" backdoors over 5000 runs of Rsat (with clause learning) or Satz-rand (no learning):
• up to 10x smaller than traditional backdoors on satisfiable instances
• often at most 2x smaller than traditional backdoors on unsatisfiable instances

19. How hard is it to find small backdoor sets with learning?
Recently reported in a paper at CPAIOR-09 (backdoors in the context of optimization problems):
• Considering only the size of the smallest backdoor does not provide much insight into this question
• One way to assess this difficulty: how many backdoors are there of a given cardinality?
• Experimental setup: for each possible backdoor size k, sample subsets of cardinality k uniformly at random from the (discrete) variables of the problem, and evaluate whether each sampled subset is a backdoor or not
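
A sketch of this sampling procedure (our own reconstruction; function names are hypothetical), reusing is_strong_backdoor from the earlier sketch as the backdoor test. Swapping in is_learning_sensitive_backdoor would estimate the density of learning-sensitive backdoors instead, which is the comparison behind the "Added Power of Learning" slide; the CPAIOR-09 experiments are on MIP optimization instances with the corresponding sub-solvers.

```python
import random

def backdoor_density(clauses, max_k, samples_per_k=200, seed=0):
    """For each size k, estimate the fraction of k-variable subsets that are backdoors."""
    rng = random.Random(seed)
    variables = sorted({abs(lit) for c in clauses for lit in c})
    density = {}
    for k in range(1, min(max_k, len(variables)) + 1):
        hits = sum(is_strong_backdoor(clauses, rng.sample(variables, k))
                   for _ in range(samples_per_k))
        density[k] = hits / samples_per_k
    return density
```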

20. Backdoor Size Distribution
E.g., for a Mixed Integer Programming (MIP) optimization instance:
[Plot omitted in this transcript]

21. Added Power of Learning
E.g., for a Mixed Integer Programming (MIP) optimization instance:
[Plot omitted in this transcript]

22. Summary
• Defined backdoors in the context of learning during search (in particular, clause learning for SAT solvers)
• Proved that learning-sensitive backdoors can be smaller than traditional strong backdoors: exponentially smaller on unsatisfiable instances, somewhat smaller on satisfiable instances (open?)
• Branching order affects backdoor size as well
Future work: stronger separation for satisfiable instances; detailed empirical study
