
Constrainedness



  1. Constrainedness Including slides from Toby Walsh

  2. Constraint satisfaction
  • A constraint satisfaction problem (CSP) is a triple <V,D,C> where:
  • V is a set of variables
  • Each X in V has a set of values, its domain D_X
  • Usually assume finite domains: {true,false}, {red,blue,green}, [0,10], …
  • C is a set of constraints
  Goal: find an assignment of values to variables that satisfies all the constraints
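The triple above can be sketched directly in code. This is a minimal illustration (the colouring-style problem and all names are my own, not from the slides), solving by brute-force enumeration of the state space:

```python
from itertools import product

# A CSP as the triple <V, D, C>: variables, domains, constraints.
variables = ["X1", "X2", "X3"]
domains = {v: {"red", "blue", "green"} for v in variables}
# Constraints as (scope, predicate) pairs: adjacent variables must differ.
constraints = [(("X1", "X2"), lambda a, b: a != b),
               (("X2", "X3"), lambda a, b: a != b)]

def satisfies_all(assignment):
    """True iff the complete assignment satisfies every constraint."""
    return all(pred(*(assignment[v] for v in scope))
               for scope, pred in constraints)

# Brute-force enumeration of the full state space (fine only for tiny problems).
solutions = [dict(zip(variables, vals))
             for vals in product(*(sorted(domains[v]) for v in variables))
             if satisfies_all(dict(zip(variables, vals)))]
```

Here there are 3 × 2 × 2 = 12 satisfying assignments; real solvers replace the enumeration with the tree search described on the next slide.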

  3. Constraint solver
  • Tree search
  • Assign a value to a variable
  • Deduce values that must be removed from the domains of future (unassigned) variables
  • Constraint propagation
  • If any future variable has no values left, backtrack; else repeat
  • Number of choices: which variable to assign next, and which value to assign it
  Some important refinements: nogood learning, non-chronological backtracking, …
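The loop above can be sketched as backtracking search with forward checking as the propagation step. A sketch under my own assumptions (binary disequality constraints only, static variable order, problem encoding is illustrative):

```python
def solve(domains, neq_pairs, assignment=None):
    """Backtracking + forward checking for binary != constraints.
    domains: var -> set of values; neq_pairs: list of (u, w) with u != w."""
    assignment = assignment or {}
    if len(assignment) == len(domains):
        return dict(assignment)
    var = next(v for v in domains if v not in assignment)  # static order
    for value in sorted(domains[var]):
        # Forward check: remove this value from future neighbours' domains.
        pruned = {w: domains[w] - {value}
                  for (u, w) in neq_pairs if u == var and w not in assignment}
        pruned.update({u: domains[u] - {value}
                       for (u, w) in neq_pairs if w == var and u not in assignment})
        if all(pruned.values()):  # no future domain wiped out
            assignment[var] = value
            result = solve({**domains, **pruned}, neq_pairs, assignment)
            if result is not None:
                return result
            del assignment[var]  # backtrack
    return None
```

For example, `solve({"A": {0, 1}, "B": {0, 1}, "C": {0, 1}}, [("A", "B"), ("B", "C")])` returns a consistent 2-colouring, while an over-constrained instance returns `None`.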

  4. Constraint propagation
  • Arc-consistency (AC)
  • A binary constraint r(X1,X2) is AC iff for every value of X1 there is a consistent value (often called a support) for X2, and vice versa
  • E.g. with 0/1 domains and the constraint X1 ≠ X2:
  • value 0 for X1 is supported by value 1 for X2
  • value 1 for X1 is supported by value 0 for X2, …
  • A problem is AC iff every constraint is AC
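The support check above is exactly the "revise" step used inside AC-3-style propagation. A sketch (the constraint encoding as a set of allowed value pairs is my own choice):

```python
def revise(domains, x1, x2, allowed):
    """Remove values of x1 that have no support in x2.
    `allowed` is the set of permitted (x1_value, x2_value) pairs.
    Returns True iff some value was deleted."""
    removed = False
    for a in sorted(domains[x1]):          # sorted() copies, safe to mutate
        if not any((a, b) in allowed for b in domains[x2]):
            domains[x1].discard(a)
            removed = True
    return removed

# The slide's 0/1 example, X1 != X2: every value has a support,
# so revising deletes nothing.
domains = {"X1": {0, 1}, "X2": {0, 1}}
neq = {(0, 1), (1, 0)}
revise(domains, "X1", "X2", neq)
```

A full AC algorithm repeatedly revises arcs until no domain changes, re-queueing arcs whose endpoint lost a value.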

  5. Tree search • Backtracking (BT) • Forward checking (FC) • Backjumping (BJ, CBJ, DB) • Maintaining arc-consistency (MAC) • Limited discrepancy search (LDS) • Non-chronological backtracking & learning • Probing • …

  6. Modelling • Choose a basic model • Consider auxiliary variables • To reduce number of constraints, improve propagation • Consider combined models • Channel between views • Break symmetries • Add implied constraints • To improve propagation

  7. Propositional satisfiability
  • SAT: does a truth assignment exist that satisfies a propositional formula?
  • A special type of constraint satisfaction problem: variables are Boolean, constraints are formulae
  • NP-complete
  • 3-SAT: formulae in clausal form with 3 literals per clause; remains NP-complete
  • E.g. (x1 v x2) & (-x2 v x3 v -x4), satisfied by x1/True, x2/False, …
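Clausal form has a compact standard encoding: signed integers for literals (DIMACS style; an encoding choice on my part, not from the slides). The slide's example formula and a model check:

```python
# (x1 v x2) & (-x2 v x3 v -x4) with literals as signed variable indices.
clauses = [(1, 2), (-2, 3, -4)]

def satisfied(clauses, assignment):
    """assignment maps variable index -> bool; a clause is true iff
    at least one of its literals is true under the assignment."""
    return all(any(assignment[abs(l)] == (l > 0) for l in clause)
               for clause in clauses)

# The slide's partial assignment x1/True, x2/False, completed arbitrarily:
model = {1: True, 2: False, 3: True, 4: True}
```

`satisfied(clauses, model)` is `True`: x1 satisfies the first clause and -x2 the second.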

  8. Random 3-SAT
  • Random 3-SAT: sample uniformly from the space of all possible 3-clauses
  • n variables, l clauses
  • Which are the hard instances? Around l/n = 4.3
  What happens with larger problems? Why are some dots red and others blue?
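Sampling such an instance is straightforward: each clause draws 3 distinct variables uniformly and negates each with probability 1/2. A sketch (the seeded default generator is my own choice, for reproducibility):

```python
import random

def random_3sat(n, l, rng=None):
    """Sample l random 3-clauses over variables 1..n (DIMACS-style literals)."""
    rng = rng or random.Random(0)
    clauses = []
    for _ in range(l):
        chosen = rng.sample(range(1, n + 1), 3)   # 3 distinct variables
        clauses.append(tuple(v if rng.random() < 0.5 else -v
                             for v in chosen))
    return clauses

# At the hard region l/n ~ 4.3: e.g. n = 100 variables, l = 430 clauses.
instance = random_3sat(100, 430)
```

Instances generated well below l/n = 4.3 are almost always satisfiable, and well above almost never; the hardness peak sits at the crossover.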

  9. Random 3-SAT • Varying problem size, n • Complexity peak appears to be largely invariant of algorithm • backtracking algorithms like Davis-Putnam • local search procedures like GSAT What’s so special about 4.3?

  10. Random 3-SAT • Complexity peak coincides with solubility transition • l/n < 4.3 problems under-constrained and SAT • l/n > 4.3 problems over-constrained and UNSAT • l/n=4.3, problems on “knife-edge” between SAT and UNSAT

  11. Theoretical results
  • Shape
  • “Sharp” (in a technical sense) [Friedgut 99]
  • Location
  • 2-SAT transition occurs at l/n=1 [Chvátal & Reed 92, Goerdt 92]
  • 3-SAT transition occurs at 3.26 < l/n < 4.598

  12. “But it doesn’t occur in X?” • X = some NP-complete problem • X = real problems • X = some other complexity class

  13. “But it doesn’t occur in X?” • X = some NP-complete problem • Phase transition behaviour seen in: • TSP problem (decision not optimization) • Hamiltonian circuits (but NOT a complexity peak) • number partitioning • graph colouring • independent set • ...

  14. “But it doesn’t occur in X?” • X = real problems • Phase transition behaviour seen in: • job shop scheduling problems • TSP instances from TSPLib • exam timetables @ Edinburgh • Boolean circuit synthesis • Latin squares (alias sports scheduling) • ...

  15. “But it doesn’t occur in X?” • X = some other complexity class • Phase transition behaviour seen in: • polynomial problems like arc-consistency • PSPACE problems like QSAT and modal K • ...

  16. Algorithms at the phase boundary What do we understand about problem hardness at the phase boundary? How can this help build better algorithms?

  17. Kappa Defined for an ensemble of problems
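The formula image from this slide did not survive the transcript. For reference, the standard definition of constrainedness from Gent, MacIntyre, Prosser and Walsh ("The Constrainedness of Search", AAAI-96), which these slides follow, is:

```latex
\kappa \;=\; 1 \;-\; \frac{\log_2 \langle \mathit{Sol} \rangle}{N},
\qquad
N \;=\; \sum_{X \in V} \log_2 |D_X|
```

where ⟨Sol⟩ is the expected number of solutions over the ensemble and N is the base-2 logarithm of the state-space size, so that ⟨Sol⟩ = 2^{N(1-κ)}.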

  18. Kappa (motivation)

  19. Kappa
  • kappa = 0 when every state is a solution
  • kappa = 1 when there is a unique solution
  • kappa = infinity when no state is a solution

  20. Kappa
  • When every state is a solution: easy, soluble
  • When there is a unique solution: on the knife-edge, hard
  • When no state is a solution: easy, insoluble

  21. An example: random CSPs
  • Each of n variables has a uniform domain of size m
  • There is a probability p1 of a constraint between a pair of variables
  • Given a constraint, there is a probability p2 that a pair of values conflicts

  22. An example: random CSPs

  23. An example: random CSPs
  • Can rearrange this formula to find the value of p2 at which we sit on the phase transition, i.e. when kappa = 1
  • Then perform experiments to see if it is an accurate predictor; that is, we put the theory to the test
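The formula itself was an image in the slides; reconstructing it from the model on slide 21 and the definition of kappa (this is my derivation, not text from the slides): a random assignment satisfies each of the expected p1·n(n-1)/2 constraints with probability (1 - p2), so

```latex
\langle \mathit{Sol} \rangle \;=\; m^{\,n}\,(1-p_2)^{\,p_1 n(n-1)/2},
\qquad
\kappa \;=\; \frac{n-1}{2}\, p_1 \,\log_m\!\Big(\frac{1}{1-p_2}\Big)
```

Setting κ = 1 and solving gives the predicted crossover tightness p2 = 1 − m^{−2/(p1(n−1))}.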

  24. Kappa as a heuristic: “minimise that”, i.e. branch so as to minimise kappa

  25. An example: non-random CSPs

  26. An example: non-random CSPs

  27. Kappa is general (3 examples)
  • SAT with n variables, l clauses, a literals per clause
  • Graph colouring with n vertices, e edges, m colours
  • Number partitioning of n numbers in the range (0,l] into m bags with the same sum
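The per-problem formulas were again images; reconstructing them from the general definition of kappa (my own derivation, hedged accordingly; the partitioning expression is the two-bag case reported by Gent and Walsh):

```latex
\text{Random SAT ($a$ literals/clause):}\quad
\kappa \;=\; -\frac{l}{n}\,\log_2\!\big(1 - 2^{-a}\big)
\\[4pt]
\text{Graph colouring:}\quad
\kappa \;=\; \frac{e}{n}\,\log_m\!\Big(\frac{m}{m-1}\Big)
\\[4pt]
\text{Number partitioning (two bags):}\quad
\kappa \;\approx\; \frac{\log_2 l}{n}
```

For a = 3 the SAT formula puts κ = 1 at l/n ≈ 5.2, in the right region though above the observed transition at 4.3 (κ = 1 is an estimate, not an exact locator).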

  28. Kappa as a heuristic: “minimise that”, i.e. branch so as to minimise kappa

  29. Minimise that
  • But that’s costly to do. Is there a low-cost surrogate?
  • Assume we only measure domain sizes (the denominator)
  • Assume we only measure the top line (the numerator)
  • Assume we measure both top and bottom lines (numerator and denominator)

  30. Looking inside search • Three key insights • constrainedness “knife-edge” • backbone structure • 2+p-SAT • Suggests branching heuristics • also insight into branching mistakes

  31. Constrainedness knife-edge (figure: kappa plotted against depth/n)

  32. Constrainedness knife-edge
  • Seen in other problem domains: number partitioning, …
  • Seen on “real” problems: exam timetabling (alias graph colouring)
  • Suggests a branching heuristic: “get off the knife-edge as quickly as possible”
  • Minimize- or maximize-kappa heuristics must take the branching rate into account; maximize-kappa is therefore often not a good move!

  33. Minimize constrainedness • Many existing heuristics minimize-kappa • or proxies for it • For instance • Karmarkar-Karp heuristic for number partitioning • Brelaz heuristic for graph colouring • Fail-first heuristic for constraint satisfaction • … • Can be used to design new heuristics • removing some of the “black art”

  34. Backbone
  • Variables which take fixed values in all solutions (alias unit prime implicates)
  • Let fk be the fraction of variables in the backbone
  • l/n < 4.3: fk is vanishing (otherwise adding a clause could make the problem unsat)
  • l/n > 4.3: fk > 0
  • A discontinuity at the phase boundary!
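For small formulas the backbone can be computed directly by enumerating all models and keeping the variables that never vary. A brute-force sketch (exponential, for illustration only; the DIMACS-style encoding is my own choice):

```python
from itertools import product

def backbone(clauses, n):
    """Backbone of a CNF over variables 1..n: vars fixed in every model.
    Clauses are tuples of signed literals. Returns var -> forced value."""
    models = [bits for bits in product([False, True], repeat=n)
              if all(any(bits[abs(l) - 1] == (l > 0) for l in c)
                     for c in clauses)]
    if not models:
        return {}  # unsatisfiable: no backbone to speak of
    return {v: models[0][v - 1]
            for v in range(1, n + 1)
            if all(m[v - 1] == models[0][v - 1] for m in models)}

# (x1) & (x1 v x2): x1 is backbone (always True), x2 is free.
frozen = backbone([(1,), (1, 2)], 2)
```

The fraction fk from the slide is then `len(frozen) / n`.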

  35. Backbone • Search cost correlated with backbone size • if fk non-zero, then can easily assign variable “wrong” value • such mistakes costly if at top of search tree • Backbones seen in other problems • graph colouring • TSP • … Can we make algorithms that identify and exploit the backbone structure of a problem?

  36. 2+p-SAT
  • 2-SAT is polynomial (linear) but 3-SAT is NP-complete
  • 2-SAT, unlike 3-SAT, has no backbone discontinuity
  • Morph between 2-SAT and 3-SAT: a fraction p of 3-clauses, a fraction (1-p) of 2-clauses
  • 2+p-SAT maps from P to NP: for any fixed p > 0, 2+p-SAT is NP-complete
