
Balance and Filtering in Structured Satisfiability Problems








  1. Balance and Filtering in Structured Satisfiability Problems
     Henry Kautz, University of Washington
     Joint work with Yongshao Ruan (UW), Dimitris Achlioptas (MSR), Carla Gomes (Cornell), Bart Selman (Cornell), Mark Stickel (SRI)
     CORE – UW, MSR, Cornell

  2. Speedup Learning
     • Machine learning historically considered
       • Learning to classify objects
       • Learning to search or reason more efficiently (speedup learning)
     • Speedup learning disappeared in the mid-90’s
       • Last workshop in 1993
       • Last thesis in 1998
     • What happened?
       • EBL (without generalization) “solved”: rel_sat (Bayardo), GRASP (Silva 1998), Chaff (Malik 2001) handle 1,000,000-variable verification problems
       • EBG too hard
       • Algorithmic advances outpaced any successes

  3. Alternative Path
     • Predictive control of search and reasoning
       • Learn a statistical model of the behavior of a problem solver on a problem distribution
       • Use the model as part of a control strategy to improve the future performance of the solver
     • Synthesis of ideas from
       • Phase transition phenomena in problem distributions
       • Decision-theoretic control of reasoning
       • Bayesian modeling

  4. Big Picture
     [Diagram: Problem Instances → Solver → runtime; static and dynamic features feed Learning / Analysis, which builds a Predictive Model; the model drives control / policy and resource allocation / reformulation back into the solver]

  5. Case Study: Beyond 4.25
     [Same diagram, highlighting the path from static features through Learning / Analysis to the Predictive Model]

  6. Phase transitions & problem hardness
     • Large and growing literature on random problem distributions
       • Peak in problem hardness associated with a critical value of some underlying parameter
       • 3-SAT: clause/variable ratio ≈ 4.25
     • Using the measured parameter to predict the hardness of a particular instance is problematic!
       • The random distribution must be a good model of the actual domain of concern
     • Recent progress on more realistic random distributions...
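The clause/variable-ratio parameter is concrete enough to sketch in code. Below is a minimal fixed-clause-length random 3-SAT generator at a chosen ratio; `random_3sat` is a hypothetical helper for illustration, not code from the talk:

```python
import random

def random_3sat(n_vars, ratio=4.25, seed=0):
    """Generate a random 3-SAT instance at a given clause/variable ratio.

    Near the critical ratio (~4.25) instances are empirically hardest;
    well below it almost all are satisfiable, well above it almost all
    are unsatisfiable.
    """
    rng = random.Random(seed)
    n_clauses = round(ratio * n_vars)
    clauses = []
    for _ in range(n_clauses):
        vs = rng.sample(range(1, n_vars + 1), 3)        # 3 distinct variables
        clauses.append(tuple(v if rng.random() < 0.5 else -v for v in vs))
    return clauses

inst = random_3sat(100)
print(len(inst))  # 425 clauses at ratio 4.25 with 100 variables
```

Instances drawn near the critical ratio are the ones used to produce the hardness peak the slide refers to.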

  7. Quasigroup Completion Problem (QCP)
     • NP-complete
     • Its structure is similar to that of real-world problems: tournament scheduling, classroom assignment, fiber-optic routing, experiment design, ...
     • Start with an empty grid, place colors randomly
     • Generates a mix of sat and unsat instances
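The "empty grid, place colors randomly" step can be illustrated with a toy QCP-style generator. This is a sketch only (`random_qcp` is a hypothetical name); real generators additionally run incremental arc-consistency, which this version omits:

```python
import random

def random_qcp(n, frac_filled, seed=0):
    """Toy QCP generator: start from an empty n x n grid and place random
    colors one cell at a time, skipping placements that clash with the
    Latin-square (row/column) constraints.  Completing the resulting
    partial square is the NP-complete QCP; the mix of sat and unsat
    instances depends on the fraction filled.
    """
    rng = random.Random(seed)
    grid = [[None] * n for _ in range(n)]
    cells = [(r, c) for r in range(n) for c in range(n)]
    rng.shuffle(cells)
    placed, target = 0, int(frac_filled * n * n)
    for r, c in cells:
        if placed >= target:
            break
        used = {grid[r][j] for j in range(n)} | {grid[i][c] for i in range(n)}
        choices = [v for v in range(n) if v not in used]
        if choices:
            grid[r][c] = rng.choice(choices)
            placed += 1
    return grid
```

Because it only checks row/column clashes locally, this sketch can still produce unsatisfiable instances, matching the slide's "mix of sat and unsat" observation.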

  8. Complexity Graph
     [Plot: fraction of unsolvable cases vs. fraction of pre-assignment (20%, 42%, 50% marked). Underconstrained, almost-all-solvable area; critically constrained area at the phase transition (~42%); overconstrained, almost-all-unsolvable area]

  9. Quasigroup With Holes (QWH)
     • Start with a solved problem, then punch holes
     • Generates only SAT instances
     • Can be used to test incomplete solvers
     • Hardness peak at a phase transition in the size of the backbone (Achlioptas, Gomes, & Kautz 2000)
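The "punch holes in a solved square" idea can be sketched as follows. Note this toy (`qwh` is a hypothetical name) builds its square as a cyclic Latin square with permuted rows, columns, and symbols, not the near-uniform sampling of the actual QWH generator, so its distribution is far more biased; it only illustrates the satisfiable-by-construction property:

```python
import random

def qwh(n, n_holes, seed=0):
    """Toy QWH-style generator: build a solved Latin square, then punch
    n_holes random holes.  Every instance is satisfiable by construction,
    since the original square is a completion.
    """
    rng = random.Random(seed)
    rp, cp, sp = (rng.sample(range(n), n) for _ in range(3))
    # Permuted cyclic square: rows and columns are each permutations of 0..n-1
    square = [[sp[(rp[r] + cp[c]) % n] for c in range(n)] for r in range(n)]
    holes = rng.sample([(r, c) for r in range(n) for c in range(n)], n_holes)
    for r, c in holes:
        square[r][c] = None
    return square
```

The "% holes" parameter on the later slides corresponds to `n_holes / n**2` here.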

  10. New Phase Transition in Backbone
      [Plot: % of backbone and computational cost vs. % holes]

  11. Walksat, orders 30, 33, 36
      [Plot: computational cost vs. % holes shows an easy-hard-easy pattern in local search, from the underconstrained to the “over”constrained area]

  12. Are we ready to predict run times?
      • Problem: high variance (run times shown on a log scale)

  13. Deep Structural Features
      • Hardness is also controlled by the structure of the constraints, not just the fraction of holes
      • Rectangular pattern (Hall 1945): tractable
      • Aligned pattern: tractable (new result!)
      • Balanced pattern: very hard
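The hole-pattern models on this slide can be made concrete with a toy generator (`hole_pattern` is a hypothetical helper, not the authors' code), returning the set of cells to blank out of an n x n square:

```python
import random

def hole_pattern(n, n_holes, kind, seed=0):
    """Toy sketch of the hole-pattern models discussed above.
      'random'      -- holes anywhere (high variance per row/column)
      'rectangular' -- holes fill whole rows (Hall 1945: tractable)
      'balanced'    -- the same number of holes in every row and column
    """
    rng = random.Random(seed)
    if kind == "random":
        return set(rng.sample([(r, c) for r in range(n) for c in range(n)],
                              n_holes))
    if kind == "rectangular":
        k = n_holes // n                  # blank k full rows
        return {(r, c) for r in range(k) for c in range(n)}
    if kind == "balanced":
        k = n_holes // n                  # k holes per row and per column,
        # placed on k shifted diagonals: row r gets columns r, ..., r+k-1 (mod n)
        return {(r, (r + j) % n) for r in range(n) for j in range(k)}
    raise ValueError(kind)
```

The shifted-diagonal construction for the balanced case is one simple way to guarantee an identical hole count in every row and column; actual balanced generators may place holes differently.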

  14. Random versus balanced  [plot: balanced vs. random]

  15. Random versus balanced  [plot: balanced vs. random]

  16. Random vs. balanced (log scale)  [plot]

  17. Morphing balanced and random (order 33)  [plot]

  18. Considering variance in hole pattern (order 33)  [plot]

  19. Time on log scale (order 33)  [plot]

  20. Effect of balance on hardness
      • Balanced patterns yield (on average) problems that are two orders of magnitude harder than random patterns
      • Expected run time decreases exponentially with the variance in the number of holes per row or column
      • Same pattern (with different constants) for DPLL!
      • At the extreme of high variance (the aligned model) one can prove that no hard problems exist
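The variance statistic in the second bullet is just the per-row and per-column hole-count variance. A small self-contained helper (`hole_count_variance` is a hypothetical name) makes the definition explicit:

```python
def hole_count_variance(holes, n):
    """Population variance of the number of holes per row and per column
    of an n x n square; 'holes' is an iterable of (row, col) positions.
    Balanced patterns have variance 0; random patterns have higher variance."""
    rows, cols = [0] * n, [0] * n
    for r, c in holes:
        rows[r] += 1
        cols[c] += 1
    def var(xs):
        m = sum(xs) / len(xs)
        return sum((x - m) ** 2 for x in xs) / len(xs)
    return var(rows), var(cols)

# A perfectly balanced pattern (one hole per row and per column):
diag = {(i, i) for i in range(5)}
print(hole_count_variance(diag, 5))   # (0.0, 0.0)
```

By the slide's observation, instances whose hole set scores (0, 0) here are, on average, the hardest.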

  21. Morphing random and rectangular (order 36)  [plot]

  22. Morphing random and rectangular (order 33)  [plot; one bump is an artifact of walksat]

  23. Morphing balanced, random, and rectangular (order 33)  [plot: time in seconds (log scale, 0.1 to 100) vs. variance (0 to 20)]

  24. Intuitions
      • In unbalanced problems it is easier to identify the most critically constrained variables (the backbone variables) and set them correctly
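One way to see this intuition in miniature: in a partial Latin square, a cell is trivially forced when only one color remains legal for it. A small propagation sketch (`forced_cells` is a hypothetical helper; it finds only the simplest, "naked single" forced cells):

```python
def forced_cells(grid):
    """Return the empty cells of a partial Latin square whose value is
    forced because exactly one color remains legal in their row and column.
    In unbalanced hole patterns more such cells tend to appear early,
    which is one intuition for why those problems are easier."""
    n = len(grid)
    forced = {}
    for r in range(n):
        for c in range(n):
            if grid[r][c] is not None:
                continue
            used = ({grid[r][j] for j in range(n)}
                    | {grid[i][c] for i in range(n)})
            legal = [v for v in range(n) if v not in used]
            if len(legal) == 1:
                forced[(r, c)] = legal[0]
    return forced
```

For example, in a 3 x 3 square whose first row is `[0, 1, _]`, the blank is forced to 2. Setting forced cells and re-running until a fixed point is the simplest form of the propagation a real solver performs.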

  25. Are we done? Not yet...
      • Observation 1: while few unbalanced problems are hard, quite a few balanced problems are easy
      • To do: find additional structural features that predict hardness
        • Introspection
        • Machine learning (Horvitz et al. UAI 2001)
      • Ultimate goal: accurate, inexpensive prediction of the hardness of real-world problems

  26. Are we done? Not yet…
      • Observation 2: significant differences between the SAT instances in the hardest regions for the QCP and QWH generators
      [Plots: QCP (sat only) vs. QWH]

  27. Biases in Generators
      • An unbiased SAT-only generator would sample uniformly at random from the space of all SAT CSP problems
      • Practical CSP generators
        • Incremental arc-consistency introduces dependencies
        • Hard to formally model the distribution
      • QWH generator
        • Clean formal model
        • Slightly biased toward problems with many solutions
        • Adding balance makes small, hard problems

  28. [Plot: balanced QCP, balanced QWH, random QCP, random QWH]

  29. [Plot: balanced QCP, balanced QWH, random QCP, random QWH]

  30. [Plot: balanced QCP, balanced QWH, random QCP, random QWH]

  31. [Plot: balanced QCP, balanced QWH, random QCP, random QWH]

  32. Conclusions
      • One small part of an exciting direction for improving the power of search and reasoning algorithms
      • Hardness prediction can be used to control solver policy
        • Noise level (Patterson & Kautz 2001)
        • Restarts (Horvitz et al. (CORE team) UAI 2001)
      • Lots of opportunities for cross-disciplinary work
        • Theory
        • Machine learning
        • Experimental AI and OR
        • Reasoning under uncertainty
        • Statistical physics
