
Accelerating Random Walks



  1. Accelerating Random Walks. Wei Wei, Dept. of Computer Science, Cornell University (joint work with Bart Selman)

  2. Introduction • This talk deals with gathering new information to accelerate search and reasoning. • Although the method proposed can be generalized to other search problems, we focus on the SAT problem in this talk.

  3. Boolean Satisfiability Problem • The Boolean Satisfiability Problem (SAT) asks if a Boolean expression can be made true by assigning Boolean values to its variables. • The problem is well-studied in the AI community with direct applications in reasoning, planning, CSP, etc. • Does statement s hold in world A (represented by a set of clauses)? A ⊨ s ⟺ (¬s) ∧ A is unsatisfiable

  4. SAT • SAT (even 3SAT) is NP-complete. The best theoretical bound so far is O(1.334^N) for 3SAT (Schöning 1999) • In practice, there are two different kinds of solvers • DPLL (Davis, Logemann and Loveland 1962) • Local search (Selman et al. 1992)

  5. DPLL Example: (x1 ∨ x2 ∨ x3) ∧ (x1 ∨ ¬x2 ∨ ¬x3) ∧ (¬x1 ∨ ¬x2 ∨ x3) • DPLL was first proposed as a simple depth-first tree search. [Figure: search tree branching on x1 (T/F), then x2; one leaf is null (conflict), another a solution.]

  6. DPLL • Recently (since the late '90s), many improvements: randomization, restarts, out-of-order backtracking, and clause learning

  7. Local Search • The idea: start with a random assignment and make local changes to it until a solution is reached • Pro: often efficient in practice; sometimes the only feasible way for some problems • Con: cannot prove nonexistence of solutions; hard to analyze theoretically

  8. Local Search • Initially, pure hill-climbing greedy search:

  Procedure GSAT
    Start with a random truth assignment
    Repeat
      s := set of neighbors of the current assignment that satisfy the most clauses
      pick an assignment in s at random and move to it
    Until a satisfying assignment is found
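In Python, the GSAT loop above might be sketched as follows (our own toy illustration, not the original implementation; the DIMACS-style clause encoding and the flip budget are assumptions):

```python
import random

def gsat(clauses, n_vars, max_flips=10**5, seed=0):
    """Greedy local search (GSAT). Clauses use DIMACS-style integer
    literals: +v means variable v is true, -v means it is false."""
    rng = random.Random(seed)

    def num_sat(a):
        return sum(any(a[abs(l)] == (l > 0) for l in c) for c in clauses)

    # random initial truth assignment (index 0 unused)
    a = [None] + [rng.random() < 0.5 for _ in range(n_vars)]
    for _ in range(max_flips):
        if num_sat(a) == len(clauses):
            return a
        # score every neighbor (single-variable flip) ...
        scores = []
        for v in range(1, n_vars + 1):
            a[v] = not a[v]
            scores.append((num_sat(a), v))
            a[v] = not a[v]
        best = max(s for s, _ in scores)
        # ... and move to a randomly chosen best-scoring neighbor
        v = rng.choice([v for s, v in scores if s == best])
        a[v] = not a[v]
    return None  # flip budget exhausted
```

Note that GSAT moves to the best neighbor even when that neighbor is no better than the current assignment, which is what distinguishes it from strict hill-climbing.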

  9. Local Search, cont. • Later, other local search schemes were used: • Simulated annealing • Tabu search • Genetic algorithms • Random walk and its variants → the most successful so far

  10. Unbiased (Pure) Random Walk for SAT

  Procedure Random-Walk (RW)
    Start with a random truth assignment
    Repeat
      c := an unsatisfied clause chosen at random
      x := a variable in c chosen at random
      flip the truth value of x
    Until a satisfying assignment is found
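The RW procedure maps almost line-for-line to Python (a sketch; the clause encoding and the `max_flips` safeguard are our additions):

```python
import random

def random_walk_sat(clauses, n_vars, max_flips=10**6, seed=0):
    """Unbiased (pure) random walk for SAT; DIMACS-style literals."""
    rng = random.Random(seed)
    # start with a random truth assignment (index 0 unused)
    a = [None] + [rng.random() < 0.5 for _ in range(n_vars)]
    for flips in range(max_flips):
        unsat = [c for c in clauses
                 if not any(a[abs(l)] == (l > 0) for l in c)]
        if not unsat:
            return a, flips              # satisfying assignment found
        c = rng.choice(unsat)            # an unsatisfied clause at random
        x = abs(rng.choice(c))           # a variable in c at random
        a[x] = not a[x]                  # flip its truth value
    return None, max_flips
```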

  11. Random Walk Algorithms • Random walk algorithms (e.g., WalkSat) offer a significant performance improvement over hill-climbing algorithms.

  12. New approaches to speed up random walks • Next, we'll discuss how to use new knowledge to improve random walk algorithms • STAGE: use more state information • Long-distance link discovery: transform the search space

  13. I) STAGE Algorithm (Boyan and Moore 1998) • Idea: more features of the current state may help local search • Task: incorporate these features into improved evaluation functions that help guide the search

  14. Method • The algorithm learns a function V: the expected outcome of a local search algorithm given an initial state. [Figure: plot of V(x) against state x.] Can this function be learned successfully?

  15. Features • State feature vector: problem specific. Example: for 3SAT, the following features are useful: • % of clauses currently unsat (= obj function) • % of clauses satisfied by exactly 1 variable • % of clauses satisfied by exactly 2 variables • % of variables set to their naïve setting
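These features can be computed directly from a state (a sketch; here "naïve setting" is taken to mean the value satisfying the majority of a variable's literal occurrences, which is our reading, not necessarily the paper's exact definition):

```python
def stage_features(clauses, assignment, n_vars):
    """3-SAT state features in the spirit of STAGE. `assignment` is a
    list of bools indexed from 1; clauses use DIMACS-style literals."""
    m = len(clauses)
    # how many literals of each clause are currently true
    true_lits = [sum(assignment[abs(l)] == (l > 0) for l in c)
                 for c in clauses]
    frac_unsat = sum(t == 0 for t in true_lits) / m
    frac_sat1 = sum(t == 1 for t in true_lits) / m
    frac_sat2 = sum(t == 2 for t in true_lits) / m
    # "naive setting": assumed here to be the value that satisfies the
    # majority of the variable's literal occurrences
    pos = [0] * (n_vars + 1)
    neg = [0] * (n_vars + 1)
    for c in clauses:
        for l in c:
            (pos if l > 0 else neg)[abs(l)] += 1
    naive = sum(assignment[v] == (pos[v] >= neg[v])
                for v in range(1, n_vars + 1))
    return [frac_unsat, frac_sat1, frac_sat2, naive / n_vars]
```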

  16. Learner • Fitter: can be any function approximator; polynomial regression is used in practice. • Training data: generated on the fly; every LS trajectory produces a series of new training data. • Restrictions on LS: it must terminate in poly-time, and it must be Markovian.

  17. Diagram of STAGE: run LS to optimize Obj → new training data → train the approximator → hillclimb to optimize V → generate a good start point (and loop)
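The loop in the diagram can be sketched end-to-end on a toy problem (everything here, from the single feature to the bitstring objective and the simple least-squares fitter, is our own simplified stand-in for the actual STAGE setup):

```python
import random

rng = random.Random(0)

def hillclimb(score, x, steps=60):
    """First-improvement hillclimbing on bit tuples (minimizing score)."""
    traj = [x]
    for _ in range(steps):
        i = rng.randrange(len(x))
        y = x[:i] + (1 - x[i],) + x[i + 1:]
        if score(y) <= score(x):
            x = y
        traj.append(x)
    return traj, x

def fit_linear(data):
    """Least-squares fit outcome ~ a*feature + b (single feature)."""
    n = len(data)
    mx = sum(f for f, _ in data) / n
    my = sum(y for _, y in data) / n
    var = sum((f - mx) ** 2 for f, _ in data) or 1.0
    a = sum((f - mx) * (y - my) for f, y in data) / var
    return a, my - a * mx

def stage(obj, feature, n_vars, rounds=5):
    data = []
    x = tuple(rng.randrange(2) for _ in range(n_vars))
    for _ in range(rounds):
        traj, end = hillclimb(obj, x)                    # run LS on Obj
        data += [(feature(s), obj(end)) for s in traj]   # new training data
        a, b = fit_linear(data)                          # train approximator
        V = lambda s, a=a, b=b: a * feature(s) + b
        _, x = hillclimb(V, end)                         # optimize V -> new start
    return end
```

For example, `stage(sum, sum, 12)` minimizes the number of 1-bits in a 12-bit string; the point of the sketch is the alternation between the two searches, not the (trivial) toy objective.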

  18. Results • Works in many domains, such as bin-packing, channel routing, and SAT. [Table (not shown) gives average solution quality.]

  19. Discussion • Is the learned function a good approximation of V? Somewhat unclear. ("Worrisome": performance did not improve when the authors replaced linear regression with quadratic regression. Learning does help, however.) • Why not learn a better objective function and search on that function directly (clause weighting, adding clauses)?

  20. Method • The algorithm learns a function V: the expected outcome of a local search algorithm given an initial state. [Figure: plot of V(x) against state x.]

  21. II) Long-distance Link Discovery • Random walk-style methods are successful on hard randomly generated instances, as well as on a number of real-world benchmarks. • However, they are generally less effective in highly structured domains than backtrack methods such as DPLL. • Key issue: a random walk needs O(N²) flips to propagate dependencies among variables, while unit propagation in DPLL takes only O(N).

  22. Overview • Random walk strategies: unbiased random walk, biased random walk • Chain formulas: binary chains, ternary chains • Practical problems • Conclusion and future directions

  23. Unbiased (Pure) Random Walk for SAT

  Procedure Random-Walk (RW)
    Start with a random truth assignment
    Repeat
      c := an unsatisfied clause chosen at random
      x := a variable in c chosen at random
      flip the truth value of x
    Until a satisfying assignment is found

  24. Unbiased RW on any satisfiable 2SAT formula • Given a satisfiable 2SAT formula with n variables, a satisfying assignment will be reached by unbiased RW in O(n²) steps with high probability (Papadimitriou, 1991). • Elegant proof! (next)

  25. Given a satisfiable 2-SAT formula F, RW starts with a random truth assignment A0. Consider an unsatisfied clause: (x_3 ∨ ¬x_4). A0 must have x_3 False and x_4 True (both "wrong"). A satisfying truth assignment, T, must have x_3 True or x_4 False (or both). Now, "flip" the truth value of x_3 or x_4. With (at least) 50% chance, the Hamming distance to the satisfying assignment T is reduced by 1. I.e., we're moving in the right direction! (Of course, with 50% (or less) chance we are moving in the wrong direction... doesn't matter!)

  26. [Figure: walk along the Hamming-distance axis between T and A0.] We have an unbiased random walk with a reflecting barrier at distance N from T (max Hamming distance) and an absorbing barrier (the satisfying assignment) at distance 0. We start at a Hamming distance of approx. ½N. Property of unbiased random walks: after N² flips, with high probability, we will hit the origin (the satisfying assignment). (Drunkard's walk) So, an O(N²) randomized algorithm (worst-case!) for 2-SAT.
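The N² hitting-time property is easy to check numerically (our own simulation; the reflecting barrier is modeled as a forced step down at N):

```python
import random

def hitting_time(n, start, rng):
    """Unbiased +/-1 walk on 0..n: absorbing barrier at 0,
    reflecting barrier at n. Returns steps until 0 is first hit."""
    pos, steps = start, 0
    while pos > 0:
        # step up with prob 1/2 unless at the reflecting barrier
        pos += 1 if (pos < n and rng.random() < 0.5) else -1
        steps += 1
    return steps

rng = random.Random(0)
N = 20
mean = sum(hitting_time(N, N // 2, rng) for _ in range(2000)) / 2000
# theory: E[T] from position k is k*(2N - k), i.e. 300 for N=20, k=10,
# which is Theta(N^2), matching the drunkard's-walk argument
```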

  27. Unfortunately, this does not work for k-SAT with k ≥ 3. Reason: example unsat clause: (x_1 ∨ ¬x_4 ∨ x_5); now there is only a 1/3 chance (worst-case) of making the right flip!

  28. Unbiased RW on 3SAT Formulas [Figure: walk between T and A0, biased away from T.] The random walk takes an exponential number of steps to reach 0.

  29. Comments on RW • Random walk is highly "myopic": it does not take into account any gradient of the objective function (= number of unsatisfied clauses)! Purely "local" fixes. • Can we make RW practical for SAT? Yes --- inject greedy bias into the walk → biased random walk.

  30. Biased Random Walk (1st minimal greedy bias)

  Procedure Random-Walk-with-Freebie (RWF)
    Start with a random truth assignment
    Repeat
      c := an unsatisfied clause chosen at random
      if there exists a variable x in c with break value = 0   // greedy bias
        flip the value of x (a "freebie" flip)
      else
        x := a variable in c chosen at random                  // pure walk
        flip the value of x
    Until a satisfying assignment is found

  break value == # of clauses that become unsatisfied because of the flip.

  31. Biased Random Walk (adding more greedy bias)

  Procedure WalkSat
    Start with a random truth assignment
    Repeat
      c := an unsatisfied clause chosen at random
      if there exists a variable x in c with break value = 0   // greedy bias
        flip the value of x (freebie move)
      else
        with probability p:                                    // pure walk
          x := a variable in c chosen at random
          flip the value of x
        with probability (1-p):                                // more greedy bias
          x := a variable in c with the smallest break value
          flip the value of x
    Until a satisfying assignment is found

  Note: tune the parameter p.
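A Python sketch of the WalkSat procedure (our own illustration; real implementations cache break values incrementally rather than recomputing them on every flip):

```python
import random

def walksat(clauses, n_vars, p=0.5, max_flips=10**6, seed=0):
    """WalkSat-style biased random walk; DIMACS-style integer literals."""
    rng = random.Random(seed)
    a = [None] + [rng.random() < 0.5 for _ in range(n_vars)]

    def sat(c):
        return any(a[abs(l)] == (l > 0) for l in c)

    def break_value(v):
        # clauses whose only currently-true literal is v's:
        # exactly these become unsatisfied if v is flipped
        b = 0
        for c in clauses:
            true_lits = [l for l in c if a[abs(l)] == (l > 0)]
            if len(true_lits) == 1 and abs(true_lits[0]) == v:
                b += 1
        return b

    for flips in range(max_flips):
        unsat = [c for c in clauses if not sat(c)]
        if not unsat:
            return a, flips
        c = rng.choice(unsat)
        breaks = [(break_value(abs(l)), abs(l)) for l in c]
        zero = [v for b, v in breaks if b == 0]
        if zero:                        # greedy bias: freebie flip
            v = rng.choice(zero)
        elif rng.random() < p:          # pure walk step
            v = abs(rng.choice(c))
        else:                           # flip the smallest-break variable
            v = min(breaks)[1]
        a[v] = not a[v]
    return None, max_flips
```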

  32. Chain Formulas • To better understand the behavior of pure and biased RW procedures on SAT instances, we introduce chain formulas. • These formulas have long chains of dependencies between variables. • They effectively demonstrate the extreme properties of RW-style algorithms.

  33. Binary Chains • Consider the 2-SAT chain formulas F2chain: x1 → x2, x2 → x3, …, xn-1 → xn, xn → x1. Note: only two satisfying assignments --- TTTTTT… and FFFFFF…
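F2chain is easy to generate (a sketch; each implication x → y is encoded as the clause ¬x ∨ y in DIMACS-style integer literals):

```python
def f2chain(n):
    """2-SAT chain x1 -> x2, ..., x_{n-1} -> x_n, x_n -> x1."""
    return [[-i, i + 1] for i in range(1, n)] + [[-n, 1]]
```

Because the implications form a cycle through all variables, any satisfying assignment must set every variable to the same value, which is why only the all-true and all-false assignments satisfy the formula.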

  34. Binary Chains Walk is exactly balanced.

  35. Binary Chains • We obtain the following theorem. Theorem 1. With high probability, the RW procedure takes Θ(n²) steps to find a satisfying assignment of F2chain. • DPLL's unit propagation mechanism finds an assignment for F2chain in linear time. • Greedy bias does not help in this case: both RWF and WalkSat take Θ(n²) flips to reach a satisfying assignment on these formulas.

  36. Speeding up Random Walks on Binary Chains [Figure: pure binary chain vs. binary chain with redundancies (implied clauses).] Aside: note the small-world flavor (Watts & Strogatz 99, Kleinberg 00).

  37. Results: Speeding up Random Walks on Binary Chains [Table (not shown); *: empirical results, **: theoretical proof available.] With redundancies, the walk becomes almost like unit propagation.

  38. Ternary Chains In general, even a small bias in the wrong direction leads to exponential time to reach 0.

  39. Ternary Chains • Consider the formulas F3chain,low(i): x1, x2, x1 ∧ x2 → x3, …, x_low(i) ∧ x_{i-1} → x_i, …, x_low(n) ∧ x_{n-1} → x_n. Note: only one satisfying assignment: TTTTT… *These formulas are inspired by Prestwich [2001]
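A generator for F3chain (a sketch; each implication (x_low(i) ∧ x_{i-1}) → x_i becomes the 3-clause ¬x_low(i) ∨ ¬x_{i-1} ∨ x_i in DIMACS-style literals):

```python
def f3chain(n, low):
    """Ternary chain: unit clauses x1, x2, plus
    (x_low(i) ^ x_{i-1}) -> x_i for i = 3..n.
    `low` maps i to an earlier index, e.g. low(i) = i - 2."""
    return [[1], [2]] + [[-low(i), -(i - 1), i] for i in range(3, n + 1)]
```

The units force x1 and x2 true, and each implication then forces the next variable true, so all-true is the unique satisfying assignment.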

  40. Ternary Chains [Figure: chains with short, medium, and long links.] The effect of x1 and x2 needs to propagate through the chain.

  41. Theoretical Results on 3-SAT Chains [Table (not shown).] low(i) captures how far back the clauses reach.

  42. Proof • The proofs of these claims are quite involved and are available at http://www.cs.cornell.edu/home/selman/weiwei.pdf • Here, just the intuitions. • Each RW process on these formulas can be decomposed into a series of decoupled, simpler random walks.

  43. Simple case: a single "0". z_i is the assignment with all 1's except for a 0 in the ith position. [Figure: from 111101, each of the three possible moves (e.g. to 111001, 101101, or 111111) occurs with probability 1/3.] The expected # of steps decomposes, e.g.: exp. # steps from 111011 + exp. # steps from 111101.

  44. Recurrence Relations Our formula structure gives us:

  E(f(z_i)) = (E(f(z_low(i))) + E(f(z_i)) + 1) · 1/3 + (E(f(z_{i-1})) + E(f(z_i)) + 1) · 1/3 + 1 · 1/3

  ⟹ E(f(z_i)) = E(f(z_low(i))) + E(f(z_{i-1})) + 3

  45. Recurrence Relations • Solving this recurrence for different low(i)'s [table not shown] leads to the complexity results for the overall RW.
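The recurrence can also be iterated numerically for different low(i) choices (our own sketch; the base values E(1) = E(2) = 0 are an assumption for illustration):

```python
def expected_steps(n, low):
    """Iterate E(i) = E(low(i)) + E(i-1) + 3 with E(1) = E(2) = 0."""
    E = {1: 0.0, 2: 0.0}
    for i in range(3, n + 1):
        E[i] = E[low(i)] + E[i - 1] + 3
    return E[n]

short = expected_steps(30, lambda i: i - 2)    # Fibonacci-like: exponential
medium = expected_steps(30, lambda i: i // 2)  # in-between growth
long_ = expected_steps(30, lambda i: 1)        # 3*(n-2): linear
```

With links all the way back to x1 the recurrence collapses to E(n) = 3(n − 2), while the short-link chain satisfies a Fibonacci-type recurrence and blows up exponentially, matching the intuition that long-range dependencies are what make the walk fast.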

  46. Decompose: multiple "0"s. [Figure: a walk from the start assignment to the satisfying assignment 111111, decomposed into a series of single-"0" walks over 6-bit assignments.]

  47. Results for RW on 3-SAT chains.

  48. Recap: Chain Formula Results • Adding implied constraints capturing long-range dependencies speeds up random walks on the binary chain to near-linear time. • Certain long-range dependencies in 3-SAT lead to poly-time convergence of random walks. • Can we take advantage of these results on practical problem instances? Yes! (next)

  49. Results on Practical Benchmarks • Idea: use a formula preprocessor to uncover long-range dependencies and add clauses capturing those dependencies to the formula. • We adapted Brafman's formula preprocessor to do so (Brafman 2001). • Experiments on recent verification benchmarks (Velev 1999).

  50. Empirical Results SSS-SAT-1.0 instances (Velev 1999); 100 total. [Table (not shown): results at a level of redundancy added (20% near optimal).]
