Heuristics for Efficient SAT Solving






  1. Heuristics for Efficient SAT Solving. As implemented in GRASP, Chaff and GSAT.

  2. A Basic SAT algorithm (1/3) Given φ in CNF: (x∨y∨z), (¬x∨y), (¬y∨z), (¬x∨¬y∨¬z). The solver interleaves three steps: Decide(), Deduce(), Resolve_Conflict(). [Figure: a decision tree over x, y, z, with pruned branches marked X.]

  3. A Basic SAT algorithm (2/3)
While (true) {
  if (!Decide()) return (SAT);      // choose the next variable and value; returns False if all variables are assigned
  while (!Deduce())                 // apply the unit clause rule; returns False on reaching a conflict
    if (!Resolve_Conflict()) return (UNSAT);   // backtrack until no conflict; returns False if impossible
}

  4. A Basic SAT algorithm (3/3) bool Resolve_Conflict() { d = most recent decision not 'tried both ways'; if (d == NULL) return FALSE; flip the value of d; mark d as 'tried both ways'; undo invalidated implications; return TRUE; }
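The loop of slides 2-4 can be sketched as a small runnable solver. The representation (clauses as lists of signed integers, negative = negated variable) and all helper names are my own illustration of chronological backtracking, not GRASP's or Chaff's actual implementation:

```python
def solve(clauses, num_vars):
    assignment = {}   # var -> bool
    trail = []        # [var, is_decision, tried_both_ways]

    def value(lit):
        v = abs(lit)
        if v not in assignment:
            return None
        return assignment[v] if lit > 0 else not assignment[v]

    def decide():
        # Choose the next variable and value; False if all are assigned.
        for v in range(1, num_vars + 1):
            if v not in assignment:
                assignment[v] = True
                trail.append([v, True, False])
                return True
        return False

    def deduce():
        # Apply the unit clause rule to a fixpoint; False on conflict.
        changed = True
        while changed:
            changed = False
            for clause in clauses:
                vals = [value(l) for l in clause]
                if any(x is True for x in vals):
                    continue                      # clause satisfied
                free = [l for l, x in zip(clause, vals) if x is None]
                if not free:
                    return False                  # conflict
                if len(free) == 1:                # unit clause: imply literal
                    lit = free[0]
                    assignment[abs(lit)] = lit > 0
                    trail.append([abs(lit), False, False])
                    changed = True
        return True

    def resolve_conflict():
        # Backtrack to the most recent decision not tried both ways.
        while trail:
            var, is_decision, tried = trail[-1]
            if is_decision and not tried:
                assignment[var] = not assignment[var]   # flip the decision
                trail[-1][2] = True                     # mark tried both ways
                return True
            trail.pop()                 # undo invalidated implications
            del assignment[var]
        return False

    while True:
        if not decide():
            return True                 # SAT
        while not deduce():
            if not resolve_conflict():
                return False            # UNSAT
```

Running it on the slide's formula, `solve([[1, 2, 3], [-1, 2], [-2, 3], [-1, -2, -3]], 3)` finds a satisfying assignment (e.g. x = 0, y = 0, z = 1).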

  5. Chaff enjoyed a 'profiler-based' development process. A large set of CNF instances was used to identify the bottlenecks of classical SAT solving. First conclusion: SAT solvers spend almost 90% of their time in Deduction(). Second conclusion: dynamic decision strategies impose a large overhead. Deduction() discovers newly implied variables and conflicts. How can this be done efficiently?

  6. Grasp implements Deduction() with counters (1/2) Hold 4 counters for each clause ω: Val1_neg_literals(ω) - # of negative literals assigned 0 in ω. Val1_pos_literals(ω) - # of positive literals assigned 1 in ω. Val0_neg_literals(ω) - # of negative literals assigned 1 in ω. Val0_pos_literals(ω) - # of positive literals assigned 0 in ω. Define: val1(ω) = Val1_pos_literals(ω) + Val1_neg_literals(ω); val0(ω) = Val0_pos_literals(ω) + Val0_neg_literals(ω); |ω| = # literals in ω.

  7. Grasp implements Deduction() with counters (2/2) ω is satisfied iff val1(ω) > 0. ω is unsatisfied iff val0(ω) = |ω|. ω is unit iff val1(ω) = 0 and val0(ω) = |ω| - 1. ω is unresolved iff val1(ω) = 0 and val0(ω) < |ω| - 1. Every assignment to a variable x results in updating the counters of all the clauses that contain x. Backtracking: same complexity.
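The four clause states of slide 7 can be checked directly from the val1/val0 counts; a minimal sketch, using the same signed-integer clause representation as before (function name and representation are illustrative, not Grasp's):

```python
def clause_state(clause, assignment):
    """Classify a clause by the counters val1/val0 from slides 6-7."""
    val1 = val0 = 0
    for lit in clause:
        var = abs(lit)
        if var not in assignment:
            continue                   # unassigned literal: counts in neither
        lit_value = assignment[var] if lit > 0 else not assignment[var]
        if lit_value:
            val1 += 1                  # literal evaluates to 1
        else:
            val0 += 1                  # literal evaluates to 0
    n = len(clause)                    # |clause|
    if val1 > 0:
        return "satisfied"
    if val0 == n:
        return "unsatisfied"
    if val0 == n - 1:
        return "unit"                  # val1 == 0 holds here
    return "unresolved"
```

Note that in a real counter-based solver these counts are maintained incrementally on every assignment and backtrack, as the slide says, rather than recomputed per clause as in this sketch.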

  8. Chaff implements Deduction() with a pair of observers (1/3) Observation: during Deduction(), we are only interested in newly implied variables and conflicts. These occur only when the number of literals in ω with value 'false' is greater than |ω| - 2, i.e. val0(ω) > |ω| - 2. Define two 'observers', O1(ω) and O2(ω): O1(ω) and O2(ω) point to two distinct literals of ω which are not 'false'. ω becomes unit if updating one observer leads to O1(ω) = O2(ω). Chaff visits clause ω only if O1(ω) or O2(ω) becomes 'false'.

  9. Chaff implements Deduction() with a pair of observers (2/3) [Figure: observers O1 and O2 moving along a clause as v[2], v[1], v[5], v[4] are assigned; the clause becomes unit, forcing v[1] = 1, and backtracking then un-assigns v[4] and v[5].] Both observers of an implied clause are on the highest decision level present in the clause. Therefore, backtracking will un-assign them first. Conclusion: when backtracking, the observers stay in place. Backtracking: zero complexity.

  10. Chaff implements Deduction() with a pair of observers (3/3) The choice of observed literals is important. The best strategy is to observe the least frequently updated variables. The observers method has a learning curve in this respect: 1. The initial observers are chosen arbitrarily. 2. The process shifts the observers away from variables that were recently updated, since these variables will most probably be reassigned in a short time. In our example: the next time v[5] is updated, it will point to a significantly smaller set of clauses.
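The observer update of slides 8-10 can be sketched as follows: when a watched literal becomes false, try to move that watch to another non-false literal; if none exists, the other watch is either implied (unit clause) or already false (conflict). The representation (a clause as a list of literals plus a two-element list of watched indices) is illustrative, not Chaff's actual data structures:

```python
def on_watch_false(clause, watches, false_idx, value):
    """clause: list of signed-integer literals.
    watches: mutable pair [i, j] of indices into clause.
    false_idx: which watch slot (0 or 1) just became false.
    value(lit): True / False / None under the current assignment."""
    other = watches[1 - false_idx]
    # Try to shift the watch to a literal that is not false and not watched.
    for k, lit in enumerate(clause):
        if k not in watches and value(lit) is not False:
            watches[false_idx] = k
            return ("moved", None)
    # No replacement: the clause has at most one non-false literal left.
    if value(clause[other]) is False:
        return ("conflict", None)
    return ("unit", clause[other])     # the other watch must be made true
```

The key property from slide 9 is visible here: nothing in this update ever needs to be undone on backtracking, because un-assigning variables can only make watched literals non-false again.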

  11. A (rough) quantitative comparison. Let l = # literals, v = # vars, c = # clauses, d = # decisions + implied vars. The ratio of updating operations is estimated as: Grasp performs (assignments + backtracks) × (# clauses per var) counter updates, while Chaff performs (negative assignments) × (probability of observing) × (# clauses per var) observer updates, where the probability of observing depends on the average clause length (e.g. for 3-SAT it is 1/6). Several factors are missing from these estimates: 1) the complexity of each updating operation; 2) the learning curve of the observers.

  12. Decision heuristics (1/3) Grasp's default (DLIS) - Choose the literal which appears most frequently in unresolved clauses. (Requires l queries for each decision).

  13. Decision heuristics (2/3) Chaff’s default: (VSIDS) - (Variable State Independent Decaying Sum) 1. Each variable in each polarity has a counter initialized to 0. 2. When a clause is added, the counters are updated. 3. The unassigned variable with the highest counter is chosen. 4. Periodically, all the counters are divided by a constant. • Chaff holds a list of unassigned variables sorted by the counter value. • Updates are needed only when adding conflict clauses. • Thus - decision is made in constant time.
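The four VSIDS steps above can be sketched in a few lines. The decay divisor and period below are illustrative choices, not Chaff's actual constants, and a real implementation keeps the literals in a sorted list so that picking is constant-time, as the slide notes:

```python
class VSIDS:
    def __init__(self, num_vars, decay=2, period=256):
        # 1. One counter per literal (each variable in each polarity).
        self.score = {lit: 0 for v in range(1, num_vars + 1)
                      for lit in (v, -v)}
        self.decay, self.period, self.added = decay, period, 0

    def add_clause(self, clause):
        # 2. When a clause is added, bump the counters of its literals.
        for lit in clause:
            self.score[lit] += 1
        self.added += 1
        # 4. Periodically divide all counters by a constant.
        if self.added % self.period == 0:
            for lit in self.score:
                self.score[lit] //= self.decay

    def pick(self, assignment):
        # 3. Choose the unassigned literal with the highest counter.
        free = [l for l in self.score if abs(l) not in assignment]
        return max(free, key=self.score.get) if free else None
```

Because counters are bumped only when (conflict) clauses are added and decayed periodically, recent conflicts dominate the scores, which is the 'conflict-driven' behavior slide 14 describes.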

  14. Decision heuristics (3/3) VSIDS is a ‘quasi-static’ strategy: - static because it doesn’t depend on var state - dynamic because it gradually changes. Variables that appear in recent conflicts have higher priority. This strategy is a conflict-driven decision strategy. “..employing this strategy dramatically (i.e. an order of magnitude) improved performance ... “

  15. GSAT: a different approach to SAT solving. Given a CNF formula φ, choose max_tries and max_flips.
for i = 1 to max_tries {
  T := randomly generated truth assignment
  for j = 1 to max_flips {
    if T satisfies φ return TRUE
    choose v s.t. flipping v's value gives the largest increase in the # of satisfied clauses (break ties randomly)
    T := T with v's assignment flipped
  }
}
return FALSE
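The GSAT loop above is directly runnable; a minimal sketch, again using signed-integer clauses (the scoring helper and parameter names are my own):

```python
import random

def num_satisfied(clauses, t):
    """Count clauses with at least one true literal under assignment t."""
    return sum(
        any((t[abs(l)] if l > 0 else not t[abs(l)]) for l in clause)
        for clause in clauses)

def gsat(clauses, num_vars, max_tries, max_flips, rng=random):
    for _ in range(max_tries):
        # T := randomly generated truth assignment
        t = {v: rng.random() < 0.5 for v in range(1, num_vars + 1)}
        for _ in range(max_flips):
            if num_satisfied(clauses, t) == len(clauses):
                return t                       # T satisfies the formula
            best, best_vars = -1, []
            for v in range(1, num_vars + 1):   # score every possible flip
                t[v] = not t[v]
                s = num_satisfied(clauses, t)
                t[v] = not t[v]                # undo the trial flip
                if s > best:
                    best, best_vars = s, [v]
                elif s == best:
                    best_vars.append(v)
            v = rng.choice(best_vars)          # break ties randomly
            t[v] = not t[v]
    return None                                # give up after max_tries
```

Note that GSAT is incomplete: returning None after max_tries does not prove unsatisfiability, unlike the UNSAT answer of the backtracking solvers above.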

  16. Improvement # 1: clause weights. Initial weight of each clause: 1. Increase the weight of unsatisfied clauses by k. Choose v according to the maximal increase in weight. Clause weights are another example of a conflict-driven decision strategy.

  17. Improvement # 2: Averaging-in Q: Can we reuse information gathered in previous tries in order to speed up the search? A: Yes! Rather than choosing T randomly each time, repeat 'good assignments' and choose the rest randomly.

  18. Improvement # 2: Averaging-in (cont'd) Let X1, X2 and X3 be equally wide bit vectors. Define a function bit_average : X1 × X2 → X3 as follows: b3i := b1i if b1i = b2i, and a random bit otherwise (where bji is the i-th bit in Xj, j ∈ {1,2,3}).

  19. Improvement # 2: Averaging-in (cont'd) Let T_i^init be the initial assignment (T) in cycle i. Let T_i^best be the assignment with the highest # of satisfied clauses in cycle i. T_1^init := random assignment. T_2^init := random assignment. For all i > 2, T_i^init := bit_average(T_{i-1}^best, T_{i-2}^best).
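The bit_average function of slide 18 is a one-liner over bit vectors represented as lists of 0/1 (this representation is my own choice):

```python
import random

def bit_average(x1, x2, rng=random):
    """b3_i := b1_i where b1_i == b2_i, a random bit otherwise."""
    return [b1 if b1 == b2 else rng.randint(0, 1)
            for b1, b2 in zip(x1, x2)]
```

In the averaging-in scheme of slide 19, bits on which the two previous best assignments agree are kept (the 'good assignments' part), while disagreement bits are re-randomized to preserve exploration.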
