The Evergreen Project: The Promise of Polynomials to Boost CSP/SAT Techniques*


Karl J. Lieberherr

Northeastern University

Boston

joint work with Ahmed Abdelmeged, Christine Hang and Daniel Rinehart

Title inspired by a paper by Carla Gomes / David Shmoys

We invent a simple game, called the Evergreen Game, which is about generating and solving Boolean MAX-CSP problems. The fallouts from the Evergreen Game are surprising:

- Although the game is about constructing and solving MAX-CSP problems, simple, efficient algorithms are sufficient to guarantee a draw. The best game-playing strategy leads to a significant reduction of the huge search space for both formula generation and solving.

- Fallouts (continued):
- The Evergreen Game shows us how to systematically translate a CSP formula into a polynomial that is fundamental to playing the game well.
- We have some (but incomplete) evidence that these polynomials are useful for efficient MAX-CSP as well as MAX-SAT and SAT solvers.

- Introduction
- The Evergreen Game
- The Evergreen Player as Preprocessor
- Some Experimental Results

- SAT: classic problem in complexity theory
- SAT & MAX-SAT solvers: work on CNFs (a multi-set of disjunctions).
- Boolean CSP: constraint satisfaction problem where each constraint uses a Boolean relation.
- e.g. the Boolean relation 1in3(x y z) is satisfied iff exactly one of its arguments is true.
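Since the relation-numbering convention matters throughout the talk, here is a small sketch (the function names are illustrative, not from the talk) of how a rank-3 relation gets its integer number:

```python
def relation_number(pred, rank=3):
    """Encode a Boolean relation as an int: bit i of the number is set
    iff the relation is satisfied by the i-th row of its truth table
    (rows ordered 000, 001, ..., 111; leftmost argument = high bit)."""
    num = 0
    for row in range(2 ** rank):
        args = [(row >> (rank - 1 - j)) & 1 for j in range(rank)]
        if pred(*args):
            num |= 1 << row
    return num

# 1in3 is satisfied exactly on rows 001, 010, 100, i.e. bits 1, 2, 4.
one_in_three = lambda x, y, z: x + y + z == 1
print(relation_number(one_in_three))  # -> 22 (= 2 + 4 + 16)
```

This reproduces the "1in3 has number 22" convention used in the examples below.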

- Boolean MAX-CSP: an instance is a multi-set of constraints.

- Boolean MAX-CSP(G) for rank d, G = set of relations of rank d
- Input
- Input = Bag of Constraints = CSP(G) instance
- Constraint = Relation + Set of Variables
- Relation = int // relation number < 2 ^ (2 ^ d), relation in G
- Variable = int

- Output
- (0,1) assignment to variables which maximizes the number of satisfied constraints.

- Input
- Example Input: G = {22} of rank 3. H =
- 22:1 2 3 0
- 22:1 2 4 0
- 22:1 3 4 0

1in3 has number 22

M = {1 !2 !3 !4} satisfies all

MAX-CSP(G,f):

Given a CSP(G) instance H expressed in n variables which may assume only the values 0 or 1, find an assignment to the n variables which satisfies at least the fraction f of the constraints in H.

Example: G = {22} of rank 3

MAX-CSP({22},f): H =

22:1 2 3 0
22:1 2 4 0
22:1 3 4 0
22:2 3 4 0

What is the highest value of f for which this H is in MAX-CSP({22},f)?
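Because the instance above is tiny, the question can be settled by brute force. The sketch below (names are illustrative) enumerates all 16 assignments and finds the best fraction:

```python
from itertools import product

# The four 1in3 constraints of the example instance H (variables 1..4).
constraints = [(1, 2, 3), (1, 2, 4), (1, 3, 4), (2, 3, 4)]

def sat_fraction(assign):
    """Fraction of constraints with exactly one true variable (relation 22)."""
    ok = sum(1 for c in constraints if sum(assign[v] for v in c) == 1)
    return ok / len(constraints)

best = max(sat_fraction(dict(zip(range(1, 5), bits)))
           for bits in product((0, 1), repeat=4))
print(best)  # -> 0.75
```

Setting exactly one variable to true satisfies the three constraints containing it, so the highest value of f here is 3/4.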

- Introduction
- The Evergreen Game
- The Evergreen Player as Preprocessor
- Some Experimental Results

- The Evergreen Game is played by two players, Anna and Bob, who take turns creating and solving CSP formulae, paying each other a percentage of a wager based on the fraction of constraints satisfied.
- Let the wager w be 1 million dollars and the constraints be limited to Gamma = {OR(x,y), NOT(x)}.

- Anna starts by constructing FInitial =
- {100: NOT(x), 150: NOT(y), 200: OR(x,y)}.

- Bob tries to find an assignment that satisfies the largest possible fraction of constraints. For example, the assignment {x=true, y=false} will satisfy (150+200)/450 approx 0.78. Anna then pays Bob 0.78 million dollars (w*0.78).

- Bob now constructs a formula that Anna solves; Bob then pays Anna the fraction of the wager corresponding to the fraction of constraints she satisfies.

- Now Bob constructs a formula for Anna:
{3: NOT(x),
3: NOT(y),
2: NOT(z),
1: OR(x, y),
1: OR(x, z),
1: OR(y, z)}

- The best assignment that Anna finds is {x=false, y=false, z=true}, which satisfies 8 of the 11 weighted constraints, about the fraction 0.72.
- Bob keeps 0.06 million in his pocket.
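Both rounds of the example can be checked by exhaustive search over the few variables involved. This sketch (names and encoding are illustrative) scores weighted clauses and maximizes over all assignments:

```python
from itertools import product

# Weighted formulas from the game, as (weight, clause) pairs; a clause is
# a list of (variable, polarity) literals, satisfied if any literal matches.
anna = [(100, [('x', False)]), (150, [('y', False)]),
        (200, [('x', True), ('y', True)])]
bob = [(3, [('x', False)]), (3, [('y', False)]), (2, [('z', False)]),
       (1, [('x', True), ('y', True)]), (1, [('x', True), ('z', True)]),
       (1, [('y', True), ('z', True)])]

def best_fraction(formula):
    """Maximum satisfiable fraction of total weight, by brute force."""
    vars_ = sorted({v for _, clause in formula for v, _ in clause})
    total = sum(w for w, _ in formula)
    best = 0.0
    for bits in product((False, True), repeat=len(vars_)):
        m = dict(zip(vars_, bits))
        sat = sum(w for w, clause in formula
                  if any(m[v] == pol for v, pol in clause))
        best = max(best, sat / total)
    return best

print(round(best_fraction(anna), 2))  # -> 0.78  (Bob's winnings)
print(round(best_fraction(bob), 2))   # -> 0.73  (Anna's winnings, 8/11)
```

This confirms the payoffs 350/450 ≈ 0.78 and 8/11 ≈ 0.72...0.73 quoted on the slide.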

Game Evergreen(2,2) has polynomial time algorithms Construct(2,2) and Solve(2,2) for Bob so that Bob can achieve a draw even if Anna has unlimited computational resources.

Two players: They agree on a protocol P1 to choose a set of m relations of rank r.

- The players use P1 to choose a set G of m relations of rank r.
- Player 1 constructs a CSP(G) formula H with 1000 variables and gives it to player 2 (1 second limit).
- Player 2 gets paid the fraction of constraints she can satisfy in H (100 seconds limit).
- Take 1 turn and stop.

How would you play this game intelligently?

- http://www.ccs.neu.edu/home/lieber/evergreen/game-life-science.html

sat(H,M) = fraction of satisfied constraints in CSP(G)-formula H by assignment M

tG = inf over all CSP(G) instances H of ( max over all (0,1) assignments M of sat(H,M) )

Find an assignment that is at least as good as tG:

Algorithm Evergreen Player (linear time).

- Introduction
- The Evergreen Game
- The Evergreen Player as Preprocessor
- Some Experimental Results

- We propose to put the Evergreen Player into action as a preprocessor for state-of-the-art SAT and MAX-SAT solvers.
- Use Evergreen Player to create a maximal assignment J for an input formula F.
- Feed n-map(F,J) to a fast solver.

- Introduction
- The Evergreen Game
- The Evergreen Player as Preprocessor
- Some Experimental Results

- Within the MAX3SAT benchmarks, there are 4 formulae on which Toolbar timed out at 1200 seconds (v70-c700.wcnf ~ v70-c1000.wcnf). Among these, the ratio got worse for 1 formula (0.9985795) and better for 3 of 4, the average being roughly 1.0099673.
- Within the 3 MAXCUT benchmarks tried, there is one formula on which Toolbar timed out at 1200 seconds. Its ratio is unchanged.
- Among all 20 benchmarks finished, 5 fall into the time-out category.

- On some benchmarks where no timeout occurred, preprocessing improved the running time (by factors of 2 and 3) in 50% of the cases.
- Preprocessing is very fast (linear time).

- Yices without preprocessing: v2000-c8400, average time = 888.048, average sat ratio = 0.947143
- Yices with preprocessing: v2000-c8400, average time = 0.0342615, average sat ratio = 1

- Worth investigating further.
- Suggests a cheap way to parallelize MAX-SAT and SAT solving: Run preprocessed and unpreprocessed version in parallel.

- The End.

14 : 1 2 0  14 : 3 4 0  14 : 5 6 0  7 : 1 3 0  7 : 1 5 0  7 : 3 5 0  7 : 2 4 0  7 : 2 6 0  7 : 4 6 0

14: 1 2 = or(1 2)

7: 1 3 = or(!1 !3)

(Plot of the abstract representation over k = 0..6 variables set true: excellent peripheral vision at the extremes, blurry vision in the middle; values between 7/9 and 8/9.)

- What do we learn from the abstract representation?
- set 1/3 of the variables to true (maximize).
- the best assignment will satisfy at least 7/9 constraints.
- very useful but the vision is blurry in the “middle”.

appmean = approximation of the mean (k variables true)

- Given a CSP(G)-instance H and an assignment N which satisfies fraction f in H.
- Is there an assignment that satisfies more than f?
- YES (we are done): absH(mb) > f
- MAYBE: the closer absH() comes to f, the better

- Is it worthwhile to set a certain literal k to 1 so that we can reach an assignment which satisfies more than f?
- YES (we are done): H1 = Hk=1, absH1(mb1) > f
- MAYBE: the closer absH1(mb1) comes to f, the better
- NO: UP or clause learning

- Is there an assignment that satisfies more than f?

absH = abstract representation of H

(Plots for H = 14:1 2 0  14:3 4 0  14:5 6 0  7:1 3 0  7:1 5 0  7:3 5 0  7:2 4 0  7:2 6 0  7:4 6 0 over k = 0..6: maximum 8/9, guarantee 7/9, minimum 3/9.

For H0, obtained by setting variable 1 to 0, over k = 0..5: values 6/7 = 8/9, 5/7 = 7/9, 3/7 = 5/9; the maximum assignment is away from the max bias: blurry. The abstract representation guarantees 7/9.

For H1, obtained by setting variable 1 to 1, over k = 0..5: values 7/8 = 8/9, 6/8 = 7/9, 2/7 = 3/8; clearly above 3/4; the maximum assignment is away from the max bias: blurry. The abstract representation guarantees 8/9.)

NEVER GOES DOWN: DERANDOMIZATION

rank 2

10: 1 = or(1)

7: 1 2 = or(!1 !2)

10 : 1 0
10 : 2 0
10 : 3 0
7 : 1 2 0
7 : 1 3 0
7 : 2 3 0

(Plot over k = 0..3 variables set true: 4/6 at k = 1, 2 and 3/6 at k = 0, 3.) The abstract representation guarantees 0.625 * 6 = 3.75, i.e. 4 constraints satisfied.

After n-mapping variable 1 the plot values are unchanged:

5 : 1 0
10 : 2 0
10 : 3 0
13 : 1 2 0
13 : 1 3 0
7 : 2 3 0

rank 2

5: 1 = or(!1)

13: 1 2 = or(1 !2)

The effect of n-map

- The abstract representation (the look-ahead polynomials) seems useful for guiding the search.
- The look-ahead polynomials give us averages: the guidance can be misleading because of outliers.
- But how can we compute the look-ahead polynomials?

- Introduction
- Look-forward
- Look-backward
- SPOT: how to use the look-ahead polynomials together with superresolution.

- Why?
- To make informed decisions

- How?
- Abstract representation based on look-ahead polynomials

- The look-ahead polynomial computes the expected fraction of satisfied constraints among all random assignments that are produced with bias p.

Variables 1, … , 40

22: 6 7 9 0

22: 12 27 38 0

Abstract representation: reduce the instance to the look-ahead polynomial 3p(1-p)^2 = B1,3(p) (a Bernstein polynomial).

- H is a CSP(G) instance.
- N is an arbitrary assignment.
- The look-ahead polynomial laH,N(p) computes the expected fraction of satisfied constraints of H when each variable in N is flipped with probability p.
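The definition above can be computed exactly by summing, per constraint, over the 8 local flip patterns. A sketch (names illustrative, not the talk's implementation):

```python
from itertools import product

def la(constraints, N, p, rank=3):
    """Look-ahead polynomial la_{H,N}(p): expected fraction of satisfied
    constraints when each variable of assignment N is independently
    flipped with probability p."""
    total = 0.0
    for rel, vars_ in constraints:
        for flips in product((0, 1), repeat=rank):
            prob = 1.0
            row = 0
            for j, v in enumerate(vars_):
                bit = N[v] ^ flips[j]          # value after possible flip
                prob *= p if flips[j] else (1 - p)
                row = (row << 1) | bit         # truth-table row index
            if (rel >> row) & 1:               # relation satisfied on row?
                total += prob
    return total / len(constraints)

# One 1in3 constraint, N = all false: la(p) = 3p(1-p)^2, so la(1/3) = 4/9.
H = [(22, (1, 2, 3))]
N = {1: 0, 2: 0, 3: 0}
print(abs(la(H, N, 1/3) - 4/9) < 1e-12)  # -> True
```

For a single relation-22 constraint and an all-false N this recovers the Bernstein polynomial 3p(1-p)^2 mentioned on the previous slide.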

G = {R1, … }, tR(F) = fraction of constraints in F that use R.

appSATR(x), over all R, forms a superset of the Bernstein polynomials (known from computer graphics; the look-ahead polynomial is a weighted sum of Bernstein polynomials), with x = p.

http://graphics.idav.ucdavis.edu/education/CAGDNotes/Bernstein-Polynomials.pdf

all the appSATR(x) polynomials

- Focus on purely mathematical question first
- Algorithmic solution will follow
- Mathematical question: Given a CSP(G) instance. For which fractions f is there always an assignment satisfying fraction f of the constraints? In which constraint systems is it impossible to satisfy many constraints?

MAX-CSP(G,f):

Given a CSP(G) instance H expressed in n variables which may assume only the values 0 or 1, find an assignment to the n variables which satisfies at least the fraction f of the constraints in H.

Example: G = {22} of rank 3

MAX-CSP({22},f):

22:1 2 3 0

22:1 2 4 0

22:1 3 4 0

22: 2 3 4 0

MAX-CSP({22},f):

For f ≤ u: the problem always has a solution.

For f ≥ u + e (e > 0): the problem does not always have a solution.

(Diagram: a scale from 0 to 1; below u, always a solution (fluid); above u, not always (solid); u = critical transition point.)

- u = 4/9
- Use an optimally biased coin: bias 1/3 in this case.
- In general: a min-max problem.
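The optimal bias 1/3 and the value u = 4/9 can be checked numerically by maximizing the look-ahead polynomial of the 1in3 relation (a simple grid search; analytically the maximizer of 3p(1-p)^2 is p = 1/3):

```python
# Grid-search the bias p maximizing 3p(1-p)^2, the expected satisfied
# fraction for relation 22 under a p-biased random assignment.
la = lambda p: 3 * p * (1 - p) ** 2
best_p = max((i / 10**5 for i in range(10**5 + 1)), key=la)
print(round(best_p, 3), round(la(best_p), 4))  # -> 0.333 0.4444
```

So a coin with bias 1/3 guarantees 4/9 of the constraints in expectation, matching u = 4/9.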

(Diagram: reducing relation 22, i.e. setting one of its variables (positions 1..3) to 0 or 1, expands 22 into 6 additional relations, among them 60, 240, 3, 15, 255.)

(Diagram: under n-maps, 22 is expanded into 7 additional relations: 41, 134, 73, 146, 104, 97, 148.)

- N-mapped vars (2 1 0) | Relation #
  0 0 0 | 22
  0 0 1 | 41
  0 1 0 | 73
  1 0 0 | 97
  0 1 1 | 134
  1 0 1 | 146
  1 1 0 | 148
  1 1 1 | 104
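The n-map table for relation 22 can be reproduced mechanically: flipping a set of variables permutes the truth-table rows by XOR with the flip mask. A sketch (the function name is illustrative):

```python
def n_map(rel, flip_mask, rank=3):
    """Renumber a relation after flipping the variables in flip_mask:
    the new relation holds on row r iff the old one holds on r ^ flip_mask."""
    return sum(((rel >> (r ^ flip_mask)) & 1) << r for r in range(2 ** rank))

# Reproduce the table for relation 22 (1in3); mask bits are vars (2 1 0).
for mask in range(8):
    print(f"{mask:03b} -> {n_map(22, mask)}")
# 000 -> 22, 001 -> 41, 010 -> 73, 100 -> 97,
# 011 -> 134, 101 -> 146, 110 -> 148, 111 -> 104
```

Each output line matches a row of the table, e.g. flipping variable 0 maps 22's satisfying rows {001, 010, 100} to {000, 011, 101}, i.e. relation 41.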

MAX-CSP(G,f): For each finite set G of relations there exists an algebraic number tG:

For f ≤ tG: MAX-CSP(G,f) has a polynomial solution.

For f ≥ tG + e (e > 0): MAX-CSP(G,f) is NP-complete.

(Diagram: a scale from 0 to 1; above tG, hard (solid), NP-complete; below tG, easy (fluid), polynomial solution: use an optimally biased coin, derandomize, P-optimal; tG = critical transition point.)

due to Lieberherr/Specker (1979, 1982)

- Ladner [Lad 75]: if P != NP, then there are decision problems in NP that are neither NP-complete nor in P.
- It is conceivable that MAX-CSP(G,f) contains problems of such intermediate complexity.

MAX-CSP(G,f): For each finite set G of relations there exists an algebraic number tG:

For f ≤ tG: MAX-CSP(G,f) has a polynomial solution.

For f ≥ tG + e (e > 0): MAX-CSP(G,f) is NP-complete.

(Diagram: above tG, hard (solid), NP-complete: exponential, super-polynomial proofs ???, relies on clause learning. Below tG, easy (fluid), polynomial (finding an assignment): constant proofs (done statically using look-ahead polynomials), no clause learning. tG = critical transition point.)

Two players: They agree on a protocol P1 to choose a set of m relations of rank r.

- The players use P1 to choose a set G of m relations of rank r.
- Player 1 constructs a CSP(G) instance H with 1000 variables and gives it to player 2 (1 second limit).
- Player 2 gets paid the fraction of constraints she can satisfy in H (100 seconds limit).
- Take turns (go to 1).

- Rank 3: represent relations by the integer corresponding to the truth table in standard sorted order 000 – 111.
- Choose relations between 1 and 254 (exclude 0 and 255).
- Don't choose two odd numbers: all-false would satisfy all constraints.
- Don't choose both numbers above 128: all-true would satisfy all constraints.

For Evergreen(3,2)
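The two exclusion rules follow directly from the truth-table encoding: bit 0 of the relation number says whether all-false (row 000) satisfies the relation, and bit 7 whether all-true (row 111) does. A small sanity-check sketch (names illustrative):

```python
# Relation = rank-3 truth-table integer; row 0 = all-false, row 7 = all-true.
def satisfied_by(rel, assignment_row):
    """True iff the relation holds on the given truth-table row."""
    return (rel >> assignment_row) & 1 == 1

assert satisfied_by(21, 0)        # odd number: all-false satisfies it
assert satisfied_by(224, 7)       # number >= 128: all-true satisfies it
assert not satisfied_by(22, 0)    # 1in3 excludes all-false ...
assert not satisfied_by(22, 7)    # ... and all-true
```

So two odd relation numbers make all-false a trivial solution, and two numbers with bit 7 set make all-true one.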

sat(H,M) = fraction of satisfied constraints in CSP(G)-instance H by assignment M

tG = min over all CSP(G) instances H of ( max over all (0,1) assignments M of sat(H,M) )

- Solution to simpler problem implies solution to original problem.

sat(H,M,n) = fraction of satisfied constraints in CSP(G)-instance H by assignment M with n variables.

tG = lim (n to infinity) of min over all SYMMETRIC constraint systems H with n variables of ( max over all (0,1) assignments M to n variables of sat(H,M,n) )

- Instead of minimizing over all constraint systems it is sufficient to minimize over the symmetric constraint systems.

- Symmetric case is the worst-case: If in a symmetric constraint system the fraction f of constraints can be satisfied, then in any constraint system the fraction f can be satisfied.

(Diagram: a system over n variables and its n! permutations. If in the big symmetric system the fraction f is satisfied, then there must be at least one small system where the fraction f is satisfied.)

sat(H,M,n) = fraction of satisfied constraints in system H by assignment M with n variables.

tG = lim (n to infinity) of min over all SYMMETRIC constraint systems H with n variables of ( max over all (0,1) assignments M to n variables where the first k variables are set to 1 of sat(H,M,n) )

- The look-ahead polynomial look-forward approach has not been used in state-of-the-art MAX-SAT and Boolean MAX-CSP solvers.
- Often a fair coin is used. The optimally biased coin is often significantly better.

How the look-ahead polynomial depends on its context, the currently best assignment:

N0 = {!v1, !v2, !v3, !v4}

N0' = {v1, !v2, !v3, !v4}

- G = all relations used in SAT (Or): tG = 1/2 (easy)
- 2-satisfiable (disallow A and !A for any A): tG = (sqrt(5)-1)/2
- G = {R0,R1,R2,R3}; Rj: rank 3, exactly j of the 3 variables are true: tG = 1/4
- G(p,q) = {Rp,q = disjunctions containing at least p positive or q negative literals (p,q ≥ 1)}
- Let a be the solution of (1-x)^p = x^q in (0,1); then tG(p,q) = 1 - a^q.
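The G(p,q) formula can be checked numerically against the first two bullets: p = q = 1 gives a = 1/2 and tG = 1/2 (plain SAT), and p = 1, q = 2 gives the golden-ratio bound (sqrt(5)-1)/2 for 2-satisfiable formulas. A sketch (illustrative bisection, not the talk's method):

```python
def t_g(p, q):
    """t_G(p,q) = 1 - a^q, where a in (0,1) solves (1-x)^p = x^q."""
    lo, hi = 0.0, 1.0
    for _ in range(200):                  # bisection: (1-x)^p - x^q decreases
        mid = (lo + hi) / 2
        if (1 - mid) ** p > mid ** q:
            lo = mid
        else:
            hi = mid
    a = (lo + hi) / 2
    return 1 - a ** q

print(abs(t_g(1, 1) - 0.5) < 1e-9)                 # -> True
print(abs(t_g(1, 2) - (5 ** 0.5 - 1) / 2) < 1e-9)  # -> True
```

The second check works because a = (sqrt(5)-1)/2 satisfies a^2 = 1 - a, so 1 - a^2 = a.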

14 : 1 2 0  14 : 3 4 0  14 : 5 6 0  7 : 1 3 0  7 : 1 5 0  7 : 3 5 0  7 : 2 4 0  7 : 2 6 0  7 : 4 6 0

14: 1 2 = or(1 2)

7: 1 3 = or(!1 !3)

What is the look-ahead polynomial? (Plot as before: excellent peripheral vision at the extremes, blurry vision in the middle.)

- What do we learn from the abstract representation?
- set 1/3 of the variables to true (maximize).
- the best assignment will satisfy at least 7/9 constraints.
- very useful but the vision is blurred in the “middle”.

appmean = lookahead is an approximation of the true mean

- Thank You

- Introduction
- Look-forward
- Look-back
- SPOT: how to use the look-ahead polynomials with superresolution

- Look-forward based on look-ahead polynomials
- value-ordering
- variable-ordering

- Look-backward
- superresolution
- many different learning schemes developed by SAT community (different cuts of the implication graph)

- superresolution

SPOT defines a family of solvers that rely on look-ahead polynomials and (optimized) superresolvents.

- Given an assignment N which satisfies fraction f.
- Is there an assignment that satisfies more than f?
- YES (we are done): laH,N(mb) > f
- MAYBE: the closer laH,N() comes to f, the better

- Is it worthwhile to set a certain literal k to 1 so that we can reach an assignment which satisfies more than f?
- YES (we are done): H1 = UP*(Hk=1,N), laH1,N(mb1) > f
- MAYBE: the closer laH1,N() comes to f, the better
- NO: UP or clause learning

- Is there an assignment that satisfies more than f?

UP*(F,M): apply UP as often as possible after applying assignment M to F.

The problem: MAYBE happens frequently, especially when f is close to 1.

- Given F and the currently best assignment N.
- H1 = UP*(Hx=1,N)
- H0 = UP*(Hx=0,N)
- Choose x = 1 if laH1,N(mb1) ≥ laH0,N(mb0)

UP*(F,M) : apply UP as often as possible after applying assignment M to F

- Reduction: Hk=d (d = 0,1; k a literal)
- n-map(H,k)
- Connection: abs(n-map(H,k)k=d) = abs(Hk=!d)
- The abstract representation can achieve its maximum either by repeated reductions or by repeated n-maps.

- How to use the look-ahead polynomials:
- Choose top k (number of true variables).
- Choose among the top 5 candidates (4 is the winner).

(Diagram: candidate values of k ranked 2, 1, 4, 3, 5.)

- There is a member U of the SPOT family of solvers:
- U finds a maximum assignment "quickly".
- But U spends a long time proving that it is the maximum assignment.

- Stopping rule problem.

- There is a member U of the SPOT family of solvers:
- U finds the maximum assignment after at most |F|^c superresolution steps, where c is a constant.
- Any superresolution proof of maximality is probably superpolynomial.

(Plot: percentage satisfied, from 0 to 1, vs. number of tries (proof steps), starting from a random assignment N and rising toward the maximum. With two helpers, 1. look-ahead polynomial and 2. superresolvents, progress continues past tG; with only the look-ahead polynomial as helper, progress stalls near tG, where the look-ahead polynomials become totally useless !?!; with only superresolvents as helper: stopping rule problem!)

(The same plot for a symmetric instance: the curve laF,N(mb) flattens and the look-ahead polynomials become totally useless !?!; with only superresolvents as helper, the stopping rule problem remains.)

Some fast MAX-CSP solver MC

(Plot: percentage satisfied vs. number of tries, from a random assignment N up to an improved assignment N1 with value laF,N1(mb). How often does this happen in practice: MC has to search using clause learning, while the look-ahead polynomial can construct a better assignment without search.)

Intuition: the better the assignment N1, the less likely it is that the look-ahead polynomial improves N1.

- New: Superresolution for MAX-CSP
- New: Integration of look-ahead polynomials with superresolution
- Old: Superresolution for SAT (1977)
- Old: Look-ahead polynomials (1983)

- Rich literature on clause learning in SAT and CSP solver domain. Superresolution is the most general form of clause learning with restarts.
- Papers on look-ahead polynomials and superresolution: http://www.ccs.neu.edu/research/demeter/papers/publications.html

- Useful unpublished paper on look-ahead polynomials: http://www.ccs.neu.edu/research/demeter/biblio/partial-sat-II.html
- Technical report on the topic of this talk: http://www.ccs.neu.edu/research/demeter/biblio/POptMAXCSP.html

- Exploring best combination of look-forward and look-back techniques.
- Find all maximum-assignments or estimate their number.
- Robustness of maximum assignments.
- Are our MAX-CSP solvers useful for reasoning about biological pathways?

- Presented SPOT, a family of MAX-CSP solvers based on look-ahead polynomials and non-chronological backtracking.
- SPOT has a desirable property: P-optimal.
- SPOT can be implemented very efficiently.
- Preliminary experimental results are encouraging. A lot more work is needed to assess the practical value of the look-ahead polynomials.

appmean is an approximation of the true mean

The Evergreen Project: How To Learn From Mistakes Caused by Blurry Vision in MAX-CSP Solving

Karl J. Lieberherr, Northeastern University, Boston

joint work with Ahmed Abdelmeged, Christine Hang and Daniel Rinehart

MAX-CSP: Superresolution and P-Optimality

Karl J. Lieberherr, Northeastern University, Boston

joint work with Ahmed Abdelmeged, Christine Hang and Daniel Rinehart

x1 + x2 + x3 = 1
x1 + x2 + x4 = 1
x1 + x3 + x4 = 1
x1 + x3 + x4 = 1
x1 + x2 + x5 = 1
x1 + x3 + x5 = 1
x2 + x3 + x5 = 1

This system can satisfy 6/7 of its equations.
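The 6/7 bound for this system of exactly-one equations can be verified by enumerating all 32 assignments (a sketch; variable names as on the slide):

```python
from itertools import product

# The seven exactly-one ("1in3") equations over x1..x5 from the slide.
eqs = [(1, 2, 3), (1, 2, 4), (1, 3, 4), (1, 3, 4), (1, 2, 5), (1, 3, 5), (2, 3, 5)]

best = 0
for bits in product((0, 1), repeat=5):
    x = dict(zip(range(1, 6), bits))
    best = max(best, sum(1 for e in eqs if sum(x[v] for v in e) == 1))
print(best, "of", len(eqs))  # -> 6 of 7
```

Setting only x1 = 1 satisfies the six equations containing x1 and misses the last one; no assignment satisfies all seven.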

- Unit-Propagation (UP):
M || F || SR || N → Mk || F || SR || N

- if k is undefined in M, and
- unsat(SR,M¬k) > 0 or unsat(F,M¬k) ≥ unsat(F,N).

- Decide (D):
M || F || SR || N → Mkd || F || SR || N

- if k is undefined in M, and
- v(k) occurs in some constraint of F.

- Update:
M || F || SR || N → M || F || SR || M

- if M is complete, and
- unsat(F,M) < unsat(F,N).

- Restart:
M || F || SR || N → { } || F || SR || N

- Finale:
M || F || SR || N → M || F || SR || N

- if Φ ∈ SR or unsat(F,N) = 0.

- Semi-Superresolution (SSR):
NewSR = ∨(¬k), where k ∈ Md

M || F || SR || N → M || F || SR, NewSR || N

- if unsat(SR,M) > 0 or unsat(F,M) ≥ unsat(F,N).

- Introduction
- Look-forward
- Look-back
- Packed Truth Tables
- SPOT: how to use the look-ahead polynomials

- The look-ahead polynomial can be computed efficiently. Requires efficient truth table analysis.
- Reduction of an instance must be efficient.
- Efficiently compute the forced variables.
- Each relation has a unique representation.

22 254

int isForced(int variablePosition)

boolean isIrrelevant(int variablePosition)

int nMap(int variablePosition)

int numberOfRelevantVariables()

int q(int s)

int reduce(int variablePosition, int value)

int rename(int permutationSemantics, int... permutation)

- Lieberherr 1977:
- edge from l1 to l2 is labeled by the set of already forced literals L so that l1 union L forces l2 because of a clause C.

- Beame 2004 (now the standard, due to Marques-Silva & Sakallah, 1996)
- edge from l1 to l2 is labeled by clause C. l1 is responsible for forcing l2 because of clause C.

The Evergreen Project: Assessing the Guidance of Look-Ahead Polynomials in MAX-CSP Solving

Karl J. Lieberherr

Northeastern University

Boston

joint work with Ahmed Abdelmeged, Christine Hang and Daniel Rinehart

- Introduction
- Look-forward
- Look-backward
- SPOT: how to use the look-ahead polynomials

- Why?
- to avoid past mistakes

- How?
- Transition system based on superresolution.
- Superresolution was first introduced for SAT, now we generalize it for MAX-CSP.

- Optimally biased coin technique based on look-ahead polynomials is “best-possible”.
- If we could improve it by a trillionth in polynomial time, then P=NP.
- We improve it now by learning new constraints that will influence the polynomial.

- Let’s go beyond what an optimally biased coin guarantees!
- Goal: satisfy the maximum number of constraints.
- Approach: Superresolution.
- When to apply: number of constraints guaranteed to be unsatisfied doesn’t decrease
- A mistake is made.

- Who to blame: a subset of the decision literals
- They are the culprits.

- How to penalize: add the disjunctions of their negations as a superresolvent
- The gang of culprits is watched.


- Unit-Propagation (UP):
M || F || SR || N → Mk || F || SR || N

- if k is undefined in M, and
- unsat(SR,M¬k) > 0 or unsat(F,M¬k) ≥ unsat(F,N).

old mistake(M¬k): unsat(SR,M¬k) > 0; new mistake(M¬k): unsat(F,M¬k) ≥ unsat(F,N)

mistake(M) = old mistake(M) or new mistake(M)

- Semi-Superresolution (SSR):
NewSR = ∨(¬k), where k ∈ Md

M || F || SRs || N → M || F || SRs, NewSR || N

- if unsat(SR,M) > 0 or unsat(F,M) ≥ unsat(F,N).

old mistake(M): unsat(SR,M) > 0; new mistake(M): unsat(F,M) ≥ unsat(F,N)

mistake(M) = old mistake(M) or new mistake(M)

- Superresolution (SR): 1977
M || F || SRs || N → M || F || SRs, Common || N

- if there exists a literal k so that by SSR applied twice:
- NewSR = Common, k
- NewSR = Common, !k

Notes: Common is a resolvent. Superresolution is the mother of clause learning: other clause-learning schemes learn clauses implied from superresolvents by Unit Propagation. Resolution and Superresolution are polynomially equivalent (1977; Beame et al. (2004)).

- Mother of clause learning: minimal elements of learned clauses.
- But from superresolution to making clause learning a suitable and efficient technique in SAT, CSP and MAX-CSP solvers, there is a long way.

NewSR is minimal

UP*(F,M) : apply UP as often as possible after applying M to F

- Opt-Semi-Superresolution (OSSR):
NewSR = ∨(¬k), where k ∈ M' ⊆ Md

M || F || SRs || N → M || F || SRs, NewSR || N

- if mistake(M) and not newM(F,M*), for all M* where M* is M' with one literal deleted.

oldM(M) = unsat(SR,M) > 0
newM(F,M) = unsat(UP*(F,M),M) ≥ unsat(F,N)

mistake(M) = oldM(M) or newM(F,M)

- Not all decision literals may be responsible for the “mistake”.
- Want to find a minimal superresolvent so that deleting one literal would destroy the superresolvent property.
- Can be implemented by a traversal back the implication graph that is built as part of unit propagation.

(Implication graph: decision literals v and w, with forced literals k1 ... k7, lead to both k8 and !k8, a conflict; the minimal superresolvent is found by traversing the graph backwards from the conflict.)

- Start with an arbitrary assignment N.
- while (proof incomplete) {
- try to improve N by creating a new assignment from scratch, using an optimally biased coin to flip the assignments;
- success: update N;
- failure: learn a new constraint that will prevent the same mistake and will "improve" the polynomial. }
- TS finds the maximum in an exponential number of steps.
- It creates a polynomially checkable proof that we indeed found the maximum.