
Time-Space Tradeoffs in Resolution: Lower Bounds for Superlinear Space


Presentation Transcript


  1. Time-Space Tradeoffs in Resolution: Lower Bounds for Superlinear Space. Chris Beck, Princeton University. Joint work with Paul Beame & Russell Impagliazzo.

  2. Resolution • Lines are clauses; one simple proof step: from C ∨ x and D ∨ ¬x derive C ∨ D • Three basic flavors: Tree-like, Regular, and General Resolution

  3. Proof DAG • “Regular”: on every root-to-leaf path, no variable is resolved more than once.

  4. SAT Solvers • Well-known connection between SAT solvers based on backtracking and Resolution. • These algorithms are very powerful – historically very successful in SAT competitions, sometimes able to quickly handle CNFs with millions of variables. • On unsatisfiable instances, the computation history yields a Resolution proof. • Tree-like Resolution ↔ DPLL • General Resolution ↔ DPLL + Clause Learning, e.g. Chaff & descendants. A minimal DPLL sketch follows below.
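To make the backtracking / resolution connection concrete, here is a minimal DPLL sketch in Python (my illustration, not from the slides): clauses are sets of signed integers, branching is unguided, and clause learning is omitted, so its trace on an unsatisfiable input corresponds to a tree-like refutation.

    # Minimal DPLL sketch: clauses are frozensets of signed ints (-2 means NOT x2).
    def dpll(clauses, assignment=None):
        assignment = dict(assignment or {})
        simplified = []
        for c in clauses:
            if any(assignment.get(abs(l)) == (l > 0) for l in c):
                continue                      # clause already satisfied
            c = frozenset(l for l in c if abs(l) not in assignment)
            if not c:
                return False                  # empty clause: conflict, backtrack
            simplified.append(c)
        if not simplified:
            return True                       # every clause satisfied
        var = abs(next(iter(simplified[0])))  # branch on some unassigned variable
        return (dpll(simplified, {**assignment, var: True}) or
                dpll(simplified, {**assignment, var: False}))

    # x1, (NOT x1 OR x2), (NOT x2) is unsatisfiable:
    print(dpll([frozenset({1}), frozenset({-1, 2}), frozenset({-2})]))  # False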

  5. SAT Solvers • But while a useful tool, SAT solvers are not a panacea for complexity. In many practical situations memory becomes the limiting factor, and memory consumption frequently leads to unacceptable runtimes. • Question: Is this inherent? Or can the right heuristics avoid the memory bottleneck?

  6. Connection with Resolution • Proof Size ↔ Time for an ideal SAT solver • Proof Space ↔ Memory for an ideal SAT solver • In the past, much success in finding explicit hard examples, with exponential lower bounds for Resolution proof size. • Question: Can we repeat this success for proof space?

  7. Connection with Resolution • Question: Can we get strong lower bounds for proof space? • Known Results: • Much success with lower bounds on space for explicit tautologies [ET’99, ABRW’00, T’01, AD’03, N’06, NH’08, BN’08] • Every tautology on n variables has a tree-like refutation of space O(n). [Esteban, Torán ‘99] • However, these tree-like refutations are generally impractical, because of their large size.

  8. Size-Space Tradeoffs • [Ben-Sasson ‘01] Results for tree-like resolution: size-space tradeoffs essentially aren’t possible in this model. • [Nordström ‘09] Formulas whose minimum-variable-space refutations are exponentially large, but with O(1) more space there is a linear-size refutation. • [Ben-Sasson, Nordström ‘10] Formulas which can be refuted in size O(n) with space O(n), but in space O(n/log n) any refutation has size exp(n^Ω(1)). Uses graph pebbling tautologies, and variations. • But these are all tradeoffs for space below the size of the input formula, and SAT solvers generally can afford to store the input formula in memory.

  9. Size-Space Tradeoffs • Informal Question: Can we find formulas such that small-space proofs must be much larger, so that it is impractical to SAT solve with small memory? • Formal Question (Ben-Sasson): “Does there exist a constant c such that any CNF with a refutation of length T also has a refutation of length T^c in space O(formula size)?” • Theorem (this work): Certain Tseitin tautologies of size n have refutations in time and space n^{log n}, but all refutations have T·S > n^{1.16 log n}, so c > 1.16.

  10. Tradeoffs for Regular Resolution • Theorem (this work): For any k, there are certain Tseitin tautologies of size n such that: • They have regular refutations in time n^{k+1} and space n^k. • But with space only n^{k−ε}, for any ε > 0, any regular refutation has time at least n^{log log n / log log log n}. • So, for Regular Resolution, we can give a negative answer to Ben-Sasson’s question.

  11. Space in Resolution • Clause space [Esteban, Torán ‘99]: the number of clauses kept in memory. • More conservative than variable space or bit space. [Figure: the clauses that must be in memory at each time step of the proof.]

  12. Tseitin Tautologies • Given an undirected graph G = (V, E) and a charge function χ: V → {0,1}, define a CSP: • Boolean variables: x_e for each edge e ∈ E. • Parity constraints: ⊕_{e ∋ v} x_e = χ(v) for each vertex v ∈ V. • When χ has odd total parity, the CSP is unsatisfiable.
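As a concrete illustration of the definition (a sketch I am adding, not from the slides; it assumes an edge-list encoding where edge i gets Boolean variable i+1), the parity constraint at each vertex expands into one clause per incident-edge assignment of the wrong parity:

    from itertools import product

    # Generate the Tseitin CNF for graph G (edge list) and charge function chi.
    def tseitin_cnf(edges, chi):
        clauses = []
        for v, charge in chi.items():
            inc = [i + 1 for i, (a, b) in enumerate(edges) if v in (a, b)]
            # forbid every assignment of the incident edges with the wrong parity
            for bits in product([0, 1], repeat=len(inc)):
                if sum(bits) % 2 != charge:
                    clauses.append([x if b == 0 else -x for x, b in zip(inc, bits)])
        return clauses

    # Triangle with odd total charge: the resulting CNF is unsatisfiable.
    print(tseitin_cnf([(0, 1), (1, 2), (0, 2)], {0: 1, 1: 0, 2: 0}))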

  13. Tseitin Tautologies • When χ is odd and G is connected, the corresponding CNF is a Tseitin tautology. [Tseitin ‘68] • Only the total parity of χ matters. • Hard when G is a constant-degree expander: [Urquhart ‘87]: 2^{Ω(n)} Resolution size. [Torán ‘99]: Ω(n) Resolution space. • This work: tradeoffs on the n × l grid and similar graphs, using isoperimetry.

  14. Graph • Take the n × l grid as our graph and form the Tseitin tautology. • We’ll take a particular value of l, but it’s only important that it is much smaller than n. • Formula size Θ(n · l).

  15. A Refutation • Tseitin tautologies can be viewed as a system of inconsistent F_2-linear equations. If we add them all together, we get 1 = 0, a contradiction. • If we order the vertices (equations) intelligently (column-wise), then we never talk about more than O(l) variables at any one time in this derivation. • Implicational completeness means Resolution can simulate this – yields size n · 2^{O(l)}, space 2^{O(l)}.
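The “add them all together” step, written out as a worked equation (added here for clarity): every edge variable occurs in exactly two vertex equations, so summing all of them over F_2 gives

    \bigoplus_{v \in V} \bigoplus_{e \ni v} x_e \;=\; \bigoplus_{e \in E} (x_e \oplus x_e) \;=\; 0,
    \qquad\text{while}\qquad
    \bigoplus_{v \in V} \chi(v) \;=\; 1,

so an odd total charge forces the contradiction 0 = 1.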

  16. A Different Refutation • Can also do a “binary search” refutation. The idea is to repeatedly bisect the graph and case out on the values of the edges in the cut. • Once we split the CNF into two pieces, we can discard one of them based on the parity of the cut. • After O(log n) rounds of bisection, we’ve found a violated input clause – the idea yields a tree-like proof with much smaller space at the cost of larger size (Savitch-like savings).

  17. Complexity Measure • Say an assignment to an (unsat) CNF is critical if it violates only one constraint. • For a critical assignment α to the Tseitin formula, “α’s vertex” := the vertex of that one violated constraint. • For any clause C, define the “critical vertex set”: Bad(C) := { α’s vertex : α critical, α falsifies C }. • Observations: Bad is monotone decreasing. Bad(⊥) = V iff G is connected and χ is odd. Bad(C) is empty or a component of G \ dom(C), the graph with the edges mentioned in C removed.
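A brute-force sketch of the critical vertex set on a toy instance (my illustration, reusing the edge-list encoding from the Tseitin generator above; enumerating all assignments is only feasible for tiny graphs):

    from itertools import product

    # Bad(clause) = vertices v such that some assignment violating only v's
    # parity constraint also falsifies the clause.
    def bad_set(edges, chi, clause):
        bad = set()
        for bits in product([0, 1], repeat=len(edges)):
            violated = [v for v, c in chi.items()
                        if sum(bits[i] for i, (a, b) in enumerate(edges) if v in (a, b)) % 2 != c]
            falsifies = all(bits[abs(l) - 1] == (0 if l > 0 else 1) for l in clause)
            if len(violated) == 1 and falsifies:
                bad.add(violated[0])
        return bad

    edges, chi = [(0, 1), (1, 2), (0, 2)], {0: 1, 1: 0, 2: 0}
    print(bad_set(edges, chi, []))   # empty clause: critical set is all of V -> {0, 1, 2}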

  18. Critical Set Examples • [Figure: n × l grid; blue = 0, red = 1.] χ: one odd vertex in the corner; graph = grid. For the empty assignment, the critical set is everything.

  19. Critical Set Examples • [Figure: n × l grid; blue = 0, red = 1.] For an assignment that doesn’t cut the graph, the critical set is … still everything.

  20. Critical Set Examples • [Figure: n × l grid; blue = 0, red = 1.] For this assignment there are several components. The assignment is all zeros, so the upper-left component is critical.

  21. Critical Set Examples • [Figure: n × l grid; blue = 0, red = 1; a further example.]

  22. Complexity Measure • Define μ(C) := |Bad(C)|. Then μ is a subadditive complexity measure: • μ(input clause) ≤ 1, • μ(⊥) = |V|, • μ(C) ≤ μ(C1) + μ(C2) when C is derived from C1 and C2. • Very useful: every edge on the boundary of Bad(C) is assigned by C. If the graph has an isoperimetric inequality, then “medium complexity” clauses are wide.
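The subadditivity property, spelled out (standard soundness argument, added for clarity): if C is derived by resolving C_1 and C_2, then C_1 ∧ C_2 ⊨ C, so any critical assignment falsifying C must falsify C_1 or C_2, and its vertex lands in one of their critical sets:

    \mathrm{Bad}(C) \;\subseteq\; \mathrm{Bad}(C_1) \cup \mathrm{Bad}(C_2)
    \qquad\Longrightarrow\qquad
    \mu(C) \;\le\; \mu(C_1) + \mu(C_2).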

  23. Complexity vs. Time • Consider the time ordering of any proof, and plot the complexity of the clauses in memory vs. time. • Constraints: the plot starts low (input clauses have μ ≤ 1), ends high (μ(⊥) = |V|), and because of subadditivity it cannot skip over any [t, 2t] window of μ-values on the way up: the first clause whose complexity exceeds t has both parents of complexity at most t, so its own complexity is at most 2t. [Plot: μ of the clauses in memory vs. time, rising from the input clauses to ⊥.]

  24. Complexity vs. Time • Consider the time ordering of any proof, and divide time into equal epochs (number fixed later). [Plot: complexity (Low / Med / Hi) of clauses in memory vs. time, with epoch boundaries marked.]

  25. Two Possibilities • Consider the time ordering of any proof, and divide time into equal epochs (number fixed later). • Either a medium-complexity clause appears in memory at at least one of the breakpoints between epochs,

  26. Two Possibilities • Consider the time ordering of any proof, and divide time into equal epochs (number fixed later). • Or, all breakpoints have only Hi and Low clauses. Then some epoch starts Low and ends Hi, and therefore contains clauses of every medium level.

  27. Medium Sets have Large Boundary • Claim: any medium-size set S (at least moderately large, and at most half the vertices) has at least l boundary edges in the grid. • Proof: If S has at least l partially full columns, each contributes a boundary edge, as desired. Otherwise, without loss of generality there is at least one full column and at least one empty column: at most half the columns are full, and only a few are partial.

  28. Medium Sets have Large Boundary • Claim: any medium-size S has at least l boundary edges. • Proof (continued): Suppose S has a full column and an empty column. For any two columns, the grid has l edge-disjoint paths between them, and each such path must cross the boundary of S. QED.

  29. Intuition • Bottleneck counting intuition: since medium clauses are wide, they don’t do much work. • A random assignment has chance at most 2^{-l} to falsify a given wide clause, so if the space is much smaller than 2^{l}, the first scenario is not very significant.

  30. Idea: Random Restrictions • A restriction ρ is a partial assignment of truth values to variables, simplifying formulas. • For F a CNF, ρ yields a restricted formula F|ρ. • For π a proof of F, ρ yields a restricted refutation π|ρ of F|ρ – size and space don’t increase. • Typical to choose a restriction randomly, especially when using the bottleneck counting intuition.
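A minimal sketch of applying a restriction in the signed-integer clause encoding used above (my illustration): satisfied clauses disappear and falsified literals are deleted, so neither size nor space can increase.

    # rho maps variable -> bool; returns the restricted CNF.
    def restrict(clauses, rho):
        out = []
        for c in clauses:
            if any(rho.get(abs(l)) == (l > 0) for l in c):
                continue                               # clause satisfied by rho: drop it
            out.append([l for l in c if abs(l) not in rho])
        return out

    print(restrict([[1, -2], [2, 3], [-1]], {2: True}))   # [[1], [-1]]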

  31. Intuition • Time divided into epochs. • If we apply a random restriction, and the space is small, then typically the first scenario will not occur in the restricted proof.

  32. Intuition • Time divided into epochs. • But if the epochs are also small, then the second scenario is also unlikely in the restricted proof. This part is one of our primary technical contributions.

  33. Extended Isoperimetric Inequality • For two medium sets of sufficiently different sizes, the combined boundary has at least 2l edges: two medium sets of different sizes mean the boundary guarantee doubles. • Extends: for medium sets of superincreasing sizes, the guarantee grows with each additional set.

  34. Restrictions • We show, for a random restriction ρ, that for any k clauses C1, …, Ck, the probability that after restriction all of them become medium with sizes a factor of 2 apart is exponentially small in k·l. • ⇒ The 2nd scenario is unlikely if the epochs are small.

  35. Analysis • Restricted proof, time divided into epochs. • Bound the probability of the first scenario by (# epochs) · (# clauses in memory) · Pr[a given clause becomes medium].

  36. Analysis • Restricted proof, time divided into epochs. • Bound the second scenario by (# medium levels) · (# k-tuples per epoch) · Pr[k clauses become medium and have sizes a factor of 2 apart].
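Schematically, writing T for the proof length, S for the clause space, m for the number of epochs, and l for the grid height, the two union bounds on these slides read roughly as follows (my shorthand; the exact exponents and the choice of k are in the paper):

    \Pr[\text{scenario 1}] \;\le\; m \cdot S \cdot 2^{-\Omega(l)},
    \qquad
    \Pr[\text{scenario 2}] \;\le\; (\#\text{medium levels}) \cdot \binom{T/m}{k} \cdot 2^{-\Omega(k l)}.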

  37. Time Space Tradeoff • Taking the number of epochs (and k) to balance the two bounds, both failure probabilities are below 1 unless the proof is long. • So, since one of the two scenarios must occur in any refutation, the time-space tradeoff of the main theorem follows.

  38. Regular Resolution • Can define the partial information available at each clause precisely. • Complexity is monotone with respect to the proof DAG edges. • A random adversary selects random assignments based on the proof. • No random restrictions. • Precise information means the division into epochs can be applied recursively. • Yields a superpolynomial lower bound.

  39. Open Questions • Extend subdivision technique to General Resolution? This would settle Ben-Sasson’s question. • Better bounds on his constant? A lower bound greater than 2 for General Resolution will require significant new technical insight. • Other proof systems? • Other cases for separating search paradigms:“dynamic programming” vs. “binary search”?

  40. Thanks!

  41. Regular Resolution • Equivalence between Regular Resolution refutations and read-once branching programs for the Clause Search Problem. [Krajíček] • Permits a nice top-down analysis. • Definition (Common Information): for a clause C in the proof π, β(C) := the largest partial assignment consistent with every assignment reaching C.

  42. Regular Resolution • Consistency Lemma: If a partial assignment leads to C, it agrees with β(C) on dom(β(C)). • Corollary: If a partial assignment leads to C, … • Idea: Build the path with a probabilistic strategy; try to keep the critical set large, but also make many random decisions when it is not affected. • Hope: The probability to reach a complex clause is small.

  43. Definition: Adversary Strategy • A probabilistic adversary, following a path from the root ⊥ to the leaves of the proof. • Remembers the answers already given, thought of as a partial assignment. • If a new edge is queried (resolved on): • Doesn’t cut the critical set → choose its value randomly. • Does cut → choose the value that maximizes the bad set.

  44. Adversary Examples • [Figure: n × l grid; blue = 0, red = 1; the queried edge is marked “?”.] The adversary flips coins for non-cut edges.

  45. Adversary Examples • [Figure: n × l grid; blue = 0, red = 1.] For this cut edge, he chooses blue to maximize the critical set.

  46. Adversary Examples • [Figure: n × l grid; blue = 0, red = 1; the queried edge is marked “?”.] This edge cuts the critical set; he keeps the bigger half.

  47. Adversary Examples • [Figure: n × l grid; blue = 0, red = 1; the queried edge is marked “?”.] This edge cuts a “good” set. If assigned wrong, the critical set becomes empty, because there would be multiple odd components. The adversary therefore chooses the value that keeps the critical set.
