
The PCP Theorem via gap amplification

The PCP Theorem via gap amplification. Irit Dinur, Hebrew University. Probabilistically Checkable Proof. The PCP Theorem [AroraSafra, AroraLundMotwaniSudanSzegedy, 1992]: if sat(φ) = 1 then ∃ proof, Pr[Ver accepts] = 1; if sat(φ) < 1 then ∀ proof, Pr[Ver accepts] < ½.


Presentation Transcript


  1. The PCP Theorem via gap amplification
Irit Dinur, Hebrew University

  2. Probabilistically Checkable Proof
The PCP Theorem [AroraSafra, AroraLundMotwaniSudanSzegedy, 1992]
• If sat(φ) = 1 then ∃ proof, Pr[Ver accepts] = 1
• If sat(φ) < 1 then ∀ proof, Pr[Ver accepts] < ½
Verifier. SAT instance: φ

  3. The PCP Theorem [AroraSafra, AroraLundMotwaniSudanSzegedy, 1992]
(figure: constraint graph on variables x1, …, xn with constraints on the edges)
• PCP Thm ↔ reduction from SAT to gap-CSP
• Given a constraint graph G, it is NP-hard to decide between
• gap(G) = 0
• gap(G) > ε

  4. This talk
• New proof of the PCP Theorem: given a constraint graph G, it is NP-hard to decide between
• gap(G) = 0
• gap(G) > ε
• Based on: gap amplification, inspired by Reingold's SL = L proof
• Also: "very" short PCPs

  5. Step 0: Constraint-Graph SAT is NP-hard
• Given a constraint graph G, it is NP-hard to decide if gap(G) = 0 or gap(G) > 0
• Proof: reduction from 3-coloring
• Σ = {1, 2, 3}, inequality constraints on the edges
• Clearly, G is 3-colorable iff gap(G) = 0
• PCP Thm: Given a constraint graph G, it is NP-hard to decide if gap(G) = 0 or gap(G) > ε
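The 3-coloring reduction on this slide is easy to run on toy instances. A minimal sketch (my own illustration, not from the talk; `gap_of_assignment` and `gap` are hypothetical helper names, and the brute-force minimum over assignments is only feasible for tiny graphs):

```python
from itertools import product

# gap of a constraint graph under one assignment: the fraction of
# inequality constraints (edges) that are violated.
def gap_of_assignment(edges, assignment):
    violated = sum(1 for (u, v) in edges if assignment[u] == assignment[v])
    return violated / len(edges)

# gap(G) = minimum over all assignments (brute force, tiny graphs only).
def gap(n, edges, alphabet=(1, 2, 3)):
    return min(gap_of_assignment(edges, a) for a in product(alphabet, repeat=n))

# A triangle is 3-colorable, so gap = 0; K4 is not, so gap > 0.
triangle = [(0, 1), (1, 2), (2, 0)]
k4 = [(u, v) for u in range(4) for v in range(u + 1, 4)]
print(gap(3, triangle))  # 0.0
print(gap(4, k4) > 0)    # True
```

Note that for an unsatisfiable instance the gap is only guaranteed to be ≥ 1/|E|, which is exactly why the amplification in the following slides is needed.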

  6. Basic Plan
• Start with a constraint graph G (from 3-coloring)
• G → G1 → G2 → … → Gk = final output of the reduction
• Main Thm: gap(Gi+1) ≥ 2 · gap(Gi) (if not already too large)
• size(Gi+1) = const · size(Gi); degree, alphabet, and expansion all remain the same
• Conclusion: NP-hard to distinguish between gap(Gk) = 0 and gap(Gk) > ε (a constant)

  7. Main Step: powering
• Gi → Gi+1: Gi+1 = (prep(Gi))^t ∘ P
• Preprocess G
• Raise to power t
• Compose with P = constant-size PCP
• Key step: G → G^t multiplies the gap by √t and keeps the size linear!
Standard transformations: making G a regular, constant-degree expander with self-loops.
Composition with P = a "constant-size" PCP; P can be as inefficient as possible.

  8. Powering a constraint graph
• Vertices: same
• Edges: length-t paths (= powering of the adjacency matrix)
• Alphabet: Σ^(d^t), reflecting "opinions" about neighbors
• Constraints: check everything you can!

  9. Powering a constraint graph
• Vertices: same
• Edges: length-t paths (= powering of the adjacency matrix)
• Alphabet: Σ^(d^t), reflecting "opinions" about neighbors
• Constraints: check everything you can!
• Observations:
• New degree = d^t
• New size = O(size) (#edges is multiplied by d^(t−1))
• If gap(G) = 0 then gap(G^t) = 0
• Alphabet increases from Σ to Σ^(d^t)
• Amplification Lemma: gap(G^t) ≥ √t · gap(G)
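The size observation above can be checked numerically: edges of G^t are length-t walks, i.e. entries of the t-th power of the adjacency matrix. A small sketch (my own illustration; `adjacency_power` is a hypothetical helper, and K4 stands in for a d-regular graph):

```python
# Compute the t-th power of an adjacency matrix by repeated
# multiplication; entry (u, v) counts length-t walks from u to v.
def adjacency_power(adj, t):
    n = len(adj)
    result = [[1 if i == j else 0 for j in range(n)] for i in range(n)]  # identity
    for _ in range(t):
        result = [[sum(result[i][k] * adj[k][j] for k in range(n))
                   for j in range(n)] for i in range(n)]
    return result

# K4 is 3-regular: a d-regular graph on n vertices has n*d directed edges,
# and G^t has n*d^t length-t walks, a factor of d^(t-1) more.
adj = [[0 if i == j else 1 for j in range(4)] for i in range(4)]
d, t = 3, 3
walks = sum(sum(row) for row in adjacency_power(adj, t))
print(walks)                       # 4 * 3**3 = 108
print(walks == 4 * d * d**(t-1))   # True
```

Since d and t are constants, this is the "new size = O(size)" claim on the slide.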

  10. Amplification Lemma: gap(G^t) ≥ √t · gap(G)
• Intuition: spread the information ⇒ inconsistencies will be detected more often
Assumption: G is d-regular, d = O(1), an expander, with self-loops

  11. Amplification Lemma: gap(G^t) ≥ √t · gap(G)
Given A: V → Σ^(d^t) "best", extract a: V → Σ by the most popular value in a random t/2-step walk

  12. Extracting a: V → Σ
Given A: V → Σ^(d^t) "best", extract a: V → Σ by the most popular value in a random t/2-step walk

  13. Extracting a: V → Σ
Given A: V → Σ^(d^t) "best", extract a: V → Σ and consider F = { edges rejecting a }
Note: |F|/|E| ≥ gap(G)
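The extraction step of slides 11-13 can be simulated directly. A sketch under simplifying assumptions (mine, not the talk's code): `A[u]` is represented as a dict giving u's "opinion" of every vertex, and `extract` takes the plurality vote over random t/2-step walks:

```python
import random
from collections import Counter

# Extract a(v) as the most popular opinion about v among the endpoints
# of random t/2-step walks started at v.
def extract(graph, A, v, t, samples=200, rng=random):
    votes = Counter()
    for _ in range(samples):
        u = v
        for _ in range(t // 2):
            u = rng.choice(graph[u])   # one random step
        votes[A[u][v]] += 1            # what u's block-assignment says about v
    return votes.most_common(1)[0][0]

# Toy example: a 4-cycle with self-loops where every block-assignment is
# consistent with the same underlying a; extraction recovers a exactly.
graph = {v: [v, (v - 1) % 4, (v + 1) % 4] for v in range(4)}
a = {0: 'x', 1: 'y', 2: 'x', 3: 'y'}
A = {u: {v: a[v] for v in range(4)} for u in range(4)}
print([extract(graph, A, v, t=4) for v in range(4)])  # ['x', 'y', 'x', 'y']
```

In the consistent toy case every walk votes the same way; the point of the lemma is that even when A is adversarial, the edges of G that reject the extracted a (the set F) get detected by many length-t paths.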

  14. Amplification Lemma: gap(G^t) ≥ √t · gap(G)
• Relate the fraction of rejecting paths to the fraction of rejecting edges (= |F|/|E|)

  15. Two Definitions
• ω = (v0, v1, …, u, v, …, vt); ωj = (vj−1, vj)
• Definition: the j-th edge ωj = (u, v) strikes ω if
• |j − t/2| < √t
• (u, v) ∈ F, i.e., (u, v) rejects (a(u), a(v))
• A(v0) agrees with a(u) on u & A(vt) agrees with a(v) on v
• Definition: N(ω) = # edges that strike ω
• 0 ≤ N(ω) < 2√t
• If N(ω) > 0 then ω rejects, so gap(G^t) ≥ Pr[N(ω) > 0]

  16. gap(G^t) ≥ √t · gap(G)
We will prove: Pr[N(ω) > 0] > √t · |F|/|E|
• Lemma 1: E[N] > √t · |F|/|E| · const(d, |Σ|)
• Intuition: assuming N(ω) is always 0 or 1, Pr[N > 0] = E[N]
• Lemma 2: E[N²] < √t · |F|/|E| · const(d, |Σ|)
• Standard: Pr[N > 0] ≥ (E[N])² / E[N²]
pf: E[N²] · Pr[N > 0] = E[N² | N > 0] · Pr[N > 0]² ≥ (E[N | N > 0])² · Pr[N > 0]² = (E[N])²
⇒ Pr[N > 0] > (√t · |F|/|E|)² / (√t · |F|/|E|) = √t · |F|/|E|
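The second-moment inequality used on this slide, Pr[N > 0] ≥ (E[N])² / E[N²], holds for any nonnegative random variable (Cauchy-Schwarz applied to N · 1{N>0}). A quick numerical sanity check of my own, on an arbitrary small distribution:

```python
import random

# Sample a nonnegative integer-valued variable and verify
# Pr[N > 0] >= E[N]^2 / E[N^2] empirically.
random.seed(0)
samples = [random.choice([0, 0, 0, 1, 2, 5]) for _ in range(10000)]
e_n = sum(samples) / len(samples)
e_n2 = sum(x * x for x in samples) / len(samples)
p_pos = sum(1 for x in samples if x > 0) / len(samples)
print(p_pos >= e_n**2 / e_n2)  # True
```

The bound is tight exactly when N is constant on the event {N > 0}, which matches the intuition bullet: if N is always 0 or 1, then Pr[N > 0] = E[N] = E[N²].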

  17. Lemma 1: E[N] > √t · |F|/|E| · const
(figure: path ω = (v0, …, u, v, …, vt), with i−1 steps before (u, v) and t−i steps after)
• Ni(ω) = indicator of the event "the i-th edge strikes ω"
• N = Σ_{i∈J} Ni where J = { i : |i − t/2| < √t }
• Claim: if i ∈ J then E[Ni] ≈ (1/|Σ|²) · |F|/|E|
• ω can be chosen by the following process:
• Select a random edge (u, v) ∈ E, and let ωi = (u, v)
• Select a random (i−1)-step path from u
• Select a random (t−i)-step path from v
• Clearly, Pr[ωi ∈ F] = |F|/|E|
• What is the probability that A(v0) agrees with a(u) and A(vt) agrees with a(v)?

  18. Claim: if i ∈ J then E[Ni] ≈ (1/|Σ|²) · |F|/|E|
• ω chosen by:
• Select a random edge (u, v) ∈ E, and let ωi = (u, v)
• Select a random (i−1)-step path from u
• Select a random (t−i)-step path from v
• If i − 1 = t/2: the walk from u reaches a v0 for which A(v0) "thinks" a(u) of u, with prob. ≥ 1/|Σ|
• i ∈ J: roughly the same!! (because of the self-loops)

  19. Back to E[N]
• Fix i ∈ J. Select ω by the following process:
• Select a random edge (u, v), and let ωi = (u, v)
• Select a random (i−1)-step path from u
• Select a random (t−i)-step path from v
• Pr[ωi ∈ F] = |F|/|E|
• Pr[A(v0) agrees with a on u | (u, v)] > 1/(2|Σ|)
• Pr[A(vt) agrees with a on v | (v0, …, u, v)] > 1/(2|Σ|)
• E[Ni] = Pr[Ni = 1] > |F|/|E| · (1/|Σ|²) · const
• so E[N] = Σ_{i∈J} E[Ni] > √t · |F|/|E| · const. QED

  20. gap(G^t) ≥ √t · gap(G)
We will prove: Pr[N(ω) > 0] > √t · |F|/|E|
• Lemma 1: E[N] > √t · |F|/|E| · const(d, |Σ|)
• Lemma 2: E[N²] < √t · |F|/|E| · const(d, |Σ|)
read: "most struck paths see ≤ a constant number of striking edges"
By Pr[N > 0] > (E[N])² / E[N²] ⇒ Pr[N > 0] > (√t · |F|/|E|)² / (√t · |F|/|E|) = √t · |F|/|E|

  21. Lemma 2: Upper bounding E[N²]
• Observe: N(ω) ≤ # middle intersections of ω with F
• Claim: if G = (V, E) is an expander, and F ⊂ E is any (small) fixed set of edges, then E[(N′)²] < √t · |F|/|E| · (√t · |F|/|E| + const)
• Proof sketch: compute Σ_{i<j} E[N′i N′j]. Conditioned on ωi ∈ F, the expected # of remaining steps in F is still ≤ a constant.

  22. The full inductive step
• Gi → Gi+1: Gi+1 = (prep(Gi))^t ∘ P
• Preprocess G
• Raise to power t
• Compose with P = constant-size PCP

  23. Preprocessing
G → H = prep(G) s.t.
• H is d-regular, d = O(1)
• H is an expander, has self-loops
Maintain:
• size(H) = O(size(G))
• gap(G) ≈ gap(H), i.e.,
• gap(G) = 0 ⇒ gap(H) = 0
• gap(G)/const ≤ gap(H)
[PY]: Blow up every vertex u into a cloud of deg(u) vertices, and interconnect them via an expander. Add expander edges. Add self-loops.
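The [PY]-style blow-up can be sketched concretely. In this illustration of mine, a cycle stands in for the cloud expander (purely to keep the code short; a real preprocessing step would use an actual expander on each cloud), and `blow_up` is a hypothetical helper name:

```python
from collections import defaultdict

# Replace each vertex u by deg(u) "cloud" vertices (u, 0), ..., (u, deg(u)-1).
# Each cloud vertex inherits exactly one original edge, joins its cloud's
# cycle (the stand-in expander), and gets a self-loop.
def blow_up(edges):
    ports = defaultdict(int)
    port_of = {}
    for e in edges:                      # assign each edge endpoint a port
        for u in e:
            port_of[(e, u)] = (u, ports[u])
            ports[u] += 1
    new_edges = [(port_of[(e, e[0])], port_of[(e, e[1])]) for e in edges]
    for u, deg in ports.items():         # cycle edges + self-loops per cloud
        for i in range(deg):
            new_edges.append(((u, i), (u, (i + 1) % deg)))
            new_edges.append(((u, i), (u, i)))
    return new_edges

star = [(0, 1), (0, 2), (0, 3)]          # a degree-3 hub
degree = defaultdict(int)
for x, y in blow_up(star):
    degree[x] += 1                        # self-loops count twice, as usual
    degree[y] += 1
print(max(degree.values()))              # 5, bounded regardless of hub degree
```

The size only grows linearly (each original edge contributes one inherited edge plus a constant number of cloud edges), matching the slide's size(H) = O(size(G)).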

  24. Reducing Σ^(d^t) to Σ
(figure: each constraint Ci is replaced by small constraints ci1, …, ci5 via P)
• Consider the constraints {C1, …, Cn} (and forget the graph structure)
• For each i, we replace Ci by {cij} = constraints over the smaller alphabet Σ
• P = an algorithm that takes C to {cj}, cj over Σ, s.t.
• If C is "satisfiable", then gap({cj}) = 0
• If C is "unsatisfiable", then gap({cj}) > 1/2
• Composition Lemma [BGHSV, DR]: the system C′ = ∪i P(Ci) has gap(C′) ≈ gap(C)
Assignment-testers [DR] / PCPPs [BGHSV]

  25. Composition
• If P is any AT / PCPP then this composition works. P can be:
• Hadamard-based
• long-code-based
• found via exhaustive search (existence must be ensured, though)
• P's running time only affects the constants.

  26. Summary: Main theorem
• Gi → Gi+1: Gi+1 = (prep(Gi))^t ∘ P
• gap(Gi+1) > 2 · gap(Gi), and the other parameters stay the same
• G [Σ, β]
• G → prep(G) [Σ, β/const]
• G → G^t [Σ^(d^t), √t · β/const]
• G → G^t ∘ P [Σ, √t · β/const′] = [Σ, 2β]
• G = G0 → G1 → G2 → … → Gk = final output of the reduction
• After k = log n steps:
• If gap(G0) = 0 then gap(Gk) = 0
• If gap(G0) > 0 then gap(Gk) > const
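The parameter bookkeeping on this slide is easy to verify: an unsatisfiable starting instance has gap at least 1/|E| ≈ 1/n, and doubling the gap each round reaches a fixed constant in O(log n) rounds. A small sketch of my own (`steps_to_constant` and the target ε = 0.01 are illustrative choices, not values from the talk):

```python
# Count how many gap-doubling rounds are needed to lift an initial
# gap of 1/n up to a fixed constant eps.
def steps_to_constant(n, eps=0.01):
    gap, k = 1.0 / n, 0
    while gap < eps:
        gap, k = 2 * gap, k + 1
    return k

print(steps_to_constant(10**6))  # 14, i.e. O(log n)
```

Since each round multiplies the size by a constant, the final instance has size const^(O(log n)) = poly(n), which is what makes the reduction polynomial-time.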

  27. Application: short PCPs
• … [PS, HS, GS, BSVW, BGHSV, BS]
• [BS'05]: NP ⊆ PCP_{1, 1−1/polylog n}[ log(n · polylog n), O(1) ]
• There is a reduction taking a constraint graph G to G′ such that
• |G′| = |G| · polylog|G|
• If gap(G) = 0 then gap(G′) = 0
• If gap(G) > 0 then gap(G′) > 1/polylog|G|
• Applying our main step loglog|G| times to G′, we get a new constraint graph G′′ such that
• If gap(G) = 0 then gap(G′′) = 0
• If gap(G) > 0 then gap(G′′) > const
• i.e., NP ⊆ PCP_{1, 1/2}[ log(n · polylog n), O(1) ]

  28. Final remarks
• Main point: gradual amplification
• Compare to Raz's parallel-repetition theorem
• Q: get the gap up to 1 − o(1)
