
Unconditional Weak Derandomization of Weak Algorithms: Explicit Versions of Yao's Lemma

Ronen Shaltiel, University of Haifa





Presentation Transcript


  1. Unconditional Weak Derandomization of Weak Algorithms: Explicit Versions of Yao's Lemma. Ronen Shaltiel, University of Haifa

  2. Derandomization: The goal. Main open problem: show that BPP = P. (There is evidence that this is hard [IKW, KI].) More generally: convert a randomized algorithm A(x, r), where x is an n-bit input and r is an m-bit string of "coin tosses", into a deterministic algorithm B(x). We'd like to: • Preserve complexity: complexity(B) ≈ complexity(A) (known: BPP ⊆ EXP). • Preserve uniformity: the transformation A → B is "explicit" (known: BPP ⊆ P/poly).

  3. Strong derandomization is sometimes impossible. • Setting: communication complexity. • x = (x1, x2), where x1 and x2 are split between two players. • There exist randomized algorithms A(x, r) (e.g. for Equality) with logarithmic communication complexity such that any deterministic algorithm B(x) requires linear communication. • So it is impossible to derandomize while preserving complexity.

  4. (The easy direction of) Yao's lemma: a straightforward averaging argument. Given a randomized algorithm that computes a function f with success 1−ρ in the worst case, namely A : {0,1}^n × {0,1}^m → {0,1} s.t. ∀x: Pr_{R←U_m}[A(x,R)=f(x)] ≥ 1−ρ, there exists r' ∈ {0,1}^m s.t. the deterministic algorithm B(x) = A(x,r') computes f well on average, namely Pr_{X←U_n}[B(X)=f(X)] ≥ 1−ρ. • A useful tool for proving lower bounds on randomized algorithms. • Can also be viewed as "weak derandomization".
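The averaging argument can be checked by brute force at a toy scale. The sketch below is purely illustrative (the parameters n, m, ρ, the parity target f, and the randomly planted error pattern are all assumptions, not from the talk): it builds a table for an A that errs on a ρ-fraction of coin strings for every input, then confirms that some fixed coin string r' yields average success at least 1−ρ.

```python
import itertools
import random

# Toy check of the averaging argument in Yao's lemma.  All concrete choices
# (n, m, rho, the parity target f, the planted errors) are illustrative.
n, m, rho = 3, 3, 0.25
f = lambda x: sum(x) % 2

rng = random.Random(0)
inputs = list(itertools.product((0, 1), repeat=n))
coins = list(itertools.product((0, 1), repeat=m))

# Build a table for A that errs on exactly a rho-fraction of coin strings
# for EVERY input, so worst-case success is 1 - rho.
A = {}
for x in inputs:
    bad = set(rng.sample(range(len(coins)), int(rho * len(coins))))
    for i, r in enumerate(coins):
        A[(x, r)] = (1 - f(x)) if i in bad else f(x)

assert all(sum(A[(x, r)] == f(x) for r in coins) / len(coins) >= 1 - rho
           for x in inputs)

# Averaging: some fixed coin string r' makes B(x) = A(x, r') succeed on
# at least a (1 - rho)-fraction of inputs.
r_prime = max(coins, key=lambda r: sum(A[(x, r)] == f(x) for x in inputs))
success = sum(A[(x, r_prime)] == f(x) for x in inputs) / len(inputs)
assert success >= 1 - rho
```

Note that the argument only proves r' exists; finding it above took a search over all 2^m coin strings, which is exactly the non-explicitness the talk addresses.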

  5. Yao's lemma as "weak derandomization". Advantages: • Applies to any family of algorithms and any complexity measure: communication complexity, decision tree complexity, circuit complexity classes. • The construction B(x) = A(x, r') preserves complexity; e.g. if A has low communication complexity then B has low communication complexity. Drawbacks: • Weak derandomization is weak: the deterministic algorithm B succeeds on most but not all inputs. (Let's not be too picky: in some scenarios, e.g. communication complexity, strong derandomization is impossible.) • The argument doesn't give an explicit way to find r' and produce B(x) = A(x, r'). Uniformity is not preserved: even if A is uniform, we only get that B(x) = A(x, r') is nonuniform (B is a circuit).

  6. The goal: explicit versions of Yao's lemma. Given a randomized algorithm that computes a function f with success 1−ρ in the worst case, namely A : {0,1}^n × {0,1}^m → {0,1} s.t. ∀x: Pr_{R←U_m}[A(x,R)=f(x)] ≥ 1−ρ, give an explicit construction of a deterministic algorithm B(x) s.t.: • B computes f well on average, namely Pr_{X←U_n}[B(X)=f(X)] ≥ 1−ρ. • Complexity is preserved: complexity(B) ≈ complexity(A). • We refer to this as "explicit weak derandomization".

  7. Adleman's theorem (BPP ⊆ P/poly) follows from Yao's lemma. Given a randomized algorithm A that computes a function f with success 1−ρ in the worst case: • (Amplification) Amplify the success probability to 1−ρ for ρ = 2^{−(n+1)}. • (Yao's lemma) ⇒ there is a deterministic circuit B(x) such that b := Pr_{X←U_n}[B(X)≠f(X)] ≤ ρ = 2^{−(n+1)} < 2^{−n} ⇒ b = 0 ⇒ B succeeds on all inputs. Corollary: an explicit version of Yao's lemma for general poly-time algorithms ⇒ BPP = P. Remainder of the talk: explicit versions of Yao's lemma for "weak algorithms": communication games, decision trees, streaming algorithms, AC0 algorithms.
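The amplification step is standard majority voting. As a hedged illustration (the bound below is the textbook majority-vote bound Pr[majority errs] ≤ (4p(1−p))^{t/2}, not a formula from the slides), one can compute how many repetitions t suffice to push a constant error below 2^{−(n+1)}:

```python
import math

# How many repetitions t of A, followed by a majority vote, push a constant
# error p = 1/3 below 2^{-(n+1)}?  Uses the standard bound
# Pr[majority of t runs errs] <= (4 * p * (1-p))^(t/2).
p = 1 / 3
base = 2 * math.sqrt(p * (1 - p))        # (4p(1-p))^(1/2) ~ 0.943 < 1

def repetitions_needed(n):
    """Smallest t with base**t < 2**-(n+1)."""
    t = 1
    while base ** t >= 2 ** -(n + 1):
        t += 1
    return t

# t grows only linearly in n, so amplification costs a polynomial factor.
for n in (10, 50, 100):
    t = repetitions_needed(n)
    assert base ** t < 2 ** -(n + 1)
```

With error below 2^{−(n+1)}, a union bound over all 2^n inputs is what forces b = 0 on the slide above.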

  8. Related work: extracting randomness from the input. Idea [Goldreich and Wigderson]: given a randomized algorithm A(x, r) with |r| ≤ |x|, consider the deterministic algorithm B(x) = A(x, x). Intuition: if the input x is chosen at random, then the random coins r := x are chosen at random. Problem: the input and the coins are correlated (e.g. consider A s.t. ∀ input x, the coin string x is bad for x). GW: this does work if A has the additional property that whether or not a coin toss is good does not depend on the input, and it turns out that there are A's with this property.
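The parenthetical counterexample is easy to make concrete. In the sketch below (the target f and the parameter n = |x| = |r| = 4 are illustrative choices), the coin string x is the unique bad coin for input x, so A has success 1−2^{−n} on every input while B(x) = A(x, x) fails everywhere:

```python
import itertools

# An adversarial A illustrating why B(x) = A(x, x) can fail completely:
# for every input x, the single bad coin string is x itself.
n = 4                          # |r| = |x| = n
f = lambda x: sum(x) % 2       # toy target function
A = lambda x, r: f(x) if r != x else 1 - f(x)

xs = list(itertools.product((0, 1), repeat=n))

# A is an excellent randomized algorithm: success 1 - 2^{-n} on every input...
assert all(sum(A(x, r) == f(x) for r in xs) / len(xs) == 1 - 2 ** -n
           for x in xs)

# ...yet "extract the coins from the input" is wrong on every single input.
assert all(A(x, x) != f(x) for x in xs)
```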

  9. The role of extractors in [GW]. In their paper, Goldreich and Wigderson actually use B(x) = maj_{seeds y} A(x, E(x, y)), where E(x, y) is a "seeded extractor". Extractors are only used for "deterministic amplification" (that is, to amplify the success probability). Alternative view of the argument: • Set A'(x, r) = maj_{seeds y} A(x, E(r, y)). • Apply the construction B(x) = A'(x, x).

  10. Randomness extractors Do we have to tell that same old story again? Daddy, how do computers get random bits?

  11. Randomness extractors: definition and two flavors. Let C be a class of distributions over n-bit strings "containing" k bits of (min-)entropy. • A distribution X has min-entropy ≥ k if ∀x: Pr[X = x] ≤ 2^{−k}. • Two distributions are ε-close if the probabilities they assign to any event differ by at most ε. A deterministic (seedless) C-extractor is a function E such that for every X ∈ C, E(X) is ε-close to uniform on m bits. A seeded extractor has an additional (short, i.e. O(log n)-bit) independent random seed as input. For seeded extractors, C = {all X with min-entropy ≥ k}. [Figure: a source distribution from C feeds the extractor, which outputs near-uniform bits; the seeded variant also takes a random seed.]
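The two definitions on this slide can be exercised on a toy source. In the sketch below, the class C is taken to be bit-fixing sources and XOR serves as the seedless extractor; these concrete choices are illustrative assumptions, not constructions from the talk:

```python
import math
from collections import Counter

def min_entropy(dist):
    """H_inf(X) = -log2 of the largest point probability."""
    return -math.log2(max(dist.values()))

def stat_distance(p, q):
    """Statistical distance: half the L1 distance between distributions."""
    keys = set(p) | set(q)
    return sum(abs(p.get(k, 0) - q.get(k, 0)) for k in keys) / 2

# Toy instance of the definitions: a bit-fixing source on 4 bits where
# bits 0 and 2 are fixed to 1 and bits 1 and 3 are uniform, so k = 2.
support = [(1, b1, 1, b3) for b1 in (0, 1) for b3 in (0, 1)]
X = {x: 1 / len(support) for x in support}
assert min_entropy(X) == 2

# XOR of all bits is a deterministic extractor for such sources (any free
# bit entering the parity makes the output unbiased); here eps = 0.
E = lambda x: sum(x) % 2
EX = Counter()
for x, px in X.items():
    EX[E(x)] += px
assert stat_distance(dict(EX), {0: 0.5, 1: 0.5}) == 0
```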

  12. Zimand: an explicit version of Yao's lemma for decision trees. Zimand defines and constructs a stronger variant of seeded extractors E(x, y) called "exposure-resilient extractors". He considers B(x) = maj_{seeds y} A(x, E(x, y)). Thm [Zimand07]: if A is a randomized decision tree with q queries that tosses q random coins, then: • B succeeds on most inputs (a (1−ρ)-fraction). • B can be implemented by a deterministic decision tree with q^{O(1)} queries. (Zimand states his result a bit differently. We improve the query complexity to O(q).)

  13. Our results. • We develop a general technique to prove explicit versions of Yao's lemma (that is, weak derandomization results). • We use deterministic (seedless) extractors, that is, B(x) = A(x, E(x)) where E is a seedless extractor. • The technique applies to any class of algorithms with |r| ≤ |x|; we can sometimes handle |r| > |x| using PRGs. • More precisely: every class of randomized algorithms defines a class C of distributions, and an explicit construction of an extractor for C immediately gives an explicit version of Yao's lemma (as long as |r| ≤ |x|).

  14. Explicit version of Yao's lemma for communication games. Thm: if A is a randomized (public-coin) communication game with communication complexity q that tosses m < n random coins, then set B(x) = A(x, E(x)) where E is a "2-source extractor": • B succeeds on most inputs, a (1−ρ)-fraction (or even a (1−2^{−(m+q)})-fraction). • B can be implemented by a deterministic communication game with communication complexity O(m+q). Dfn: a communication game is explicit if each party can compute its next message in poly-time (given its input, the history and its random coins). Cor: given an explicit randomized communication game with complexity q and m coins, there is an explicit deterministic communication game with complexity O(m+q) that succeeds on a (1−2^{−(m+q)})-fraction of the inputs. Both complexity and uniformity are preserved.
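For concreteness, a classical explicit 2-source extractor that could play the role of E here is inner product mod 2 (Chor and Goldreich), which is ε-close to uniform whenever the two sources' min-entropies k1, k2 satisfy roughly k1 + k2 ≥ n + 2·log(1/ε). A toy check on flat sources (all parameters illustrative):

```python
import itertools

# Inner product mod 2 as an explicit 2-source extractor [Chor-Goldreich].
# The flat sources and parameters below are illustrative.
def ip(x1, x2):
    return sum(a * b for a, b in zip(x1, x2)) % 2

n = 6
# Two independent flat sources, each of min-entropy n - 1 (one bit fixed).
S1 = [(0,) + t for t in itertools.product((0, 1), repeat=n - 1)]
S2 = [(1,) + t for t in itertools.product((0, 1), repeat=n - 1)]

ones = sum(ip(x1, x2) for x1 in S1 for x2 in S2)
bias = abs(ones / (len(S1) * len(S2)) - 0.5)

# Chor-Goldreich guarantee: eps <= 2^{-(k1 + k2 - n)/2}; here that is 2^-2.
assert bias <= 2 ** -2
```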

  15. Explicit weak derandomization results

  16. Constant-depth algorithms. Consider randomized algorithms A(x, r) that are computable by uniform families of poly-size constant-depth circuits. [NW, K]: strong derandomization in quasi-poly time; namely, there is a uniform family of quasi-poly-size circuits that succeeds on all inputs. Our result: weak derandomization in poly-time; namely, there is a uniform family of poly-size circuits that succeeds on most inputs (and we can also preserve constant depth). High-level idea: • Reduce the number of random coins of A from n^c to (log n)^{O(1)} using a PRG (based on the hardness of the parity function [H, RS]). • Extract random coins from the input x using an extractor for "sources recognizable by AC0 circuits". • Construct such extractors using the hardness of the parity function and ideas from [NW, TV].

  17. High-level overview of the proof. To be concrete, we consider communication games.

  18. Preparations. Thm: if A is a randomized communication game with communication complexity q that tosses m random coins, then set B(x) = A(x, E(x)) where E is a 2-source extractor: • B succeeds on most inputs (a (1−ρ)-fraction). • B can be implemented by a deterministic communication game with communication complexity O(m+q). Define independent random variables X ← U_n, R ← U_m. We have ∀x: Pr[A(x,R)=f(x)] ≥ 1−ρ. It follows that a := Pr[A(X,R)=f(X)] ≥ 1−ρ. We need to show that b := Pr[A(X,E(X))=f(X)] ≥ 1−ρ−(2ρ+2^{−2m}). The plan is to show that b ≥ a−(2ρ+2^{−2m}).

  19. High-level intuition. • For every choice of random coins r, the game A(∙, r) is deterministic with complexity q. • It divides the set of strings x of length n into 2^q rectangles. • Let Q_r(x) denote the rectangle of x. At the end of the protocol, all inputs in a rectangle answer the same way. Consider the entropy in the variable X|rectangle = (X | Q_r(X)=v), which is independent of the answer. Idea: extract the randomness from this entropy. This doesn't quite make sense yet: the rectangle is defined only after the random coins r are fixed. [Figure: the input space (x1, x2) partitioned into rectangles Q_r(x1, x2); a rectangle is a 2-source.]

  20. Averaging over random coins and rectangles.
  a = Pr[A(X,R)=f(X)]
    = Σ_r Pr[A(X,R)=f(X) ⋀ R=r]
    = Σ_r Σ_v Pr[A(X,R)=f(X) ⋀ R=r ⋀ Q_r(X)=v]
    = Σ_r Σ_v Pr[A(X,r)=f(X) ⋀ R=r ⋀ Q_r(X)=v]
    = Σ_r Σ_v Pr[Q_r(X)=v] ∙ Pr[R=r | Q_r(X)=v] ∙ Pr[A(X,r)=f(X) | R=r ⋀ Q_r(X)=v]

  21. Averaging over random coins and rectangles (continued).
  b = Pr[A(X,E(X))=f(X)]
    = Σ_r Pr[A(X,E(X))=f(X) ⋀ E(X)=r]
    = Σ_r Σ_v Pr[A(X,E(X))=f(X) ⋀ E(X)=r ⋀ Q_r(X)=v]
    = Σ_r Σ_v Pr[A(X,r)=f(X) ⋀ E(X)=r ⋀ Q_r(X)=v]
    = Σ_r Σ_v Pr[Q_r(X)=v] ∙ Pr[E(X)=r | Q_r(X)=v] ∙ Pr[A(X,r)=f(X) | E(X)=r ⋀ Q_r(X)=v]
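Both expansions are instances of the law of total probability, so they can be verified mechanically on any toy game. In the sketch below, the game A, the target f, the rectangle map Q and the one-bit "extractor" E are arbitrary illustrative choices; the point is only that the two identities hold exactly:

```python
import itertools
from fractions import Fraction

# Verify the slide-20 and slide-21 decompositions on a toy game.
xs = list(itertools.product((0, 1), repeat=4))   # X uniform on 4 bits
rs = (0, 1)                                      # R uniform on 1 coin bit
f = lambda x: int(x[:2] == x[2:])                # equality of the halves
A = lambda x, r: int((x[0] ^ x[2]) == r)         # some randomized "protocol"
Q = lambda r, x: (x[0], x[2])                    # rectangle / transcript of x
E = lambda x: x[0] ^ x[1]                        # a 1-bit "extractor"

def pr(event):
    """Probability of event(x, r) over uniform independent (X, R)."""
    hits = sum(event(x, r) for x in xs for r in rs)
    return Fraction(hits, len(xs) * len(rs))

a = pr(lambda x, r: A(x, r) == f(x))
b = pr(lambda x, r: A(x, E(x)) == f(x))

rhs_a = rhs_b = Fraction(0)
for r in rs:
    for v in {Q(r, x) for x in xs}:
        cell = [x for x in xs if Q(r, x) == v]
        p_v = Fraction(len(cell), len(xs))
        # a-decomposition: R is uniform and independent of X,
        # so Pr[R=r | Q_r(X)=v] = 1/|rs|.
        good = sum(A(x, r) == f(x) for x in cell)
        rhs_a += p_v * Fraction(1, len(rs)) * Fraction(good, len(cell))
        # b-decomposition: condition on E(X) = r inside the rectangle.
        sel = [x for x in cell if E(x) == r]
        if sel:
            goodb = sum(A(x, r) == f(x) for x in sel)
            rhs_b += p_v * Fraction(len(sel), len(cell)) \
                         * Fraction(goodb, len(sel))

assert a == rhs_a and b == rhs_b
```

Exact rational arithmetic (`Fraction`) is used so the equalities are checked with no floating-point slack.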

  22. Proof (continued). Compare the two expansions term by term:
  a = Pr[A(X,R)=f(X)] = Σ_r Σ_v Pr[Q_r(X)=v] ∙ Pr[R=r | Q_r(X)=v] ∙ Pr[A(X,r)=f(X) | R=r ⋀ Q_r(X)=v]
  b = Pr[A(X,E(X))=f(X)] = Σ_r Σ_v Pr[Q_r(X)=v] ∙ Pr[E(X)=r | Q_r(X)=v] ∙ Pr[A(X,r)=f(X) | E(X)=r ⋀ Q_r(X)=v]

  23. Proof (continued).
  a = Pr[A(X,R)=f(X)] = Σ_r Σ_v Pr[Q_r(X)=v] ∙ Pr[R=r | Q_r(X)=v] ∙ Pr[A(X,r)=f(X) | R=r ⋀ Q_r(X)=v]
  b = Pr[A(X,E(X))=f(X)] = Σ_r Σ_v Pr[Q_r(X)=v] ∙ Pr[E(X)=r | Q_r(X)=v] ∙ Pr[A(X,r)=f(X) | E(X)=r ⋀ Q_r(X)=v]
  • Since R is uniform and independent of X, Pr[R=r | Q_r(X)=v] = 2^{−m}; since E is a 2-source extractor and {Q_r(X)=v} is a rectangle, Pr[E(X)=r | Q_r(X)=v] ≈ 2^{−m}. So the middle factors match. • Problem: it could be that A(∙, r) does well on the rectangle but poorly on {E(X)=r}. Note that A(∙, r) is constant over the rectangle; this would be fine if f were also constant over the rectangle Q_r(x1,x2)=v.

  24. Modifying the argument. We have Pr[A(X,R)=f(X)] ≥ 1−ρ. By Yao's lemma there exists a deterministic game F with complexity q such that Pr[F(X)=f(X)] ≥ 1−ρ. Consider the randomized algorithm A'(x, r) which • simulates A(x, r), and • simulates F(x). Let Q_r(x) denote the rectangle of A', and note that: • A(∙, r) is constant on the rectangle {Q_r(X)=v}. • F(x) is constant on the rectangle {Q_r(X)=v}.

  25. Proof (continued). (Slide 23 is shown again: the middle factors of the two expansions match, but A(∙, r) could do well on the rectangle and poorly on {E(X)=r}, which would be fine if f were constant over the rectangle.)

  26. Proof (replace f → F). Now F is constant over the rectangle Q_r(x1,x2)=v!
  a' = Pr[A(X,R)=F(X)], with |a'−a| ≤ ρ:
  a' = Σ_r Σ_v Pr[Q_r(X)=v] ∙ Pr[R=r | Q_r(X)=v] ∙ Pr[A(X,r)=F(X) | R=r ⋀ Q_r(X)=v]
  b' = Pr[A(X,E(X))=F(X)], with |b'−b| ≤ ρ:
  b' = Σ_r Σ_v Pr[Q_r(X)=v] ∙ Pr[E(X)=r | Q_r(X)=v] ∙ Pr[A(X,r)=F(X) | E(X)=r ⋀ Q_r(X)=v]
  As before, Pr[R=r | Q_r(X)=v] = 2^{−m} and Pr[E(X)=r | Q_r(X)=v] ≈ 2^{−m}. Moreover, since both A(∙, r) and F are constant on the rectangle, the event {A(X,r)=F(X)} is constant on the rectangle, so conditioning on {E(X)=r} does not change the last factor.

  27. Finishing up. Thm: if A is a randomized communication game with communication complexity q that tosses m random coins, then set B(x) = A(x, E(x)) where E is a 2-source extractor: • B succeeds on most inputs (a (1−ρ)-fraction). • B can be implemented by a deterministic communication game with communication complexity O(m+q). • But wait: 2-source extractors cannot be computed by communication games! • However, we only need extractors for "relatively large rectangles", namely 2-source extractors for min-entropy n−(m+q). • Each of the two parties can send the first 3(m+q) bits of its input; the sent strings have entropy rate at least ½. • Run an explicit 2-source extractor on these substrings. q.e.d.
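The entropy-rate claim behind the last two bullets can be checked numerically (parameters illustrative): if a flat source on n bits has min-entropy n−d, then any t-bit prefix has min-entropy at least t−d, because each prefix has at most 2^{n−t} extensions. With t = 3d the prefix has min-entropy 2d, i.e. rate ≥ 2/3 > ½.

```python
import itertools
import math
from collections import Counter

# If X is flat with min-entropy n - d, any t-bit prefix has min-entropy
# at least t - d; with t = 3d that is 2d, i.e. entropy rate >= 2/3.
def prefix_min_entropy(support, t):
    counts = Counter(x[:t] for x in support)
    return -math.log2(max(counts.values()) / len(support))

n, d = 8, 2
# Worst case for the prefix: fix d bits inside the prefix window.
support = [x for x in itertools.product((0, 1), repeat=n)
           if x[0] == x[1] == 0]                 # min-entropy n - d = 6
H = prefix_min_entropy(support, 3 * d)
assert H >= 2 * d
```

Here d stands in for the slide's m+q: a large side of a rectangle has min-entropy n−(m+q), so the 3(m+q)-bit prefixes the parties exchange retain enough entropy for an explicit 2-source extractor.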

  28. Generalizing the argument. • Consider, e.g., randomized decision trees A(x, r). • Define Q_r(x) to be the leaf that the decision tree A(∙, r) reaches when reading x. • Simply repeat the argument, noting that {Q_r(X)=v} is a bit-fixing source. • More generally, for any class of randomized algorithms we can set Q_r(x) = A(x, r). • The argument goes through if we can explicitly construct extractors for distributions that are uniform over {Q_r(X)=v} = {A(X,r)=v}. • Loosely speaking, we need extractors for sources recognizable by functions of the form A(∙, r). • There is a generic way to construct these from a function that cannot be approximated by functions of the form A(∙, r).
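The claim that {Q_r(X)=v} is a bit-fixing source can be checked directly for a toy decision tree (the depth-2 tree below is an arbitrary illustrative example, not from the talk): conditioned on reaching any leaf, the queried bits are fixed and every unqueried bit stays free and unbiased.

```python
import itertools

# Conditioning a uniform X on "the decision tree reaches leaf v" yields a
# bit-fixing source: q queried bits fixed, n - q bits free and uniform.
n, q = 5, 2

def leaf(x):
    """A toy depth-2 tree: query bit 0, then bit 1 or bit 2."""
    return ('L', x[1]) if x[0] == 0 else ('R', x[2])

xs = list(itertools.product((0, 1), repeat=n))
by_leaf = {}
for x in xs:
    by_leaf.setdefault(leaf(x), []).append(x)

for cell in by_leaf.values():
    # Exactly n - q bits of freedom in every leaf...
    assert len(cell) == 2 ** (n - q)
    # ...and every position is either fixed or an unbiased free bit.
    for i in range(n):
        col = [x[i] for x in cell]
        assert len(set(col)) == 1 or col.count(0) == col.count(1)
```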

  29. Conclusion and open problems. • Loosely speaking: whenever we have a function that is hard on average against a nonuniform version of a computational model, we get an explicit version of Yao's lemma (that is, explicit weak derandomization) for that model. • We can handle AC0 using the hardness of parity. • This gives a conditional weak derandomization for general poly-time algorithms; the assumption is incomparable to [NW, GW]. • Open problems: other ways to handle |r| > |x|; distributions that aren't uniform.

  30. That’s it…
