
On the tightness of Buhrman-Cleve-Wigderson simulation



  1. On the relation between decision tree complexity and communication complexity On the tightness of Buhrman-Cleve-Wigderson simulation Shengyu Zhang The Chinese University of Hong Kong

  2. Two concrete models • Two concrete models for studying complexity: • Decision tree complexity • Communication complexity

  3. Decision Tree Complexity • Task: compute f(x) • The input x can be accessed by querying the xi's • We only care about the number of queries made • Query (decision tree) complexity: min # of queries needed. • [Figure: decision tree for f(x1,x2,x3) = x1∧(x2∨x3): query x1; if x1 = 0 then f = 0; otherwise query x2; if x2 = 1 then f = 1; otherwise query x3 and output f = x3.]
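
To make the query counting concrete, here is a minimal Python sketch (illustrative, not from the slides) of the decision tree described above for f(x1,x2,x3) = x1∧(x2∨x3); the function name and the query-counting convention are my own.

```python
# Minimal sketch (illustrative, not from the slides): the decision tree above
# for f(x1,x2,x3) = x1 AND (x2 OR x3), counting how many input bits it queries.

def f_decision_tree(x):
    """Evaluate f by adaptive queries; return (value, number of queries)."""
    queries = 0

    def query(i):               # read x[i] and charge one query
        nonlocal queries
        queries += 1
        return x[i]

    if query(0) == 0:           # x1 = 0  =>  f = 0 after a single query
        return 0, queries
    if query(1) == 1:           # x1 = 1, x2 = 1  =>  f = 1
        return 1, queries
    return query(2), queries    # x1 = 1, x2 = 0  =>  f = x3

for x in [(0, 1, 1), (1, 1, 0), (1, 0, 1)]:
    print(x, f_decision_tree(x))   # worst case 3 queries, best case 1
```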

  4. Randomized/Quantum query models • Randomized query model: We can toss a coin to decide the next query. • Quantum query model: Instead of coin-tossing, we query the variables in superposition. • The query acts as |i, a, z⟩ → |i, a⊕xi, z⟩, where i is the position we are interested in, a is the register holding the queried variable, and z is the other part of the work space. • On a superposition: Σi,a,z αi,a,z |i, a, z⟩ → Σi,a,z αi,a,z |i, a⊕xi, z⟩. • DTD(f), DTR(f), DTQ(f): deterministic, randomized, and quantum query complexities.

  5. Communication complexity • [Yao79] Two parties, Alice and Bob, jointly compute a function F(x,y), with x known only to Alice and y only to Bob. • Communication complexity: how many bits need to be exchanged? --- CCD(F) • [Figure: Alice holds x, Bob holds y; they communicate and both output F(x,y).]

  6. Various modes • Randomized: Alice and Bob can toss coins, and a small error probability is allowed. --- CCR(F) • Quantum: Alice and Bob have quantum computers and send quantum messages. --- CCQ(F)

  7. Applications of CC • Though defined in an information-theoretic setting, communication complexity turns out to provide lower bounds for many computational models: • Data structures, circuit complexity, streaming algorithms, decision tree complexity, VLSI, algorithmic game theory, optimization, pseudo-randomness, …

  8. Question: Any relation between the two well-studied complexity measures?

  9. One simple bound • Composed functions: F(x,y) = f∘g(x,y) = f(g1(x(1),y(1)), …, gn(x(n), y(n))) • f is an n-bit function, each gi is a Boolean function. • x(i) is the i-th block of x. • [Thm*1] CC(F) = O(DT(f) · maxi CC(gi)). • A log factor is needed in the bounded-error randomized and quantum models. • Proof: Alice runs the DT algorithm for f(z). Whenever she wants zi, she computes gi(x(i),y(i)) by communicating with Bob. *1. H. Buhrman, R. Cleve, A. Wigderson. STOC, 1998.
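
The proof sketch above can be turned into a small simulation. The following Python sketch (names such as bcw_simulation and and2_protocol are mine, not from the paper) shows the Buhrman-Cleve-Wigderson idea: Alice runs a decision-tree algorithm for f and resolves each query zi by running a subprotocol for gi, so the total communication is at most DT(f) · maxi CC(gi) bits.

```python
# Hedged sketch of the simulation in the theorem above (helper names are
# illustrative).  Alice runs a decision-tree algorithm for f; every query z_i
# is answered by a two-party subprotocol for g_i, and we count the bits sent.

def bcw_simulation(dt_algorithm, g_protocols, x_blocks, y_blocks):
    """dt_algorithm(query) runs the tree, calling query(i) for needed bits;
    g_protocols[i](x_i, y_i) returns (value of g_i, bits exchanged)."""
    total_bits = 0

    def query(i):                      # resolve z_i = g_i(x^(i), y^(i))
        nonlocal total_bits
        value, bits = g_protocols[i](x_blocks[i], y_blocks[i])
        total_bits += bits
        return value

    return dt_algorithm(query), total_bits   # <= DT(f) * max_i CC(g_i) bits

# Toy instance: f(z1,z2) = z1 AND z2, each g_i = AND_2 computed by a 2-bit
# subprotocol (Alice sends x_i, Bob replies with x_i AND y_i).
def and2_protocol(xi, yi):
    return xi & yi, 2

def dt_for_and2(query):
    return query(0) and query(1)       # queries z_2 only if z_1 = 1

print(bcw_simulation(dt_for_and2, [and2_protocol, and2_protocol],
                     [1, 0], [1, 1]))  # -> (0, 4)
```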

  10. A lower bound method for DT • Composed functions: F(x,y) = f(g1(x(1),y(1)), …, gn(x(n), y(n))) • [Thm] CC(F) = O(DT(f) · maxi CC(gi)). • Turning the relation around, we have a lower bound for DT(f) by CC(f(g1, …, gn)): DT(f) = Ω(CC(F)/maxi CC(gi)). • In particular, if |Domain(gi)| = O(1), then DT(f) = Ω(CC(f∘g)).

  11. How tight is the bound? • Unfortunately, the bound is also known to be loose in general. • f = Parity, g = ⊕: F = Parity(x⊕y) • Obs: F = Parity(x) ⊕ Parity(y). • So CCD(F) = 1, while even DTQ(f) = Ω(n); the lower bound only gives DT(f) = Ω(1). • Similar examples: f = ANDn, g = AND2; f = ORn, g = OR2.
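
As a sanity check on the looseness example, here is a tiny Python sketch of the one-bit protocol F(x,y) = Parity(x) ⊕ Parity(y), assuming the convention that only Bob must output the answer (so a single transmitted bit suffices); names are illustrative.

```python
# Sketch of the looseness example: F(x,y) = Parity(x XOR y) splits as
# Parity(x) XOR Parity(y), so Alice only needs to send one bit.

def parity(bits):
    p = 0
    for b in bits:
        p ^= b
    return p

def one_bit_protocol(x, y):
    alice_msg = parity(x)           # the single bit Alice sends
    return alice_msg ^ parity(y)    # Bob outputs F(x, y)

x, y = [1, 0, 1, 1], [0, 1, 1, 0]
assert one_bit_protocol(x, y) == parity([a ^ b for a, b in zip(x, y)])
```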

  12. Tightness • Question: Can we choose gi's s.t. CC(f∘g) = Θ(DT(f) · maxi CC(gi))? • Question: Can we choose gi's with O(1) input size s.t. CC(f∘g) = Θ(DT(f))? • Theorem: ∃ gi ∈ {∨2, ∧2} s.t. CC(f∘g) = poly(DT(f)).

  13. More precisely • Theorem 1. For all Boolean functions f, maxgi∈{∧2,∨2} CCR(f∘g) = Ω(DTD(f)1/3) and maxgi∈{∧2,∨2} CCQ(f∘g) = Ω(DTD(f)1/6). • Theorem 2. For all monotone Boolean functions f, maxgi∈{∧2,∨2} CCR(f∘g) = Ω(DTD(f)1/2) and maxgi∈{∧2,∨2} CCQ(f∘g) = Ω(DTD(f)1/4). • Theorem 2 improves Theorem 1 both in the bounds and in the range of the max.

  14. Implications • A fundamental question: Are classical and quantum communication complexities polynomially related? • Largest known gap: quadratic (by Disjointness). • Corollary: For all Boolean functions f, maxgi∈{∧2,∨2} CCD(f∘g) = O((maxgi∈{∧2,∨2} CCQ(f∘g))6); for all monotone Boolean functions f, maxgi∈{∧2,∨2} CCD(f∘g) = O((maxgi∈{∧2,∨2} CCQ(f∘g))4). • This follows since CCD(f∘g) = O(DTD(f)) when each CC(gi) = O(1), while Theorems 1–2 lower bound CCQ(f∘g) by a power of DTD(f). • Sherstov 12

  15. Proof • [Block sensitivity] • f: function, x: input, xI (I⊆[n]): x with the variables in I flipped • bs(f,x): max number b of disjoint sets I1, …, Ib, flipping each of which changes the f-value (i.e. f(x) ≠ f(xIj) for each j). • bs(f): maxx bs(f,x) • DTD(f) = O(bs3(f)) for general Boolean f; DTD(f) = O(bs2(f)) for monotone Boolean f.
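
For small functions, block sensitivity as defined above can be computed by brute force. The sketch below (exponential time, for illustration only; helper names are mine, not from the slides) enumerates the sensitive blocks at each input and searches for the largest disjoint family.

```python
# Brute-force sketch of block sensitivity as defined above (small n only).
from itertools import combinations

def flip(x, block):
    """Return x with the coordinates in `block` flipped."""
    return tuple(b ^ 1 if i in block else b for i, b in enumerate(x))

def bs_at(f, x):
    """bs(f, x): largest family of disjoint blocks, each of which flips f(x)."""
    n = len(x)
    sensitive = [set(s) for r in range(1, n + 1)
                 for s in combinations(range(n), r)
                 if f(flip(x, set(s))) != f(x)]
    best = 0

    def search(count, used, remaining):      # enumerate disjoint families
        nonlocal best
        best = max(best, count)
        for k, blk in enumerate(remaining):
            if not (blk & used):
                search(count + 1, used | blk, remaining[k + 1:])

    search(0, set(), sensitive)
    return best

def bs(f, n):
    inputs = [tuple((v >> i) & 1 for i in range(n)) for v in range(2 ** n)]
    return max(bs_at(f, x) for x in inputs)

OR3 = lambda x: int(any(x))
print(bs(OR3, 3))   # -> 3, achieved at the all-0 input with singleton blocks
```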

  16. Through block sensitivity • Goal: maxgi∈{∧2,∨2} CCR(f∘g) = Ω(DTD(f)1/3) and maxgi∈{∧2,∨2} CCQ(f∘g) = Ω(DTD(f)1/6). • Known: DTD(f) = O(bs3(f)) for general Boolean f. • So it's enough to prove maxgi∈{∧2,∨2} CCR(f∘g) = Ω(bs(f)) and maxgi∈{∧2,∨2} CCQ(f∘g) = Ω(√bs(f)).

  17. Disjointness • Disj(x,y) = OR(x∧y). • UDisj(x,y): Disj with the promise that |x∧y| ≤ 1. • Theorem: CCR(UDisjn) = Θ(n)*1 and CCQ(UDisjn) = Θ(√n)*2. • Idea (for our proof): Pick gi's s.t. f∘g embeds an instance of UDisj(x,y) of size bs(f). *1: B. Kalyanasundaram and G. Schnitger, SIAM J. Discrete Math., 1992. Z. Bar-Yossef, T. Jayram, R. Kumar, D. Sivakumar, JCSS, 2004. A. Razborov, TCS, 1992. *2: A. Razborov, IM, 2003. A. Sherstov, SIAM J. Computing, 2009.

  18. bs is Unique-OR of flipping blocks • A protocol for f(g1, …, gn) gives a protocol for UDisjb, where b = bs(f). • From an input (x,y)∊{0,1}b×{0,1}b of UDisjb, construct an input (x',y')∊{0,1}n×{0,1}n of f∘g. • Suppose bs(f) is achieved at z with disjoint blocks I1, …, Ib. • i ∉ any block: x'i = y'i = zi, gi = ∧. • i ∊ Ij with zi = 0: x'i = xj, y'i = yj, gi = ∧. • i ∊ Ij with zi = 1: x'i = ¬xj, y'i = ¬yj, gi = ∨. • Then for all i ∊ Ij: gi(x'i, y'i) = zi if xj∧yj = 0, and gi(x'i, y'i) = ¬zi if xj∧yj = 1. • Hence ∃! j s.t. g(x',y') = zIj ⇔ ∃! j s.t. xj∧yj = 1. (A sketch of this construction follows below.)
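
The following Python sketch writes out the construction on this slide (function names embed and apply_gates are mine): it builds (x', y') and the gates gi from a UDisj instance (x, y), the sensitive input z, and the disjoint blocks, and checks that exactly the blocks Ij with xj∧yj = 1 get flipped in g(x', y').

```python
# Sketch of the embedding above: block I_j of z is flipped in g(x', y')
# exactly when x_j AND y_j = 1.
AND, OR = 'AND', 'OR'

def embed(x, y, z, blocks):
    n = len(z)
    xp, yp, g = list(z), list(z), [AND] * n   # outside every block: x'_i = y'_i = z_i, g_i = AND
    for j, block in enumerate(blocks):
        for i in block:
            if z[i] == 0:                     # AND stays 0 unless x_j = y_j = 1
                xp[i], yp[i], g[i] = x[j], y[j], AND
            else:                             # OR of negations stays 1 unless x_j = y_j = 1
                xp[i], yp[i], g[i] = 1 - x[j], 1 - y[j], OR
    return xp, yp, g

def apply_gates(g, xp, yp):
    return [a & b if gate == AND else a | b for gate, a, b in zip(g, xp, yp)]

# Toy check: z = 0000, blocks I_1 = {0,1}, I_2 = {2,3}; x AND y = (1,0), so
# only I_1 should be flipped.
z, blocks = [0, 0, 0, 0], [{0, 1}, {2, 3}]
xp, yp, g = embed([1, 0], [1, 1], z, blocks)
print(apply_gates(g, xp, yp))   # -> [1, 1, 0, 0], i.e. z with I_1 flipped
```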

  19. Concluding remarks • For monotone functions, observe that each sensitive block can be taken to contain only 0s or only 1s. • Using the pattern matrix method*1 and its extension*2, one can show that CCQ(f∘g) = Ω(degε(f)) for some constant-size functions g. • This improves the previous bound, since degε(f) = Ω(bs(f)1/2). *1: A. Sherstov, SIAM J. Computing, 2009. *2: T. Lee, S. Zhang, manuscript, 2008.

  20. About the embedding idea • Theorem*1. CCR(NAND-formula ∘ NAND) = Ω(n/8d), where d is the formula depth. • The simple idea of embedding a Disj instance was later applied to show depth-independent lower bounds: • CCR = Ω(n1/2). • CCQ = Ω(n1/4). • arXiv:0908.4453, with Jain and Klauck. *1: Leonardos and Saks, CCC, 2009. Jayram, Kopparty and Raghavendra, CCC, 2009.

  21. Question: Can we choose gi's s.t. CC(f∘g) = Θ(DT(f) · maxi CC(gi))?
