
Randomness Conductors




Presentation Transcript


  1. Randomness Conductors: Condensers, Expander Graphs, Universal Hash Functions, Randomness Extractors, …

  2. Randomness Conductors: Meta-Definition. [Figure: bipartite graph with N left vertices and M right vertices, left degree D; a probability distribution X on the left and an edge label induce a distribution X′ on the right.] f is an R-conductor if for every (k, k′) ∈ R: X has ≥ k bits of "entropy" ⇒ X′ has ≥ k′ bits of "entropy".

  3. Plan. Definitions & Applications: • The balanced case (M = N): vertex expansion; 2nd eigenvalue expansion. • The unbalanced case (M ≪ N): extractors, dispersers, condensers, conductors; universal hash functions. Constructions: • Zigzag product & lossless expanders.

  4. N N S, |S| K |(S)|  A |S| (A > 1) D (Bipartite) Expander Graphs Important: every (not too large) set expands.

  5. N N S, |S| K |(S)|  A |S| (A > 1) D (Bipartite) Expander Graphs • Main goal: minimize D(i.e. constant D) • Degree 3 random graphs are expanders! [Pin73]

  6. N N S, |S| K |(S)|  A |S| (A > 1) D (Bipartite) Expander Graphs Also: maximize A. • Trivial upper bound: A  D • even A ≲ D-1 • Random graphs: AD-1

  7. Applications of Expanders These “innocent” objects are intimately related to various fundamental problems: • Network design (fault tolerance), • Sorting networks, • Complexity and proof theory, • Derandomization, • Error correcting codes, • Cryptography, • Ramsey theory • And more ...

  8. Non-blocking Network with On-line Path Selection [ALM] N (Inputs) N (Outputs) Depth O(log N), size O(N log N), bounded degree. Allows connections between input nodes and output nodes using vertex-disjoint paths.

  9. Non-blocking Network with On-line Path Selection [ALM] N (Inputs) N (Outputs) • Every request for connection (or disconnection) is satisfied in O(log N) bit steps. • On-line: handles many requests in parallel.

  10. The Network. [Figure: a "lossless" expander between consecutive layers of N (Inputs).]

  11. N M= N D S, |S| K |(S)| 0.9 D |S| 0< 1 is an arbitrary constant D is constant & K= (M/D) =  (N/D). Slightly Unbalanced, “Lossless” Expanders [CRVW 02]: such expanders (with D = polylog(1/))

  12. Property 1: A Very Strong Unique-Neighbor Property. ∀ S, |S| ≤ K: |Γ(S)| ≥ 0.9·D·|S| ⇒ S has ≥ 0.8·D·|S| unique neighbors! [Figure: a unique neighbor of S vs. a non-unique neighbor.] (Counting: of the ≥ 0.9·D·|S| distinct neighbors, at most 0.1·D·|S| can receive two or more of the D·|S| edges leaving S, so ≥ 0.8·D·|S| receive exactly one.)

  13. Using Unique Neighbors for Distributed Routing. Task: match S to its neighbors (|S| ≤ K). Step I: match each vertex of S to one of its unique neighbors. Continue recursively with the set S′ of unmatched vertices.

  14. Reminder: The Network. Adding new paths: think of vertices used by previous paths as faulty.

  15. Property 2: Incredibly Fault Tolerant. ∀ S, |S| ≤ K: |Γ(S)| ≥ 0.9·D·|S|. Remains a lossless expander even if an adversary removes 0.7·D edges from each vertex.

  16. Simple Expander Codes [G63, Z71, ZP76, T81, SS96]. [Figure: bipartite graph with N (Variables) on the left and M = εN (Parity Checks) on the right.] Linear code; rate = 1 − M/N = 1 − ε. Minimum distance ≥ K; relative distance ≥ K/N = Ω(ε/D) = ε/polylog(1/ε). For small ε this beats the Zyablov bound and is quite close to the Gilbert–Varshamov bound of ε/log(1/ε).

  17. Simple Decoding Algorithm in Linear Time (& log n parallel phases) [SS 96]. [Figure: N (Variables), M = εN (Constraints); error set B with |B| ≤ K/2, so |Γ(B)| > 0.9·D·|B| and |Γ(B) ∩ Sat| < 0.2·D·|B|.] • Algorithm: in each phase, flip every variable that "sees" a majority of 1's (i.e., unsatisfied constraints). • Analysis: |Flip \ B| ≤ |B|/4 and |B \ Flip| ≤ |B|/4, so |B_new| ≤ |B|/2.
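The parallel bit-flipping phase described on this slide can be sketched directly. A minimal Python version, assuming checks are given as lists of variable indices (the representation and the round cap are illustrative choices, not from [SS 96]):

```python
def flip_decode(checks, word, rounds=10):
    """Parallel bit-flipping decoding (sketch). checks: list of parity
    checks, each a list of variable indices; word: list of 0/1 values.
    In each phase, flip every variable for which a strict majority of its
    checks are unsatisfied. Returns the (possibly corrected) word."""
    word = list(word)
    var_checks = {}
    for c, vs in enumerate(checks):
        for v in vs:
            var_checks.setdefault(v, []).append(c)
    for _ in range(rounds):
        unsat = [sum(word[v] for v in vs) % 2 == 1 for vs in checks]
        if not any(unsat):
            break                         # already a codeword
        flips = [v for v, cs in var_checks.items()
                 if 2 * sum(unsat[c] for c in cs) > len(cs)]
        if not flips:
            break                         # stuck: too many errors
        for v in flips:
            word[v] ^= 1                  # flip in parallel
    return word
```

By the slide's analysis, on a lossless expander each phase at least halves the error set, so O(log n) phases suffice.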

  18. Random Walk on Expanders [AKS 87]. [Figure: a walk x0, x1, x2, …, xi on the graph.] xi converges to uniform fast (for an arbitrary x0). For a random x0: the sequence x0, x1, x2, … has interesting "random-like" properties.
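The fast convergence claimed here is easy to observe numerically by evolving the exact walk distribution. A small Python sketch (the adjacency-list representation is an assumption; any regular graph works):

```python
def walk_distribution(neighbors, steps, start=0):
    """Exact distribution of a `steps`-step random walk from `start`.
    neighbors[v] lists v's neighbors (a D-regular graph in the
    expander setting)."""
    n = len(neighbors)
    dist = [0.0] * n
    dist[start] = 1.0
    for _ in range(steps):
        new = [0.0] * n
        for v, p in enumerate(dist):
            for u in neighbors[v]:
                new[u] += p / len(neighbors[v])   # one step of P
        dist = new
    return dist
```

On a good expander the distance to uniform shrinks by a factor λ(G) per step, which is exactly the "2nd eigenvalue" viewpoint of the following slides.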

  19. Expanders Add Entropy. [Figure: bipartite graph with N left vertices, M right vertices, left degree D; Prob. dist. X, Induced dist. X′.] • The definition we gave: |Support(X′)| ≥ A·|Support(X)|. • Applications of the random walk rely on "less naïve" measures of entropy. • Almost all explicit constructions directly give "2nd eigenvalue expansion". • Can be interpreted in terms of Rényi entropy.

  20. 2nd Eigenvalue Expansion. G – undirected D-regular graph on N vertices. • P = (P_{i,j}) – the (symmetric) transition-probability matrix: P_{i,j} = (# edges between i and j in G) / D. • Goal: if π ∈ [0,1]^N is a (non-uniform) distribution on the vertices of G, then πP is "closer to uniform".

  21. 2nd Eigenvalue Expansion. • λ_0 ≥ λ_1 ≥ … ≥ λ_{N−1}: the eigenvalues of P. • λ_0 = 1, with corresponding eigenvector v_0 = (1/N)·1 (the uniform distribution): P(Uniform) = Uniform. • Second eigenvalue (in absolute value): λ = λ(G) = max{|λ_1|, |λ_{N−1}|}. • G connected and non-bipartite ⇔ λ < 1. • λ is a good measure of the expansion of G [Tan84, AM84, Alo86]. Qualitatively: G is an expander ⇔ λ(G) ≤ β for some constant β < 1.
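For small graphs, λ(G) can be estimated without any linear-algebra library by power iteration on the subspace orthogonal to the uniform eigenvector. A pure-Python sketch (the iteration count and starting vector are arbitrary choices; the graph is assumed regular):

```python
def second_eigenvalue(neighbors, iters=200):
    """Estimate lambda(G) = max(|lambda_1|, |lambda_{N-1}|) for a D-regular
    graph given as adjacency lists, via power iteration orthogonal to the
    uniform (top) eigenvector."""
    n = len(neighbors)
    d = len(neighbors[0])                 # assumes regularity
    # Start from a vector orthogonal to the all-ones eigenvector.
    x = [1.0 if i == 0 else -1.0 / (n - 1) for i in range(n)]
    lam = 0.0
    for _ in range(iters):
        y = [0.0] * n
        for v in range(n):
            for u in neighbors[v]:
                y[u] += x[v] / d          # y = P x
        mean = sum(y) / n
        y = [t - mean for t in y]         # re-project off the uniform part
        norm_y = sum(t * t for t in y) ** 0.5
        if norm_y == 0.0:
            return 0.0
        lam = norm_y / (sum(t * t for t in x) ** 0.5)
        x = [t / norm_y for t in y]       # normalize for the next round
    return lam
```

For the complete graph K_4 this returns 1/3, and for the 4-cycle (connected but bipartite) it returns 1, matching the "connected and non-bipartite ⇔ λ < 1" bullet above.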

  22. Randomness Conductors. • Expanders, extractors, condensers & hash functions are all functions f : [N] × [D] → [M] that transform: X "of entropy" k ⇒ X′ = f(X, Uniform) "of entropy" k′. • Many flavors: measure of entropy; balanced vs. unbalanced; lossless vs. lossy; lower vs. upper bound on k; is X′ close to uniform? … • Randomness conductors: as in extractors, but allowing the entire spectrum of flavors.
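One concrete member of the "universal hash functions" box is the standard Carter–Wegman family h_{a,b}(x) = ((a·x + b) mod p) mod m; the random seed (a, b) plays the role of the second argument of a conductor f : [N] × [D] → [M]. A minimal Python sketch (this particular family and the parameters p, m are a standard textbook choice, not taken from the slides):

```python
import random

def sample_hash(p, m, rng):
    """Sample h(x) = ((a*x + b) mod p) mod m from the Carter-Wegman
    2-universal family; p should be a prime > the universe size, m the
    output range. The pair (a, b) is the random seed."""
    a = rng.randrange(1, p)
    b = rng.randrange(p)
    return lambda x: ((a * x + b) % p) % m

rng = random.Random(0)
h = sample_hash(p=101, m=16, rng=rng)
```

For any fixed x ≠ y, the collision probability over the seed is close to 1/m, which is what makes such families useful as (lossy) conductors.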
