
Graph Sparsifiers by Edge-Connectivity and Random Spanning Trees



  1. Graph Sparsifiers by Edge-Connectivity and Random Spanning Trees. Nick Harvey, U. Waterloo C&O. Joint work with Isaac Fung.

  2. What is the max flow from s to t? [Figure: a small capacitated graph between s and t, with edge capacities 3, 4, 5, 6, and 7.]

  3. What is the max flow from s to t? The answer in this graph is the same: it's a Gomory-Hu tree. What is the capacity of all edges incident on u? [Figure: a tree through vertex u, with edge capacities 10 and 15.]

  4. Can any dense graph be "approximated" by a sparse graph? (n = # vertices)
  • Approximating by trees
  • Low-stretch trees: # edges = n−1; "most" distances approximated to within log n. [FRT’04]
  • Low-congestion trees: # edges = n−1; "most" cuts approximated to within log n. [R’08]
  • Approximating all cuts
  • Sparsifiers: # edges = O(n log n / ε²); every cut approximated to within 1+ε. [BK’96]
  • Spectral approximation
  • Spectral sparsifiers: # edges = O(n log n / ε²); the entire spectrum approximated to within 1+ε. [SS’08]
  • (Both edge bounds were subsequently improved by [BSS’09].)

  5. What is the point of all this?
  • Approximating pairwise distances
  • Low-stretch / low-congestion trees: approximating metrics by simpler metrics
  • Approximation algorithms
  • Online algorithms

  6. What is the point of all this? (n = # vertices, m = # edges, v = flow value)
  • Approximating all cuts
  • Sparsifiers: fast algorithms for cut/flow problems

  7. What is the point of all this?
  • Spectral approximation
  • Spectral sparsifiers: solving diagonally-dominant linear systems in nearly linear time!
  • Also: dimensionality reduction in L1, restricted invertibility, ...

  8. Graph Sparsifiers: formal problem statement (n = # vertices, m = # edges)
  • Design an algorithm such that
  • Input: an undirected graph G=(V,E)
  • Output: a weighted subgraph H=(V,F,w), where F ⊆ E and w : F → R
  • Goals:
  • | |δ_G(U)| − w(δ_H(U)) | ≤ ε·|δ_G(U)| for all U ⊆ V
    (here |δ_G(U)| = # edges between U and V\U in G, and w(δ_H(U)) = weight of edges between U and V\U in H)
  • |F| = O(n log n / ε²)
  • Running time = Õ(m / ε²)

  9. Why should sparsifiers exist?
  • Example: G = complete graph K_n
  • Sampling: construct H by sampling every edge of G with probability p = 100 log n / n
  • Properties of H:
  • # sampled edges = O(n log n)
  • Standard fact: H is connected
  • Stronger fact: p·|δ_G(U)| ≈ |δ_H(U)| for all U ⊆ V
  • Output: H with each edge given weight 1/p
  • Then H is a sparsifier of G
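This uniform-sampling experiment is easy to run. A minimal sketch (the function names and the smaller constant c are mine, chosen so that p < 1 at this scale; the slide's constant is 100):

```python
import math
import random

def sparsify_complete_graph(n, c=20, seed=0):
    """Sample each edge of K_n independently with probability
    p = c*log(n)/n; kept edges get weight 1/p, so every cut's
    sampled weight is unbiased."""
    rng = random.Random(seed)
    p = min(1.0, c * math.log(n) / n)
    F = [(u, v) for u in range(n) for v in range(u + 1, n)
         if rng.random() < p]
    return F, 1.0 / p

def cut_weight(F, w, U):
    """Total weight of sampled edges crossing the vertex set U."""
    U = set(U)
    return sum(w for (u, v) in F if (u in U) != (v in U))

n = 300
F, w = sparsify_complete_graph(n)
# In K_n, the cut around any k-set has exactly k*(n-k) edges,
# so cut_weight should concentrate near that value.
U = range(40)
exact = 40 * (n - 40)
approx = cut_weight(F, w, U)
```

For a single cut the sampled weight lands within a few percent of the true value; the content of the slide is that this holds for all 2^n cuts simultaneously.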

  10. Chernoff Bound: let X₁, X₂, ... be {0,1} random variables. Let X = Σᵢ Xᵢ and let μ = E[X]. For any δ ∈ [0,1], Pr[ |X−μ| ≥ δμ ] ≤ 2·exp(−δ²μ/3).
  • Consider any cut C = δ_G(U) with |U| = k (WLOG k ≤ n/2). Then |δ_G(U)| ≥ kn/2.
  • Let X_e = 1 if edge e is sampled. Let X = Σ_{e∈C} X_e = |δ_H(U)|.
  • Then μ = E[X] = p·|δ_G(U)| ≥ 50 k log n.
  • Say the cut fails if |X−μ| ≥ μ/2.
  • So Pr[ cut fails ] ≤ 2·exp(−μ/12) ≤ n^(−4k).
  • # of cuts with |U| = k is C(n,k) ≤ n^k.
  • So Pr[ any cut fails ] ≤ Σ_k n^k · n^(−4k) = Σ_k n^(−3k) < n^(−2).
  • So, whp, every U has | |δ_H(U)| − p·|δ_G(U)| | < p·|δ_G(U)| / 2.
  Key ingredients: Chernoff bound, bound on # small cuts, union bound.
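The step from μ ≥ 50·k·log n to the n^(−4k) failure probability is pure arithmetic, and can be checked numerically (a verification snippet of my own, not part of the slides):

```python
import math

def failure_bound(n, k):
    """Chernoff failure probability for one cut with |U| = k,
    plugging in the slide's lower bound mu = 50*k*log(n):
    2*exp(-mu/12) = 2*n^(-25k/6)."""
    mu = 50 * k * math.log(n)
    return 2 * math.exp(-mu / 12)

# 2*n^(-25k/6) <= n^(-4k) as soon as n^(k/6) >= 2,
# i.e. for all n >= 64 (taking k >= 1).
```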

  11. Generalize to arbitrary G?
  • Can't sample edges with the same probability!
  • Idea [BK’96]: sample low-connectivity edges with high probability, and high-connectivity edges with low probability.
  [Figure annotations: "eliminate most of these"; "keep this".]

  12. Non-uniform sampling algorithm [BK’96]
  • Input: graph G=(V,E), parameters p_e ∈ [0,1]
  • Output: a weighted subgraph H=(V,F,w), where F ⊆ E and w : F → R
  • For i = 1 to ρ:
  •   For each edge e ∈ E:
  •     With probability p_e, add e to F and increase w_e by 1/(ρ·p_e)
  • Main question: can we choose ρ and the p_e's to achieve the sparsification goals?

  13. Non-uniform sampling algorithm [BK’96]
  • Input: graph G=(V,E), parameters p_e ∈ [0,1]
  • Output: a weighted subgraph H=(V,F,w), where F ⊆ E and w : F → R
  • For i = 1 to ρ:
  •   For each edge e ∈ E:
  •     With probability p_e, add e to F and increase w_e by 1/(ρ·p_e)
  • Claim: H perfectly approximates G in expectation!
  • For any e ∈ E, E[ w_e ] = 1
  • ⇒ For every U ⊆ V, E[ w(δ_H(U)) ] = |δ_G(U)|
  • Goal: show every w(δ_H(U)) is tightly concentrated
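The sampling loop above transcribes directly into code. A sketch of my own, taking the probabilities p_e as given:

```python
import random
from collections import defaultdict

def nonuniform_sparsify(edges, p, rho, seed=0):
    """[BK'96]-style skeleton: rho independent rounds; each round
    keeps edge e with probability p[e] and adds 1/(rho*p[e]) to
    its weight. A round contributes p[e] * 1/(rho*p[e]) = 1/rho
    in expectation, so E[w_e] = 1 for every edge."""
    rng = random.Random(seed)
    w = defaultdict(float)
    for _ in range(rho):
        for e in edges:
            if rng.random() < p[e]:
                w[e] += 1.0 / (rho * p[e])
    return dict(w)
```

The unbiasedness claim is visible in the update rule itself: the weight increment is exactly the reciprocal of the total expected number of times the edge is kept.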

  14. Prior work (assume ε is constant)
  • Benczur-Karger ’96
  • Set ρ = O(log n), p_e = 1/“strength” of edge e (the max k such that e is contained in a k-edge-connected vertex-induced subgraph of G). [Slide aside: “What on earth is this? Similar to edge connectivity.”]
  • All cuts are preserved
  • Σ_e p_e ≤ n ⇒ |F| = O(n log n)
  • Running time is O(m log³ n)
  • Spielman-Srivastava ’08
  • Set ρ = O(log n), p_e = “effective resistance” of edge e (view G as an electrical network where each edge is a 1-ohm resistor)
  • H is a spectral sparsifier of G ⇒ all cuts are preserved
  • Σ_e p_e = n−1 ⇒ |F| = O(n log n)
  • Running time is O(m log^50 n), improved to O(m log³ n) by [Koutis-Miller-Peng ’10]
  • Uses powerful tools from geometric functional analysis

  15. Our work (assume ε is constant)
  • Fung-Harvey ’10 (and independently Hariharan-Panigrahi ’10)
  • Set ρ = O(log² n), p_e = 1/(edge connectivity of e) (the min size of a cut that contains e)
  • Advantages: edge connectivities are natural and easy to compute; implies the previous algorithms (except spectral sparsification)
  • All cuts are preserved
  • Σ_e p_e ≤ n ⇒ |F| = O(n log² n)
  • Running time is O(m log² n)
  • Alternative algorithm
  • Let H be the union of ρ uniformly random spanning trees of G, where w_e is 1/(ρ·(effective resistance of e))
  • All cuts are preserved
  • |F| = O(n log² n)

  16. Notation: k_uv = min size of a cut separating u and v
  • Main ideas:
  • Partition edges into connectivity classes E = E_1 ∪ E_2 ∪ ... ∪ E_{log n}, where E_i = { e : 2^(i−1) ≤ k_e < 2^i }

  17. Notation: k_uv = min size of a cut separating u and v
  • Main ideas:
  • Partition edges into connectivity classes E = E_1 ∪ E_2 ∪ ... ∪ E_{log n}, where E_i = { e : 2^(i−1) ≤ k_e < 2^i }
  • Prove that the weight of sampled edges that each cut takes from each connectivity class is about right
  • This yields a sparsifier
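The partition into connectivity classes is a one-liner once the connectivities k_e are known (computing them is the expensive part, e.g. via max-flow; the helper name here is mine):

```python
from collections import defaultdict

def connectivity_classes(edges, k):
    """Partition edges into E_i = { e : 2^(i-1) <= k_e < 2^i }.
    k maps each edge to the edge connectivity of its endpoints;
    for a positive integer, bit_length() is exactly the i with
    2^(i-1) <= k_e < 2^i."""
    classes = defaultdict(list)
    for e in edges:
        classes[k(e).bit_length()].append(e)
    return dict(classes)
```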

  18. Notation:
  • C = δ(U) is a cut
  • C_i = δ(U) ∩ E_i is a cut-induced set
  • Need to prove: the weight of sampled edges that each cut takes from each connectivity class is about right

  19. Notation: C_i = δ(U) ∩ E_i is a cut-induced set. Prove concentration for every cut-induced set C_i.
  • Key ingredients:
  • Chernoff bound: prove the failure probability for each cut-induced set is small
  • Bound on # small cuts: prove that #{ cut-induced sets C_i induced by a small cut C } is small
  • Union bound: the sum of failure probabilities is small, so probably no failures

  20. Counting Small Cut-Induced Sets
  • Theorem: Let G=(V,E) be a graph. Fix any B ⊆ E. Suppose k_e ≥ K for all e ∈ B. (k_uv = min size of a cut separating u and v.) Then, for every α ≥ 1, |{ δ(U) ∩ B : |δ(U)| ≤ αK }| < n^(2α).
  • Corollary (Counting Small Cuts) [K’93]: Let G=(V,E) be a graph. Let K be the edge connectivity of G (i.e., the global min cut value). Then, for every α ≥ 1, |{ δ(U) : |δ(U)| ≤ αK }| < n^(2α).

  21. Comparison (slightly unfair)
  • Theorem: Let G=(V,E) be a graph. Fix any B ⊆ E. Suppose k_e ≥ K for all e ∈ B. Then |{ δ(U) ∩ B : |δ(U)| ≤ c }| < n^(2c/K) for all c ≥ 1.
  • Corollary [K’93]: Let G=(V,E) be a graph. Let K be the edge connectivity of G. Then |{ δ(U) : |δ(U)| ≤ c }| < n^(2c/K) for all c ≥ 1.
  • How many cuts of size 1? The Theorem says < n², taking K = c = 1. The Corollary says < 1, because K = 0.

  22. Algorithm For Finding Needle in Haystack • Input: A haystack • Output: A needle (maybe) • While haystack not too small • Pick a random handful • Throw it away • End While • Output whatever is left

  23. Algorithm for Finding a Min Cut [K’93]
  • Input: a graph
  • Output: a minimum cut (maybe)
  • While graph has > 2 vertices ("not too small")
  •   Pick an edge at random ("random handful")
  •   Contract it ("throw it away")
  • End While
  • Output remaining edges
  • Claim: for any min cut, this algorithm outputs it with probability ≥ 1/n².
  • Corollary: there are ≤ n² min cuts.
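The contraction algorithm fits in a few lines using union-find to track super-vertices. A sketch of my own (repeating the basic trial many times, since each one succeeds only with probability ≥ 1/n²):

```python
import random

def karger_trial(n, edges, seed=None):
    """One run of Karger's contraction [K'93]: contract uniformly
    random edges until two super-vertices remain; the surviving
    crossing edges form a cut."""
    rng = random.Random(seed)
    parent = list(range(n))
    def find(x):
        while parent[x] != x:
            parent[x] = parent[parent[x]]  # path halving
            x = parent[x]
        return x
    remaining = n
    while remaining > 2:
        u, v = edges[rng.randrange(len(edges))]
        ru, rv = find(u), find(v)
        if ru != rv:          # self-loops are skipped, not contracted
            parent[ru] = rv
            remaining -= 1
    return [(u, v) for (u, v) in edges if find(u) != find(v)]

def min_cut_whp(n, edges, trials=300, seed=0):
    """Repeat the trial and keep the smallest cut found; O(n^2 log n)
    trials make failure unlikely."""
    best = edges
    for t in range(trials):
        cut = karger_trial(n, edges, seed + t)
        if len(cut) < len(best):
            best = cut
    return best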

  24. Finding a Small Cut-Induced Set
  • Input: a graph G=(V,E), and B ⊆ E
  • Output: a cut-induced subset of B
  • While graph has > 2 vertices
  •   If some vertex v has no incident edges in B:
  •     Split off all edges at v and delete v
  •   Pick an edge at random
  •   Contract it
  • End While
  • Output remaining edges in B
  • Claim: for any min cut-induced subset of B, this algorithm outputs it with probability > 1/n².
  • Corollary: there are < n² min cut-induced subsets of B.

  25. Sparsifiers from Random Spanning Trees
  • Let H be the union of ρ = log² n uniform random spanning trees, where w_e is 1/(ρ·(effective resistance of e))
  • Then all cuts are preserved and |F| = O(n log² n)
  • Why does this work?
  • Pr_T[ e ∈ T ] = effective resistance of edge e
  • Similar to the usual independent sampling algorithm, with p_e = effective resistance of e
  • Key difference: edges in a random spanning tree are not independent.
  • But they are negatively correlated! That is enough to make Chernoff bounds work.
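Uniform spanning trees themselves are easy to sample. A minimal sketch via the Aldous-Broder random walk (my own illustration; computing the weights 1/(ρ·R_eff(e)) would additionally require effective resistances, i.e. a Laplacian solve, which is omitted here):

```python
import random

def aldous_broder_ust(adj, seed=None):
    """Uniformly random spanning tree of a connected graph
    (Aldous-Broder): random-walk until every vertex has been
    visited; the first-entry edge into each new vertex forms
    a uniform spanning tree."""
    rng = random.Random(seed)
    n = len(adj)
    cur = rng.randrange(n)
    visited = {cur}
    tree = []
    while len(visited) < n:
        nxt = rng.choice(adj[cur])
        if nxt not in visited:
            visited.add(nxt)
            tree.append((cur, nxt))
        cur = nxt
    return tree
```

Sampling ρ = log² n such trees and summing the weights gives the alternative sparsifier described on the slide.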

  26. Conclusions
  • Graph sparsifiers are important for fast algorithms and for some combinatorial theorems
  • Sampling by edge connectivities gives a sparsifier with O(n log² n) edges
  • Improvements: O(n log n) edges in O(m + n log^3.5 n) time [joint with Hariharan and Panigrahi]
  • Also true for sampling by effective resistances ⇒ sampling O(log² n) random spanning trees gives a sparsifier.
  Questions
  • Improve log² n to log n?
  • Does sampling o(log n) random trees give a sparsifier?

  27. Analysis of the Min Cut Algorithm
  • While graph has > 2 vertices ("not too small")
  •   Pick an edge uv at random ("random handful")
  •   Contract it ("throw it away")
  • End While
  • Output remaining edges
  • Fix some min cut. Say it has k edges.
  • If the algorithm doesn't contract any edge in this cut, then the algorithm outputs this cut.
  • When contracting edge uv, both u and v are on the same side of the cut.
  • So what is the probability that this happens?

  28. Initially there are n vertices.
  • Claim 1: # edges in min cut = k ⇒ every vertex has degree ≥ k ⇒ total # edges ≥ nk/2
  • Pr[random edge is in min cut] = (# edges in min cut) / (total # edges) ≤ k / (nk/2) = 2/n

  29. Now there are n−1 vertices.
  • Claim 2: the min cut in the remaining graph is ≥ k.
  • Why? Every cut in the remaining graph is also a cut in the original graph.
  • So Pr[ random edge is in min cut ] ≤ 2/(n−1)

  30. In general, when there are i vertices left, Pr[ random edge is in min cut ] ≤ 2/i.
  • So Pr[ alg never contracts an edge in the min cut ] ≥ ∏_{i=3}^{n} (1 − 2/i) = 2/(n(n−1)) ≥ 1/n².
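The telescoping product can be checked exactly with rational arithmetic (a verification snippet of my own):

```python
from fractions import Fraction
from math import prod

def survival_prob(n):
    """prod_{i=3}^{n} (1 - 2/i): a lower bound on the probability
    that no edge of a fixed min cut is ever contracted, computed
    exactly. The numerators 1*2*...*(n-2) cancel against the
    denominators 3*4*...*n, leaving 2/(n*(n-1))."""
    return prod(Fraction(i - 2, i) for i in range(3, n + 1))
```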
