
Graph Sparsifiers : A Survey






Presentation Transcript


  1. Graph Sparsifiers: A Survey Based on work by: Batson, Benczur, de Carli Silva, Fung, Hariharan, Harvey, Karger, Panigrahi, Sato, Spielman, Srivastava and Teng Nick Harvey, UBC

  2. Approximating Dense Objects by Sparse Ones • Floor joists • Image compression

  3. Approximating Dense Graphs by Sparse Ones • Spanners: approximate distances to within a factor α using only O(n^(1+2/α)) edges • Low-stretch trees: approximate most distances to within O(log n) using only n−1 edges (n = # vertices)

  4. Overview • Definitions • Cut & Spectral Sparsifiers • Applications • Cut Sparsifiers • Spectral Sparsifiers • A random sampling construction • Derandomization

  5. Cut Sparsifiers (Karger ‘94) • Input: an undirected graph G=(V,E) with weights u : E → ℝ+ • Output: a subgraph H=(V,F) of G with weights w : F → ℝ+ such that |F| is small and w(δ_H(U)) = (1±ε) u(δ_G(U)) ∀ U ⊆ V, where w(δ_H(U)) is the weight of edges between U and V\U in H, and u(δ_G(U)) is the weight of edges between U and V\U in G

  6. Cut Sparsifiers (Karger ‘94) • Input: an undirected graph G=(V,E) with weights u : E → ℝ+ • Output: a subgraph H=(V,F) of G with weights w : F → ℝ+ such that |F| is small and w(δ_H(U)) = (1±ε) u(δ_G(U)) ∀ U ⊆ V, where w(δ_H(U)) is the weight of edges between U and V\U in H, and u(δ_G(U)) is the weight of edges between U and V\U in G

  7. Generic Application of Cut Sparsifiers • Without sparsification: (dense) input graph G → (slow) algorithm A for some problem P (min s-t cut, sparsest cut, max cut, …) → exact/approx output • With sparsification: G → (efficient) sparsification algorithm S → sparse graph H, approximately preserving the solution of P → algorithm A (now faster) → approximate output

  8. Relation to Expander Graphs • A graph H on V is an expander if, for some constant c, |δ_H(U)| ≥ c|U| ∀ U⊆V with |U| ≤ n/2 • Let G be the complete graph on V. If we give all edges of H weight w = n, then w(δ_H(U)) ≥ cn|U| ≈ c|δ_G(U)| ∀ U⊆V with |U| ≤ n/2 • So expanders are similar to sparsifiers of the complete graph

  9. Relation to Expander Graphs • Simple random construction: the Erdos-Renyi graph G_{n,p} is an expander with high probability if p = Θ(log(n)/n). This gives an expander with Θ(n log n) edges with high probability. But aren’t there much better expanders?

  10. Spectral Sparsifiers (Spielman-Teng ‘04) • Input: an undirected graph G=(V,E) with weights u : E → ℝ+ • Def: the Laplacian is the matrix L_G such that x^T L_G x = Σ_{st∈E} u_st (x_s − x_t)² ∀ x ∈ ℝ^V • L_G is positive semidefinite since this sum is ≥ 0 • Example: electrical networks • View edge st as a resistor of resistance 1/u_st • Impose voltage x_v at every vertex v • Ohm’s Power Law: P = V²/R • Power consumed on edge st is u_st (x_s − x_t)² • Total power consumed is x^T L_G x
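The Laplacian definition on this slide can be checked directly in a few lines of pure Python. This is an illustrative sketch (the example graph, weights, and helper names are made up, not from the slides):

```python
def laplacian(n, weighted_edges):
    """Dense Laplacian L_G of a weighted graph, as a list of lists.
    weighted_edges is a list of (s, t, u_st) triples."""
    L = [[0.0] * n for _ in range(n)]
    for s, t, u in weighted_edges:
        L[s][s] += u
        L[t][t] += u
        L[s][t] -= u
        L[t][s] -= u
    return L

def quad_form(L, x):
    """x^T L x."""
    n = len(x)
    return sum(x[i] * L[i][j] * x[j] for i in range(n) for j in range(n))

edges = [(0, 1, 2.0), (1, 2, 1.0), (0, 2, 3.0)]  # a weighted triangle
L = laplacian(3, edges)

x = [0.5, -1.0, 2.0]  # arbitrary "voltages"
power = sum(u * (x[s] - x[t]) ** 2 for s, t, u in edges)
assert abs(quad_form(L, x) - power) < 1e-9  # x^T L_G x = total power

# restricting x to a {0,1} indicator vector of U recovers the cut weight
x01 = [1.0, 0.0, 1.0]  # U = {0, 2}; edges 01 and 12 cross the cut
assert abs(quad_form(L, x01) - (2.0 + 1.0)) < 1e-9
```

The last check is exactly the observation on the next slide: spectral sparsification restricted to {0,1}-vectors implies cut sparsification.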

  11. Spectral Sparsifiers (Spielman-Teng ‘04) • Input: an undirected graph G=(V,E) with weights u : E → ℝ+ • Def: the Laplacian is the matrix L_G such that x^T L_G x = Σ_{st∈E} u_st (x_s − x_t)² ∀ x ∈ ℝ^V • Output: a subgraph H=(V,F) of G with weights w : F → ℝ such that |F| is small and x^T L_H x = (1±ε) x^T L_G x ∀ x ∈ ℝ^V • Spectral sparsifier ⇒ (restrict to {0,1}-vectors) ⇒ cut sparsifier: w(δ_H(U)) = (1±ε) u(δ_G(U)) ∀ U ⊆ V

  12. Cut vs. Spectral Sparsifiers • Number of constraints: • Cut: w(δ_H(U)) = (1±ε) u(δ_G(U)) ∀ U⊆V (2^n constraints) • Spectral: x^T L_H x = (1±ε) x^T L_G x ∀ x∈ℝ^V (infinitely many constraints) • The spectral constraints are SDP feasibility constraints: (1−ε) x^T L_G x ≤ x^T L_H x ≤ (1+ε) x^T L_G x ∀ x∈ℝ^V ⇔ (1−ε) L_G ≼ L_H ≼ (1+ε) L_G, where X ≼ Y means Y−X is positive semidefinite • The spectral constraints are actually easier to handle: • Checking “Is H a spectral sparsifier of G?” is in P • Checking “Is H a cut sparsifier of G?” is non-uniform sparsest cut, so NP-hard

  13. Application of Spectral Sparsifiers • Suppose you want to solve L_G x = b • Since (1−ε) L_G ≼ L_H ≼ (1+ε) L_G, and inverting reverses the ordering, we have (1+ε)^(−1) L_G^(−1) ≼ L_H^(−1) ≼ (1−ε)^(−1) L_G^(−1), so solutions of L_H x = b approximate solutions of L_G x = b • Hope: solving L_H x = b is easier since H is sparse • Theorem [Spielman-Teng ‘04, Koutis-Miller-Peng ‘10]: an approximate solution to L_G x = b can be found in O(m log n (log log n)²) time (m = # edges of G)

  14. Results on Sparsifiers • Cut sparsifiers (combinatorial methods): Karger ‘94, Benczur-Karger ‘96, Fung-Hariharan-Harvey-Panigrahi ‘11 • Spectral sparsifiers (linear-algebraic methods): Spielman-Teng ‘04, Spielman-Srivastava ‘08, Batson-Spielman-Srivastava ‘09, de Carli Silva-Harvey-Sato ‘11 • The nearly-linear-time constructions give sparsifiers with n log^O(1) n / ε² edges; the poly(n)-time constructions (BSS ‘09, dHS ‘11) give sparsifiers with O(n/ε²) edges

  15. Sparsifiers by Random Sampling • The complete graph is easy! Random sampling gives an expander (i.e., a sparsifier) with O(n log n) edges.

  16. Sparsifiers by Random Sampling • In a general graph we can’t sample all edges with the same probability! • Idea [BK’96]: sample low-connectivity edges with high probability (keep the few edges crossing sparse cuts) and high-connectivity edges with low probability (eliminate most of the edges inside dense parts)

  17. Non-uniform sampling algorithm [BK’96] • Input: graph G=(V,E), weights u : E → ℝ+ • Output: a subgraph H=(V,F) with weights w : F → ℝ+ • Choose a parameter ρ • Compute probabilities { p_e : e∈E } • For i = 1 to ρ: for each edge e∈E, with probability p_e, add e to F and increase w_e by u_e/(ρ p_e) • Question: can we do this so that the cut values are tightly concentrated and E[|F|] = n log^O(1) n? • Note: E[|F|] ≤ ρ·Σ_e p_e • Note: E[w_e] = u_e ∀ e∈E ⇒ for every U⊆V, E[ w(δ_H(U)) ] = u(δ_G(U))
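The generic sampling loop on this slide can be sketched in a few lines of Python; the edge container and function name below are illustrative, and the probabilities p_e are assumed to be supplied externally:

```python
import random

def sample_sparsifier(u, p, rho, rng):
    """Generic non-uniform sampling: u maps edge -> weight, p maps
    edge -> sampling probability, rho = number of rounds."""
    w = {}
    for _ in range(rho):
        for e, ue in u.items():
            if rng.random() < p[e]:
                # each kept copy is up-weighted by u_e / (rho * p_e),
                # so E[w_e] = u_e for every edge
                w[e] = w.get(e, 0.0) + ue / (rho * p[e])
    return w

# sanity check of unbiasedness: averaged over many runs, w_e is close to u_e
rng = random.Random(0)
u = {("a", "b"): 2.0, ("b", "c"): 1.0}
p = {("a", "b"): 0.5, ("b", "c"): 0.9}
runs = [sample_sparsifier(u, p, rho=100, rng=rng) for _ in range(200)]
avg_ab = sum(r.get(("a", "b"), 0.0) for r in runs) / len(runs)
```

Because E[w_e] = u_e, every cut's sampled weight is correct in expectation; the whole difficulty, as the slide asks, is choosing the p_e so that cut weights also concentrate.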

  18. Benczur-Karger ‘96 • Input: graph G=(V,E), weights u : E → ℝ+ • Output: a subgraph H=(V,F) with weights w : F → ℝ+ • Set ρ = O(log n/ε²) • Let p_e = 1/(“strength” of edge e); all strengths can be approximated in m log^O(1) n time • For i = 1 to ρ: for each edge e∈E, with probability p_e, add e to F and increase w_e by u_e/(ρ p_e) • Then cuts are preserved to within (1±ε) and E[|F|] = O(n log n/ε²) • But what is “strength”? Can’t we use “connectivity”?

  19. Fung-Hariharan-Harvey-Panigrahi ‘11 • Input: graph G=(V,E), weights u : E → ℝ+ • Output: a subgraph H=(V,F) with weights w : F → ℝ+ • Set ρ = O(log² n/ε²) • Let p_st = 1/(min cut separating s and t); all these values can be approximated in O(m + n log n) time • For i = 1 to ρ: for each edge e∈E, with probability p_e, add e to F and increase w_e by u_e/(ρ p_e) • Then cuts are preserved to within (1±ε) and E[|F|] = O(n log² n/ε²)

  20. Overview of Analysis • Most cuts hit a huge number of edges ⇒ extremely concentrated ⇒ with high probability, most cuts are close to their mean

  21. Overview of Analysis • Red edges: high connectivity, low sampling probability. Green edges: low connectivity, high sampling probability. • A cut hitting many red edges ⇒ highly concentrated • A cut hitting only one red edge ⇒ poorly concentrated on its own, but the same cut also hits many green edges ⇒ highly concentrated overall

  22. Summary for Cut Sparsifiers • Do non-uniform sampling of edges, with probabilities based on “connectivity” • Decompose the graph into “connectivity classes” and argue concentration of all cuts • BK’96 used “strength”, not “connectivity” • Can get sparsifiers with O(n log n / ε²) edges • This is optimal for any independent sampling algorithm

  23. Spectral Sparsification • Input: graph G=(V,E), weights u : E → ℝ+ • Recall: x^T L_G x = Σ_{st∈E} u_st (x_s − x_t)²; call each term x^T L_st x, where L_st is the Laplacian of the single edge st • Goal: find weights w : E → ℝ+ such that most w_e are zero and (1−ε) x^T L_G x ≤ Σ_{e∈E} w_e x^T L_e x ≤ (1+ε) x^T L_G x ∀ x∈ℝ^V ⇔ (1−ε) L_G ≼ Σ_{e∈E} w_e L_e ≼ (1+ε) L_G • General problem: given matrices L_e satisfying Σ_e L_e = L_G, find coefficients w_e, mostly zero, such that (1−ε) L_G ≼ Σ_e w_e L_e ≼ (1+ε) L_G

  24. The General Problem: Sparsifying Sums of PSD Matrices • General problem: given PSD matrices L_e s.t. Σ_e L_e = L, find coefficients w_e, mostly zero, such that (1−ε) L ≼ Σ_e w_e L_e ≼ (1+ε) L • Theorem [Ahlswede-Winter ’02]: random sampling gives w with O(n log n/ε²) non-zeros • Theorem [de Carli Silva-Harvey-Sato ‘11], building on [Batson-Spielman-Srivastava ‘09]: a deterministic algorithm gives w with O(n/ε²) non-zeros • Consequences: cut & spectral sparsifiers with O(n/ε²) edges [BSS’09]; sparsifiers with more properties and O(n/ε²) edges [dHS’11]

  25. Vector Case • Vector problem: given vectors v_e ∈ [0,1]^n s.t. Σ_e v_e = v, find coefficients w_e, mostly zero, such that ‖Σ_e w_e v_e − v‖_∞ ≤ ε • Theorem [Althofer ‘94, Lipton-Young ‘94]: there is a w with O(log n/ε²) non-zeros • Proof: random sampling & the Hoeffding inequality • Multiplicative version: there is a w with O(n log n/ε²) non-zeros such that (1−ε) v ≤ Σ_e w_e v_e ≤ (1+ε) v • Compare with the general problem: given PSD matrices L_e s.t. Σ_e L_e = L, find coefficients w_e, mostly zero, such that (1−ε) L ≼ Σ_e w_e L_e ≼ (1+ε) L
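The random-sampling proof can be illustrated directly. This sketch assumes v is presented as a convex combination v = Σ_e p_e v_e (the setting of the Lipton-Young argument); the function name, constants, and instance below are made up for the example:

```python
import math
import random

def sparse_combination(vecs, probs, eps, rng):
    """Sample k = O(log n / eps^2) indices e ~ probs and average; Hoeffding
    plus a union bound over the n coordinates gives sup-norm error <= eps."""
    n = len(vecs[0])
    k = math.ceil(4 * math.log(2 * n) / eps ** 2)
    w = [0.0] * len(vecs)
    for _ in range(k):
        r, acc = rng.random(), 0.0
        for e, pe in enumerate(probs):
            acc += pe
            if r < acc:
                w[e] += 1.0 / k
                break
        else:
            w[-1] += 1.0 / k  # guard against floating-point rounding
    return w

rng = random.Random(1)
m, n = 50, 20
vecs = [[rng.random() for _ in range(n)] for _ in range(m)]
probs = [1.0 / m] * m
v = [sum(probs[e] * vecs[e][i] for e in range(m)) for i in range(n)]
w = sparse_combination(vecs, probs, eps=0.25, rng=rng)
err = max(abs(sum(w[e] * vecs[e][i] for e in range(m)) - v[i]) for i in range(n))
# with high probability err <= eps, using only O(log n / eps^2) samples
```

Note how weak the dependence on n is: the number of samples grows only logarithmically with the dimension, which is what fails in the naive matrix analogue.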

  26. Concentration Inequalities • Theorem [Chernoff ‘52, Hoeffding ‘63]: let Y_1,…,Y_k be i.i.d. random non-negative real numbers s.t. E[Y_i] = Z and Y_i ≤ uZ. Then Pr[ (1/k) Σ_i Y_i ∉ [(1−ε)Z, (1+ε)Z] ] ≤ 2 exp(−Ω(k ε²/u)) • Theorem [Ahlswede-Winter ‘02]: let Y_1,…,Y_k be i.i.d. random PSD n×n matrices s.t. E[Y_i] = Z and Y_i ≼ uZ. Then Pr[ (1−ε)Z ≼ (1/k) Σ_i Y_i ≼ (1+ε)Z fails ] ≤ 2n exp(−Ω(k ε²/u)) • The only difference is the dimension factor n

  27. “Balls & Bins” Example • Problem: throw k balls into n bins. Want (max load)/(min load) ≤ 1+ε. How big should k be? • AW theorem: let Y_1,…,Y_k be i.i.d. random PSD matrices such that E[Y_i] = Z and Y_i ≼ uZ • Solution: let Y_i be all zeros, except for a single n in a random diagonal entry. Then E[Y_i] = I and Y_i ≼ nI. Set k = Θ(n log n/ε²). Whp, every diagonal entry of Σ_i Y_i/k is in [1−ε, 1+ε].
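This diagonal example can be simulated directly; since the matrices commute it is literally balls and bins. A sketch (the constant 8 below is an arbitrary generous choice, not from the slides):

```python
import math
import random

def throw_balls(n, k, rng):
    """Throw k balls into n bins uniformly. Each ball corresponds to one
    sample Y_i = n * (random diagonal indicator), so E[Y_i] = I and
    Y_i is dominated by n*I."""
    loads = [0] * n
    for _ in range(k):
        loads[rng.randrange(n)] += 1
    return loads

rng = random.Random(0)
n, eps = 10, 0.5
k = math.ceil(8 * n * math.log(n) / eps ** 2)  # Θ(n log n / ε²) balls
loads = throw_balls(n, k, rng)
mean = k / n
# whp every normalized load loads[i] / mean lies in [1 - ε, 1 + ε]
```

With k only Θ(n/ε²) the loads would not concentrate this tightly, which is why beating the n log n barrier on slide 32 requires a non-independent, barrier-guided choice of samples.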

  28. Solving the General Problem • General problem: given PSD matrices L_e s.t. Σ_e L_e = L, find coefficients w_e, mostly zero, such that (1−ε) L ≼ Σ_e w_e L_e ≼ (1+ε) L • AW theorem: let Y_1,…,Y_k be i.i.d. random PSD matrices such that E[Y_i] = Z and Y_i ≼ uZ • To solve the general problem with O(n log n/ε²) non-zeros: • Repeat k := Θ(n log n/ε²) times • Pick an edge e with probability p_e := Tr(L_e L_G^(−1))/n • Increment w_e by 1/(k·p_e)
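The sampling probabilities p_e ∝ Tr(L_e L_G^(−1)) can be computed explicitly for a small graph. A pure-Python sketch, under a few stated assumptions: L_G^(−1) on the slide really means the pseudoinverse L_G^+, the identity (L + J/n)^(−1) = L^+ + J/n (valid for connected graphs, with J the all-ones matrix) is used to avoid a pseudoinverse routine, and the Gauss-Jordan helper is illustrative:

```python
def laplacian(n, edges):
    L = [[0.0] * n for _ in range(n)]
    for s, t, u in edges:
        L[s][s] += u; L[t][t] += u
        L[s][t] -= u; L[t][s] -= u
    return L

def inverse(M):
    """Invert a small dense matrix by Gauss-Jordan elimination."""
    n = len(M)
    A = [row[:] + [1.0 if i == j else 0.0 for j in range(n)]
         for i, row in enumerate(M)]
    for c in range(n):
        piv = max(range(c, n), key=lambda r: abs(A[r][c]))
        A[c], A[piv] = A[piv], A[c]
        piv_val = A[c][c]
        A[c] = [a / piv_val for a in A[c]]
        for r in range(n):
            if r != c:
                f = A[r][c]
                A[r] = [a - f * b for a, b in zip(A[r], A[c])]
    return [row[n:] for row in A]

n = 3
edges = [(0, 1, 1.0), (1, 2, 1.0), (0, 2, 1.0)]  # unit-weight triangle
L = laplacian(n, edges)
M = inverse([[L[i][j] + 1.0 / n for j in range(n)] for i in range(n)])
Lplus = [[M[i][j] - 1.0 / n for j in range(n)] for i in range(n)]

def trace_term(s, t, u):
    # L_e = u (e_s - e_t)(e_s - e_t)^T, hence
    # Tr(L_e L^+) = u (L^+[s][s] + L^+[t][t] - 2 L^+[s][t]) = u * R_eff(s,t)
    return u * (Lplus[s][s] + Lplus[t][t] - 2 * Lplus[s][t])

traces = [trace_term(s, t, u) for s, t, u in edges]
# each triangle edge has effective resistance 2/3, and for any connected
# graph the traces sum to Tr(L L^+) = n - 1
```

So Tr(L_e L_G^+) is exactly the weighted effective resistance of edge e, and normalizing the traces gives the sampling distribution of Spielman-Srivastava.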

  29. Derandomization • Vector problem: given vectors v_e ∈ [0,1]^n s.t. Σ_e v_e = v, find coefficients w_e, mostly zero, such that ‖Σ_e w_e v_e − v‖_∞ ≤ ε • Theorem [Young ‘94]: the multiplicative weights method deterministically gives w with O(log n/ε²) non-zeros • Alternatively, use pessimistic estimators on the Hoeffding proof • General problem: given PSD matrices L_e s.t. Σ_e L_e = L, find coefficients w_e, mostly zero, such that (1−ε) L ≼ Σ_e w_e L_e ≼ (1+ε) L • Theorem [de Carli Silva-Harvey-Sato ‘11]: the matrix multiplicative weights method (Arora-Kale ‘07) deterministically gives w with O(n log n/ε²) non-zeros • Alternatively, use matrix pessimistic estimators (Wigderson-Xiao ‘06)

  30. MWUM for “Balls & Bins” • Let λ_i = load in bin i; initially λ = 0. Want: l ≤ λ_i and λ_i ≤ u, for barriers l and u straddling the loads • Introduce penalty functions Σ_i exp(l − λ_i) and Σ_i exp(λ_i − u) • Find a bin i to throw a ball into such that, after increasing l by δ_l and u by δ_u, the penalties don’t grow: Σ_i exp((l+δ_l) − λ’_i) ≤ Σ_i exp(l − λ_i) and Σ_i exp(λ’_i − (u+δ_u)) ≤ Σ_i exp(λ_i − u) • Careful analysis shows O(n log n/ε²) balls are enough

  31. MMWUM for the General Problem • Let A = 0 and λ be its eigenvalues. Want: l ≤ λ_i and λ_i ≤ u • Use penalty functions Tr exp(lI − A) and Tr exp(A − uI) • Find a matrix L_e such that, adding αL_e to A and increasing l by δ_l and u by δ_u, the penalties don’t grow: Tr exp((l+δ_l)I − (A+αL_e)) ≤ Tr exp(lI − A) and Tr exp((A+αL_e) − (u+δ_u)I) ≤ Tr exp(A − uI) • Careful analysis shows O(n log n/ε²) matrices are enough

  32. Beating Sampling & MMWUM • To get a better bound, change the penalty functions to be steeper! • Use penalty functions Tr (A − lI)^(−1) and Tr (uI − A)^(−1) • Find a matrix L_e such that, adding αL_e to A and increasing l by δ_l and u by δ_u, the penalties don’t grow: Tr((A+αL_e) − (l+δ_l)I)^(−1) ≤ Tr(A − lI)^(−1) and Tr((u+δ_u)I − (A+αL_e))^(−1) ≤ Tr(uI − A)^(−1) • All eigenvalues stay within [l, u]

  33. Beating Sampling & MMWUM • Use penalty functions Tr (A − lI)^(−1) and Tr (uI − A)^(−1), and find a matrix L_e such that, adding αL_e to A and increasing l by δ_l and u by δ_u, the penalties don’t grow: Tr((A+αL_e) − (l+δ_l)I)^(−1) ≤ Tr(A − lI)^(−1) and Tr((u+δ_u)I − (A+αL_e))^(−1) ≤ Tr(uI − A)^(−1) • General problem: given PSD matrices L_e s.t. Σ_e L_e = L, find coefficients w_e, mostly zero, such that (1−ε) L ≼ Σ_e w_e L_e ≼ (1+ε) L • Theorem [Batson-Spielman-Srivastava ‘09] in the rank-1 case, [de Carli Silva-Harvey-Sato ‘11] for the general case: this gives a solution w with O(n/ε²) non-zeros

  34. Applications • Theorem [de Carli Silva-Harvey-Sato ‘11]: given PSD matrices L_e s.t. Σ_e L_e = L, there is an algorithm to find w with O(n/ε²) non-zeros such that (1−ε) L ≼ Σ_e w_e L_e ≼ (1+ε) L • Application 1: spectral sparsifiers with costs. Given costs on the edges of G, one can find a sparsifier H whose cost is at most (1+ε) times the cost of G • Application 2: sparse SDP solutions. min { c^T y : Σ_i y_i A_i ≽ B, y ≥ 0 }, where the A_i’s and B are PSD, has a nearly optimal solution with O(n/ε²) non-zeros

  35. Open Questions • Sparsifiers for directed graphs • More constructions of sparsifiers with O(n/ε²) edges. Perhaps randomized? • Iterative construction of expander graphs • More control of the weights w_e • A combinatorial proof of spectral sparsifiers • More applications of our general theorem
