
### Graph Sparsifiers by Edge-Connectivity and Random Spanning Trees

Nick Harvey, University of Waterloo, Department of Combinatorics and Optimization

Joint work with Isaac Fung


What are sparsifiers?

(Notation: n = # vertices, m = # edges)

- Weighted subgraphs that approximately preserve some properties
- Approximating all cuts
- Sparsifiers: number of edges = O(n log n / ε²), every cut approximated within 1+ε. [BK’96]
- Õ(m) time algorithm to construct them
- Spectral approximation
- Spectral sparsifiers: number of edges = O(n log n / ε²), “entire spectrum” approximated within 1+ε, i.e., the Laplacian matrix of the sparsifier approximates the Laplacian matrix of G. [SS’08]
- Õ(m) time algorithm to construct them
- [BSS’09]: O(n / ε²) edges, Poly(n) construction time

Why are sparsifiers useful?

- Approximating all cuts
- Sparsifiers give fast algorithms for cut/flow problems

Our Motivation

- BSS algorithm is very mysterious, and “too good to be true”
- Are there other methods to get sparsifiers with only O(n/ε²) edges?
- Wild Speculation: Union of O(1/ε²) random spanning trees gives a sparsifier (if weighted appropriately)
- True for complete graph [GRV ‘08]
- Corollary of our Main Result: The Wild Speculation is false, but the union of O(log² n / ε²) random spanning trees gives a sparsifier

Formal problem statement

- Design an algorithm such that
- Input: An undirected graph G = (V, E)
- Output: A weighted subgraph H = (V, F, w), where F ⊆ E and w : F → ℝ
- Goals:
- | |δ_G(U)| − w(δ_H(U)) | ≤ ε |δ_G(U)|   ∀ U ⊆ V
  (|δ_G(U)| = # edges between U and V\U in G; w(δ_H(U)) = weight of edges between U and V\U in H. We only want to preserve cuts.)
- |F| = O(n log n / ε²)
- Running time = Õ(m / ε²)

Sparsifying Complete Graph

- Sampling: Construct H by sampling every edge of G with probability p = 100 log n / n. Give each sampled edge weight 1/p.
- Properties of H:
- # sampled edges = O(n log n)
- |δ_G(U)| ≈ w(δ_H(U))   ∀ U ⊆ V
- So H is a sparsifier of G
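The sampling step above is simple enough to sketch directly in Python (a minimal illustration, not the speaker's code; the function names are mine):

```python
import math
import random

def sparsify_complete(n, seed=0):
    """Sample each edge of the complete graph K_n independently with
    probability p = 100 log(n) / n; each kept edge gets weight 1/p."""
    rng = random.Random(seed)
    p = min(1.0, 100 * math.log(n) / n)
    H = {}  # edge (u, v) with u < v  ->  weight
    for u in range(n):
        for v in range(u + 1, n):
            if rng.random() < p:
                H[(u, v)] = 1.0 / p
    return H

def cut_weight(H, U):
    """Total weight in H of edges crossing the cut (U, V \\ U)."""
    U = set(U)
    return sum(w for (u, v), w in H.items() if (u in U) != (v in U))
```

Since a cut δ(U) with |U| = k in K_n has exactly k(n−k) edges, w(δ_H(U)) should concentrate around k(n−k), which is what the proof sketch below makes precise.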

Proof Sketch

- Consider any cut δ_G(U) with |U| = k ≤ n/2. Then |δ_G(U)| = k(n−k) ≥ kn/2.
- Let X_e = 1 if edge e is sampled. Let X = Σ_{e ∈ δ_G(U)} X_e = |δ_H(U)|.
- Then μ = E[X] = p |δ_G(U)| ≥ 50 k log n.
- Say the cut fails if |X − μ| ≥ μ/2.
- So Pr[ cut fails ] ≤ 2 exp( −μ/12 ) ≤ n^(−4k).
- # of cuts with |U| = k is at most C(n, k) ≤ n^k.
- So Pr[ any cut fails ] ≤ Σ_k n^k · n^(−4k) = Σ_k n^(−3k) < n^(−2).
- Whp, every U has | |δ_H(U)| − p |δ_G(U)| | < p |δ_G(U)| / 2

Key Ingredients

- Chernoff Bound: exponentially decreasing probability of failure
- Bound on # small cuts: exponentially increasing # of bad events
- Union bound: the failure probability beats the number of bad events

Generalize to arbitrary G?

- Keep the Chernoff bound, but eliminate most of the bad events
- Can’t sample edges with the same probability!
- Idea [BK’96]: Sample low-connectivity edges with high probability, and high-connectivity edges with low probability

Non-uniform sampling algorithm [BK’96]

- Input: Graph G = (V, E), parameters p_e ∈ [0, 1]
- Output: A weighted subgraph H = (V, F, w), where F ⊆ E and w : F → ℝ

- For i = 1 to ρ
- For each edge e ∈ E
- With probability p_e, add e to F and increase w_e by 1/(ρ p_e)

- Main Question: Can we choose ρ and the p_e’s to achieve the sparsification goals?

- Claim: H perfectly approximates G in expectation!
- For any e ∈ E, E[ w_e ] = ρ · p_e · 1/(ρ p_e) = 1
- ⇒ For every U ⊆ V, E[ w(δ_H(U)) ] = |δ_G(U)|
- Goal: Show every w(δ_H(U)) is tightly concentrated
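The sampling loop translates almost line-for-line into code. A minimal sketch (the function and variable names are mine, not from [BK’96]):

```python
import random

def nonuniform_sample(edges, p, rho, seed=0):
    """rho rounds of [BK'96]-style sampling: in each round, keep edge e
    with probability p[e] and add 1/(rho * p[e]) to its weight, so that
    E[w_e] = rho * p[e] * 1/(rho * p[e]) = 1 for every edge."""
    rng = random.Random(seed)
    w = {}  # edge -> accumulated weight; edges absent from w are not in F
    for _ in range(rho):
        for e in edges:
            if rng.random() < p[e]:
                w[e] = w.get(e, 0.0) + 1.0 / (rho * p[e])
    return w
```

Because E[w_e] = 1, every cut's expected weight in H equals its size in G; the entire difficulty is choosing ρ and the p_e's so that the weights concentrate.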

Prior Work

- Benczur-Karger ‘96
- Set ρ = O(log n), p_e = 1/“strength” of edge e (max k s.t. e is contained in a k-edge-connected vertex-induced subgraph of G; similar to edge connectivity)
- All cuts are preserved
- Σ_e p_e ≤ n ⇒ |F| = O(n log n) (# edges in sparsifier)
- Running time is O(m log³ n)
- Spielman-Srivastava ‘08
- Set ρ = O(log n), p_e = 1/“effective conductance” of edge e (view G as an electrical network where each edge is a 1-ohm resistor)
- H is a spectral sparsifier of G ⇒ all cuts are preserved
- Σ_e p_e = n−1 ⇒ |F| = O(n log n) (# edges in sparsifier)
- Running time is O(m log⁵⁰ n), since improved to O(m log³ n) [Koutis-Miller-Peng ’10]
- Uses “Matrix Chernoff Bound”

Our Work

- Fung-Harvey ’10 (independently Hariharan-Panigrahi ‘10)
- Set ρ = O(log² n), p_e = 1/edge-connectivity of edge e (the min size of a cut that contains e)
- Edge-connectivity ≥ max { strength, effective conductance }
- Σ_e p_e ≤ n ⇒ |F| = O(n log² n)
- Running time is O(m log² n)
- Advantages:
- Edge connectivities are natural and easy to compute
- Faster than previous algorithms
- Implies sampling by edge strength, effective resistances, or random spanning trees works. (Why? For a random spanning tree T, Pr[ e ∈ T ] = effective resistance of e, and the edges are negatively correlated ⇒ the Chernoff bound still works.)
- Disadvantages:
- Extra log factor, no spectral sparsification

- Extra trick: Can shrink |F| to O(n log n) by using Benczur-Karger to sparsify our sparsifier!
- Running time is O(m log² n) + Õ(n)

- Panigrahi ’10
- A sparsifier with O(n log n / ε²) edges, with running time O(m) in unweighted graphs and O(m) + Õ(n/ε²) in weighted graphs

Notation: k_uv = min size of a cut separating u and v; for e = {u,v}, write k_e = k_uv

- Main ideas:
- Partition edges into connectivity classes E = E_1 ∪ E_2 ∪ ... ∪ E_{log n}, where E_i = { e : 2^(i−1) ≤ k_e < 2^i }
- Prove weight of sampled edges that each cut takes from each connectivity class is about right
- Key point: Edges in δ(U) ∩ E_i have nearly the same weight
- This yields a sparsifier
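The partition into connectivity classes is easy to express once the k_e values are known (computing them is the real work of the algorithm; in this sketch they are assumed to be precomputed):

```python
def connectivity_classes(k):
    """Partition edges into classes E_i = { e : 2^(i-1) <= k_e < 2^i },
    given a dict k mapping each edge e to its edge-connectivity k_e
    (assumed precomputed here)."""
    classes = {}
    for e, ke in k.items():
        i = ke.bit_length()  # the unique i >= 1 with 2^(i-1) <= ke < 2^i
        classes.setdefault(i, []).append(e)
    return classes
```

Within each class E_i the connectivities, and hence the sampling probabilities p_e = 1/k_e, differ by at most a factor of 2, which is what makes the "edges in δ(U) ∩ E_i have nearly the same weight" observation go through.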

(Figure: a cut C = δ(U), partitioned into cut-induced sets C_1, C_2, C_3, C_4.)

- C = δ(U) is a cut
- C_i = δ(U) ∩ E_i is a cut-induced set
- Need to prove: the weight of sampled edges that each cut takes from each connectivity class is about right

Notation: C_i = δ(U) ∩ E_i is a cut-induced set

Prove, for every cut-induced set C_i, that its sampled weight is close to |C_i|:

- Key Ingredients
- Chernoff bound: Prove the failure probability for each C_i is small
- Bound on # small cuts: Prove #{ cut-induced sets C_i induced by a small cut C } is small
- Union bound: sum of failure probabilities is small, so probably no failures

Counting Small Cut-Induced Sets

- Theorem: Let G = (V, E) be a graph. Fix any B ⊆ E.

Suppose k_e ≥ K for all e in B. (k_uv = min size of a cut separating u and v)

Then, for every α ≥ 1, |{ δ(U) ∩ B : |δ(U)| ≤ αK }| < n^(2α).

- Corollary: Counting Small Cuts [K’93]

Let G = (V, E) be a graph.

Let K be the edge-connectivity of G. (i.e., global min cut value)

Then, for every α ≥ 1, |{ δ(U) : |δ(U)| ≤ αK }| < n^(2α).

Comparison

(Slightly unfair)

- Theorem: Let G = (V, E) be a graph. Fix any B ⊆ E.

Suppose k_e ≥ K for all e in B. (k_uv = min size of a cut separating u and v)

Then |{ δ(U) ∩ B : |δ(U)| ≤ c }| < n^(2c/K)   ∀ c ≥ 1.

- Corollary [K’93]: Let G = (V, E) be a graph.

Let K be the edge-connectivity of G. (i.e., global min cut value)

Then |{ δ(U) : |δ(U)| ≤ c }| < n^(2c/K)   ∀ c ≥ 1.

- How many cuts of size 1?

The Theorem says < n², taking K = c = 1.

The Corollary says < 1, because K = 0.

- Important point: A cut-induced set is a subset of edges. Many cuts can induce the same set. (Figure: two different cuts δ(U) and δ(U’) inducing the same subset of B.)

Algorithm for Finding a Min Cut [K’93]

- Input: A graph
- Output: A minimum cut (maybe)
- While the graph has > 2 vertices
- Pick an edge at random
- Contract it
- End While
- Output remaining edges

- Claim: For any min cut, this algorithm outputs it with probability ≥ 1/n².
- Corollary: There are ≤ n² min cuts.
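A single run of the contraction algorithm can be sketched as follows (a compact union-find version, not Karger's original code; it assumes a connected input graph):

```python
import random

def karger_cut(edges, seed=0):
    """One run of Karger's contraction algorithm on a connected graph:
    contract uniformly random edges until two super-vertices remain, then
    output the surviving edges, which form a cut of the original graph."""
    rng = random.Random(seed)
    parent = {v: v for e in edges for v in e}  # union-find forest

    def find(x):
        while parent[x] != x:
            parent[x] = parent[parent[x]]  # path halving
            x = parent[x]
        return x

    alive = len(parent)
    while alive > 2:
        u, v = rng.choice(edges)
        ru, rv = find(u), find(v)
        if ru != rv:          # a self-loop of the contracted graph is skipped,
            parent[ru] = rv   # which matches sampling among non-loop edges
            alive -= 1
    return [(u, v) for (u, v) in edges if find(u) != find(v)]
```

Each run outputs any fixed min cut with probability ≥ 1/n², so repeating O(n² log n) times and keeping the smallest output finds a min cut with high probability.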

Finding a Small Cut-Induced Set

- Input: A graph G = (V, E), and B ⊆ E
- Output: A cut-induced subset of B
- While the graph has > 2 vertices
- If some vertex v has no incident edges in B
- Split-off all edges at v and delete v. (Splitting-off replaces a pair of edges {u,v} and {u’,v} with {u,u’}, while preserving the edge-connectivity between all vertices other than v; this is possible by a theorem of Wolfgang Mader.)
- Pick an edge at random
- Contract it
- End While
- Output remaining edges in B

- Claim: For any min cut-induced subset of B, this algorithm outputs it with probability > 1/n².
- Corollary: There are < n² min cut-induced subsets of B

Sparsifiers from Random Spanning Trees

- Let H be the union of ρ = log² n uniform random spanning trees, where w_e is 1/(ρ · (effective resistance of e))
- Then all cuts are preserved and |F| = O(n log² n)
- Why does this work?
- Pr_T[ e ∈ T ] = effective resistance of edge e [Kirchhoff 1847]
- Similar to the usual independent sampling algorithm, with p_e = effective resistance of e
- Key difference: edges in a random spanning tree are not independent, but they are negatively correlated! [BSST 1940]
- Chernoff bounds still work. [Panconesi, Srinivasan 1997]
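The construction assumes access to uniform random spanning trees. One classical way to generate them (not discussed in the talk) is the Aldous-Broder random-walk algorithm, sketched here:

```python
import random

def uniform_spanning_tree(adj, seed=0):
    """Aldous-Broder algorithm: run a simple random walk from an arbitrary
    start vertex; the edge by which each vertex is first entered forms a
    uniformly random spanning tree.  `adj` maps vertex -> neighbour list."""
    rng = random.Random(seed)
    vertices = list(adj)
    current = vertices[0]
    visited = {current}
    tree = []
    while len(visited) < len(vertices):
        nxt = rng.choice(adj[current])
        if nxt not in visited:       # first-entrance edge joins the tree
            visited.add(nxt)
            tree.append((current, nxt))
        current = nxt
    return tree
```

Repeating this ρ = log² n times and weighting each tree edge by 1/(ρ · effective resistance) gives the sampling scheme above; e.g. on the n-cycle every edge has effective resistance (n−1)/n, consistent with each tree omitting exactly one edge.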

- How is this different from independent sampling?
- Consider an n-cycle. There are n/2 disjoint cuts of size 2.
- When ρ = 1, each cut has constant probability of having no edges ⇒ independent sampling needs ρ = Ω(log n) to get a connected graph
- With random trees, we get connectivity after just one tree
- Are O(1) trees enough to preserve all cuts?
- No! Ω(log n) trees are required

Conclusions

- Graph sparsifiers are important for fast algorithms and some combinatorial theorems
- Sampling by edge-connectivities gives a sparsifier with O(n log² n) edges in O(m log² n) time
- Improvements: O(n log n) edges in O(m) + Õ(n) time [Panigrahi ‘10]
- Sampling by effective resistances also works ⇒ sampling O(log² n) random spanning trees gives a sparsifier

Questions

- Improve log² n to log n?
- Does sampling o(log n) random trees give a sparsifier with o(log n) approximation?
