Presentation Transcript

Graph Sparsifiers by Edge-Connectivity and Random Spanning Trees

Nick Harvey, University of Waterloo, Department of Combinatorics and Optimization

Joint work with Isaac Fung


What are sparsifiers?
  • Weighted subgraphs that approximately preserve some properties [BSS'09]

  • Approximating all cuts
    • Sparsifiers: number of edges = O(n log n / ε²), every cut approximated within 1+ε. [BK'96]
    • O~(m) time algorithm to construct them
  • Spectral approximation
    • Spectral sparsifiers: number of edges = O(n log n / ε²), "entire spectrum" approximated within 1+ε. [SS'08]
    • O~(m) time algorithm to construct them

(Notation: n = # vertices, m = # edges. Figure: the Laplacian matrix of G vs. the Laplacian matrix of the sparsifier. [BSS'09])


n = # vertices, m = # edges, v = flow value

Why are sparsifiers useful?
  • Approximating all cuts
    • Sparsifiers: fast algorithms for cut/flow problems

Our Motivation
  • BSS algorithm is very mysterious, and "too good to be true"
  • Are there other methods to get sparsifiers with only O(n/ε²) edges?
  • Wild Speculation: the union of O(1/ε²) random spanning trees gives a sparsifier (if weighted appropriately)
    • True for the complete graph [GRV '08]
  • Corollary of our Main Result: the Wild Speculation is false, but the union of O(log² n / ε²) random spanning trees gives a sparsifier

n = # vertices, m = # edges

Formal problem statement
  • Design an algorithm such that
  • Input: An undirected graph G=(V,E)
  • Output: A weighted subgraph H=(V,F,w), where F ⊆ E and w : F → ℝ
  • Goals:
    • | |δ_G(U)| − w(δ_H(U)) | ≤ ε·|δ_G(U)|   ∀ U ⊆ V

(We only want to preserve cuts)

    • |F| = O(n log n / ε²)
    • Running time = Õ(m / ε²)

|δ_G(U)| = # edges between U and V∖U in G

w(δ_H(U)) = weight of edges between U and V∖U in H
Sparsifying Complete Graph
  • Sampling: construct H by sampling every edge of G with probability p = 100 log n / n. Give each sampled edge weight 1/p.
  • Properties of H:
    • # sampled edges = O(n log n)
    • w(δ_H(U)) ≈ |δ_G(U)|   ∀ U ⊆ V
  • So H is a sparsifier of G
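The uniform sampling step above can be sketched in a few lines (a minimal illustration, not the paper's code; `sparsify_complete` and `cut_weight` are hypothetical helper names):

```python
import math
import random

def sparsify_complete(n, seed=0):
    """Sample each edge of the complete graph K_n with p = 100*ln(n)/n,
    giving every sampled edge weight 1/p, so cut weights are unbiased."""
    rng = random.Random(seed)
    p = min(1.0, 100 * math.log(n) / n)
    H = {}
    for u in range(n):
        for v in range(u + 1, n):
            if rng.random() < p:
                H[(u, v)] = 1.0 / p
    return H

def cut_weight(H, U):
    """Total weight of sparsifier edges crossing the cut (U, V \\ U)."""
    U = set(U)
    return sum(w for (u, v), w in H.items() if (u in U) != (v in U))
```

For n = 1000 and |U| = 100, the true cut value is 100·900 = 90000, and the sampled weighted cut concentrates sharply around it, as the proof sketch below argues.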
Proof Sketch
  • Consider any cut δ_G(U) with |U|=k. Then |δ_G(U)| ≥ kn/2.
  • Let X_e = 1 if edge e is sampled. Let X = Σ_{e ∈ δ_G(U)} X_e = |δ_H(U)|.
  • Then μ = E[X] = p·|δ_G(U)| ≥ 50 k log n.
  • Say the cut fails if |X−μ| ≥ μ/2.
  • So Pr[ cut fails ] ≤ 2 exp( −μ/12 ) ≤ n^(−4k).
  • # of cuts with |U|=k is (n choose k) < n^k.
  • So Pr[ any cut fails ] ≤ Σ_k n^k · n^(−4k) = Σ_k n^(−3k) < n^(−2).
  • Whp, every U has | |δ_H(U)| − p·|δ_G(U)| | < p·|δ_G(U)|/2

Key Ingredients:
  • Chernoff bound (exponentially decreasing probability of failure)
  • Bound on # small cuts (exponentially increasing # of bad events)
  • Union bound

Generalize to arbitrary G?


  • Can't sample all edges with the same probability!
  • Idea [BK'96]: sample low-connectivity edges with high probability, and high-connectivity edges with low probability


Non-uniform sampling algorithm [BK’96]
  • Input: Graph G=(V,E), parameters p_e ∈ [0,1]
  • Output: A weighted subgraph H=(V,F,w), where F ⊆ E and w : F → ℝ
  • For i = 1 to ρ:
    • For each edge e ∈ E:
      • With probability p_e, add e to F and increase w_e by 1/(ρ·p_e)
  • Main Question: can we choose ρ and the p_e's to achieve the sparsification goals?
Non-uniform sampling algorithm [BK’96]
  • Input: Graph G=(V,E), parameters p_e ∈ [0,1]
  • Output: A weighted subgraph H=(V,F,w), where F ⊆ E and w : F → ℝ
  • For i = 1 to ρ:
    • For each edge e ∈ E:
      • With probability p_e, add e to F and increase w_e by 1/(ρ·p_e)
  • Claim: H perfectly approximates G in expectation!
  • For any e ∈ E, E[ w_e ] = 1
    • ⇒ For every U ⊆ V, E[ w(δ_H(U)) ] = |δ_G(U)|
  • Goal: show that every w(δ_H(U)) is tightly concentrated
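The [BK'96]-style sampling loop above translates directly into code (a minimal sketch; the per-edge probabilities p_e are assumed to be supplied by the caller, and `nonuniform_sample` is an illustrative name):

```python
import random
from collections import defaultdict

def nonuniform_sample(edges, p, rho, seed=0):
    """rho rounds of sampling: each round keeps edge e with probability
    p[e] and, when kept, adds 1/(rho * p[e]) to its weight.
    By construction E[w_e] = rho * p[e] * 1/(rho * p[e]) = 1."""
    rng = random.Random(seed)
    w = defaultdict(float)
    for _ in range(rho):
        for e in edges:
            if rng.random() < p[e]:
                w[e] += 1.0 / (rho * p[e])
    return dict(w)
```

With many rounds, each sampled weight concentrates around its expectation of 1, which is exactly the concentration the analysis must establish for whole cuts.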

Assume ε is constant

Prior Work


  • Benczur-Karger '96
    • Set ρ = O(log n), p_e = 1/"strength" of edge e (max k such that e is contained in a k-edge-connected vertex-induced subgraph of G; a quantity similar to edge connectivity)
    • All cuts are preserved
    • Σ_e p_e ≤ n ⇒ |F| = O(n log n)   (# edges in sparsifier)
    • Running time is O(m log³ n)
  • Spielman-Srivastava '08
    • Set ρ = O(log n), p_e = 1/"effective conductance" of edge e (view G as an electrical network where each edge is a 1-ohm resistor)
    • H is a spectral sparsifier of G ⇒ all cuts are preserved
    • Σ_e p_e = n−1 ⇒ |F| = O(n log n)   (# edges in sparsifier)
    • Running time is O(m log⁵⁰ n)
    • Uses the "Matrix Chernoff Bound"

    • The running time was later improved to O(m log³ n) [Koutis-Miller-Peng '10]


Assume ε is constant

Our Work
  • Fung-Harvey '10 (independently Hariharan-Panigrahi '10)
    • Set ρ = O(log² n), p_e = 1/(edge-connectivity of edge e)
    • Edge-connectivity ≥ max { strength, effective conductance }
    • Σ_e p_e ≤ n ⇒ |F| = O(n log² n)
    • Running time is O(m log² n)
    • Advantages:
      • Edge connectivities are natural and easy to compute
      • Faster than previous algorithms
      • Implies sampling by edge strength, effective resistances, or random spanning trees works
    • Disadvantages:
      • Extra log factor, no spectral sparsification

(min size of a cut that contains e)

Why? Pr[ e ∈ T ] = effective resistance of e, and the edges of a random spanning tree are negatively correlated ⇒ Chernoff bounds still work


Assume ε is constant

Our Work
  • Fung-Harvey '10 (independently Hariharan-Panigrahi '10)
    • Set ρ = O(log² n), p_e = 1/(edge-connectivity of edge e)
    • Edge-connectivity ≥ max { strength, effective conductance }
    • Σ_e p_e ≤ n ⇒ |F| = O(n log² n)
    • Running time is O(m log² n)
    • Advantages:
      • Edge connectivities are natural and easy to compute
      • Faster than previous algorithms
      • Implies sampling by edge strength, effective resistances, …
  • Extra trick: can shrink |F| to O(n log n) by using Benczur-Karger to sparsify our sparsifier!
    • Running time is O(m log² n) + Õ(n)

(min size of a cut that contains e)



Assume ε is constant

Our Work
  • Fung-Harvey '10 (independently Hariharan-Panigrahi '10)
    • Set ρ = O(log² n), p_e = 1/(edge-connectivity of edge e)
    • Edge-connectivity ≥ max { strength, effective conductance }
    • Σ_e p_e ≤ n ⇒ |F| = O(n log² n)
    • Running time is O(m log² n)
    • Advantages:
      • Edge connectivities are natural and easy to compute
      • Faster than previous algorithms
      • Implies sampling by edge strength, effective resistances, …
  • Panigrahi '10
    • A sparsifier with O(n log n / ε²) edges, with running time O(m) in unweighted graphs and O(m) + Õ(n/ε²) in weighted graphs

(min size of a cut that contains e)


Notation: k_uv = min size of a cut separating u and v; for an edge e = {u,v}, write k_e = k_uv

  • Main ideas:
    • Partition edges into connectivity classes E = E₁ ∪ E₂ ∪ … ∪ E_{log n}, where E_i = { e : 2^{i−1} ≤ k_e < 2^i }
    • Prove that the weight of sampled edges that each cut takes from each connectivity class is about right
    • Key point: edges in δ(U) ∩ E_i have nearly the same weight
    • This yields a sparsifier

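The partition into connectivity classes is easy to express in code (a sketch only; it assumes the edge-connectivities k_e have already been computed and are passed in as a dict, and `connectivity_classes` is a hypothetical name):

```python
from collections import defaultdict

def connectivity_classes(k):
    """Group edges into classes E_i = { e : 2^(i-1) <= k_e < 2^i },
    given a map from each edge e to its edge-connectivity k_e (a
    positive integer)."""
    classes = defaultdict(set)
    for e, ke in k.items():
        i = ke.bit_length()  # smallest i with k_e < 2^i, i.e. 2^(i-1) <= k_e
        classes[i].add(e)
    return dict(classes)
```

Within a class, the sampling weights 1/(ρ·p_e) = k_e/ρ differ by at most a factor of 2, which is the "nearly same weight" point above.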


Notation:

    • C = δ(U) is a cut
    • C_i = δ(U) ∩ E_i is a cut-induced set
  • Need to prove: the weight of sampled edges that each cut takes from each connectivity class is about right

(Figure: a cut C partitioned into cut-induced sets C₁, C₂, C₃, C₄.)


Notation: C_i = δ(U) ∩ E_i is a cut-induced set

Prove, for every cut-induced set C_i, that its sampled weight is concentrated

  • Key Ingredients
    • Chernoff bound: prove that the failure probability for each C_i is small
    • Bound on # small cuts: prove that #{ cut-induced sets C_i induced by a small cut C } is small
    • Union bound: the sum of failure probabilities is small, so probably no failures

(Figure: a cut C partitioned into cut-induced sets C₁, C₂, C₃, C₄.)

Counting Small Cut-Induced Sets
  • Theorem: Let G=(V,E) be a graph. Fix any B ⊆ E.

Suppose k_e ≥ K for all e in B. (k_uv = min size of a cut separating u and v.)

Then, for every α ≥ 1, |{ δ(U) ∩ B : |δ(U)| ≤ αK }| < n^{2α}.

  • Corollary: Counting Small Cuts [K'93]

Let G=(V,E) be a graph.

Let K be the edge-connectivity of G (i.e., the global min cut value).

Then, for every α ≥ 1, |{ δ(U) : |δ(U)| ≤ αK }| < n^{2α}.
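The Corollary can be checked exhaustively on a small example. The brute-force sketch below (`count_small_cuts` is an illustrative helper, not from the talk) counts the distinct cuts of the n-cycle, whose edge-connectivity is K = 2, of size at most αK:

```python
from itertools import combinations

def count_small_cuts(n, alpha):
    """Count distinct cuts delta(U) of the n-cycle with |delta(U)| <= alpha*K,
    where K = 2 is the cycle's edge-connectivity, by enumerating every U."""
    K = 2
    edges = [(i, (i + 1) % n) for i in range(n)]
    cuts = set()
    for r in range(1, n):  # nonempty proper subsets U
        for U in combinations(range(n), r):
            Uset = set(U)
            cut = frozenset(e for e in edges if (e[0] in Uset) != (e[1] in Uset))
            if len(cut) <= alpha * K:
                cuts.add(cut)  # frozenset dedupes U and its complement
    return len(cuts)
```

For n = 8 there are C(8,2) = 28 cuts of size 2 and 28 + C(8,4) = 98 cuts of size ≤ 4, comfortably below the bounds n² = 64 and n⁴ = 4096.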

Comparison (slightly unfair)

  • Theorem: Let G=(V,E) be a graph. Fix any B ⊆ E.

Suppose k_e ≥ K for all e in B. (k_uv = min size of a cut separating u and v.)

Then |{ δ(U) ∩ B : |δ(U)| ≤ c }| < n^{2c/K}   ∀ c ≥ 1.

  • Corollary [K'93]: Let G=(V,E) be a graph.

Let K be the edge-connectivity of G (i.e., the global min cut value).

Then |{ δ(U) : |δ(U)| ≤ c }| < n^{2c/K}   ∀ c ≥ 1.

  • How many cuts of size 1?

The Theorem says < n², taking K = c = 1.

The Corollary says < 1, because K = 0.

Comparison (slightly unfair)

  • Theorem: Let G=(V,E) be a graph. Fix any B ⊆ E.

Suppose k_e ≥ K for all e in B. (k_uv = min size of a cut separating u and v.)

Then |{ δ(U) ∩ B : |δ(U)| ≤ c }| < n^{2c/K}   ∀ c ≥ 1.

  • Corollary [K'93]: Let G=(V,E) be a graph.

Let K be the edge-connectivity of G (i.e., the global min cut value).

Then |{ δ(U) : |δ(U)| ≤ c }| < n^{2c/K}   ∀ c ≥ 1.

  • Important point: A cut-induced set is a subset of edges. Many cuts can induce the same set.

(Figure: two different cuts δ(U) and δ(U′) that induce the same cut-induced set.)

Algorithm for Finding a Min Cut [K’93]
  • Input: A graph
  • Output: A minimum cut (maybe)
  • While graph has > 2 vertices
    • Pick an edge at random
    • Contract it
  • End While
  • Output remaining edges
  • Claim: For any min cut, this algorithm outputs it with probability ≥ 1/n².
  • Corollary: There are ≤ n² min cuts.
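The contraction algorithm has a compact rendering (a minimal sketch using union-find for contractions; `karger_cut` is an illustrative name, and the two-triangles-plus-bridge graph in the usage note is a made-up test case):

```python
import random

def karger_cut(edges, seed):
    """One run of the contraction algorithm: repeatedly contract a
    uniformly random non-self-loop edge until 2 super-vertices remain;
    the surviving edges form a cut (a min cut with probability >= 1/n^2)."""
    rng = random.Random(seed)
    verts = {u for e in edges for u in e}
    parent = {v: v for v in verts}

    def find(x):  # union-find with path halving
        while parent[x] != x:
            parent[x] = parent[parent[x]]
            x = parent[x]
        return x

    remaining = len(verts)
    live = list(edges)
    while remaining > 2:
        live = [(u, v) for (u, v) in live if find(u) != find(v)]  # drop self-loops
        u, v = live[rng.randrange(len(live))]
        parent[find(u)] = find(v)  # contract the chosen edge
        remaining -= 1
    return [(u, v) for (u, v) in live if find(u) != find(v)]
```

On two triangles joined by a single bridge, the unique min cut is the bridge; repeating the algorithm with fresh randomness finds it quickly, exactly as the ≥ 1/n² success probability predicts.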

Splitting Off: replace edges {u,v} and {u′,v} with {u,u′} while preserving the edge-connectivity between all pairs of vertices other than v

Finding a Small Cut-Induced Set

(Figure: splitting off at v: edges {u,v} and {u′,v} become the single edge {u,u′}.)

  • Input: A graph G=(V,E), and BµE
  • Output: A cut-induced subset of B
  • While graph has > 2 vertices
    • If some vertex v has no incident edges in B
      • Split-off all edges at v and delete v
    • Pick an edge at random
    • Contract it
  • End While
  • Output remaining edges in B

(Splitting off preserves edge-connectivity by a theorem of Wolfgang Mader.)

  • Claim: For any min cut-induced subset of B, this algorithm outputs it with probability > 1/n².
  • Corollary: There are < n² min cut-induced subsets of B
Sparsifiers from Random Spanning Trees
  • Let H be the union of ρ = log² n uniform random spanning trees, where w_e is 1/(ρ·(effective resistance of e))
  • Then all cuts are preserved and |F| = O(n log² n)
  • Why does this work?
    • Pr_T[ e ∈ T ] = effective resistance of edge e [Kirchhoff 1847]
    • Similar to the usual independent sampling algorithm, with p_e = effective resistance of e
    • Key difference: edges in a random spanning tree are not independent, but they are negatively correlated! [BSST 1940]
    • Chernoff bounds still work [Panconesi-Srinivasan 1997]
Sparsifiers from Random Spanning Trees
  • Let H be the union of ρ = log² n uniform random spanning trees, where w_e is 1/(ρ·(effective resistance of e))
  • Then all cuts are preserved and |F| = O(n log² n)
  • How is this different from independent sampling?
    • Consider an n-cycle. There are n/2 disjoint cuts of size 2.
    • When ρ=1, each such cut has constant probability of containing no sampled edges ⇒ need ρ = Ω(log n) just to get a connected graph
    • With random trees, we get connectivity after just one tree
    • Are O(1) trees enough to preserve all cuts?
    • No! Ω(log n) trees are required
Conclusions
  • Graph sparsifiers are important for fast algorithms and for some combinatorial theorems
  • Sampling by edge-connectivities gives a sparsifier with O(n log² n) edges in O(m log² n) time
    • Improvements: O(n log n) edges in O(m) + Õ(n) time [Panigrahi '10]
  • Sampling by effective resistances also works ⇒ sampling O(log² n) random spanning trees gives a sparsifier

Questions
  • Improve log² n to log n?
  • Does sampling o(log n) random trees give a sparsifier with o(log n) approximation?