
Graph Partitioning and Embedding: A Perspective on Sparsity and SDP Approach

This talk covers the graph partitioning problem, its motivation and history, the SDP approach, and the embedding of negative-type metrics into L1. It also discusses the integrality gap and an integrality-gap instance built from the hypercube.


Presentation Transcript


  1. SDP-Based Approach for Graph Partitioning and Embedding Negative Type Metrics into L1 (a CS perspective and a Math perspective, Parts I & II). Subhash Khot (Georgia Tech), Nisheeth K. Vishnoi (IBM Research and Georgia Tech)

  2. CS Story: Sparsity
  For a cut (S, Sᶜ) of a graph, the sparsity of the cut is |E(S, Sᶜ)| / (|S|·|Sᶜ|).
  Sparsest Cut (SC): find the cut of minimum sparsity.
  b-Balanced Separator (BS): among cuts with |S|, |Sᶜ| ≥ bn, minimize |E(S, Sᶜ)| (typically b ~ ½).
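
To make the definition concrete, here is a minimal sketch (my own example, not from the talk) that computes the sparsity of a given cut from an edge list.

```python
# Sparsity of a cut (S, S^c): |E(S, S^c)| / (|S| * |S^c|).
def cut_sparsity(n, edges, S):
    """n: vertices are 0..n-1; edges: list of pairs (u, v); S: a subset of vertices."""
    S = set(S)
    crossing = sum(1 for u, v in edges if (u in S) != (v in S))  # |E(S, S^c)|
    return crossing / (len(S) * (n - len(S)))

# Example: the 4-cycle 0-1-2-3-0; the cut {0, 1} vs {2, 3} crosses 2 edges.
print(cut_sparsity(4, [(0, 1), (1, 2), (2, 3), (3, 0)], {0, 1}))  # 2 / (2*2) = 0.5
```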

  3. Applications
  Related measures: sparsity is often referred to as graph conductance; edge expansion and the isoperimetric constant are closely related quantities.
  Applications: VLSI layout, clustering, Markov chains, geometric (metric) embeddings.

  4. Estimating Sparsity
  Sparsity is hard to compute exactly, so we compute "approximations".
  Spectral bound: with λ2 the spectral gap (second eigenvalue of the Laplacian) and ∆ the maximum degree,
    λ2(G)/n ≤ sparsity(G) ≤ (3/n)·√(λ2(G)·∆).
  This is not satisfactory, e.g. on the n-cycle.
  Approximation algorithm: for any graph G on n vertices, compute a(G) that is within a multiplicative factor f(n) ≥ 1 of the sparsity of G, i.e.
    a(G) ≤ sparsity(G) ≤ a(G)·f(n).
  f(n) = 1 is hard. What about f(n) = log n, or even 10?
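
A small numerical check of the spectral bound as stated on this slide; this is my own illustration (numpy assumed), and the brute-force sparsest cut is only viable for tiny graphs such as the n-cycle mentioned above.

```python
import itertools
import numpy as np

def laplacian(n, edges):
    """Combinatorial Laplacian L = D - A."""
    L = np.zeros((n, n))
    for u, v in edges:
        L[u, u] += 1; L[v, v] += 1
        L[u, v] -= 1; L[v, u] -= 1
    return L

def sparsest_cut(n, edges):
    """Brute-force sparsest cut (exponential; tiny n only)."""
    best = float("inf")
    for r in range(1, n // 2 + 1):
        for S in itertools.combinations(range(n), r):
            S = set(S)
            cross = sum(1 for u, v in edges if (u in S) != (v in S))
            best = min(best, cross / (len(S) * (n - len(S))))
    return best

n = 8
edges = [(i, (i + 1) % n) for i in range(n)]                # the n-cycle
lam2 = np.sort(np.linalg.eigvalsh(laplacian(n, edges)))[1]  # spectral gap
delta = 2                                                   # max degree of the cycle
# lambda_2/n <= sparsity <= (3/n) * sqrt(lambda_2 * Delta), as on the slide:
print(lam2 / n, sparsest_cut(n, edges), 3 / n * np.sqrt(lam2 * delta))
```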

  5. History
  Algorithms:
  • Spectral graph partitioning: Alon-Milman '85, Spielman-Teng '96 (eigenvector based)
  • O(log n): Leighton-Rao '88 (Linear Programming (LP) based)
  • O(log n): Linial-London-Rabinovich '94, Aumann-Rabani '94 (connection to metric embeddings)
  • O(√log n): Arora-Rao-Vazirani '04 (Semi-Definite Programming (SDP) based)
  Hardness:
  • NP-hard
  • Hard to approximate within any constant factor (assuming UGC): Chawla-Krauthgamer-Kumar-Rabani-Sivakumar '05, Khot-Vishnoi '05

  6. Outline of this Talk
  Part I
  • Graph Partitioning: Motivation & History; SDP Approach
  • Embedding Negative Type Metrics into L1: Metric Spaces and Embeddability; Cut Cone ≈ L1; Negative Type Metrics as SDP Solutions; LLR/AR Connection
  Part II
  • Integrality Gap Instance: Hypercube and Cuts; Kahn-Kalai-Linial (isoperimetry of the hypercube); The Graph; The SDP Solution
  Conclusion

  7. Outline of this Talk (the same outline, repeated)

  8. Quadratic Program for BS
  Balanced Separator: input G(V, E); output (S, Sᶜ) with |S|, |Sᶜ| = n/2 minimizing |E(S, Sᶜ)|.
  Quadratic program:
    ∀ i, vi ∈ {-1, 1}
    Minimize ¼ Σ_{ij ∈ E} |vi − vj|²
    subject to Σ_{i<j} |vi − vj|² = n²

  9. SDP for Balanced Separator
  The quadratic program (slide 8), alongside its SDP relaxation:
    ∀ i, vi ∈ R^n, ||vi|| = 1
    Minimize ¼ Σ_{ij ∈ E} ||vi − vj||²
    subject to (well-separatedness) Σ_{i<j} ||vi − vj||² = n²
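
A hedged sketch of how this relaxation could be written down with an off-the-shelf SDP solver (cvxpy assumed; this code is my illustration, not part of the talk). It uses the Gram matrix X with X[i, j] = ⟨vi, vj⟩, so that ||vi − vj||² = X[i, i] + X[j, j] − 2·X[i, j].

```python
import cvxpy as cp

def balanced_separator_sdp(n, edges):
    """SDP relaxation of Balanced Separator, in Gram-matrix form."""
    X = cp.Variable((n, n), PSD=True)                  # X[i, j] = <v_i, v_j>
    sq = lambda i, j: X[i, i] + X[j, j] - 2 * X[i, j]  # ||v_i - v_j||^2
    constraints = [cp.diag(X) == 1]                    # unit vectors
    # Well-separatedness: sum over i < j of ||v_i - v_j||^2 equals n^2.
    constraints.append(sum(sq(i, j) for i in range(n) for j in range(i + 1, n)) == n ** 2)
    objective = cp.Minimize(0.25 * sum(sq(i, j) for i, j in edges))
    problem = cp.Problem(objective, constraints)
    problem.solve()
    return problem.value

# e.g. balanced_separator_sdp(8, [(i, (i + 1) % 8) for i in range(8)])
```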

  10. Why is this a Relaxation?
  For the SDP relaxation of slide 9: let u be a unit vector and (S, Sᶜ) a cut with |S|, |Sᶜ| = n/2.
  • Set vi = u for i ∈ S and vi = −u for i ∈ Sᶜ.
  • Then ||vi − vj||² = 4·δS(i, j), so well-separatedness holds and the cost of this solution is exactly |E(S, Sᶜ)|.
  • Hence sdp ≤ opt.
  The SDP can be computed in polynomial time! But as stated it boils down to the spectral approach; nothing gained(?)

  11. Quadratic Program for BS …
  Balanced Separator: input G(V, E); output (S, Sᶜ) with |S|, |Sᶜ| = n/2 minimizing |E(S, Sᶜ)|.
  Quadratic program, with a (redundant) constraint added:
    ∀ i, vi ∈ {-1, 1}
    Minimize ¼ Σ_{ij ∈ E} |vi − vj|²
    subject to Σ_{i<j} |vi − vj|² = n²
    Triangle inequality: ∀ i, j, k, |vi − vj|² + |vj − vk|² ≥ |vi − vk|² (redundant here)

  12. SDP for Balanced Separator …
  The same triangle inequalities carry over to the SDP relaxation:
    ∀ i, vi ∈ R^n, ||vi|| = 1
    Minimize ¼ Σ_{ij ∈ E} ||vi − vj||²
    subject to (well-separatedness) Σ_{i<j} ||vi − vj||² = n²
    Triangle inequality: ∀ i, j, k, ||vi − vj||² + ||vj − vk||² ≥ ||vi − vk||²
  Still a relaxation …
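
Continuing the hypothetical cvxpy sketch above, the ℓ₂² triangle inequalities can be added as extra constraints on the same Gram matrix (there are Θ(n³) of them, so this is only for very small instances).

```python
from itertools import permutations

def l22_triangle_constraints(X, n):
    """For every ordered triple (i, j, k): ||v_i-v_j||^2 + ||v_j-v_k||^2 >= ||v_i-v_k||^2.
    (Ordered triples repeat each constraint; harmless for a sketch.)"""
    sq = lambda i, j: X[i, i] + X[j, j] - 2 * X[i, j]
    return [sq(i, j) + sq(j, k) >= sq(i, k) for i, j, k in permutations(range(n), 3)]

# Usage inside balanced_separator_sdp:  constraints += l22_triangle_constraints(X, n)
```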

  13. Geometry of the Triangle Inequality
  The ℓ₂² triangle inequality forces the angle at vj in every triple vi, vj, vk to be at most 90°. Consequently, along a path of t steps, each step of length 1, the endpoints lie at distance at most √t. This rules out the embedding obtained by the spectral method!
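
Spelling out the √t bound on this slide (my paraphrase of the argument): repeatedly applying the squared-distance triangle inequality along a path of unit-length steps gives

```latex
% Path v_0, v_1, \dots, v_t with \|v_{i-1} - v_i\| = 1 for each step, and
% d(i, j) = \|v_i - v_j\|^2 obeying the triangle inequality:
\|v_0 - v_t\|^2 \;\le\; \|v_0 - v_1\|^2 + \|v_1 - v_t\|^2
             \;\le\; \cdots \;\le\; \sum_{i=1}^{t} \|v_{i-1} - v_i\|^2 \;=\; t,
\qquad\text{hence}\qquad \|v_0 - v_t\| \le \sqrt{t}.
```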

  14. Integrality Gap: Upper Bound
  Arora-Rao-Vazirani '04: O(√log n) for Sparsest Cut and Balanced Separator, i.e. the sdp value is within a factor of O(√log n) of opt.
  Integrality gap: the maximum, over all graphs on n vertices, of the ratio opt/sdp (as a function of n).
  ARV conjectured that the integrality gap is upper bounded by some constant (independent of n). Lack of any counterexample!

  15. Outline of this Talk (outline repeated; next: Embedding Negative Type Metrics into L1)

  16. Math Story: Metric Embeddings
  • A metric is a distance function d on [n] × [n] such that d(i, j) + d(j, k) ≥ d(i, k) (triangle inequality).
  • A metric d embeds into a metric ρ with distortion Γ ≥ 1 if there is a map φ such that for all i, j: d(i, j) ≤ ρ(φ(i), φ(j)) ≤ Γ·d(i, j) (distances are preserved up to a factor of Γ).

  17. Negative Type Metrics (squared-L2)
  • d on {1, 2, …, n} is of negative type if there are vectors v1, v2, …, vn such that d(i, j) = ||vi − vj||² satisfies the triangle inequality.
  • Equivalently: ∀ i, j, k, ||vi − vj||² + ||vj − vk||² ≥ ||vi − vk||².
  • NEG = the class of such metrics; they arise as SDP solutions.
  • L1 ⊆ NEG.
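
A small sketch (my own, numpy assumed) of the slide's condition: given vectors, check whether d(i, j) = ||vi − vj||² satisfies the triangle inequality, i.e. whether it is a negative-type (squared-ℓ2) metric.

```python
import itertools
import numpy as np

def is_negative_type(vectors, tol=1e-9):
    """True if d(i, j) = ||v_i - v_j||^2 satisfies the triangle inequality."""
    d = lambda i, j: float(np.sum((vectors[i] - vectors[j]) ** 2))
    return all(d(i, j) + d(j, k) >= d(i, k) - tol
               for i, j, k in itertools.permutations(range(len(vectors)), 3))

cube = np.array(list(itertools.product([-1, 1], repeat=3)), dtype=float)
print(is_negative_type(cube))                   # True: squared distances on {-1,1}^3 are 4 * Hamming
print(is_negative_type(np.random.randn(6, 3)))  # typically False for random points
```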

  18. Embedding NEG into L1
  Conjecture (made explicit by Goemans and Linial, about '95): every NEG metric embeds into L1 with O(1) (constant) distortion.
  What's the connection to sparsity?

  19. Cuts and L1 Metrics
  Cut metrics on {1, 2, …, n}: δS(i, j) = 1 if i, j are separated by (S, Sᶜ), and 0 otherwise.
  Given a non-negative real pS for every subset S of [n], define d(i, j) := Σ_S pS·δS(i, j).
  Fact: d is isometrically embeddable in L1.
  Further: every L1-embeddable metric on n points can be written as a non-negative linear combination of cut metrics on {1, …, n}.
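
The "Fact" on this slide can be checked mechanically: give each cut S its own L1 coordinate with value pS·[i ∈ S]. A minimal sketch (my own example weights) follows.

```python
import numpy as np

# A non-negative combination of cut metrics: d(i, j) = sum_S p_S * delta_S(i, j).
n = 4
cuts = {frozenset({0, 1}): 2.0, frozenset({0, 2}): 0.5, frozenset({0}): 1.0}  # p_S > 0

def d(i, j):
    return sum(p for S, p in cuts.items() if (i in S) != (j in S))

# Isometric embedding into l_1: one coordinate per cut, i |-> p_S * [i in S].
phi = {i: np.array([p * (i in S) for S, p in cuts.items()]) for i in range(n)}

for i in range(n):
    for j in range(n):
        assert abs(np.abs(phi[i] - phi[j]).sum() - d(i, j)) < 1e-12
print("d embeds isometrically into l_1 (for these weights)")
```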

  20. Sparsest Cut ≈ Optimizing Over L1
  Minimize over cuts S ⊆ V of  (Σ_{i~j} δS(i, j)) / (Σ_{i<j} δS(i, j))
    =  Minimize over L1 metrics d of  (Σ_{i~j} d(i, j)) / (Σ_{i<j} d(i, j))
  (here i~j ranges over the edges)  [Aumann-Rabani '98, Linial-London-Rabinovich '94]
  • LP relaxation: optimize the same ratio over all METRICS.
  • [Bourgain '85] Every n-point metric embeds into L1 with O(log n) distortion.
  • Together this gives an O(log n)-factor approximation for sparsity!

  21. Metric Embeddings & Sparsity
  • Optimizing over cuts ≈ optimizing over L1 metrics.
  • The SDP solution ≈ optimizing over NEG.
  • Goemans-Linial/ARV Conjecture: NEG embeds into L1 with O(1) distortion / the integrality gap is O(1).
  • This would imply an O(1)-approximation algorithm for estimating sparsity!

  22. Integrality Gap: Lower Bound
  • A log log n integrality gap instance for Sparsest Cut and Balanced Separator: Khot-Vishnoi '05, Krauthgamer-Rabani '06, Devanur-Khot-Saket-Vishnoi '06.
  • Disproves the GL/ARV Conjecture.
  • Previous best lower bound: 1.16 [Zatloukal '04].

  23. Outline of this Talk (outline repeated)

  24. Outline of this Talk (outline repeated; next: Part II, the integrality gap instance)

  25. Recall: Integrality Gap Lower Bound
  • A log log n integrality gap instance for Sparsest Cut and Balanced Separator: Khot-Vishnoi '05, Krauthgamer-Rabani '06, Devanur-Khot-Saket-Vishnoi '06.
  • Disproves the GL/ARV Conjecture.
  • Previous best lower bound: 1.16 [Zatloukal '04].

  26. Integrality Gap: Lower Bound
  Theorem: one can construct a graph G({1, …, n}, E) and a unit-vector assignment i ↦ vi ∈ R^n such that
  • G is an "expander": every ¼-balanced cut has Ω(|E|·(log log n)/log n) edges (via Kahn-Kalai-Linial);
  • the SDP solution is "low": O(|E|/log n);
  • well-separatedness: Σ_{i<j} ||vi − vj||² = n²;
  • triangle inequality: d(i, j) := ||vi − vj||² is a metric.
  Integrality gap: Ω(log log n).

  27. Starting Point: the Hypercube(!)
  H = {-1, 1}^k, n = 2^k. (Figure: the 3-dimensional cube with vertices (±1, ±1, ±1).)

  28. Hypercube …
  H = {-1, 1}^k. Advantages?
  • Cuts in H are well understood: tools from Fourier analysis.
  • Each vertex is a vector in R^k: a starting point for the SDP solution.
  But … the hypercube has "small" balanced cuts: coordinate cuts have a 1/k fraction of the edges.

  29. Cuts in the Hypercube: Coordinate Cuts
  The i-th coordinate cut consists of the edges across pairs of vertices differing in the i-th bit; it has |E(H)|/k edges.
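
A quick check of the count on this slide (my own snippet): for the k-dimensional hypercube, the i-th coordinate cut crosses exactly |E(H)|/k edges.

```python
import itertools

k = 4
vertices = list(itertools.product([-1, 1], repeat=k))
# Hypercube edges join vertices differing in exactly one coordinate.
edges = [(x, y) for x, y in itertools.combinations(vertices, 2)
         if sum(a != b for a, b in zip(x, y)) == 1]

i = 0
crossing = sum(1 for x, y in edges if x[i] != y[i])   # the i-th coordinate cut
print(len(edges), crossing, len(edges) // k)          # 32, 8, 8 for k = 4
```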

  30. Cuts in the Hypercube …
  Decompose any cut into its coordinate cuts: any balanced cut has a coordinate cut which contributes at least |E(H)|/k² edges.

  31. Kahn-Kalai-Linial
  Stronger: any balanced cut has a coordinate cut which contributes Ω(|E(H)|·(log k)/k²) edges.

  32. Increasing the Size of Balanced Cuts
  Consider balanced cuts in which the coordinates are indistinguishable (with respect to their contribution to the cut). Then each coordinate contributes equally, for a total of Ω(|E(H)|·(log k)/k) edges. This can be achieved by symmetrizing the hypercube!

  33. Example: the 4-dimensional hypercube
  (Figure: its 16 vertices (±1, ±1, ±1, ±1).)

  34. More Formally …
  • Take H = {-1, 1}^k with a rotation group acting on its coordinates; the action partitions H into equivalence classes V1, …, Vn.
  • Each Vi is a vertex of G(V, E); the edges are the hypercube edges, so |E(G)| = |E(H)| and k ~ log n.
  • Balanced cuts in G correspond to balanced cuts in H.
  • KKL: any balanced cut (in H) has "a" coordinate cut which contributes Ω(|E(H)|·(log k)/k²) edges to the cut.
  • The group is transitive, so "every" coordinate cut has the same contribution.
  • Hence any balanced cut in G has ≥ |E(G)|·(log k)/k ~ |E(G)|·(log log n)/(log n) edges.
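
A sketch of the quotient construction described above. The talk only says "a rotation group acting on the coordinates"; for concreteness this toy version assumes the group of cyclic shifts, which is one natural choice and not necessarily the one used in the actual construction.

```python
import itertools

k = 4

def canon(x):
    """Canonical representative of x's equivalence class under cyclic coordinate shifts."""
    return min(x[j:] + x[:j] for j in range(len(x)))

H = list(itertools.product([-1, 1], repeat=k))
classes = {canon(x) for x in H}
print("vertices of the quotient graph G:", len(classes))

# Every hypercube edge (flip one bit) becomes an edge between the endpoints' classes.
quotient_edges = []
for x in H:
    for i in range(k):
        y = x[:i] + (-x[i],) + x[i + 1:]
        quotient_edges.append((canon(x), canon(y)))
print("|E(G)| (with multiplicity):", len(quotient_edges) // 2)   # = |E(H)|, each counted twice
```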

  35. Integrality Gap: Lower Bound (the theorem of slide 26, repeated; it remains to exhibit the "low" SDP solution)

  36. SDP Solution
  (Figure: the 16 vertices of the 4-dimensional hypercube again, as on slide 33.)

  37. Formally: SDP Solution
  • Vertex: an equivalence class {x1, x2, …, xk} (the rotations of a point).
  • Vector: (1/√k)·Σ_j xj.
  • Observations:
    • An edge runs across two nodes differing in one bit; its contribution to the sdp is ~1/k, so sdp ≤ |E(G)|/k = |E(G)|/(log n).
    • Triangle inequality: a little bit of work and case analysis! For most classes, {x1, …, xk} is "nearly orthogonal".
  • Hidden details: Gram-Schmidt orthogonalization, tensoring, well-separatedness.
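
The actual vectors need the "hidden" ingredients listed above (tensoring, Gram-Schmidt), which are not reproduced here. The toy calculation below only illustrates the "~1/k per edge" bookkeeping, under the simplifying assumption that each hypercube vertex itself is assigned the unit vector x/√k.

```python
import numpy as np

# Toy illustration only, NOT the actual construction: with v_x = x / sqrt(k),
# an edge flips one coordinate, so ||v_x - v_y||^2 = 4/k, and the SDP objective
# (1/4) * sum over edges comes to |E|/k -- matching "sdp <= |E(G)|/k".
k = 10
x = np.ones(k)
y = x.copy()
y[0] = -1.0                                # a neighbour across the first coordinate
vx, vy = x / np.sqrt(k), y / np.sqrt(k)    # unit vectors
print(np.sum((vx - vy) ** 2), 4 / k)       # both equal 4/k = 0.4
```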

  38. Conclusion
  • A (simple) log log n integrality gap for SC/BS.
  • Can the gap between log log n and log n be closed?
  • [Lee-Naor '06, Cheeger-Kleiner '06]: another counterexample, which may give a (log n)^c gap.
  Thank you!
