
Generic Rounding Schemes for SDP Relaxations

Generic Rounding Schemes for SDP Relaxations. Prasad Raghavendra, Georgia Institute of Technology, Atlanta. "Squish and Solve" Rounding Schemes [R, Steurer 2009]. Rounding Schemes via Dictatorship Tests [R, 2008]. Rounding SDP Hierarchies via Correlation [Barak, R, Steurer 2011], [R, Tan 2011].



Presentation Transcript


  1. Generic Rounding Schemes for SDP Relaxations. Prasad Raghavendra, Georgia Institute of Technology, Atlanta

  2. "Squish and Solve" Rounding Schemes [R, Steurer 2009] • Rounding Schemes via Dictatorship Tests [R, 2008] • Rounding SDP Hierarchies via Correlation [Barak, R, Steurer 2011], [R, Tan 2011]

  3. "Squish and Solve" Rounding Schemes [R, Steurer 2009]

  4. Max Cut. Input: a weighted graph G. Find: a cut with maximum number/weight of crossing edges. (Figure: an example graph with edge weights 10, 15, 7, 1, 1, 3; the objective is the fraction of crossing edges.)

  5. MaxCut Semidefinite Program [Goemans-Williamson 94]. The Max Cut problem: given a graph G, find a cut that maximizes the number of crossing edges. The SDP relaxation has vector variables v1, v2, ..., vn with |vi|² = 1: embed the graph on the n-dimensional unit ball, maximizing ¼ × (average squared length of the edges). (Figure: the example graph embedded on the unit sphere as vectors v1, ..., v5.)

  6. MaxCut Rounding. Cut the sphere by a random hyperplane, and output the induced graph cut. This gives a 0.878-approximation for the problem [Goemans-Williamson].
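
The hyperplane-rounding step is easy to sketch in code. The following is an illustrative sketch, not the speaker's implementation; the toy instance is a 4-cycle whose optimal SDP embedding places the two sides of the bipartition at antipodal points, so every random hyperplane recovers the maximum cut of value 4.

```python
import numpy as np

def hyperplane_round(vectors, edges, weights, trials=10, seed=0):
    """Cut the sphere by random hyperplanes; return the best induced cut."""
    rng = np.random.default_rng(seed)
    best_val, best_cut = -1.0, None
    for _ in range(trials):
        g = rng.standard_normal(vectors.shape[1])  # normal of a random hyperplane
        cut = np.sign(vectors @ g)                 # which side each vertex falls on
        val = sum(w for (i, j), w in zip(edges, weights) if cut[i] != cut[j])
        if val > best_val:
            best_val, best_cut = val, cut
    return best_val, best_cut

# Optimal SDP embedding of a 4-cycle: the two sides sit at antipodes.
vecs = np.array([[1.0, 0.0], [-1.0, 0.0], [1.0, 0.0], [-1.0, 0.0]])
edges = [(0, 1), (1, 2), (2, 3), (3, 0)]
val, cut = hyperplane_round(vecs, edges, [1.0] * 4)
```

On general instances the vectors are spread over the sphere and a single random hyperplane only guarantees the 0.878 factor in expectation, which is why the sketch keeps the best cut over several trials.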

  7. Squish and Solve Rounding

  8. Approximation using Finite Models. Starting from a Λ-CSP instance ℑ, fold variables (identify groups of variables) to obtain a finite Λ-CSP instance ℑfinite. Solve ℑfinite optimally, which takes only constant time, and unfold the optimal assignment to get an approximate solution for ℑ. Challenge: ensure that ℑfinite has a good solution.

  9. Approximation using Finite Models. A general method for CSPs, giving a PTAS for dense instances [Frieze-Kannan]: for a dense instance ℑ, it is possible to construct a finite model ℑfinite with OPT(ℑfinite) ≥ (1−ε) OPT(ℑ). What we will do instead: SDP value(ℑfinite) > (1−ε) SDP value(ℑ).

  10. Analysis of the Rounding Scheme. The Λ-CSP instance ℑ has SDP value α, and the finite instance ℑfinite has SDP value > α − ε. An optimal solution for ℑfinite has some value β; unfolding it gives a solution for ℑ with rounded value β. Hence: rounding ratio for ℑ < (1+ε) × integrality ratio for ℑfinite.

  11. Constructing Finite Models (MaxCut)

  12. Step 1: Dimension Reduction. Pick d = 1/ε⁴ random Gaussian vectors {G1, G2, ..., Gd} and project the SDP solution along these directions, mapping each vector V to V' = (V·G1, V·G2, ..., V·Gd). Step 2: Surgery. Scale every vector V' to unit length. Step 3: Discretization. Pick an ε-net for the d-dimensional sphere and move every vertex to the nearest point in the ε-net. The finite model is the graph on the ε-net points, in a constant number of dimensions.
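
The three steps can be sketched as follows. This is an illustrative sketch only: the coordinate grid with spacing ε/√d used in Step 3 (each snap moves a vector by at most ε/2) is a simple stand-in for whichever ε-net construction is actually used.

```python
import numpy as np

def finite_model(vectors, eps, seed=0):
    """Steps 1-3: project to d = 1/eps^4 dimensions, rescale to unit
    length, then snap to a grid playing the role of the eps-net."""
    rng = np.random.default_rng(seed)
    d = int(1 / eps**4)
    # Step 1: project along d random Gaussian directions (scaled so that
    # norms are preserved in expectation, as in Johnson-Lindenstrauss).
    G = rng.standard_normal((vectors.shape[1], d)) / np.sqrt(d)
    V = vectors @ G
    # Step 2: surgery -- scale every projected vector to unit length.
    V /= np.linalg.norm(V, axis=1, keepdims=True)
    # Step 3: discretization -- move to the nearest grid point (spacing
    # eps/sqrt(d)), then push the net points back onto the sphere.
    V = np.round(V * np.sqrt(d) / eps) * (eps / np.sqrt(d))
    V /= np.linalg.norm(V, axis=1, keepdims=True)
    return V

rng = np.random.default_rng(1)
X = rng.standard_normal((10, 50))
X /= np.linalg.norm(X, axis=1, keepdims=True)   # 10 unit SDP vectors in R^50
Y = finite_model(X, eps=0.5)                     # lands in d = 16 dimensions
```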

  13. To show: SDP value(ℑfinite) > (1−ε) SDP value(ℑ). Johnson-Lindenstrauss Lemma: "distances are almost preserved under random projections." If V', U' are random projections of unit vectors V, U onto 1/ε⁴ directions, then Pr[|V·U − V'·U'| > ε] < ε².

  14. To show: SDP value(ℑfinite) > (1−ε) SDP value(ℑ). In SDP value(ℑ), the contribution of an edge e = (U,V) is |U−V|² = 2 − 2 U·V; the SDP vectors for ℑfinite are the corresponding points in the ε-net. Step 1 (dimension reduction, projecting along 1/ε⁴ random directions): with probability > 1 − ε², we have ||U−V|² − |U'−V'|²| < 2ε. Step 2 (surgery, scaling every V' to unit length): with probability > 1 − 2ε², we have 1 − ε < |V'|², |U'|² < 1 + ε, so normalization changes the distance by at most 2ε. Step 3 (discretization, moving to the nearest ε-net point): changes each edge length by at most 2ε.

  15. To show: SDP value(ℑfinite) > (1−ε) SDP value(ℑ). Analysis: combining the bounds from Steps 1-3, with probability 1 − 3ε² the contribution of an edge e changes by < 6ε. In expectation, for a (1 − 3ε²) fraction of the edges the contribution changes by < 6ε, so SDP value(ℑfinite) > SDP value(ℑ) − 6ε − 3ε².

  16. Generic Rounding for CSPs [Raghavendra-Steurer 08]. For any CSP Λ and any ε > 0, there exists an efficient algorithm A with rounding ratio_A(Λ) (i.e., approximation ratio) ≥ (1−ε) × the integrality gap of a natural SDP relaxation of Λ (this SDP is optimal under the UGC). Drawbacks: the running time of A is large for a CSP over alphabet size q and arity k, and there is no explicit approximation ratio. Still, this unifies a large number of existing rounding schemes, and the resulting algorithm A is as good as all known algorithms for CSPs (without dependence on n).

  17. Computing Integrality Gaps. Theorem: for any CSP Λ and any ε > 0, there exists an algorithm A that computes the integrality gap(Λ) within accuracy ε: run through all instances of size exp(poly(k, q, 1/ε)), where q is the alphabet size and k the arity of the CSP; this also governs the running time of A.

  18. Rounding Schemes via Dictatorship Tests [R, 2008]

  19. Dictatorship Test. Given a function F: {-1,1}^R → {-1,1}: toss random coins, make a few queries to F, and output either ACCEPT or REJECT. If F is a dictator function, F(x1, ..., xR) = xi, then Pr[ACCEPT] = completeness. If F is far from every dictator function (no influential coordinate), then Pr[ACCEPT] = soundness.

  20. UG Hardness Rule of Thumb [Khot-Kindler-Mossel-O'Donnell]: given a dictatorship test where • completeness = c and soundness = αc, and • the verifier's tests are predicates from a CSP Λ, it is UG-hard to approximate the CSP Λ to a factor better than α.

  21. A Dictatorship Test for MaxCut. A dictatorship test is a graph G on the hypercube {-1,1}^100, and a cut gives a function F on the hypercube. Completeness: the value of the dictator cuts F(x) = xi. Soundness: the maximum value attained by a cut far from every dictator.

  22. Overview. Completeness: value of dictator cuts = SDP value(G). Soundness: a cut far from every dictator gives a cut on the graph G with the same value. • Rounding scheme: construct the dictatorship-test gadget (a graph on the 100-dimensional hypercube) from the SDP solution for G, then try all possible cuts far from dictators and obtain a cut back in the graph G. Guarantee: the algorithm's output value ≥ the soundness of the dictatorship-test gadget.

  23. UG Hardness. "On instances with value C, it is NP-hard to output a solution of value S, assuming the UGC." A dictatorship test with completeness C and soundness S yields exactly this hardness [KKMO]. In our case, completeness = SDP value(G) and soundness < the algorithm's output, so one can't get a better approximation assuming the UGC!

  24. The Goal. Completeness: value of dictator cuts = SDP value(G). Soundness: a cut far from every dictator gives a cut on the graph G with the same value. (Figure: the SDP solution for the example graph alongside the 100-dimensional hypercube.)

  25. Influences. Definition: the influence of the i-th coordinate on a function F: {-1,1}^R → [-1,1] under a product distribution μ^R is Inf_i^μ(F) = E[Variance_i[F]], where the variance is over resampling the i-th coordinate from μ after a random fixing of all other coordinates from μ^(R-1). (For the i-th dictator function, Inf_i^μ(F) is as large as the variance of F.) Definition: a function is τ-quasirandom if Inf_i^μ(F) ≤ τ for all i.
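
For the uniform distribution on {-1,1}^R, the influence can be computed exactly by enumeration. The sketch below is illustrative (small R only): the i-th dictator has influence 1 on coordinate i and 0 elsewhere, while majority on 3 bits has influence 1/2 on every coordinate.

```python
from itertools import product

def influence(F, R, i):
    """Influence of coordinate i on F: {-1,1}^R -> [-1,1] under the
    uniform distribution: expected variance of F when coordinate i is
    resampled after a random fixing of all other coordinates."""
    total = 0.0
    for x in product([-1, 1], repeat=R):
        x_plus = x[:i] + (1,) + x[i + 1:]
        x_minus = x[:i] + (-1,) + x[i + 1:]
        # Variance of a fair +/-1 coin flip between the two values.
        total += (F(x_plus) - F(x_minus)) ** 2 / 4
    return total / 2 ** R

dictator = lambda x: x[0]
majority = lambda x: 1 if sum(x) > 0 else -1
```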

  26. Dimension Reduction. Recall the Max Cut SDP: embed the graph on the n-dimensional unit ball, maximizing ¼ × (average squared length of the edges). Project the solution to a random 1/ε²-dimensional subspace, a constant-dimensional hyperplane. New SDP value = old SDP value ± ε.

  27. Making the Instance Harder. SDP value = average squared length of an edge. Transformations: • rotation does not change the SDP value; • the union of two rotations has the same SDP value. Sphere graph H: the union of all possible rotations of G. SDP value(graph G) = SDP value(sphere graph H).

  28. Making the Instance Harder. If MaxCut(H) = S, then MaxCut(G) ≥ S: pick a random rotation of G and read off the cut induced on it. Thus MaxCut(H) ≤ MaxCut(G), while SDP value(G) = SDP value(H), so H is only harder.

  29. Hypercube Graph. For each edge e of G, connect every pair of vertices in the hypercube {-1,1}^100 separated by the length of e. To generate edges of expected squared length d: 1) start with a random x ∈ {-1,1}^100; 2) generate y by flipping each bit of x with probability d/4; output (x, y).
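
The edge-generation procedure can be checked empirically. In this illustrative sketch the squared distance is normalized by the dimension R, so that the hypercube sits on the unit sphere; flipping each bit with probability d/4 then gives edges of expected normalized squared length d, since each flipped bit contributes 4/R.

```python
import numpy as np

def hypercube_edge(rng, R=100, d=1.0):
    """Sample one edge (x, y): pick x uniformly from {-1,1}^R, then flip
    each bit independently with probability d/4."""
    x = rng.choice([-1, 1], size=R)
    flips = rng.random(R) < d / 4.0
    y = np.where(flips, -x, x)
    return x, y

rng = np.random.default_rng(0)
R, d = 100, 1.0
# Normalized squared length of a sampled edge, averaged over many samples.
samples = [np.sum((x - y) ** 2) / R for x, y in
           (hypercube_edge(rng, R, d) for _ in range(20000))]
mean_sq_len = float(np.mean(samples))   # concentrates around d = 1.0
```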

  30. Dichotomy of Cuts. A cut gives a function F on the hypercube, F: {-1,1}^100 → {-1,1}. Dictator cuts: F(x) = xi. Cuts far from dictators: the influence of each coordinate on the function F is small.

  31. Dictator Cuts. For each edge e = (u, v) of G, consider all the pairs (X, Y) in the hypercube corresponding to e; the number of bits in which X and Y differ is |u−v|²/4. The fraction of dictators that cut one such edge (X, Y) equals the fraction of bits in which X and Y differ, so the fraction of edges cut by a random dictator = ¼ × average squared distance. Hence the value of the dictator cuts = SDP value(G).

  32. Cuts Far From Dictators. Intuition: the sphere graph is uniform over all directions, while in the hypercube graph the axes are special directions. If a cut does not respect the axes, then it should not distinguish between the sphere and hypercube graphs.

  33. The Invariance Principle. Central Limit Theorem: "a sum of a large number of {-1,1} random variables has a similar distribution to a sum of a large number of Gaussian random variables." Invariance Principle for Low-Degree Polynomials [Rotar], [Mossel-O'Donnell-Oleszkiewicz], [Mossel 2008]: "if a low-degree polynomial F has no influential coordinate, then F({-1,1}^n) and F(Gaussian) have similar distributions."

  34. Hypercube vs Sphere. Let F: {-1,1}^100 → {-1,1} be a cut far from every dictator, and let P: sphere → nearly {-1,1} be the multilinear extension of F. By the invariance principle, the MaxCut value of F on the hypercube ≈ the MaxCut value of P on the sphere graph H.

  35. Rounding SDP Hierarchies via Correlation [Barak,R,Steurer 2011] [R,Tan 2011]

  36. The Unique Games Barrier. It is Unique Games-hard to approximate the following to a factor better than that given by the simple SDP relaxation: • Constraint Satisfaction Problems [R 08] • Metric Labelling Problems [Manokaran-Naor-R.-Schwartz 08] • Ordering Constraint Satisfaction Problems [Guruswami-Hastad-Manokaran-R.] • Kernel Clustering Problems [Khot-Naor 09] • Grothendieck Problem [R.-Steurer 09] • Monotone-Hard-Constraint CSPs [Kumar-Manokaran-Tulsiani-Vishnoi]

  37. For the Non-Believers [R.-Steurer 09]. Unconditionally, adding all valid constraints on at most 2^O((log log n)^(1/4)) variables to the simple SDP does not improve the approximation ratio for: Constraint Satisfaction Problems, Metric Labelling Problems, Ordering Constraint Satisfaction Problems, Kernel Clustering Problems, and the Grothendieck Problem.

  38. Stronger SDP Relaxations. Possibility: "certain strong SDP relaxations (say, five rounds of the Lasserre hierarchy) yield better approximations and disprove the Unique Games Conjecture." Even otherwise: for what problems do these relaxations help, and how does one use these stronger SDP relaxations?

  39. Difficulty. Successes of stronger SDP relaxations: • [Arora-Rao-Vazirani] used an SDP with triangle inequalities to improve the approximation for Sparsest Cut from log n to sqrt(log n). • Stronger SDPs give better approximations for graph and hypergraph independent set [Chlamtac], [Arora-Charikar-Chlamtac], [Chlamtac-Singh]. Yet there are very few general techniques to extract the power of stronger SDP relaxations.

  40. SDP for MaxCut. Quadratic program: variables x1, x2, ..., xn with each xi = 1 or -1; maximize the weight of crossing edges. Semidefinite program: relax each xi to a unit vector vi instead of ±1; all products are replaced by inner products of vectors. Ideally, these vectors would be a convex combination of integral solutions, so the SDP could be thought of as a distribution over cuts. Instead, we force the vectors to look like integral solutions locally (on every k vertices).

  41. k-Round Lasserre SDP for MaxCut. Local distributions: for any subset S of k vertices, a local distribution μS over {+1,-1} assignments to the set S. Conditioned SDP vectors: for any subset S of k vertices and any assignment α ∈ {-1,1}^k, an SDP solution {vi | S ← α} corresponding to the SDP solution conditioned on S being assigned α.

  42. Correlations. Correlation: "two random variables are correlated if fixing the value of one changes the distribution of the other." Measuring correlation: the mutual information between the two random variables, I(X; Y) = H(X) − H(X|Y), where H(X) is the entropy of X and H(X|Y) is the conditional entropy of X given Y.
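
The quantity I(X; Y) = H(X) − H(X|Y) is straightforward to compute from a joint distribution table. This illustrative sketch checks the two extremes: perfectly correlated bits carry one bit of mutual information, independent bits carry none.

```python
import numpy as np

def entropy(p):
    """Shannon entropy (in bits) of a probability vector."""
    p = np.asarray(p, dtype=float)
    p = p[p > 0]
    return float(-np.sum(p * np.log2(p)))

def mutual_information(joint):
    """I(X; Y) = H(X) - H(X|Y) for joint[x, y] = Pr[X = x, Y = y]."""
    joint = np.asarray(joint, dtype=float)
    px, py = joint.sum(axis=1), joint.sum(axis=0)
    # H(X|Y) = sum_y Pr[Y = y] * H(X | Y = y)
    h_x_given_y = sum(py[y] * entropy(joint[:, y] / py[y])
                      for y in range(joint.shape[1]) if py[y] > 0)
    return entropy(px) - h_x_given_y

equal = [[0.5, 0.0], [0.0, 0.5]]      # X = Y: fixing Y determines X
indep = [[0.25, 0.25], [0.25, 0.25]]  # X, Y independent fair bits
```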

  43. Global Correlation. The global correlation is the average correlation between random pairs of vertices in the instance: GC = E_{a,b}[I(Xa; Xb)]. Crucial observation: conditioning the SDP solution on the value of a random vertex Xa reduces the average entropy by GC. Proof: the average entropy is E_b[H(Xb)], and after conditioning on Xa it is E_a[E_b[H(Xb | Xa)]]. Hence the decrease is E_b[H(Xb)] − E_a[E_b[H(Xb | Xa)]] = E_{a,b}[H(Xb) − H(Xb | Xa)] = E_{a,b}[I(Xb; Xa)].

  44. Progress by Global Correlations. Suppose an SDP solution has global correlation > ε. Then sampling and conditioning on the value of a random vertex drops the average entropy by ε. If the global correlation always remains > ε, then after 1/ε conditionings the average entropy ≈ 0, so the variables are almost frozen and the conditioned SDP solution is nearly integral. Corollary: within O(1/ε) conditionings, the global correlation of the SDP solution becomes < ε.

  45. Application: Max Bisection. Input: a weighted graph G. Find: a cut with maximum number/weight of crossing edges, with exactly ½ of the vertices on each side of the cut. (Figure: the example graph with edge weights 10, 15, 7, 1, 1, 3.)

  46. Halfspace Rounding? Cut the sphere by a random hyperplane and output the induced graph cut. The expected fraction of vertices on each side of the cut is half; however, the actual number of vertices might always be far from half, i.e., there is no concentration. Independence among random variables implies concentration (e.g., Chernoff bounds), so a lack of concentration implies a lack of independence.

  47. Bounding the Variance. Let Z1, Z2, ..., Zn denote the random projections, and suppose the rounding function is F: R → [0,1]. The fraction of vertices on one side of the cut is E_a[F(Za)]. The variance of this random variable is E_Z[E_{a,b}[F(Za)F(Zb)] − E_a[F(Za)] E_b[F(Zb)]] = E_{a,b}[Covariance(F(Za), F(Zb))]. Low global correlation implies E_{a,b}[I(Za; Zb)] is small, which implies the above variance is small.
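
The identity behind this slide, Var(E_a[F(Za)]) = E_{a,b}[Cov(F(Za), F(Zb))], can be checked numerically. The sketch below is illustrative: the equicorrelated Gaussian projections and the threshold rounding F(z) = 1[z > 0] are hypothetical choices, not from the talk. It confirms that the variance of the cut fraction equals the average pairwise covariance, and that correlation between the projections keeps this variance bounded away from zero, exactly the lack of concentration the slide describes.

```python
import numpy as np

rng = np.random.default_rng(0)
n, T, rho = 30, 5000, 0.3
# Equicorrelated Gaussian projections: Corr(Za, Zb) = rho for a != b.
common = rng.standard_normal((T, 1))
Z = np.sqrt(rho) * common + np.sqrt(1 - rho) * rng.standard_normal((T, n))
F = (Z > 0).astype(float)            # rounding function F(z) = 1[z > 0]
fractions = F.mean(axis=1)           # fraction of vertices on one side, per run
var_of_fraction = float(np.var(fractions, ddof=1))
# E_{a,b}[Cov(F(Za), F(Zb))], averaging over all pairs (including a = b).
avg_covariance = float(np.cov(F, rowvar=False).mean())
```

With independent projections (rho = 0) the variance would shrink like 1/n; the correlation rho = 0.3 keeps it at a constant, so the bisection constraint fails with constant probability.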

  48. CSPs with a Global Cardinality Constraint. [R, Tan 2011]: given an instance of Max Bisection / Min Bisection with value 1−ε, there is an algorithm running in time n^poly(1/ε) that finds a solution of value 1−O(ε^(1/2)). [R, Tan 2011]: for every CSP with a global cardinality constraint, there is a corresponding dictatorship test whose soundness/completeness ratio equals the integrality gap of the poly(1/ε)-round Lasserre SDP.

  49. Another Application: 2-CSPs on Expanding Instances. Locally, the constraints of the CSP introduce correlations among the variables. If the graph is a sufficiently good expander, these local correlations must translate into global correlations.

  50. Low-Rank Graphs. Suppose the adjacency matrix of the graph is "low rank", i.e., well approximated by a few eigenvectors. Lemma: if the number of eigenvalues > δ is less than d, then an SDP solution with local correlation > δ has global correlation O(1/d²).
