
Approximating the Cut-Norm



Presentation Transcript


  1. Approximating the Cut-Norm Hubert Chan

  2. “Approximating the Cut-Norm via Grothendieck’s Inequality” Noga Alon, Assaf Naor appearing in STOC ‘04

  3. Problem Definition [Figure: an m × n matrix of + and − signs.] Given a real m × n matrix A = (aij), find signs xi, yj ∈ {−1, +1} maximizing Σij aij xi yj; the optimum is denoted ||A||∞→1. This quantity approximates the cut-norm ||A||C = maxS,T |Σi∈S,j∈T aij| within a constant factor, so approximating one approximates the other.

  4. Main Result • The problem is MAX SNP hard. • There is a randomized polynomial-time algorithm with expected approximation ratio 0.56. (For a maximization problem the approximation ratio is always less than 1.) • The authors also give a deterministic algorithm with approximation ratio 0.03. • De-randomization: see the paper by Mahajan and Ramesh.

  5. Road Map • Motivation • Hardness Result • General Approach • Outline of Algorithm • Conclusion

  6. Motivation • Inspired by the MAX-CUT problem, Frieze and Kannan proposed a decomposition scheme for solving problems on dense graphs. • Estimating the cut-norm of a matrix is a key step in that decomposition scheme.

  7. Comparison with Previous Results • Previous algorithms compute the norm only up to an additive error. • That is useful only when the norm of the matrix is large. • The new algorithm approximates the norm of every real matrix within the constant factor 0.56 in expectation.

  8. Road Map • Motivation • Hardness Result • General Approach • Outline of Algorithm • Conclusion

  9. MAX-SNP A maximization problem is MAX-SNP hard if every problem in MAX SNP reduces to it by an approximation-preserving reduction; unless P = NP, such a problem has no polynomial-time approximation scheme. For example, there is a well-known polynomial algorithm for MAX-CUT that returns a cut of size at least 0.5 times the maximum cut. However, unless P = NP, no polynomial algorithm achieves an approximation ratio better than 16/17.

  10. MAX-CUT Given a graph G = (V, E), find W ⊆ V maximizing the number of edges crossing the cut (W, V\W). [Figure: a graph with vertex set partitioned into W and V\W.]

  11. The problem is MAX SNP hard • Reduction from MAX-CUT. • Given a graph G = (V, E), construct a 2|E| × |V| matrix A: for each edge e = (u, v), add two rows (e,1) and (e,2) with entries

          u      v
    (e,1)  1/4   −1/4
    (e,2)  −1/4  1/4

  and all other entries 0.
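
To make the reduction concrete, here is a minimal sketch (not from the talk; the helper name reduction_matrix is our own) that builds A from an edge list:

```python
import numpy as np

def reduction_matrix(n_vertices, edges):
    """Build the 2|E| x |V| matrix A of the MAX-CUT reduction."""
    A = np.zeros((2 * len(edges), n_vertices))
    for k, (u, v) in enumerate(edges):
        A[2 * k, u], A[2 * k, v] = 0.25, -0.25          # row (e,1)
        A[2 * k + 1, u], A[2 * k + 1, v] = -0.25, 0.25  # row (e,2)
    return A

# Example: a triangle on vertices 0, 1, 2 (maximum cut = 2 edges).
A = reduction_matrix(3, [(0, 1), (1, 2), (0, 2)])
```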

  12. MAX-CUT ≤ ||A||∞→1 [Table: the two rows for edge e = (u, v), as on the previous slide.] Take a maximum cut (W, V\W) and set yu = +1 for u ∈ W and yu = −1 otherwise. For e = (u, v) not in the max cut, rows (e,1) and (e,2) contribute nothing, no matter what xe,1 and xe,2 are. For e = (u, v) in the max cut, we can choose xe,1 and xe,2 to make the contribution exactly 1.

  13. MAX-CUT ≥ ||A||∞→1 Conversely, take signs x, y attaining ||A||∞→1 and let W = {u : yu = +1}. For e = (u, v) not in the cut (W, V\W), there is no contribution, no matter what xe,1 and xe,2 are. For e = (u, v) in the cut (W, V\W), the contribution from rows (e,1) and (e,2) is at most 1. Hence ||A||∞→1 is at most the number of cut edges, which is at most MAX-CUT.

  14. Road Map • Motivation • Hardness Result • General Approach • Outline of Algorithm • Conclusion

  15. Relaxation Schemes • Recall the problem: maximize Σij aij xi yj over xi, yj ∈ {−1, +1}. • Linear programming relaxation? The objective function is not linear; we could introduce extra variables, but rounding might be tricky. • How about a semidefinite program relaxation?

  16. Semidefinite Program Relaxation ||A||SDP = max Σij aij ui·vj subject to ui·ui = 1 and vj·vj = 1, where the ui and vj are vectors in (m+n)-dimensional Euclidean space.
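
As an illustration, this relaxation can be written down directly with an off-the-shelf solver. The sketch below assumes cvxpy; the helper solve_sdp is ours, not the authors' code. It optimizes over the Gram matrix Z of the vectors (u1, …, um, v1, …, vn) and factors it back into vectors:

```python
import cvxpy as cp
import numpy as np

def solve_sdp(A):
    """Solve the relaxation; return ||A||_SDP and unit vectors (rows of U, V)."""
    m, n = A.shape
    Z = cp.Variable((m + n, m + n), PSD=True)  # Gram matrix of (u_1..u_m, v_1..v_n)
    prob = cp.Problem(cp.Maximize(cp.sum(cp.multiply(A, Z[:m, m:]))),
                      [cp.diag(Z) == 1])       # ui.ui = vj.vj = 1
    val = prob.solve()
    # Factor the PSD Gram matrix back into vectors.
    w, Q = np.linalg.eigh(Z.value)
    X = Q * np.sqrt(np.clip(w, 0.0, None))         # rows are the vectors u_i, v_j
    X /= np.linalg.norm(X, axis=1, keepdims=True)  # fix tiny numerical drift
    return val, X[:m], X[m:]
```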

  17. Remarks about the SDP • Are m+n dimensions sufficient? Yes, since any m+n vectors in a higher-dimensional Euclidean space lie in an (m+n)-dimensional subspace. • Fact: there is an algorithm that, given ε > 0, returns solution vectors ui and vj attaining value at least ||A||SDP − ε, in time polynomial in the length of the input and in log(1/ε).

  18. Are we done? We need to convert the vectors back to integers in {−1, +1}! General strategy: 1. Obtain optimal vectors ui and vj for the SDP. 2. Use some randomized procedure to reconstruct integer solutions xi, yj ∈ {−1, +1} from the vectors. 3. Prove a good expected bound: find some constant α > 0 such that E[Σij aij xi yj] ≥ α ||A||SDP ≥ α ||A||∞→1.

  19. Road Map • Motivation • Hardness Result • General Approach • Outline of Rounding Algorithm • Conclusion

  20. Random Hyperplane [Figure: a random unit vector z and its hyperplane, separating the + side from the − side.] Recall we need to show: E[Σij aij xi yj] ≥ α Σij aij ui·vj.

  21. Analyzing E[xy] [Figure: unit vectors u and v at angle θ, and a random unit vector z.] Let u and v be unit vectors with cos θ = u·v. A random unit vector z determines a hyperplane, and Pr[u and v are separated by it] = θ/π. Set x = sign(u·z), y = sign(v·z). Then E[xy] = (1 − θ/π) − θ/π = 1 − 2θ/π = (2/π)(π/2 − θ) = (2/π) arcsin(u·v).
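
This identity is easy to check numerically; a quick Monte Carlo sketch of our own (assuming numpy; it draws Gaussian directions, which slide 24 shows is equivalent to uniform ones):

```python
import numpy as np

rng = np.random.default_rng(0)
u = np.array([1.0, 0.0])
v = np.array([0.6, 0.8])                   # unit vectors with u.v = 0.6
z = rng.normal(size=(200_000, 2))          # Gaussian directions are uniform on the sphere
xy = np.sign(z @ u) * np.sign(z @ v)
print(xy.mean(), 2 / np.pi * np.arcsin(u @ v))   # both are about 0.41
```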

  22. How do sine and arcsine relate? [Figure: plots of sin t, t, and arcsin t on [−1, 1], showing |sin t| ≤ |t| ≤ |arcsin t|.] Is this good news?

  23. Performance Guarantee? • We have a term-by-term constant-factor approximation, since |arcsin t| ≥ |t|. • Bad news: the terms aij E[xi yj] have different signs, so there may be cancellation. • Hence we need a global approximation guarantee, not a term-by-term one.

  24. An Equivalent Way to Round Vectors [Figure: the + and − sides of the hyperplane normal to R.] Generate standard, independent Gaussian random variables r1, r2, …, rm+n and let R = (r1, r2, …, rm+n). Set xi = sign(ui·R), yj = sign(vj·R). By spherical symmetry (slide 32), R/||R|| is a uniformly random unit vector, so this is the same as random-hyperplane rounding.
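
A minimal sketch of this rounding step (our helper name round_vectors, assuming numpy):

```python
import numpy as np

def round_vectors(U, V, rng):
    """Round unit vectors (the rows of U and V) to signs in {-1, +1}."""
    R = rng.normal(size=U.shape[1])        # R = (r_1, ..., r_{m+n}), iid N(0, 1)
    return np.sign(U @ R), np.sign(V @ R)  # x_i = sign(u_i.R), y_j = sign(v_j.R)

# Toy usage with orthogonal unit vectors (a tie sign(0) has probability 0).
x, y = round_vectors(np.eye(2), np.eye(2), np.random.default_rng(0))
```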

  25. What we would like to see…. E[xi yj] = α ui·vj for some constant α. This is impossible, because arcsin is not a linear function.

  26. What we can prove…… E[xi yj] = (2/π)(ui·vj + E[fi gj]), where fi is a function depending on ui (and R) and gj is a function depending on vj (and R). Important property of the fi and gj: E[fi²] = E[gj²] = π/2 − 1 < 1.

  27. Inner Product and E[f g] E[fi gj] can be viewed as an inner product: the fi and gj are vectors in the Hilbert space of square-integrable functions of R, each of squared norm π/2 − 1. After normalizing them to unit length they are feasible for the SDP, so |Σij aij E[fi gj]| ≤ (π/2 − 1) ||A||SDP.

  28. Recall the SDP ||A||SDP = max Σij aij ui·vj subject to ui·ui = 1 and vj·vj = 1, where the ui and vj are vectors in (m+n)-dimensional Euclidean space. Are m+n dimensions sufficient? Yes, since any m+n vectors in a higher-dimensional Euclidean space lie in an (m+n)-dimensional subspace. In particular, the SDP bound applies to unit vectors from any Hilbert space, such as the normalized fi and gj.

  29. Wait a minute… We need unit vectors! The fi and gj have squared norm π/2 − 1 ≠ 1, so we normalize them first and rescale the bound afterwards.

  30. Constant Factor Approximation E[Σij aij xi yj] = (2/π)(Σij aij ui·vj + Σij aij E[fi gj]) ≥ (2/π)(||A||SDP − (π/2 − 1)||A||SDP) = (4/π − 1)||A||SDP ≈ 0.273 ||A||SDP.

  31. What are the functions f and g? One choice consistent with the properties above: fi = √(π/2)·sign(ui·R) − ui·R and gj = √(π/2)·sign(vj·R) − vj·R. Then √(π/2)·xi = ui·R + fi, and the cross terms E[(ui·R) gj] and E[fi (vj·R)] vanish, which yields the identity on slide 26.
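
A quick numeric sanity check of our own for the claimed property E[f²] = π/2 − 1 under this choice of f:

```python
import numpy as np

rng = np.random.default_rng(1)
u = np.array([0.6, 0.8])                             # any unit vector
R = rng.normal(size=(500_000, 2))
f = np.sqrt(np.pi / 2) * np.sign(R @ u) - R @ u
print((f ** 2).mean(), np.pi / 2 - 1)                # both are about 0.571
```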

  32. Properties of the Gaussian Measure (a) Each coordinate has mean 0 and variance 1. (b) The multi-dimensional Gaussian is spherically symmetric, so R/||R|| is a uniformly random unit vector.

  33. Recap 1. Solve for optimal vectors ui and vj for the SDP. 2. Generate a multi-dimensional Gaussian random vector R; set xi = sign(ui·R), yj = sign(vj·R). 3. Relate E[xi yj] to ui·vj. 4. Use the facts that (1) the ui and vj are optimal vectors and (2) E[fi gj] can be viewed as an inner product. Conclusion: E[Σij aij xi yj] ≥ 0.273 ||A||∞→1. (A compact end-to-end sketch follows below.)
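
Putting the steps together, here is a compact end-to-end sketch, assuming the solve_sdp and round_vectors helpers from the earlier sketches are in scope. It estimates the expected rounded value on a toy matrix and compares it with the guarantee:

```python
import numpy as np

A = np.array([[0.25, -0.25],
              [-0.25, 0.25]])                  # the two rows for a single edge
sdp_value, U, V = solve_sdp(A)                 # step 1: optimal vectors
rng = np.random.default_rng(2)
vals = [A.ravel() @ np.outer(*round_vectors(U, V, rng)).ravel()
        for _ in range(10_000)]                # step 2: round; evaluate sum aij xi yj
print(np.mean(vals), 0.273 * sdp_value)        # empirical mean beats the guarantee
```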

  34. What we would like to see…. (Revisited.) E[xi yj] = α ui·vj for some constant α. This is impossible because arcsin is not linear — unless we change the vectors before rounding.

  35. What if… What if we could find new unit vectors ui′ and vj′ with ui′·vj′ = sin(c·(ui·vj)) for some constant c > 0?

  36. If this is possible…. Recall z is the random unit vector. Rounding the new vectors gives E[xi yj] = (2/π) arcsin(ui′·vj′) = (2/π) arcsin(sin(c·(ui·vj))) = (2c/π)(ui·vj), since |c·(ui·vj)| ≤ c < π/2. The relationship is now exactly linear, so no cancellation occurs.

  37. This is indeed possible! Expanding sin(ct) as a power series and realizing each term by tensor powers of the original vectors yields such unit vectors, provided the series of absolute coefficients sums to at most 1: Σk c^(2k+1)/(2k+1)! = sinh(c) ≤ 1, i.e., c ≤ ln(1 + √2).

  38. Another Semidefinite Program The vectors ui′ and vj′ need not be constructed explicitly by the tensor argument; suitable vectors can be obtained in polynomial time by solving another semidefinite program, so the overall algorithm remains polynomial.

  39. Better Constant Approximation Taking the largest feasible constant c = ln(1 + √2), so that sinh(c) = 1, gives E[Σij aij xi yj] = (2c/π) Σij aij ui·vj ≥ (2/π) ln(1 + √2) ||A||∞→1 ≈ 0.56 ||A||∞→1.
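
Both facts behind this constant are easy to verify numerically (our sketch, assuming numpy):

```python
import numpy as np

c = np.log(1 + np.sqrt(2))
print(np.sinh(c))                                    # 1.0: the construction is feasible
t = np.linspace(-1.0, 1.0, 101)                      # t plays the role of ui.vj
print(np.allclose(np.arcsin(np.sin(c * t)), c * t))  # True, since c < pi/2
print(2 * c / np.pi)                                 # 0.5611..., the 0.56 ratio
```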

  40. Road Map • Motivation • Hardness Result • General Approach • Outline of Algorithm • Conclusion

  41. Main Ideas • Semidefinite programming relaxation: a powerful tool for optimization problems. • Randomized rounding schemes: random hyperplanes and multi-dimensional Gaussians. • Similar techniques apply directly to approximate MAX-CUT.
