UGC & SDP, or, Are there any more polynomial time algorithms?
Ryan O’Donnell, Microsoft Research

Presentation Transcript
NP-hard:

  • 3SAT [Cook ’71]
  • Karp’s 21 Problems [Karp ’72]
  • A few more [Levin ’73]
  • Garey & Johnson’s 300+ problems [1979]

(shown NP-hard via reduction)

In P / BPP:

  • Maximum Matching [Edmonds ’65]
  • Multiplication, GCD, determinant [prehistory]
  • Dynamic Programming [Bellman ’53]
  • # Perfect Matchings in planar graphs [Kasteleyn ’67]
  • Primality [Solovay, Strassen ’77]
  • Linear Programming [Khachiyan ’79]
  • Semidefinite Programming [Grötschel, Lovász, Schrijver ’81] (Ellipsoid method [Nemirovski, Yudin ’76])
  • Integer Programming, fixed dimension [Lenstra ’83]
  • Markov Chain Monte Carlo [Jerrum, Sinclair / Dyer, Frieze, Kannan ’88]
  • Recognizing minor-closed graph families [Robertson, Seymour ’95]

In between, known to be neither in P nor NP-hard:

  • Nash equilibria
  • Factoring
  • Discrete Log
  • Lattices: SVP, CVP
  • Graph Isomorphism
Problems not known in P or NP-hard

Allender, Loui, and Regan, “Intro to Complexity Theory,” in the 1998 Handbook on Algorithms and Theory of Computation; Micah Adler, in his CSC 2041 Theory of Computation class:

“If your problem belongs to NP and you cannot prove that it is NP-hard, it may be an “NP-intermediate” problem… However, very few natural problems are currently counted as good candidates for such intermediate status: factoring, discrete logarithm, graph-isomorphism, and several problems relating to lattice bases form a very representative list… The vast majority of natural problems in NP have resolved themselves as being either in P or NP-complete. Unless you uncover a specific connection to one of those four intermediate problems, it is more likely offhand that your problem simply needs more work.”

“But, most natural languages in NP have been shown to be either in P or NP-complete. Here are two important exceptions: 1. Graph Isomorphism… 2. Factoring and related problems such as Discrete Log and Primality. (Note: many people believe Primality is in P.)”

Problems not known in P or NP-hard

But…

  • 1000-color a 3-colorable graph
  • Given a graph, say YES if it has a vertex cover using 51% of the vertices, say NO if every vertex cover requires 99% of the vertices.
  • 90%-approximate MAX-CUT
  • 95%-approximate MAX-2SAT
  • Distinguish (1−ε)-satisfiability and (1/q)^(ε/2)-satisfiability for MAX-2LIN(mod q)
  • (log log log n)-approximate Sparsest Cut
  • (1−1/(2q))-approximate MAX-q-CUT
  • (2k/2^k)-approximate MAX-k-AND
  • 1.49-approximate metric TSP
  • 1.54-approximate minimum Steiner tree
  • (.1 log n)-approximate asymmetric metric TSP
  • 1.3-approximate minimum multiway cut
  • 1.51-approximate minimum uncapacitated metric facility location
  • (log n)-approximate bandwidth of graphs
  • (log n)^(1/3)-approximate 0-Extension
  • .92-approximate MAX-E3-Set-Splitting
  • 1.5-approximate rectangle tiling
  • O(1)-approximate minimum linear arrangement
  • O(1)-approximate minimum feedback arc set
  • .51-approximate maximum betweenness
  • In addition, there are dozens of learning problems, such as…
  • learning poly-sized DNF
Thesis, part 1:

If we are going to say with a straight face that the theory of NP-completeness is very successful, we had better classify almost all natural open hardness-of-approximation problems.

Problems not known in P or NP-hard

But…

  • (the same list of problems as on the previous slide, plus one more:)

Distinguish (1−)-satisfiability and -satisfiability of Unique-Label-Cover

Christos Papadimitriou, 2001:

“Together with factoring, [Nash equilibria is] the most important concrete open question on the boundary of P today.”

Thesis, part 2:

Besides factoring, the approximability of Unique Label Cover is the most important concrete open question on the boundary of P today.

Thesis, part 3:

The UGC situation is win-win. In particular, if UGC is false, we get a fantastic new algorithm no one has thought of before, at least as exciting as Goemans-Williamson’s use of SDP or Arora-Rao-Vazirani.

Remainder of the talk

Thesis, part 4:

Something fishy is going on, connecting UGC, SDP algorithms, SDP integrality gaps, and boolean/Gaussian Fourier analysis.

Remainder of the talk
  • UGC seems crucial for proving hardness of “2-variable constraint satisfaction problems”.
  • Such problems have natural SDP relaxations.
  • It seems that the sharp SDP integrality gaps occur in Gaussian space.
  • These gaps can usually be translated into analogous “Dictator Tests” (“Long Code Tests”), which are the main ingredient in the standard recipe for UGC-hardness results.
Max-Cut-Gain

Max-Cut:

Given: an undirected graph on N vertices with nonnegative edge-weights summing to 1.

Task: cut the vertices into two parts, maximizing the amount of weight crossing the cut.

Trivial fact: the Max-Cut is always at least ½.

Max-Cut-Gain: measure your success not by the amount you cut, but by the amount you cut in excess of ½ (more precisely, twice that, so it lies in [0, 1]).
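To make the definitions concrete, here is a small brute-force computation of Max-Cut and Max-Cut-Gain on a toy instance (the triangle graph is an illustrative choice, not from the slides):

```python
from itertools import product

# Toy instance: a triangle with equal edge-weights summing to 1.
edges = {(0, 1): 1/3, (1, 2): 1/3, (0, 2): 1/3}

def cut_weight(labels):
    """Total weight of edges whose endpoints land on different sides."""
    return sum(w for (i, j), w in edges.items() if labels[i] != labels[j])

# Brute force over all 2^3 labelings.
best = max(cut_weight(f) for f in product([-1, 1], repeat=3))
gain = 2 * (best - 0.5)   # gain = twice the excess over 1/2, so it lies in [0, 1]
```

For the triangle, the best cut captures 2/3 of the weight, so the gain is 1/3; note any graph has a cut of weight at least ½, since a uniformly random cut achieves ½ in expectation.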

Alternate formulation

Put the edge-weights into a symmetric, nonnegative matrix W.

Max-Cut-Gain: let A = −N·W. Then

Gain = max over f : [N] → {−1, 1} of E_{i∼[N]} ⟨f(i), (Af)(i)⟩

where:

  • ⟨·, ·⟩ is the “inner product” on ℝ¹ – just multiplication
  • [N] is treated as a probability space with the uniform distribution
  • A is a negated “probability operator” in the space L²([N])
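A quick numerical sanity check of this formulation (a sketch; the triangle instance and the particular labeling are arbitrary choices): with W symmetric, nonnegative, and summing to 1, and A = −N·W, the expectation E_i⟨f(i), (Af)(i)⟩ equals twice the cut’s excess over ½.

```python
import numpy as np

# Triangle graph: each edge-weight 1/3, split symmetrically so W sums to 1.
N = 3
W = np.full((N, N), 1/6)
np.fill_diagonal(W, 0.0)
A = -N * W                      # the slide's operator A = -N*W

f = np.array([1.0, 1.0, -1.0])  # a ±1 labeling (a cut) of the vertices

cut = 0.5 * (W.sum() - f @ W @ f)   # weight crossing the cut
gain = 2 * (cut - 0.5)              # twice the excess over 1/2
inner = np.mean(f * (A @ f))        # E_i[<f(i), (Af)(i)>], i uniform in [N]

assert np.isclose(gain, inner)      # the two formulations agree
```

The identity holds because cut(f) = ½(1 − fᵀWf), so gain = −fᵀWf, and averaging f(i)·(Af)(i) over uniform i contributes exactly the factor 1/N that A’s −N·W scaling cancels.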
SDP relaxation

SDP = max over f : [N] → B^d of E_{i∼[N]} ⟨f(i), (Af)(i)⟩

(where B^d is the unit ball in ℝ^d)

Charikar-Wirth Max-Cut-Gain alg.
  • Solve the SDP, getting f : [N] → B^d with value ε.
  • Pick g at random from (ℝ^d, γ), the d-dimensional Gaussian distribution.
  • For each i ∈ [N], look at ⟨f(i), g⟩; round it into [−1, 1] via the truncation function that is −1 below −t, +1 above +t, and linear in between.
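A minimal sketch of the rounding step in code, assuming the SDP vectors are already in hand (the SDP solver itself is not shown; the identity-matrix “solution” below is only a stand-in):

```python
import numpy as np

def charikar_wirth_round(vectors, t, rng):
    """Round unit-ball SDP vectors f(i) into [-1, 1].

    Projects each vector onto a random Gaussian direction g and applies
    the piecewise-linear truncation: -1 below -t, +1 above +t, and
    linear (slope 1/t) in between."""
    d = vectors.shape[1]
    g = rng.standard_normal(d)           # g ~ d-dimensional Gaussian
    proj = vectors @ g                   # <f(i), g> for each vertex i
    return np.clip(proj / t, -1.0, 1.0)  # truncate into [-1, 1]

rng = np.random.default_rng(0)
vals = charikar_wirth_round(np.eye(3), t=2.0, rng=rng)
assert vals.min() >= -1.0 and vals.max() <= 1.0
```

The output is a fractional ±1 assignment, i.e., a function into B¹, matching the analysis on the next slide.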

Charikar-Wirth Max-Cut-Gain alg.

When there is an SDP solution f : [N] → B^d with value ε, CW gives an actual solution f : [N] → B¹ with value Ω(ε / log(1/ε)).

To show that this rounding procedure is tight is to show an “SDP gap”.

SDP relaxation

SDP = max over f : (X, μ) → B^d of E_{x∼μ} ⟨f(x), (Af)(x)⟩, where (X, μ) is any probability space.

In particular, we will look for an SDP gap in Gaussian space: X = ℝ^d, μ = γ, the d-dimensional Gaussian measure.

Hermite expansions

Every function f : (ℝ^d, γ) → B^d has a Hermite expansion

f = Σ_{S ∈ ℕ^d} f̂(S) H_S,

where

  • the Hermite coefficients f̂(S) are in B^d
  • the polynomials {H_S}_{S ∈ ℕ^d} are orthogonal
  • H_0 = 1 and H_{e_i}(x) = x_i, where e_i = (0, 0, …, 1, …, 0, 0).
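In one dimension these are the probabilists’ Hermite polynomials, available in NumPy; a quick check of the facts above (H_0 = 1, H_{e_i}(x) = x_i, and orthogonality under the Gaussian measure):

```python
import numpy as np
from numpy.polynomial import hermite_e as He  # probabilists' Hermite polynomials

# He_0 = 1 and He_1(x) = x, matching H_0 = 1 and H_{e_i}(x) = x_i.
assert np.isclose(He.hermeval(3.7, [1]), 1.0)      # He_0(3.7) = 1
assert np.isclose(He.hermeval(3.7, [0, 1]), 3.7)   # He_1(3.7) = 3.7

# Orthogonality under the Gaussian weight exp(-x^2/2), checked via
# Gauss-Hermite_e quadrature (exact for low-degree polynomials).
x, w = He.hermegauss(20)
inner = np.sum(w * He.hermeval(x, [0, 1]) * He.hermeval(x, [0, 0, 1]))
assert np.isclose(inner, 0.0, atol=1e-9)           # <He_1, He_2> = 0
```

The slide’s multi-indexed H_S are just products of these 1-d polynomials, one per coordinate.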
Our operator A

A is NOT the negative of a probability operator with 0’s on the diagonal. However, you can hack around this.

Let P1 denote the linear operator “projection to level 1”:

P1 f = Σ_{|S|=1} f̂(S) H_S

Now let A = ε·P1 − (id − P1).

Using orthogonality of the Hermite polynomials, we get:

⟨f, Af⟩ = ε‖P1 f‖² − ‖f − P1 f‖²

For an SDP gap, we have to determine the maximum of this for f : (ℝ^d, γ) → B^d and for f : (ℝ^d, γ) → B¹.

f : (ℝ^d, γ) → B^d

It’s well-known that almost all of the probability mass of the d-dimensional Gaussian distribution is concentrated around the spherical shell of radius √d.

So the function f(x) = x / √d is basically okay.

This function has ‖P1 f‖² ≈ 1, and all its weight is at level 1.

So ⟨f, Af⟩ ≈ ε.

f : (ℝ^d, γ) → B¹
  • The Hermite coefficients are now just numbers in ℝ.
  • Write (P1 f)(x) = Σ_i a_i x_i, and let σ² = Σ_i a_i².
  • On a random input x, (P1 f)(x) is just a 1-dimensional Gaussian with variance σ².
  • Heavy-tail property of Gaussians: a Gaussian with variance σ² exceeds 2 in absolute value with probability at least exp(−O(1/σ²)).
  • Since | f(x) | ≤ 1, whenever this happens there is a contribution of at least 1 to ‖f − P1 f‖².
f : (ℝ^d, γ) → B¹

Hence

⟨f, Af⟩ ≤ εσ² − exp(−O(1/σ²)).

It’s easy to see that the optimum occurs when σ² = Θ(1/log(1/ε)).

In fact, one can show the optimal f is exactly the Charikar-Wirth rounding function!

This proves an upper bound of O(ε / log(1/ε)), and gives an optimal ε vs. O(ε / log(1/ε)) SDP gap in Gaussian space.

UGC-hardness?

The “standard recipe” for proving UGC-hardness: Instead of an SDP gap, give something sort of similar – a Dictator Test (or “Long Code” Test):

Look at functions f : {−1, 1}^d → [−1, 1].

Distinguish:

  • “dictator” functions, f(x) = x_i, from
  • functions far from every dictator.
f : {−1, 1}^d → [−1, 1]

Such functions f have an orthogonal “Fourier expansion”,

f(x) = Σ_{S ⊆ [d]} f̂(S) x_S, where x_S = Π_{i∈S} x_i.

In particular, the S = ∅ term is the constant 1 and the S = {i} terms are the coordinates x_i, as before.

We can define A as before, and now the proof of the Dictator Test is virtually identical to the proof of the SDP gap.

Note that each dictator f(x) = x_i has all its weight at level 1, as in the first case before.

f : {−1, 1}^d → [−1, 1]

For general f, (P1 f)(x) = Σ_i a_i x_i, where a_i measures how close f is to the i-th dictator.

If f is far from all dictators, then all the a_i’s are very small in absolute value.

In this case, on a random input x ∈ {−1, 1}^d, by the CLT, (P1 f)(x) acts very much like a Gaussian; in particular, it has the same heavy-tail property.
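The coefficients a_i can be computed exactly by enumeration; a dictator puts all of its level-1 mass on a single a_i, while a function like majority spreads it thinly (majority is used here only as an illustrative far-from-dictator example):

```python
import numpy as np
from itertools import product

d = 5
X = np.array(list(product([-1, 1], repeat=d)), dtype=float)  # all of {-1,1}^d

def level1_coeffs(f_vals):
    """a_i = E[f(x) x_i], the level-1 Fourier coefficients, by enumeration."""
    return X.T @ f_vals / len(X)

a_dict = level1_coeffs(X[:, 0])                 # f(x) = x_1, a dictator
a_maj = level1_coeffs(np.sign(X.sum(axis=1)))   # Maj_5 (d odd, so never 0)

assert np.isclose(np.abs(a_dict).max(), 1.0)    # dictator: one a_i equals 1
assert np.isclose(np.abs(a_maj).max(), 0.375)   # majority: each a_i = E|S|/d = 3/8
```

For Maj_5 each a_i equals E|x_1 + … + x_5| / 5 = 0.375; as d grows these coefficients shrink like Θ(1/√d), so Σ a_i x_i becomes a sum of many small independent terms and the CLT applies.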

UGC-hardness of Max-Cut-Gain

Conclusion:

Given a weighted graph with maximum cut ½ + ε, it is NP-hard, assuming UGC, to find a cut with weight ½ + O(ε / log(1/ε)).

I.e., Charikar-Wirth is tight subject to UGC.

Note: with KKMO’04, this essentially closes all aspects of UGC-hardness of approximating Max-Cut.

Recap
  • We took the Max-Cut-Gain problem (a 2-variable CSP)
  • Set up the SDP relaxation
  • Found a natural, tight SDP gap in Gaussian space
  • Converted it to a Dictator Test to get UGC-hardness.

This recipe also works (but is more difficult) for UGC-hardness of MAX-CUT, Vertex-Cover, …

What’s the deal…?