
ECCV '04 Tutorial: Discrete Optimization Methods



  1. ECCV '04 Tutorial: Discrete Optimization Methods Yuri Boykov (Western Ontario), Phil Torr (Oxford Brookes), Ramin Zabih (Cornell)

  2. Outline of Lecture • Motivation: what problems may be solved by semidefinite programming (SDP): segmentation, matching, classification. • What is SDP? • How can it be implemented?

  3. Recall: Min Cut Problem (s-t graph cut) A graph with two terminals, the source S and the sink T. Goal: divide the graph into two parts separating the red and blue nodes. • The cut cost is the sum of the severed edge weights. • A minimum-cost s-t cut can be found in polynomial time.
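The polynomial-time claim can be checked on a toy graph. A minimal Edmonds–Karp max-flow/min-cut sketch; the 4-node graph and its capacities below are invented for illustration:

```python
from collections import deque

def min_st_cut(n, edges, s, t):
    """Minimum s-t cut via Edmonds-Karp max-flow.
    edges: list of (u, v, capacity) for an undirected graph."""
    cap = [[0] * n for _ in range(n)]
    for u, v, c in edges:
        cap[u][v] += c          # undirected: capacity in both directions
        cap[v][u] += c
    flow = 0
    while True:
        # BFS for a shortest augmenting path in the residual graph
        parent = [-1] * n
        parent[s] = s
        q = deque([s])
        while q and parent[t] == -1:
            u = q.popleft()
            for v in range(n):
                if cap[u][v] > 0 and parent[v] == -1:
                    parent[v] = u
                    q.append(v)
        if parent[t] == -1:
            break               # no augmenting path left: flow is maximal
        # find the bottleneck capacity, then push flow along the path
        b, v = float('inf'), t
        while v != s:
            b = min(b, cap[parent[v]][v]); v = parent[v]
        v = t
        while v != s:
            cap[parent[v]][v] -= b
            cap[v][parent[v]] += b
            v = parent[v]
        flow += b
    # the S-side of the min cut = nodes reachable from s in the residual graph
    seen = {s}
    q = deque([s])
    while q:
        u = q.popleft()
        for v in range(n):
            if cap[u][v] > 0 and v not in seen:
                seen.add(v); q.append(v)
    return flow, seen

# tiny example: source s=0, sink t=3
cost, S = min_st_cut(4, [(0, 1, 3), (0, 2, 2), (1, 2, 1), (1, 3, 2), (2, 3, 3)], 0, 3)
print(cost, sorted(S))   # min cut cost equals max flow
```

By the max-flow/min-cut theorem the returned flow value equals the cost of the cheapest set of severed edges, which is why the residual-reachability set defines the cut.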

  4. Segmentation Model (GrabCut) Hard segmentation in a probabilistic framework. • Input: image consisting of pixels • Output: segmentation of pixels • Color model • Coherence or edge model

  5. Color Model (GrabCut) Hard segmentation in a probabilistic framework. Assignment variable: which mixture component does a pixel belong to?

  6. GraphCut for Inference (GrabCut) Source = foreground, sink = background. Cut: a collection of edges which separates the source from the sink. MinCut: the cut with minimum weight (sum of edge weights). Solution: global optimum (MinCut) in polynomial time.

  7. GraphCut for Inference (GrabCut) MinCut minimizes the energy of the MRF (up to a constant): E(x) = Σᵢ φᵢ(xᵢ) + Σ₍ᵢ,ⱼ₎ φᵢⱼ(xᵢ, xⱼ)

  8. MinCut • Edge weights must all be positive; • then it is solvable in polynomial time by a max-flow algorithm. • Weights are capacities, so a negative capacity does not make sense.

  9. MaxCut • Edge weights may be negative. • Note: MaxCut and MinCut are the same problem; the term MaxCut is used when weights can be both negative and positive. • MaxCut is NP-hard.

  10. Negative weights: MaxCut • Edge weights may be negative. • The problem is NP-hard. • With SDP, an approximation ratio of 0.878 can be obtained (Goemans–Williamson '95), i.e. within about 13% of the global optimum.
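The 0.878 constant can be reproduced numerically: it is the minimum, over x in [-1, 1), of the ratio between the rounding probability arccos(x)/π and the relaxation term (1 − x)/2. A small sketch (the grid resolution is an arbitrary choice):

```python
import math

def gw_ratio(x):
    """Ratio of the hyperplane-separation probability arccos(x)/pi
    to the SDP objective term (1 - x)/2."""
    return (math.acos(x) / math.pi) / ((1 - x) / 2)

# grid over (-1, 1); the minimum is the Goemans-Williamson constant
alpha = min(gw_ratio(-1 + 1.999 * k / 10**5) for k in range(1, 10**5))
print(round(alpha, 4))   # ≈ 0.8786
```

The minimum is attained around x ≈ −0.689; since the ratio is smooth there, a grid this fine recovers the constant to four decimal places.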

  11. Why negative weights? • In the example above the MinCut produces an odd segmentation; negative weights encode the idea of a repulsive force that might yield a better segmentation.

  12. Pop-out • We may need both attractive and repulsive forces to obtain Gestalt effects:

  13. MaxCut Integer Program • Graph: G = (V, E) with edge weights wᵢⱼ. • Cut: a partition of V into S and V∖S, with value the sum of weights wᵢⱼ over edges crossing the partition.

  14. MaxCut Integer Program • Laplacian L = D − W (positive semidefinite) • MinCut/MaxCut minimizes/maximizes this integer program (cf. Hopfield network): ¼ xᵀ L x over x ∈ {−1, +1}ⁿ
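The equivalence between the cut value and the quadratic form ¼ xᵀLx can be sanity-checked by brute force on a tiny graph. The weights below, including a negative one, are invented for illustration:

```python
from itertools import product

# symmetric weight matrix on 4 nodes; weights may be negative
w = [[0, 2, -1, 0],
     [2, 0, 3, 1],
     [-1, 3, 0, 2],
     [0, 1, 2, 0]]
n = len(w)

# graph Laplacian L = D - W (D = diagonal of row sums)
L = [[(sum(w[i]) if i == j else 0) - w[i][j] for j in range(n)] for i in range(n)]

def quad(x):
    """cut value via the integer program: (1/4) x^T L x, x in {-1,+1}^n"""
    return sum(L[i][j] * x[i] * x[j] for i in range(n) for j in range(n)) / 4

def cut(x):
    """cut value summed directly over severed edges"""
    return sum(w[i][j] for i in range(n) for j in range(i + 1, n) if x[i] != x[j])

# the two formulations agree on every labelling
assert all(abs(quad(x) - cut(x)) < 1e-9 for x in product((-1, 1), repeat=n))

# brute-force the MaxCut (feasible only for tiny n -- the problem is NP-hard)
best = max(product((-1, 1), repeat=n), key=cut)
print(cut(best))
```

The identity holds because xᵀLx = Σᵢⱼ wᵢⱼ(1 − xᵢxⱼ) over ordered pairs, which counts each severed edge twice with weight 2.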

  15. Solving via relaxation • The problem above is NP-hard. • Some NP-hard problems can be approximated using a "relaxation", e.g. from binary to continuous variables. • Next, semidefinite programming is explained, and then it is shown which relaxation can be used to help solve MaxCut.

  16. Solving via relaxation • Keuchel et al. (2002) suggest adding the constraint eᵀx = 0, where e = (1, …, 1), to favour partitions with equal numbers of nodes.

  17. Solving via relaxation • The vector e corresponds to no cut, so it is an eigenvector of L with eigenvalue 0. A natural relaxation of the problem is to drop the integer constraint and solve for the eigenvector of L with the second largest eigenvalue (cf. the Fiedler vector of Shi & Malik). • This only works for positive weights, so more thought is needed.

  18. Linear Programming (LP) Canonical form; inequalities are removed by introducing surplus variables.

  19. LP Example

  20. Semidefinite Programming If X and the Aᵢ are diagonal, this is a linear program. ⟨X, Y⟩ denotes the inner product of two matrices.

  21. Positive Semidefinite Matrices Here W is a Gram matrix, not the weight matrix.

  22. Semidefinite Programming

  23. Semidefinite Programming • If X is diagonal this reduces to an LP. • The feasible solution space of an SDP is convex. • Solvable to arbitrary accuracy in polynomial time. • Note: most non-trivial applications of SDP are equivalent to minimizing the sum of the first few eigenvalues of X subject to linear constraints on X.

  24. Recall: • MinCut/MaxCut minimizes/maximizes this integer program: ¼ xᵀ L x over x ∈ {−1, +1}ⁿ

  25. SDP relaxation of MaxCut • The relaxation drops the rank-one constraint X = xxᵀ, allowing X to be any real positive semidefinite matrix with unit diagonal. • Bad news: many more variables.

  26. Binary variables Max ½ Σᵢⱼ wᵢⱼ (1 − xᵢ xⱼ) s.t. xᵢ ∈ {−1, +1}

  27. Graph Vertices: Unit Vectors Max ½ Σᵢⱼ wᵢⱼ (1 − vᵢ · vⱼ) s.t. vᵢ ∈ Rⁿ, ||vᵢ|| = 1

  28. Algorithm • X is a continuous (positive semidefinite) matrix recovered by the SDP. • X is a Gram matrix: factor it as X = VᵀV by Cholesky decomposition, so that each vertex is represented by a unit vector vᵢ on the unit sphere. • Choose a random hyperplane to bisect the unit sphere and define the cut.
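The rounding step can be sketched in a few lines. The unit vectors below are placeholders standing in for an actual SDP solution, not the output of a solver:

```python
import random

def random_hyperplane_cut(vectors, rng=random.Random(0)):
    """Round unit vectors to a +/-1 labelling with a random hyperplane.
    The normal r is a standard Gaussian vector, so the hyperplane
    direction is uniform on the sphere."""
    dim = len(vectors[0])
    r = [rng.gauss(0, 1) for _ in range(dim)]
    dot = lambda a, b: sum(x * y for x, y in zip(a, b))
    # each vertex goes to the side of the hyperplane its vector falls on
    return [1 if dot(r, v) >= 0 else -1 for v in vectors]

# illustrative unit vectors standing in for the SDP factor V
vectors = [(1.0, 0.0), (-1.0, 0.0), (0.0, 1.0)]
labels = random_hyperplane_cut(vectors)
print(labels)
```

Antipodal vectors (vertices the relaxation pushed maximally apart) land on opposite sides of any hyperplane, so they are always separated by the cut.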

  29. Recall vᵢ · vⱼ = |vᵢ| |vⱼ| cos(α). Vertices on the same side of the cut are nearby on the unit sphere and yield a large dot product; vertices far apart yield a small (negative) dot product.

  30. An SDP Relaxation of MaxCut – Geometric intuition Embed the vertices of the graph on the unit sphere such that vertices that are joined by edges are far apart.

  31. Random separation

  32. Algorithm Analysis The probability that two unit vectors vᵢ and vⱼ are separated by a random hyperplane is cos⁻¹(vᵢ · vⱼ)/π.
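The separation probability cos⁻¹(vᵢ · vⱼ)/π can be verified by Monte Carlo; the vector pair (at 120°) and the trial count below are arbitrary choices:

```python
import math, random

rng = random.Random(42)

def separated_prob(vi, vj, trials=100_000):
    """Monte Carlo estimate of P(a random hyperplane separates vi and vj)."""
    dim, sep = len(vi), 0
    for _ in range(trials):
        r = [rng.gauss(0, 1) for _ in range(dim)]          # random normal
        si = sum(a * b for a, b in zip(r, vi)) >= 0        # side of vi
        sj = sum(a * b for a, b in zip(r, vj)) >= 0        # side of vj
        sep += si != sj
    return sep / trials

# two unit vectors at 120 degrees in the plane: dot product = -1/2
vi = (1.0, 0.0)
vj = (math.cos(2 * math.pi / 3), math.sin(2 * math.pi / 3))
est = separated_prob(vi, vj)
exact = math.acos(vi[0] * vj[0] + vi[1] * vj[1]) / math.pi  # = 2/3
print(round(est, 3), round(exact, 4))
```

Geometrically, the hyperplane separates the pair exactly when its normal falls in one of the two wedges of angle α between the vectors, hence probability 2α/2π = α/π.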

  33. Algorithm Analysis • Calculate the expected value of the cut. • Note: the value of the relaxation exceeds the cost of the MaxCut (it lives in a less constrained space). • Note the following identity: min over −1 ≤ x < 1 of (cos⁻¹(x)/π) / ((1 − x)/2) ≥ 0.87856…

  34. Expected Value of Cut By linearity, E[W] = Σᵢ<ⱼ wᵢⱼ · cos⁻¹(vᵢ · vⱼ)/π; for non-negative weights, applying the identity on the previous slide term by term gives E[W] ≥ 0.87856 × (SDP value).

  35. Is the analysis tight? Yes! (Karloff ’96) (Feige-Schechtman ’00)

  36. Bibliography • C. Schellewald, C. Schnörr: Subgraph Matching with Semidefinite Programming. In Proceedings IWCIA (International Workshop on Combinatorial Image Analysis), Palermo, Italy, May 14–16, 2003.

  37. Bibliography • J. Keuchel, C. Schnörr, C. Schellewald, D. Cremers: Unsupervised Image Partitioning with Semidefinite Programming. In L. Van Gool (Ed.), Pattern Recognition (24th DAGM Symposium, Zurich), Lecture Notes in Computer Science, Vol. 2449, Springer, Berlin, 141–149, 2002.

  38. Bibliography • J. Keuchel, C. Schellewald, D. Cremers, C. Schnörr: Convex Relaxations for Binary Image Partitioning and Perceptual Grouping. In B. Radig, S. Florczyk (Eds.), Pattern Recognition (23rd DAGM Symposium, Munich), Lecture Notes in Computer Science, Vol. 2191, Springer, Berlin, 353–360, 2001.

  39. Bibliography • P.H.S. Torr: Solving Markov Random Fields using Semi Definite Programming. In Ninth International Workshop on Artificial Intelligence and Statistics, 2003. http://wwwcms.brookes.ac.uk/~philiptorr/ • P.H.S. Torr: Gestalt Segmentation using SDP. Unpublished technical report, Microsoft, 2002; available on request.
