
Graph-Cut / Normalized Cut segmentation



  1. Graph-Cut / Normalized Cut segmentation Jad Silbak - University of Haifa

  2. What we have, and what we want: • Most segmentation methods so far focus on local features (e.g., K-Means). • We would like to extract the global impression of an image. • Since there are many possible partitions of the domain, how do we pick the right one?

  3. Definitions and reminders • An undirected graph is denoted G = (V, E), where: • V is a set of nodes, one for each data element (e.g., pixel). • E is a set of edges. • For an edge (u, v) ∈ E (the edge between u and v), we define w(u, v) as the affinity between the connected nodes.

  4. Definitions and reminders • What is a graph cut? • A set of edges whose removal makes the graph disconnected. • It is the partition of V into A, B such that A ∪ B = V and A ∩ B = ∅. • What is a cut's cost? • The cut's cost is the sum of the weights of all edges crossing the cut. • In the figure, the red edges form a graph cut; Sum(red) = 4 = cut's cost.
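The cut cost is easy to compute directly from an affinity matrix. Below is a minimal sketch (not from the slides); the slide's actual graph is not recoverable from the transcript, so the toy weights are chosen to make the cut cost come out to 4, matching the Sum(red) = 4 example above.

```python
import numpy as np

def cut_cost(W, A, B):
    """Sum of the weights of all edges with one end in A and the other in B."""
    return W[np.ix_(list(A), list(B))].sum()

# Toy affinity matrix on 4 nodes (weights are illustrative, not the slide's).
W = np.array([[0., 3., 1., 2.],
              [3., 0., 1., 0.],
              [1., 1., 0., 3.],
              [2., 0., 3., 0.]])

print(cut_cost(W, {0, 1}, {2, 3}))  # 1 + 2 + 1 + 0 = 4.0
```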

  5. Min-cut • A graph can be partitioned into two disjoint sets A, B; we define the partition cost as cut(A, B) = sum of w(u, v) over all u ∈ A, v ∈ B. • The bipartition of the graph which minimizes the cut value is called the min-cut.

  6. Why not use regular min-cut for the partition? • Let's look at an example:

  7. Why not use regular min-cut for the partition? • Every node is connected to all the other nodes. • Edge weights are inversely proportional to the distance between the two nodes.

  8. Why not use regular min-cut for the partition? • The minimum-cut criterion favors cutting small sets of isolated nodes in the graph.

  9. The ideal cut is not the cut with the min weight • The red cut is the cut we want, but the min-cut favors small partitions. • The weight of a cut is directly proportional to the number of edges in the cut! * Slide from Khurram Hassan-Shafique, CAP5415 Computer Vision 2003. Slide credit: B. Freeman and A. Torralba

  10. The solution - normalized cut (Ncut) • Instead of looking at the total edge weight connecting the two partitions, • we compute the cut cost as a fraction of the total edge connections to all the nodes in the graph. We call this disassociation measure the normalized cut (Ncut).

  11. Normalized cut (Ncut) • cut(A, B) is the sum of the weights of edges with one end in A and one end in B; we want to minimize the cut cost. • assoc(A, V) is the sum of the weights of all edges with one end in A; we want to maximize the association within each part of the partition.

  12. Normalized cut (Ncut) • Ncut(A, B) = cut(A, B) / assoc(A, V) + cut(A, B) / assoc(B, V)
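A direct translation of this formula into code might look as follows; this is an illustrative sketch (the function name `ncut` is an assumption, not from the slides), following the same affinity-matrix convention as the earlier cut-cost example.

```python
import numpy as np

def ncut(W, A, B):
    """Ncut(A, B) = cut(A,B)/assoc(A,V) + cut(A,B)/assoc(B,V)."""
    A, B = list(A), list(B)
    cut = W[np.ix_(A, B)].sum()
    assoc_A = W[A].sum()   # total weight of edges from A to all of V
    assoc_B = W[B].sum()   # total weight of edges from B to all of V
    return cut / assoc_A + cut / assoc_B
```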

  13. Nassoc • In the same spirit, we can define a measure of total normalized association within groups: Nassoc(A, B) = assoc(A, A) / assoc(A, V) + assoc(B, B) / assoc(B, V), where assoc(A, A) is the total weight of edges connecting nodes within A.

  14. Ncut and Nassoc • A little algebra shows that Ncut(A, B) = 2 - Nassoc(A, B): since assoc(A, V) = assoc(A, A) + cut(A, B), each cut term equals 1 - assoc(A, A)/assoc(A, V). • So minimizing the disassociation between the groups and maximizing the association within the groups are in fact identical goals and can be satisfied simultaneously.

  15. Example: cut and Ncut • What is the min cut, and what is the min Ncut?

  16. Example: cut and Ncut • It is easy to see the min-cut; we have efficient algorithms for solving the min-cut problem.
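For instance, assuming the networkx package is available, the Stoer-Wagner algorithm finds a global min-cut of a weighted undirected graph in polynomial time; the graph below is an assumed stand-in for the slide's example, and it also illustrates how the min-cut isolates a single node.

```python
import networkx as nx

G = nx.Graph()
G.add_weighted_edges_from([("a", "b", 2), ("b", "c", 2), ("c", "d", 2),
                           ("d", "e", 1), ("a", "e", 1)])

# Stoer-Wagner computes the global minimum cut of an undirected weighted graph.
cut_value, (part_a, part_b) = nx.stoer_wagner(G)
print(cut_value, part_a, part_b)   # cut of weight 2, isolating node "e"
```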

  17. Example: cut and Ncut • What we get is a larger cut but a smaller Ncut. • Is this the min-Ncut?

  18. Ncut complexity: • Unfortunately, minimizing normalized cut exactly is NP-complete, even for the special case of graphs on grids. • However, we will show that, when we move the normalized cut problem into the real-valued domain, an approximate discrete solution can be found efficiently.

  19. Adjacency matrix • Let W be the adjacency (affinity) matrix of the graph, where W(i, j) is the similarity between nodes i and j, and W(i, j) = W(j, i). (Figure: a 5-node example graph on nodes a-e with edge weights 1 and 2, reused on the following slides.)

  20. Diagonal matrix • Let D be the diagonal matrix with diagonal entries d(i) = sum over j of W(i, j). • (d(i) is the sum of the weights of all edges with one end at node i.)

  21. Laplacian matrix: • We define the Laplacian matrix L = D - W.
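To make these three matrices concrete, here is a small numpy sketch. The slide's exact 5-node example is not recoverable from the transcript, so the edges and weights below are assumed; the assertions at the end also check the Laplacian properties listed on the next slide.

```python
import numpy as np

nodes = ["a", "b", "c", "d", "e"]
edges = [("a", "b", 2), ("b", "c", 2), ("c", "d", 2),   # assumed topology
         ("d", "e", 1), ("a", "e", 1)]
idx = {n: i for i, n in enumerate(nodes)}
N = len(nodes)

W = np.zeros((N, N))                       # adjacency / affinity matrix
for u, v, w in edges:
    W[idx[u], idx[v]] = W[idx[v], idx[u]] = w   # undirected, so symmetric

D = np.diag(W.sum(axis=1))                 # d(i) = sum of edge weights at node i
L = D - W                                  # the (unnormalized) Laplacian

assert np.all(np.linalg.eigvalsh(L) > -1e-10)   # positive semi-definite
assert np.allclose(L @ np.ones(N), 0)           # eigenvector 1, eigenvalue 0
```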

  22. Laplacian matrix properties: • L is symmetric positive semi-definite. • All eigenvectors of L are perpendicular to each other (L has a complete set of orthonormal eigenvectors). • L has N non-negative real-valued eigenvalues; the smallest eigenvalue is always 0, with the all-ones vector 1 as its eigenvector. Vasileios Zografos zografos@isy.liu.se, Klas Nordberg klas@isy.liu.se

  23. Indicator vector • Let x be an N-dimensional indicator vector: x_i = 1 if node i ∈ A, and x_i = -1 otherwise. (Figure: the example graph partitioned into sets A and B.)

  24. Normalized cut • The minimum normalized cut cost can then be written as a Rayleigh quotient: min_x Ncut(x) = min_y [y^T (D - W) y] / [y^T D y], • where y is an indicator vector that acts like x with one exception: if x_i = -1 then y_i = -b.

  25. y, b definition and example: • We define b = k / (1 - k), where k = (sum of d(i) over nodes with x_i > 0) / (sum of d(i) over all nodes), so each y_i takes one of the two values {1, -b}.

  26. y constraints • We have two constraints on y: 1) y^T D 1 = 0. • This can be seen as a constraint that forces the indicator vector y to be perpendicular to the vector 1 (in the D-weighted sense). • But we already know that the Laplacian matrix has: • the vector 1 as an eigenvector with eigenvalue 0, • all eigenvectors perpendicular to each other; thus this constraint is automatically satisfied by the solution.

  27. y constraints 2) y must take one of the two discrete values {1, -b}. • Satisfying this constraint is what makes the problem NP-hard. • If we relax this constraint to accept real values, we can approximate the solution by solving the generalized eigenvalue system (D - W) y = λ D y.

  28. Properties of the eigenvalue system • An eigenvector will hopefully take similar values on nodes with high similarity (where w(i, j) is high). • The smallest eigenvalue is always 0, because we can take the trivial partition A = V, B = ∅, for which Ncut(A, B) = 0. • The eigenvector with the second-smallest eigenvalue is the real-valued y that minimizes Ncut, i.e., the solution of (D - W) y = λ D y.
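In code, scipy's generalized symmetric eigensolver handles this system directly. The affinity matrix below is an illustrative two-cluster example, not the slides' graph:

```python
import numpy as np
from scipy.linalg import eigh

# Two tight clusters {0,1,2} and {3,4}, weakly connected (illustrative values).
W = np.array([[0. , 2. , 2. , 0.1, 0.1],
              [2. , 0. , 2. , 0.1, 0. ],
              [2. , 2. , 0. , 0. , 0.1],
              [0.1, 0.1, 0. , 0. , 2. ],
              [0.1, 0. , 0.1, 2. , 0. ]])
D = np.diag(W.sum(axis=1))

# Solve (D - W) y = lambda * D y; eigh returns eigenvalues in ascending order,
# so column 1 is the eigenvector with the second-smallest eigenvalue.
eigvals, eigvecs = eigh(D - W, D)
y = eigvecs[:, 1]
print(y > 0)   # the two clusters should land on opposite signs of y
```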

  29. What is Clustering? • Cluster: a collection of data objects • Similar to one another within the same cluster. • Dissimilar to the objects in other clusters. • This definition of clusters corresponds to the definition of Ncut. (Figure: the example graph split into an A-cluster and a B-cluster.) Slide credit: Mario Haddad - Thanks

  30. Spectral clustering • We want to find clusters that may be hard to find in the original domain (coordinates), such as non-convex data. • We use eigenvectors of matrices derived from the data to map the data to a low-dimensional space where it is well separated and can be easily clustered (for example, with K-Means in the new space). • Clustering can then be treated as a graph-partitioning problem without making specific assumptions about the form of the clusters (Ncut is an example of spectral clustering).
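Putting this together, a bare-bones spectral clustering routine might look as follows; this is a sketch, assuming scikit-learn is available for the K-Means step:

```python
import numpy as np
from scipy.linalg import eigh
from sklearn.cluster import KMeans

def spectral_clustering(W, k):
    """Embed nodes with k generalized eigenvectors, then cluster with K-Means."""
    D = np.diag(W.sum(axis=1))
    _, eigvecs = eigh(D - W, D)           # (D - W) y = lambda D y, ascending
    embedding = eigvecs[:, 1:k + 1]       # skip the trivial constant eigenvector
    return KMeans(n_clusters=k, n_init=10).fit_predict(embedding)
```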

  31. Spectral clustering example: (Figure: non-convex vs. convex data.) Vasileios Zografos zografos@isy.liu.se, Klas Nordberg klas@isy.liu.se

  32. Why does this work? • Ideal case: why do the eigenvectors of the Laplacian include cluster-identification information? Source: http://eniac.cs.qc.cuny.edu/andrew/gcml/Lecture21.pptx

  33. Spectral Clustering - Intuition Slides Courtesy: Eric Xing, M. Hein & U.V. Luxburg

  34. Spectral Clustering - Intuition Slides Courtesy: Eric Xing, M. Hein & U.V. Luxburg

  35. Spectral Clustering - Intuition Slides Courtesy: Eric Xing, M. Hein & U.V. Luxburg

  36. Why does this work? • How does this eigenvector decomposition give us a cluster assignment? Source: http://eniac.cs.qc.cuny.edu/andrew/gcml/Lecture21.pptx

  37. Spectral Clustering & Ncut • Ncut is a type of spectral clustering in which we use an eigenvector as an indicator for the partition of the graph (data). • The eigenvalues represent the Ncut cost of the corresponding eigenvectors; minimizing the Ncut means using an eigenvector with a correspondingly small eigenvalue. • Since the smallest eigenvalue is 0, with a corresponding constant eigenvector (which puts all the data in one cluster), the first eigenvector is not used. • The second eigenvector minimizes Ncut and is the solution of (D - W) y = λ D y.

  38. Translating the eigenvector to partitions • What allowed us to approximate the Ncut solution (which is NP-hard) is letting y take real values, but we still need a discrete value for the partition of the graph. (Figure: the example graph with its non-discrete second-eigenvector values.)

  39. Translating the eigenvector to partitions • We must choose a threshold so we can return to discrete values: • We can try random thresholds and choose the best. • We can check l evenly spaced possible splitting points and take the one that gives the smallest Ncut. (Figure: the example graph split at threshold T = 0.)
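A sketch of the evenly-spaced search (the function name and the default value of l are assumptions):

```python
import numpy as np

def best_split(y, W, l=20):
    """Try l evenly spaced thresholds along y; keep the one with smallest Ncut."""
    V = np.arange(W.shape[0])
    best_t, best_cost = None, np.inf
    for t in np.linspace(y.min(), y.max(), l + 2)[1:-1]:  # skip the extremes
        A, B = V[y > t], V[y <= t]
        if len(A) == 0 or len(B) == 0:
            continue                                      # degenerate split
        cut = W[np.ix_(A, B)].sum()
        cost = cut / W[A].sum() + cut / W[B].sum()        # Ncut(A, B)
        if cost < best_cost:
            best_t, best_cost = t, cost
    return best_t, best_cost
```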

  40. Some notes: • We have seen how the second eigenvector can be used to bipartition the graph, but what about the other eigenvectors? • The third-smallest eigenvector can sub-partition the parts we got from the second-smallest one; in general, every eigenvector can sub-partition the result obtained from the previous eigenvector.

  41. Some notes: • But because every time we use an eigenvector to partition the graph we may introduce errors from the conversion to discrete values, the error accumulates and the partition becomes less reliable the higher we go. • So what do we do to further partition the graph?

  42. Recursive Two-Way Ncut • Given an image, set up a weighted graph, compute the weight on each edge, and summarize the information in W and D. • Solve (D - W) y = λ D y for the eigenvectors with the smallest eigenvalues. • Use the eigenvector with the second-smallest eigenvalue to bipartition the graph.

  43. Recursive Two-Way Ncut • Decide whether the current partition should be subdivided by checking: • Stability: if the values of the eigenvector vary continuously from one entry to another (no clear jump), we may be trying to sub-partition the data where no split is needed. • Ncut < T, where T is a prespecified threshold indicating when to stop further partitioning a subgraph. • Recursively repartition the segmented parts if necessary, as in the sketch below.
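A sketch of the recursion under these stopping rules; T and min_size are assumed knobs, and the stability check is omitted for brevity:

```python
import numpy as np
from scipy.linalg import eigh

def recursive_ncut(W, nodes, T=0.2, min_size=3):
    """Recursively bipartition `nodes` with the second generalized eigenvector."""
    nodes = np.asarray(nodes)
    Ws = W[np.ix_(nodes, nodes)]
    d = Ws.sum(axis=1)
    if len(nodes) < min_size or d.min() <= 0:    # too small, or isolated node
        return [nodes]
    _, vecs = eigh(np.diag(d) - Ws, np.diag(d))  # (D - W) y = lambda D y
    mask = vecs[:, 1] > 0                        # naive threshold at 0
    if mask.all() or not mask.any():
        return [nodes]
    cut = Ws[np.ix_(np.where(mask)[0], np.where(~mask)[0])].sum()
    ncut = cut / d[mask].sum() + cut / d[~mask].sum()
    if ncut > T:                                 # cost too high: stop splitting
        return [nodes]
    return (recursive_ncut(W, nodes[mask], T, min_size)
            + recursive_ncut(W, nodes[~mask], T, min_size))

# Usage: segments = recursive_ncut(W, range(len(W)))
```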

  44. Recursive Two-Way Ncut example: Denis Hamad, LASL - ULCO, Denis.Hamad@lasl.univ-littoral.fr

  45. How does all this relate to images? • Let G be a graph that represents the image I as follows: for every pixel in I there is a node in G that represents it, and W(i, j) is the similarity between pixels i and j.

  46. Example: Brightness Images • How do we define similarity? • w(i, j) = exp(-||F(i) - F(j)||² / σ_I²) · exp(-||X(i) - X(j)||² / σ_X²) if ||X(i) - X(j)|| < r, and 0 otherwise, • where X(i) is the spatial location of node i, and F(i) is a feature vector based on intensity.
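A dense-matrix sketch of this affinity; the parameter values for σ_I, σ_X, and r are illustrative, and real implementations use sparse matrices, since W is N×N in the number of pixels:

```python
import numpy as np

def brightness_affinity(img, sigma_I=0.1, sigma_X=4.0, r=5.0):
    """w(i,j) = exp(-(F_i - F_j)^2 / sigma_I^2) * exp(-||X_i - X_j||^2 / sigma_X^2)
    for pixels within distance r of each other, and 0 otherwise."""
    h, w = img.shape
    ys, xs = np.mgrid[0:h, 0:w]
    X = np.stack([ys.ravel(), xs.ravel()], axis=1).astype(float)  # locations
    F = img.ravel().astype(float)                                 # intensities

    dist2 = ((X[:, None, :] - X[None, :, :]) ** 2).sum(-1)  # squared distances
    feat2 = (F[:, None] - F[None, :]) ** 2                   # intensity diffs

    W = np.exp(-feat2 / sigma_I**2) * np.exp(-dist2 / sigma_X**2)
    W[dist2 >= r**2] = 0.0         # only connect spatially nearby pixels
    np.fill_diagonal(W, 0.0)
    return W
```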
