
Graph-Cut / Normalized Cut segmentation




Jad Silbak - University of Haifa

- Most segmentation methods so far have focused on local features (e.g., K-Means).
- We would like to capture the global impression of an image.
- Since there are many possible partitions of the image domain, how do we pick the right one?

- An undirected graph is denoted G = (V, E), where:
- V - a set of nodes, one for each data element (e.g., pixel).
- E - a set of edges.
- Given an edge (u, v) ∈ E between nodes u and v, we define w(u, v) as the affinity between the connected nodes.

- What are graph cuts?
- A cut is a set of edges whose removal makes the graph disconnected.
- It is a partition of V into A, B such that A ∪ B = V and A ∩ B = ∅.

- What is a cut's cost?
- The red edges form a graph cut; the cut's cost is the sum of the weights of the removed edges.
- Sum(red) = 4 = cut's cost.

- A graph can be partitioned into two disjoint sets A, B; we define the partition cost as:
- cut(A, B) = Σ_{u∈A, v∈B} w(u, v)
- The bipartition of the graph which minimizes the cut value is called the min-cut.
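For a concrete check, the cut cost is just a sub-block sum of the affinity matrix. The 4-node weights below are a made-up illustration, not from the slides:

```python
import numpy as np

# Hypothetical symmetric affinity matrix for a 4-node graph.
W = np.array([
    [0., 3., 1., 0.],
    [3., 0., 0., 2.],
    [1., 0., 0., 3.],
    [0., 2., 3., 0.],
])

def cut_cost(W, A, B):
    """cut(A, B): sum of weights of edges with one end in A and the other in B."""
    return W[np.ix_(A, B)].sum()

# Partition {0, 1} vs {2, 3}: the crossing edges are (0,2)=1 and (1,3)=2.
cost = cut_cost(W, [0, 1], [2, 3])  # 3.0
```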

- Let's see an example:

- Every node is connected to all the other nodes.
- Edge weights are inversely proportional to the distance between the two nodes.

- The minimum-cut criterion favors cutting small sets of isolated nodes in the graph.

- The red cut is the cut we want, but the min-cut favors small partitions.
- The weight of a cut is directly proportional to the number of edges in the cut!

* Slide from Khurram Hassan-Shafique, CAP5415 Computer Vision 2003. Slide credit: B. Freeman and A. Torralba.

- Instead of looking at the total edge weight connecting the two partitions,
- we compute the cut cost as a fraction of the total edge connections to all the nodes in the graph. We call this disassociation measure the normalized cut (Ncut):
- Ncut(A, B) = cut(A, B)/assoc(A, V) + cut(A, B)/assoc(B, V)

- cut(A, B) is the sum of the weights of edges with one end in A and one end in B; we want to minimize the cut cost.
- assoc(A, V) is the sum of the weights of all edges with at least one end in A; we want to maximize the within-group association for both A and B.

- In the same spirit, we can define a measure for total normalized association:
- Nassoc(A, B) = assoc(A, A)/assoc(A, V) + assoc(B, B)/assoc(B, V)

- With some algebraic manipulation we get: Ncut(A, B) = 2 − Nassoc(A, B).
- Minimizing the disassociation between the groups and maximizing the association within the groups are in fact identical goals and can be satisfied simultaneously.
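The relation Ncut(A, B) = 2 − Nassoc(A, B) can be checked numerically on a toy affinity matrix (the 4-node weights are hypothetical):

```python
import numpy as np

# Hypothetical symmetric affinity matrix for a 4-node graph.
W = np.array([
    [0., 3., 1., 0.],
    [3., 0., 0., 2.],
    [1., 0., 0., 3.],
    [0., 2., 3., 0.],
])

def assoc(W, A, B):
    """assoc(A, B): total weight of edges from nodes in A to nodes in B."""
    return W[np.ix_(A, B)].sum()

def ncut(W, A, B):
    V = list(range(W.shape[0]))
    c = assoc(W, A, B)  # cut(A, B): A and B are disjoint, so this is the cut
    return c / assoc(W, A, V) + c / assoc(W, B, V)

def nassoc(W, A, B):
    V = list(range(W.shape[0]))
    return assoc(W, A, A) / assoc(W, A, V) + assoc(W, B, B) / assoc(W, B, V)

A, B = [0, 1], [2, 3]
# ncut(W, A, B) + nassoc(W, A, B) should equal 2 for any bipartition.
```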

- What is the min-cut, and what is the min-Ncut?

- It is easy to see the min-cut; we have efficient algorithms for solving the min-cut problem.

- What we get is a larger cut but a smaller Ncut.
- Is this the min-Ncut?

- Unfortunately, minimizing the normalized cut exactly is NP-complete, even for the special case of graphs on grids.
- However, we will show that when we relax the normalized cut problem to the real-valued domain, an approximate discrete solution can be found efficiently.

- Let W be the adjacency (affinity) matrix of the graph, where W(i, j) is the similarity between nodes i and j, with W(i, j) = W(j, i).

[Figure: example graph with nodes a-e and edge weights 1 and 2]

- Let D be the diagonal matrix with diagonal entries d(i) = Σ_j W(i, j)
- (D(i, i) is the sum of the weights of all edges with one end at node i).


- We define the Laplacian matrix L: L = D − W.


- The Laplacian matrix properties:
- All eigenvectors of L are perpendicular to each other (L has a complete set of orthonormal eigenvectors).
- L is symmetric positive semi-definite.
- L has N non-negative real-valued eigenvalues; the smallest eigenvalue is always 0, with eigenvector 1 (the all-ones vector).
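These properties can be checked numerically. The toy symmetric W below is a made-up connected graph:

```python
import numpy as np

# Hypothetical affinity matrix; any symmetric non-negative W works.
W = np.array([
    [0., 3., 1., 0.],
    [3., 0., 0., 2.],
    [1., 0., 0., 3.],
    [0., 2., 3., 0.],
])
D = np.diag(W.sum(axis=1))   # degree matrix
L = D - W                    # graph Laplacian

# eigh handles symmetric matrices and returns eigenvalues in ascending order.
evals, evecs = np.linalg.eigh(L)
# Expected: L symmetric, all eigenvalues >= 0, smallest eigenvalue 0,
# and L annihilates the all-ones vector (row sums of L are zero).
```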

Vasileios Zografos [email protected] Klas Nordberg [email protected]

- Let x be an N-dimensional indicator vector: x_i = 1 if node i ∈ A, and x_i = −1 otherwise.

[Figure: the example graph partitioned into sets A and B]

- Then the minimum normalized cut cost can be written as:
- min_x Ncut(x) = min_y [yᵀ(D − W)y] / [yᵀD y]
- where y is an indicator vector that acts like x, with one exception: if x_i = −1 then y_i = −b.

- We define b as b = k/(1 − k), where k = (Σ_{x_i>0} d(i)) / (Σ_i d(i)).


- We have two constraints on y:
1) yᵀD1 = 0

- This can be seen as a constraint that forces the indicator vector y to be perpendicular (in the D-weighted sense) to the vector 1.
- But we already know that the Laplacian matrix has:
- an eigenvalue of 0 with eigenvector 1,
- and all eigenvectors perpendicular to each other; thus this constraint is automatically satisfied by the solution.

2) y must take one of the two discrete values {1, −b}.

- Satisfying this constraint is what makes the problem NP-complete.
- If we relax this constraint to accept real values, we can approximate the solution by solving the generalized eigenvalue system (D − W)y = λDy.

- The eigenvector will hopefully have similar values for nodes with high similarity (where w(i, j) is high).
- The smallest eigenvalue is always 0, because we can take the partition A = V and B = ∅, giving Ncut(A, B) = 0.
- The eigenvector with the second smallest eigenvalue is the real-valued y that minimizes Ncut and is the solution of (D − W)y = λDy.
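A minimal numpy sketch of this relaxation. The two-triangle toy graph and its weights are made up for illustration; since D is positive diagonal, (D − W)y = λDy is solved via the equivalent symmetric problem D^(−1/2)·L·D^(−1/2)·z = λz with y = D^(−1/2)·z:

```python
import numpy as np

# Toy graph (hypothetical): two triangles joined by one weak edge.
W = np.zeros((6, 6))
for i, j, w in [(0,1,2),(1,2,2),(0,2,2),(3,4,2),(4,5,2),(3,5,2),(2,3,0.1)]:
    W[i, j] = W[j, i] = w

d = W.sum(axis=1)
L = np.diag(d) - W
D_isqrt = np.diag(1.0 / np.sqrt(d))

# (D - W) y = lambda D y  <=>  D^{-1/2} L D^{-1/2} z = lambda z,  y = D^{-1/2} z
evals, Z = np.linalg.eigh(D_isqrt @ L @ D_isqrt)
y = D_isqrt @ Z[:, 1]   # second smallest generalized eigenvector

# Nodes with the same sign fall on the same side of the (relaxed) cut:
side = y > 0            # separates the two triangles
```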

- Cluster: a collection of data objects that are
- similar to one another within the same cluster, and
- dissimilar to the objects in other clusters.
- This definition of clusters corresponds to the definition of Ncut.


Slide credit: Mario Haddad.

- In order to find clusters that may be hard to find in the original domain (coordinates), such as non-convex data,
- we use eigenvectors of matrices derived from the data to map the data to a low-dimensional space where the data is well separated and can be easily clustered (we can use K-Means in the new space).
- This can be treated as a graph-partitioning problem without making specific assumptions about the form of the clusters (Ncut is an example of spectral clustering).

[Figure: non-convex vs. convex clusters]


- The ideal case
- Why do the eigenvectors of the Laplacian include cluster-identification information?

https://www.google.com/url?sa=t&rct=j&q=&esrc=s&source=web&cd=3&ved=0CEQQFjAC&url=http%3A%2F%2Feniac.cs.qc.cuny.edu%2Fandrew%2Fgcml%2FLecture21.pptx&ei=RSM4U-qwKZDdsgbUh4DQDw&usg=AFQjCNElVvY0Mekord2Byc5qvb-8SlEuOg&sig2=IB53ooeEBUgrgPQRQ4nPew

Slides Courtesy: Eric Xing, M. Hein & U.V. Luxburg



- How does this eigenvector decomposition address this?

[Figure: eigenvector components indicating the cluster assignment]


- Ncut is a type of spectral clustering, in which we use an eigenvector as an indicator for the partition of the graph (data).
- The eigenvalues represent the Ncut cost of each eigenvector; minimizing the Ncut means using an eigenvector with a correspondingly minimal eigenvalue.
- Since the smallest eigenvalue is 0, with corresponding eigenvector 1 (which places all the data in one cluster), the first eigenvector is not used.
- The second smallest eigenvector minimizes Ncut and is the solution of (D − W)y = λDy.

- What allowed us to approximate the Ncut solution (which is NP-complete) is allowing y to take real values, but we still need a discrete value for the partition of the graph.

[Figure: the second eigenvector for the example graph - its values are not discrete]

- We must choose a threshold so that we can return to discrete values:
- We can try random thresholds and choose the best.
- We can check l evenly spaced possible splitting points and take the best (the one that gives the smallest Ncut).
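The evenly-spaced-splitting-points search can be sketched as follows. The toy graph and the hand-made "second eigenvector" y are illustrative, not from the slides:

```python
import numpy as np

def ncut_value(W, mask):
    """Ncut cost of the bipartition given by a boolean mask."""
    cut = W[np.ix_(mask, ~mask)].sum()
    return cut / W[mask].sum() + cut / W[~mask].sum()

def best_split(W, y, n_candidates=10):
    """Check evenly spaced splitting points strictly inside (min(y), max(y))
    and keep the one with the smallest Ncut."""
    cands = np.linspace(y.min(), y.max(), n_candidates + 2)[1:-1]
    return min(cands, key=lambda t: ncut_value(W, y > t))

# Toy graph (hypothetical): two triangles joined by one weak edge,
# with a hand-made real-valued eigenvector y.
W = np.zeros((6, 6))
for i, j, w in [(0,1,2),(1,2,2),(0,2,2),(3,4,2),(4,5,2),(3,5,2),(2,3,0.1)]:
    W[i, j] = W[j, i] = w
y = np.array([-1.0, -0.9, -0.8, 0.7, 0.9, 1.0])

t = best_split(W, y)
mask = y > t   # recovers the two triangles as the two sides of the cut
```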

[Figure: thresholding the second eigenvector at T = 0 on the example graph]

- We have seen how the second eigenvector can be used to bipartition the graph, but what about the other eigenvectors?
- The third smallest eigenvector can give a sub-partition of the partition obtained from the second smallest eigenvector, and in general every eigenvector can sub-partition the result obtained from the previous eigenvector.

- But because every time we use an eigenvector to partition the graph we may introduce errors in the conversion to discrete values, the error accumulates and the partition becomes less reliable the higher we go.
- So what do we do to further partition the graph?

- Given an image, compute the weight of each edge, and summarize the information in W and D.
- Solve (D − W)y = λDy for the eigenvectors with the smallest eigenvalues.
- Use the eigenvector with the second smallest eigenvalue to bipartition the graph.

- Decide whether the current partition should be subdivided by checking:
- stability: if the values of the eigenvector vary continuously from one entry to the next, we may be trying to sub-partition data where no clear split exists;
- (Ncut < T): T is a prespecified threshold we use to decide when to stop further partitioning a subgraph.

- Recursively repartition the segmented parts if necessary.
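The recursive procedure above can be sketched as follows. The stopping threshold, the split-at-zero rule, and the two-triangle toy graph are illustrative choices, not prescribed by the slides (the stability check is omitted for brevity):

```python
import numpy as np

def fiedler_split(W):
    """Bipartition from the second smallest generalized eigenvector of
    (D - W) y = lambda D y, thresholded at zero."""
    d = W.sum(axis=1)
    D_isqrt = np.diag(1.0 / np.sqrt(d))
    _, Z = np.linalg.eigh(D_isqrt @ (np.diag(d) - W) @ D_isqrt)
    return (D_isqrt @ Z[:, 1]) > 0

def ncut_value(W, mask):
    cut = W[np.ix_(mask, ~mask)].sum()
    return cut / W[mask].sum() + cut / W[~mask].sum()

def recursive_ncut(W, idx, T=0.5, out=None):
    """Recursively bipartition; stop when the proposed cut's Ncut exceeds T."""
    if out is None:
        out = []
    if len(idx) < 2:
        out.append(list(idx))
        return out
    sub = W[np.ix_(idx, idx)]
    mask = fiedler_split(sub)
    if mask.all() or (~mask).all() or ncut_value(sub, mask) > T:
        out.append(list(idx))        # stop: the cut is too expensive
        return out
    recursive_ncut(W, idx[mask], T, out)
    recursive_ncut(W, idx[~mask], T, out)
    return out

# Toy graph (hypothetical): two triangles joined by one weak edge.
W = np.zeros((6, 6))
for i, j, w in [(0,1,2),(1,2,2),(0,2,2),(3,4,2),(4,5,2),(3,5,2),(2,3,0.1)]:
    W[i, j] = W[j, i] = w
segments = recursive_ncut(W, np.arange(6))  # the two triangles
```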

Denis Hamad LASL – [email protected]

- Let G be a graph that represents the image I as follows: for every pixel in I there exists a node in G that represents it, and W(i, j) is the similarity between pixels i and j.

- How do we define similarity?
- w(i, j) = exp(−‖F(i) − F(j)‖²/σ_I²) · exp(−‖X(i) − X(j)‖²/σ_X²) if ‖X(i) − X(j)‖ < r, and 0 otherwise,
- where X(i) is the spatial location of node i and F(i) is a feature vector based on intensity.

- We can change the parameters σ_I and σ_X, similar to bilateral filtering.
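A direct (dense, so only practical for tiny images) sketch of such intensity-and-position weights; the parameter values sigma_i, sigma_x and r below are arbitrary illustrations:

```python
import numpy as np

def image_affinity(img, sigma_i=0.2, sigma_x=2.0, r=3.0):
    """w(i,j) = exp(-||F(i)-F(j)||^2/sigma_i^2) * exp(-||X(i)-X(j)||^2/sigma_x^2)
    for pixels closer than r, and 0 otherwise (parameters are illustrative)."""
    h, w = img.shape
    X = np.array([(a, b) for a in range(h) for b in range(w)], dtype=float)
    F = img.astype(float).reshape(-1, 1)
    d2_x = ((X[:, None, :] - X[None, :, :]) ** 2).sum(axis=-1)
    d2_f = ((F[:, None, :] - F[None, :, :]) ** 2).sum(axis=-1)
    W = np.exp(-d2_f / sigma_i**2) * np.exp(-d2_x / sigma_x**2)
    W[d2_x >= r**2] = 0.0        # zero weight beyond the spatial radius
    np.fill_diagonal(W, 0.0)
    return W

# Tiny test image: top row dark, bottom row bright.
img = np.array([[0.0, 0.0],
                [1.0, 1.0]])
W = image_affinity(img)
# Same-intensity neighbors get much larger weights than different-intensity ones.
```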

- where X(i) is the spatial location of node i, and F(i) is a feature vector based on:
- intensity
- color
- texture

- For intensity the feature vector was of dimension 1, but we can have a d-dimensional feature vector, where the weights are the similarity between these vectors.

- We have seen a feature vector based on intensity.
- What about a feature vector based on color?
- F(i) = [v, v·s·sin(h), v·s·cos(h)](i), where h, s, v are the HSV values, for color segmentation.

- Feature vector based on texture, for texture segmentation:
- F(i) = [|I ∗ f₁|, …, |I ∗ f_n|](i), where the f_k are DOOG filters at various scales and orientations.

- Solving a standard eigenvalue problem for all eigenvectors takes O(n³); this is impractical!
- Fortunately, our graph partitioning has the following properties:
- the graphs are often only locally connected;
- only the top few eigenvectors are needed for graph partitioning;
- the precision requirement for the eigenvectors is low.

- We can remove up to 90% of the total connections within each neighborhood without affecting the eigenvector solution to the system.
- Putting everything together, each matrix-vector computation costs O(n).
- Recursive two-way Ncut may take O(mn), where m is the number of matrix-vector computations needed.

- We have seen why the min-cut was not reliable for segmentation and why we need Ncut.
- Although the Ncut problem is NP-complete, we can approximate a solution using the generalized eigenvalue system.
- How the Ncut graph relates to images, and how we define similarity (intensity, color and texture).
- The recursive two-way Ncut algorithm.
- Time complexity.

- Automatic segmentation never seems to be perfect!
- What we want is the ability to mark (impose) hard constraints, by indicating certain pixels (seeds) that absolutely have to be part of the object and certain pixels that absolutely have to be part of the background.
- Red marks the object.
- Blue marks the background.

- Boundary-based methods:
- − based on local information (derivative kernels, Harris, Canny).

- Region-based methods:
- + statistics inside the region.
- − often generate irregular boundaries and small holes.

Daniel Heilper, CS Department, Haifa University

[Figure: boundary-based vs. region-based segmentation examples]


- The cost function provides a soft constraint for segmentation and includes both region and boundary properties.
- Let A = (A₁, …, A_|P|) be a binary vector whose components A_p can be either "obj" or "bkg", where P is the set of nodes (pixels).

- E(A) = λ·R(A) + B(A), where R(A) is the region term and B(A) is the boundary term.
- λ specifies the relative importance of the region properties term versus the boundary properties term.

- The terms R_p("obj") and R_p("bkg") can be seen as the individual penalties for assigning pixel p to "object" and to "background".
- For example, R_p(·) may reflect how the intensity of pixel p fits into known intensity models (e.g., histograms) of the object and the background.

- B(A) comprises the "boundary" properties of segmentation A; the coefficient B_{p,q} is interpreted as a penalty for a discontinuity between p and q.
- B_{p,q} is large when pixels p and q are similar.
- Costs may be based on the local intensity gradient or Laplacian zero-crossings.
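A minimal sketch of evaluating this cost for a given labeling. The penalties R_p and weights B_pq below are made-up toy numbers, and real systems minimize E(A) with a min-cut/max-flow solver rather than by direct evaluation:

```python
def energy(labels, R_obj, R_bkg, B, lam=1.0):
    """E(A) = lam * sum_p R_p(A_p) + sum_{(p,q)} B_pq * [A_p != A_q]."""
    region = sum(R_obj[p] if labels[p] == "obj" else R_bkg[p]
                 for p in range(len(labels)))
    boundary = sum(w for (p, q), w in B.items() if labels[p] != labels[q])
    return lam * region + boundary

# Four pixels in a row; the middle edge is weak (a likely object boundary).
R_obj = [0.0, 0.0, 5.0, 5.0]   # penalty for labeling each pixel "obj"
R_bkg = [5.0, 5.0, 0.0, 0.0]   # penalty for labeling each pixel "bkg"
B = {(0, 1): 2.0, (1, 2): 0.5, (2, 3): 2.0}

# Splitting at the weak edge pays only the small discontinuity penalty 0.5.
e = energy(["obj", "obj", "bkg", "bkg"], R_obj, R_bkg, B)  # 0.5
```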


Figure 2. Synthetic Gestalt example. The segmentation results in (b-d) are shown for various levels of relative importance of "region" versus "boundary" in (1). Note that the result in (b) corresponds to a wide range of λ.

The general workflow:

We create a graph with two terminals. The edge weights reflect the parameters in the regional and boundary terms of the cost function, as well as the known positions of seeds in the image.

The seeds are O = {v} and B = {p}.

Proceedings of "International Conference on Computer Vision", Vancouver, Canada, July 2001

- J. Shi and J. Malik, "Normalized Cuts and Image Segmentation," Proc. CVPR 1997; also IEEE Transactions on Pattern Analysis and Machine Intelligence, 22(8), 888-905, August 2000.
- Y. Boykov and M. Jolly, "Interactive Graph Cuts for Optimal Boundary and Region Segmentation of Objects in N-D Images," International Conference on Computer Vision, Vancouver, BC, 2001, pp. 105-112.
- Slide credit: S. Lazebnik.
- Interactive Image Segmentation, Fahim Mannan (260 266 294).
- Slides courtesy: Eric Xing, M. Hein & U.V. Luxburg.
- Daniel Heilper, CS Department, Haifa University.
- https://www.google.com/url?sa=t&rct=j&q=&esrc=s&source=web&cd=3&ved=0CEQQFjAC&url=http%3A%2F%2Feniac.cs.qc.cuny.edu%2Fandrew%2Fgcml%2FLecture21.pptx&ei=RSM4U-qwKZDdsgbUh4DQDw&usg=AFQjCNElVvY0Mekord2Byc5qvb-8SlEuOg&sig2=IB53ooeEBUgrgPQRQ4nPew