

EM for Motion Segmentation. By: Yair Weiss and Edward H. Adelson. Presenting: Ady Ecker and Max Chvalevsky.


Presentation Transcript


  1. EM for Motion Segmentation By: Yair Weiss and Edward H. Adelson. Presenting: Ady Ecker and Max Chvalevsky. “Perceptually organized EM: A framework that combines information about form and motion”. “A unified mixture framework for motion segmentation: incorporating spatial coherence and estimating the number of models”.

  2. Contents • Motion segmentation. • Expectation Maximization. • EM for motion segmentation. • EM modifications for motion segmentation. • Summary.

  3. Part 1: Motion Segmentation

  4. Motion segmentation problem • Input: 1. Sequence of images. 2. Flow vector field – output of a standard optic-flow algorithm. • Problem: Find a small number of moving objects in the sequence of images.

  5. Segmentation output • Classification of each pixel in each image to its object. • Full velocity field. (Figure: flow data in, segmentation and velocity field out.)

  6. Segmentation goal

  7. Motion vs. static segmentation • Combination of motion and spatial data. An object can contain parts with different static parameters (several colors). • An object's representation in an image can be non-contiguous when: • There are occlusions. • Only parts of the object are captured.

  8. Difficulties • Motion estimation. • Integration versus segmentation dilemma. • Smoothing inside the model while keeping models independent.

  9. Motion estimation - review • Estimation cannot be done from local measurements only. We have to integrate them.

  10. Motion integration • In reality we will not have clear distinction between corners and lines.

  11. Integration without segmentation • When there are several motions, we might get false intersection points of velocity constraints at T-junctions.

  12. Integration without segmentation • False corners (T-junctions) introduce false dominant directions (upwards).

  13. Contour ownership • Most pixels inside the object don’t supply movement information. They move with the whole object.

  14. Smoothing • We would like to smooth information inside objects, not between objects.

  15. Smoothness in layers

  16. Human segmentation • Humans perform segmentation effortlessly. • Segmentation may be illusory. • Tendency to prefer (and trade off): • A small number of models. • Slow and smooth motion. • The segmentation depends on factors such as contrast and speed that affect our confidence in possible motions.

  17. Segmentation illusion – The split herringbone

  18. Segmentation Illusion - plaids

  19. Part 2: Expectation Maximization

  20. Clustering

  21. Clustering Problems • Structure: • Vectors in high-dimension space belong to (disjoint) groups (clusters, classes, populations). • Given a vector, find its group (label). • Examples: • Medical diagnosis. • Vector Quantization. • Motion Segmentation.

  22. Clustering by distance to known centers

  23. Finding the centers from known clustering

  24. EM: Unknown clusters and centers • Start with random model parameters. • Expectation step: classify each vector to the closest center. • Maximization step: find the center (mean) of each class. • Iterate the two steps until convergence.
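The loop on this slide can be sketched in a few lines of NumPy. This is an illustrative sketch: the function and variable names are our own, and for reproducibility it initializes the centers from spread-out data points rather than truly random parameters as the slide suggests.

```python
import numpy as np

def hard_em(points, k, iters=20):
    """Hard EM (k-means style): alternate classification and re-centering."""
    # Start with model parameters taken from spread-out data points
    # (the slide starts from random parameters; this keeps runs repeatable).
    centers = points[np.linspace(0, len(points) - 1, k).astype(int)].copy()
    for _ in range(iters):
        # Expectation step: classify each vector to the closest center.
        d = np.linalg.norm(points[:, None, :] - centers[None, :, :], axis=2)
        labels = d.argmin(axis=1)
        # Maximization step: the new center is the mean of each class.
        for j in range(k):
            if np.any(labels == j):
                centers[j] = points[labels == j].mean(axis=0)
    return centers, labels
```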

  25. Illustration

  26. EM Characteristics • Simple to program. • Separates the iterative stage into two independent, simple stages. • Convergence is guaranteed, to some local minimum. • Speed and quality depend on: • Number of clusters. • Geometric shape of the real clusters. • Initial clustering.

  27. Soft EM • Each point is given a probability (weight) of belonging to each class. • The E step: the probabilities of each point are updated according to the distances to the centers. • The M step: class centers are computed as a weighted average over all data points.

  28. Soft EM (cont.) • Final E step: classify each point to the nearest (most probable) center. • As a result: • Points near the center of a cluster have high influence on the location of that center. • Points near cluster boundaries have small influence on several centers. • Poor local minima are more easily avoided, as each point can softly change its group.
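The soft E step described above can be sketched as follows. This is a sketch under our own naming: `dists` holds the distance of each point to each center, and `sigma` is the assumed noise scale.

```python
import numpy as np

def soft_assignments(dists, sigma):
    """Soft E step: the weight of point i for class k is proportional to
    exp(-d_ik^2 / sigma^2), normalized so each point's weights sum to 1."""
    w = np.exp(-(dists ** 2) / sigma ** 2)
    return w / w.sum(axis=1, keepdims=True)
```

A point equidistant from two centers gets weight 0.5 for each, so it pulls on both centers only weakly compared to points deep inside one cluster.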

  29. Perceptual Organization • Neighboring or similar points are likely to be of the same class. • Account for this in the computation of weights by prior probabilities.

  30. Example: Fitting 2 lines to data points (xi, yi) • Input: • Data points that were generated by 2 lines with Gaussian noise: y = a1·x + b1 + σv, y = a2·x + b2 + σv, with v ~ N(0,1). • Output: • The parameters of the 2 lines. • The assignment of each point to its line.

  31. The E Step • Compute residuals assuming known lines: ri(k) = yi − (ak·xi + bk). • Compute soft assignments: wi(k) = exp(−ri(k)² / σ²) / Σj exp(−ri(j)² / σ²).
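A minimal sketch of this E step for the two-line example (names are our own; `lines` holds the current (a_k, b_k) estimates):

```python
import numpy as np

def e_step_lines(x, y, lines, sigma):
    """Residuals r_ik = y_i - (a_k * x_i + b_k), then soft assignments
    w_ik proportional to exp(-r_ik^2 / sigma^2), normalized per point."""
    a = np.array([ln[0] for ln in lines])
    b = np.array([ln[1] for ln in lines])
    r = y[:, None] - (a[None, :] * x[:, None] + b[None, :])
    w = np.exp(-(r ** 2) / sigma ** 2)
    return w / w.sum(axis=1, keepdims=True)
```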

  32. Least-squares review • In the case of a single line with normal i.i.d. errors, maximum likelihood estimation reduces to least squares: minimize Σi (yi − a·xi − b)². • The line parameters (a, b) are the solution of the 2×2 normal equations: a·Σxi² + b·Σxi = Σxi·yi and a·Σxi + b·n = Σyi.

  33. The M Step • In the weighted case we find (ak, bk) minimizing Σi wi(k)·(yi − ak·xi − bk)². • The weighted least-squares system is solved twice, once for (a1, b1) and once for (a2, b2).
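The weighted least-squares solve of the M step can be sketched via the 2×2 weighted normal equations (a sketch with our own names; it would be called once per model with that model's weights):

```python
import numpy as np

def m_step_line(x, y, w):
    """Weighted least squares: minimize sum_i w_i * (y_i - a*x_i - b)^2
    by solving the 2x2 weighted normal equations for (a, b)."""
    A = np.array([[np.sum(w * x * x), np.sum(w * x)],
                  [np.sum(w * x),     np.sum(w)]])
    rhs = np.array([np.sum(w * x * y), np.sum(w * y)])
    a, b = np.linalg.solve(A, rhs)
    return a, b
```

With all weights equal this reduces to the ordinary least-squares fit of the previous slide.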

  34. Illustrations

  35. Illustration

  36. Estimating the number of models • In the weighted scenario, additional models will not necessarily reduce the total error. • The optimal number of models is a function of the σ parameter – how well we expect the model to fit the data. • Algorithm: start with many models; redundant models will collapse.

  37. Illustration (plot of l = log(likelihood))

  38. Part 3: EM for Motion Segmentation

  39. Segmentation of image motion: input • Products of the image sequence: • Local flow – output of a standard algorithm. • Pixel intensities and color. • Pixel coordinates. • Static segmentation: • Based on the same local data. • Problematic, as explained before.

  40. Segmentation output • Segmentation map. • Models: parameters of the ‘blue’ model and the ‘red’ model. (Figure.)

  41. Notations • r – pixel. • O(r) – flow vector at pixel r. • k – model id. • θk – parameters of model k. • vk(r) – velocity predicted by model k at location r. • Dk(r) = D(r, θk) – distance measure. • σ – expected noise variance. • gk(r) – probability that pixel r is a member of model k.

  42. Segmentation output • Segmented flow O(r): each pixel r assigned to the ‘blue’ or ‘red’ model. • Model parameters: vblue(r), vred(r). (Figure.)

  43. The E Step • Purpose: determine a probabilistic classification of every pixel to the models: gk(r) = pk(r)·exp(−Dk(r)/σ²) / Σj pj(r)·exp(−Dj(r)/σ²). • pk(r) – prior probability granted to model k. • For classical EM, pk(r) are equal for all k.

  44. The E Step (cont.) • An alternative representation of the same soft assignment is possible. • Soft decisions enable slow convergence to a better minimum, instead of getting stuck in a poor local minimum.
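Using the notation of slide 41, this E step can be sketched as the standard mixture posterior with per-pixel priors pk(r). The array shapes below are our own choice, not from the slides.

```python
import numpy as np

def motion_e_step(D, priors, sigma):
    """g_k(r) proportional to p_k(r) * exp(-D_k(r) / sigma^2),
    normalized over the models k for every pixel r.
    D, priors: arrays of shape (num_pixels, num_models)."""
    g = priors * np.exp(-D / sigma ** 2)
    return g / g.sum(axis=1, keepdims=True)
```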

  45. Distance measure functionality • Correct physical interpretation of motion data. • If possible – enable analytic solution.

  46. Distance measures (1) • Optic flow constraint: Dk(r) = Σr'∈a (Ix(r')·vkx(r) + Iy(r')·vky(r) + It(r'))². • a – window centered at r. • vk(r) – velocity of model k at location r. • Quadratic; provides a closed-form MLE solution for the M-step.
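A sketch of this quadratic distance for one pixel's window (names are our own; Ix, Iy, It are the spatial and temporal image derivatives sampled over the window a):

```python
import numpy as np

def optic_flow_distance(Ix, Iy, It, v):
    """Optic-flow-constraint distance: sum over the window of
    (Ix*vx + Iy*vy + It)^2 for the model's predicted velocity v = (vx, vy)."""
    vx, vy = v
    return float(np.sum((Ix * vx + Iy * vy + It) ** 2))
```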

  47. Distance measures (2) • Deviation from constant intensity: Dk(r) = Σr'∈a (I(r' + vk(r), t+1) − I(r', t))². • a – window centered at r. • Good for high-speed motion. • Resolved by successive linearizations.

  48. The M step • Purpose: optimize each layer (according to the soft classification of the pixels). • Produces a weighted ‘average’ of the model. • The ‘average’ depends on the definition of D. • Constrained by J (slow and smooth motion).

  49. J (cost) definition • For loosely constrained θ (typical for image segmentation). • For highly constrained θ (#degrees of freedom &lt; #owned pixels): λ → 0.

  50. EM: Unknown clusters and centers • Start with random model parameters. • Expectation step: classify each vector to the closest center. • Maximization step: find the center (mean) of each class.
