EM and model selection


  1. EM and model selection

  2. Missing variable problems
  • In many vision problems, if some variables were known, the maximum likelihood inference problem would be easy:
  • fitting: if we knew which line each token came from, it would be easy to determine the line parameters
  • segmentation: if we knew which segment each pixel came from, it would be easy to determine the segment parameters
  • fundamental matrix estimation: if we knew which feature corresponded to which, it would be easy to determine the fundamental matrix
  • etc.
  • This sort of thing happens in statistics, too
  University of Missouri at Columbia

  3. Missing variable problems
  • Strategy:
  • estimate appropriate values for the missing variables
  • plug these in; now estimate the parameters
  • re-estimate appropriate values for the missing variables; continue
  • e.g.:
  • guess which line gets which point
  • now fit the lines
  • now reallocate points to lines, using our knowledge of the lines
  • now refit, etc.
  • We’ve seen this line of thought before (k-means)

  4. Missing variables - strategy
  • We have a problem with parameters and missing variables; this suggests: iterate until convergence:
  • replace missing variables with their expected values, given fixed values of the parameters
  • fix the missing variables; choose parameters to maximize the likelihood given those fixed values
  • e.g., iterate till convergence:
  • allocate each point to a line with a weight, which is the probability of the point given the line
  • refit lines to the weighted set of points
  • Converges to a local extremum
  • A somewhat more general form is available

  5. K-Means
  • Choose a fixed number of clusters
  • Choose cluster centers and point-cluster allocations to minimize error
  • can’t do this by search, because there are too many possible allocations
  • Algorithm:
  • fix cluster centers; allocate points to the closest cluster
  • fix allocations; compute the best cluster centers
  • x could be any set of features for which we can compute a distance (careful about scaling)
  * From Marc Pollefeys COMP 256 2003
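The two alternating steps on this slide can be sketched directly. A minimal pure-Python illustration (function and variable names are my own, not from the slides); feature vectors here are 2-D points, but any features with a distance would do:

```python
# Minimal k-means sketch: (1) fix centers, allocate each point to the
# closest center; (2) fix allocations, recompute each center as a mean.

def kmeans(points, centers, iters=20):
    labels = []
    for _ in range(iters):
        # Step 1: allocate each point to its closest cluster center.
        labels = []
        for p in points:
            dists = [sum((a - b) ** 2 for a, b in zip(p, c)) for c in centers]
            labels.append(dists.index(min(dists)))
        # Step 2: recompute each center as the mean of its points.
        new_centers = []
        for k in range(len(centers)):
            members = [p for p, l in zip(points, labels) if l == k]
            if members:
                new_centers.append(tuple(sum(x) / len(members)
                                         for x in zip(*members)))
            else:
                new_centers.append(centers[k])  # keep an empty cluster's center
        if new_centers == centers:
            break  # allocations stable: a local minimum of the error
        centers = new_centers
    return centers, labels

points = [(0.0, 0.0), (0.1, 0.2), (5.0, 5.0), (5.2, 4.9)]
centers, labels = kmeans(points, [(0.0, 0.0), (1.0, 1.0)])
```

Note the allocation step is exactly the "too many possible allocations" search, made tractable by conditioning on fixed centers.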

  6. K-Means

  7. K-Means * From Marc Pollefeys COMP 256 2003

  8. Image Segmentation by K-Means
  • Select a value of K
  • Select a feature vector for every pixel (color, texture, position, or a combination of these, etc.)
  • Define a similarity measure between feature vectors (usually Euclidean distance)
  • Apply the K-means algorithm
  • Apply a connected-components algorithm
  • Merge any component smaller than some threshold into the most similar adjacent component
  * From Marc Pollefeys COMP 256 2003

  9. K-Means
  • Is an approximation to EM
  • Model (hypothesis space): mixture of N Gaussians
  • Latent variables: correspondence of data and Gaussians
  • We notice:
  • given the mixture model, it’s easy to calculate the correspondence
  • given the correspondence, it’s easy to estimate the mixture model

  10. Generalized K-Means (EM)

  11. Generalized K-Means
  • Converges!
  • Proof [Neal/Hinton, McLachlan/Krishnan]:
  • neither the E-step nor the M-step decreases the data likelihood
  • may converge at a saddle point

  12. Idea
  • Data generated from a mixture of Gaussians
  • Latent variables: correspondence between data items and Gaussians

  13. Learning a Gaussian Mixture (with known covariance)
  • E-Step
  • M-Step
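The E-step/M-step pair this slide refers to can be sketched for a 1-D two-component mixture with known, shared variance. This is an illustrative sketch (mixing weights are fixed at 0.5 to keep it short; all names are my own):

```python
import math

# EM for a 1-D mixture of two Gaussians with known variance:
# E-step computes soft correspondences (responsibilities),
# M-step re-estimates the means as responsibility-weighted averages.

def em_known_variance(data, means, var=1.0, iters=50):
    for _ in range(iters):
        # E-step: responsibility of each component for each point.
        resp = []
        for x in data:
            lik = [math.exp(-(x - m) ** 2 / (2 * var)) for m in means]
            s = sum(lik)
            resp.append([l / s for l in lik])
        # M-step: each mean is the responsibility-weighted average.
        means = [sum(r[k] * x for r, x in zip(resp, data)) /
                 sum(r[k] for r in resp) for k in range(len(means))]
    return means

data = [-2.1, -1.9, -2.0, 2.0, 1.9, 2.2]
means = em_known_variance(data, [-1.0, 1.0])
```

With hard 0/1 responsibilities instead of soft ones, this reduces exactly to the k-means updates above.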

  14. The Expectation/Maximization (EM) algorithm
  • In general we may imagine that an image comprises L segments (labels)
  • Within segment l, the pixels (feature vectors) have a probability distribution whose parameters describe the data in that segment:
  • mean and variance of the grey levels
  • mean vector and covariance matrix of the colours
  • texture parameters

  15. The Expectation/Maximization (EM) algorithm

  16. The Expectation/Maximization (EM) algorithm
  • Once again a chicken-and-egg problem arises:
  • if we knew the segment parameters, we could obtain a labelling for each pixel by simply choosing the label that maximizes the probability of its feature vector
  • if we knew the label for each pixel, we could obtain the parameters by using a simple maximum likelihood estimator
  • The EM algorithm is designed to deal with this type of problem, but it frames it slightly differently:
  • it regards segmentation as a missing (or incomplete) data estimation problem

  17. The Expectation/Maximization (EM) algorithm
  • The incomplete data are just the measured pixel grey levels or feature vectors
  • We can define a probability distribution over the incomplete data
  • The complete data are the measured grey levels or feature vectors plus a mapping function which indicates the labelling of each pixel
  • Given the complete data (pixels plus labels) we can easily work out estimates of the parameters
  • But from the incomplete data alone, no closed-form solution exists

  18. The Expectation/Maximization (EM) algorithm
  • Once again we resort to an iterative strategy and hope that we get convergence
  • The algorithm is as follows:
  • initialize an estimate of the parameters
  • repeat:
  • Step 1 (E-step): obtain an estimate of the labels based on the current parameter estimates
  • Step 2 (M-step): update the parameter estimates based on the current labelling
  • until convergence

  19. The Expectation/Maximization (EM) algorithm
  • A recent approach to applying EM to image segmentation is to assume the image pixels or feature vectors follow a mixture model
  • Generally we assume that each component of the mixture model is a Gaussian
  • a Gaussian mixture model (GMM)

  20. The Expectation/Maximization (EM) algorithm
  • Our parameter space for the distribution now includes the mean vectors and covariance matrices for each component in the mixture, plus the mixing weights
  • We choose a Gaussian for each component because the ML estimate of each parameter in the M-step then has a simple closed form

  21. The Expectation/Maximization (EM) algorithm
  • Define a posterior probability as the probability that pixel j belongs to region l, given the value of its feature vector
  • Using Bayes’ rule we can write this posterior in terms of the component likelihoods and mixing weights
  • This is the E-step of our EM algorithm, as it allows us to assign probabilities to each label at each pixel
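Written out explicitly, the Bayes-rule posterior for pixel j and label l takes the standard mixture-model form (notation assumed here: $x_j$ is the feature vector at pixel $j$, $\pi_l$ the mixing weight and $\theta_l$ the parameters of component $l$):

```latex
p(l \mid x_j) \;=\; \frac{\pi_l \, p(x_j \mid \theta_l)}{\sum_{k=1}^{L} \pi_k \, p(x_j \mid \theta_k)}
```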

  22. The Expectation/Maximization (EM) algorithm
  • The M-step simply updates the parameter estimates using ML estimation
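For a Gaussian mixture, these ML updates are the standard responsibility-weighted estimates (a sketch; $p(l \mid x_j)$ is the E-step posterior and $n$ the number of pixels):

```latex
\pi_l = \frac{1}{n}\sum_j p(l \mid x_j), \qquad
\mu_l = \frac{\sum_j p(l \mid x_j)\, x_j}{\sum_j p(l \mid x_j)}, \qquad
\Sigma_l = \frac{\sum_j p(l \mid x_j)\,(x_j - \mu_l)(x_j - \mu_l)^{\mathsf T}}{\sum_j p(l \mid x_j)}
```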

  23. Segmentation with EM
  Figure from “Color and Texture Based Image Segmentation Using EM and Its Application to Content Based Image Retrieval”, S.J. Belongie et al., Proc. Int. Conf. Computer Vision, 1998, © 1998 IEEE

  24. EM Clustering: Results

  25. Lines and robustness
  • We have one line, and n points
  • Some come from the line, some from “noise”
  • This is a mixture model
  • We wish to determine:
  • the line parameters
  • p(comes from line)

  26. Estimating the mixture model
  • Introduce a set of hidden variables, d, one for each point; they are one when the point is on the line, and zero when off
  • If these are known, the negative log-likelihood takes a simple form (the line’s parameters are f, c)
  • Here K is a normalising constant and kn is the noise intensity (we’ll choose this later)
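The slide's equation did not survive transcription; one standard way to write this negative log-likelihood (a sketch, assuming the line $x\cos\phi + y\sin\phi + c = 0$ with Gaussian perpendicular noise of variance $\sigma^2$, and writing the slide's f as $\phi$):

```latex
-\log L \;=\; \sum_i \left[\, \delta_i \,\frac{(x_i \cos\phi + y_i \sin\phi + c)^2}{2\sigma^2} \;+\; (1-\delta_i)\, k_n \,\right] \;+\; K
```

Each on-line point ($\delta_i = 1$) pays its squared distance to the line; each noise point ($\delta_i = 0$) pays the constant $k_n$.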

  27. Substituting for delta
  • We shall substitute the expected value of each d_i, for a given q; recall q = (f, c, l)
  • E(d_i) = 1 · P(d_i = 1 | q) + 0 · P(d_i = 0 | q) = P(d_i = 1 | q)
  • Notice that if kn is small and positive, then this value is close to 1 when the point’s distance to the line is small, and close to zero when it is large

  28. Algorithm for line fitting
  • Obtain some start point
  • Now compute the d’s using the formula above
  • Now compute the maximum likelihood estimates:
  • f, c come from fitting to the weighted points
  • l comes from counting
  • Iterate to convergence
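The loop above can be sketched in a few lines. Assumptions here (not from the slides verbatim): the line is $x\cos\phi + y\sin\phi + c = 0$, on-line points have Gaussian perpendicular noise with std `sigma`, the noise process contributes a constant likelihood `kn`, and the weighted refit is a weighted total-least-squares fit; all names are illustrative:

```python
import math

# E-step weight: w_i = exp(-d_i^2/2s^2) / (exp(-d_i^2/2s^2) + kn)
# M-step: weighted total least squares refit of (phi, c).

def fit_weighted_line(points, w):
    # Normal direction from the weighted scatter matrix; c from the centroid.
    sw = sum(w)
    mx = sum(wi * x for wi, (x, y) in zip(w, points)) / sw
    my = sum(wi * y for wi, (x, y) in zip(w, points)) / sw
    sxx = sum(wi * (x - mx) ** 2 for wi, (x, y) in zip(w, points))
    syy = sum(wi * (y - my) ** 2 for wi, (x, y) in zip(w, points))
    sxy = sum(wi * (x - mx) * (y - my) for wi, (x, y) in zip(w, points))
    phi = 0.5 * math.atan2(2 * sxy, sxx - syy) + math.pi / 2  # normal angle
    c = -(math.cos(phi) * mx + math.sin(phi) * my)
    return phi, c

def em_line(points, sigma=0.1, kn=0.05, iters=30):
    w = [1.0] * len(points)                    # start: all points on the line
    for _ in range(iters):
        phi, c = fit_weighted_line(points, w)  # M-step: weighted refit
        w = []                                 # E-step: expected deltas
        for x, y in points:
            d = math.cos(phi) * x + math.sin(phi) * y + c
            on = math.exp(-d * d / (2 * sigma ** 2))
            w.append(on / (on + kn))
        # (l, the mixing weight, could be re-estimated as the mean of w)
    return phi, c, w

# Four points near y = 0 plus one gross outlier.
pts = [(0.0, 0.0), (1.0, 0.02), (2.0, -0.01), (3.0, 0.01), (1.5, 2.0)]
phi, c, w = em_line(pts)
```

At convergence the outlier's weight is driven toward zero, so it no longer influences the fit.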

  29.

  30. The expected values of the deltas at the maximum (notice the one value close to zero).

  31. Closeup of the fit

  32. Choosing parameters
  • What about the noise parameter, and the sigma for the line?
  • several methods:
  • from first-principles knowledge of the problem (seldom really possible)
  • play around with a few examples and choose (usually quite effective, as the precise choice doesn’t matter much)
  • Notice that if kn is large, this says that points very seldom come from noise, however far from the line they lie
  • this usually biases the fit, by pushing outliers into the line
  • Rule of thumb: it’s better to fit to the better-fitting points, within reason; if this is hard to do, then the model could be the problem

  33. Other examples
  • Segmentation:
  • a segment is a Gaussian that emits feature vectors (which could contain colour; or colour and position; or colour, texture and position)
  • segment parameters are the mean and (perhaps) covariance
  • if we knew which segment each point belonged to, estimating these parameters would be easy
  • the rest is along the same lines as fitting a line
  • Fitting multiple lines:
  • rather like fitting one line, except there are more hidden variables
  • easiest is to encode them as an array of hidden variables representing a table, with a one where the i’th point comes from the j’th line and zeros otherwise
  • the rest is along the same lines as above

  34. Issues with EM
  • Local maxima:
  • can be a serious nuisance in some problems
  • no guarantee that we have reached the “right” maximum
  • Starting:
  • using k-means to cluster the points is often a good idea

  35. Local maximum

  36. which is an excellent fit to some points

  37. and the deltas for this maximum

  38. A dataset that is well fitted by four lines

  39. Result of EM fitting, with one line (or at least, one available local maximum).

  40. Result of EM fitting, with two lines (or at least, one available local maximum).

  41. Seven lines can produce a rather logical answer

  42. Motion segmentation with EM
  • Model an image pair (or video sequence) as consisting of regions of parametric motion
  • affine motion is popular
  • Now we need to:
  • determine which pixels belong to which region
  • estimate the parameters
  • For the likelihood, assume a suitable parametric form
  • Straightforward missing variable problem; the rest is calculation

  43. Three frames from the MPEG “flower garden” sequence
  Figure from “Representing Images with Layers”, by J. Wang and E.H. Adelson, IEEE Transactions on Image Processing, 1994, © 1994 IEEE

  44. Grey level shows the region number with the highest probability; segments and the motion fields associated with them
  Figure from “Representing Images with Layers”, by J. Wang and E.H. Adelson, IEEE Transactions on Image Processing, 1994, © 1994 IEEE

  45. If we use multiple frames to estimate the appearance of a segment, we can fill in occlusions; so we can re-render the sequence with some segments removed.
  Figure from “Representing Images with Layers”, by J. Wang and E.H. Adelson, IEEE Transactions on Image Processing, 1994, © 1994 IEEE

  46. Some generalities
  • Many, but not all, problems that can be attacked with EM can also be attacked with RANSAC
  • need to be able to get a parameter estimate from a manageably small number of random choices
  • RANSAC is usually better
  • We didn’t present EM in its most general form
  • in the general form, the likelihood may not be a linear function of the missing variables
  • in this case, one takes an expectation of the likelihood, rather than substituting expected values of the missing variables
  • this issue doesn’t seem to arise in vision applications

  47. Model Selection
  • We wish to choose a model to fit to data
  • e.g. is it a line or a circle?
  • e.g. is this a perspective or an orthographic camera?
  • e.g. is there an aeroplane there, or is it noise?
  • Issue:
  • in general, models with more parameters will fit a dataset better, but are poorer at prediction
  • this means we can’t simply look at the negative log-likelihood (or fitting error)

  48. Top is not necessarily a better fit than bottom (actually, almost always worse)

  49.

  50. We can discount the fitting error with some term in the number of parameters in the model.
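The slide does not name a specific discount, but AIC and BIC are two standard such terms; a small illustrative sketch (the numbers below are made up to show the trade-off, not taken from the slides):

```python
import math

# Penalized fitting error: p = number of parameters, n = number of points.
#   AIC = 2 * (negative log-likelihood) + 2 * p
#   BIC = 2 * (negative log-likelihood) + p * log(n)

def aic(neg_log_lik, n_params):
    return 2.0 * neg_log_lik + 2.0 * n_params

def bic(neg_log_lik, n_params, n_points):
    return 2.0 * neg_log_lik + n_params * math.log(n_points)

# A richer model fits a little better but pays a larger penalty:
simple = bic(neg_log_lik=105.0, n_params=2, n_points=100)    # one line
complex_ = bic(neg_log_lik=103.0, n_params=14, n_points=100) # many lines
```

Here the richer model's small gain in fit does not pay for its extra parameters, so the simpler model scores better.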
