
Overview of machine learning for computer vision

Overview of machine learning for computer vision. T-11 Computer Vision, University of Ioannina, Christophoros Nikou. Images and slides from: James Hayes, Brown University, Computer Vision course; D. Forsyth and J. Ponce, Computer Vision: A Modern Approach, Prentice Hall, 2003.


Presentation Transcript


  1. Overview of machine learning for computer vision. T-11 Computer Vision, University of Ioannina, Christophoros Nikou. Images and slides from: James Hayes, Brown University, Computer Vision course; D. Forsyth and J. Ponce, Computer Vision: A Modern Approach, Prentice Hall, 2003.

  2. Machine learning: Overview • Core of ML: making predictions or decisions from data. • This overview will not go into depth about the statistical underpinnings of learning methods. • We will explore some algorithms in depth in later classes.

  3. Impact of Machine Learning • Machine Learning is arguably the greatest export from computing to other scientific fields.

  4. Machine Learning Applications Slide: Isabelle Guyon

  5. Image Categorization • Training pipeline: Training Images → Image Features → Classifier Training (using Training Labels) → Classifier parameters.

  6. Image Categorization • Training pipeline: Training Images → Image Features → Classifier Training (using Training Labels) → Classifier parameters. • Testing pipeline: Test Image → Image Features → Trained Classifier → Prediction (e.g. "Outdoor").
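
A minimal sketch of this two-stage pipeline, assuming scikit-learn and a placeholder feature extractor (the images, labels, and extract_features function below are stand-ins, not part of the slides):

```python
# Hypothetical sketch of the train/test pipeline using scikit-learn.
# extract_features() stands in for any image feature extractor.
import numpy as np
from sklearn.svm import LinearSVC

def extract_features(image):
    # Placeholder: flatten and normalize; a real system would use
    # histograms, SIFT, etc. as discussed in the following slides.
    return image.reshape(-1).astype(float) / 255.0

# Training: images -> features -> classifier training (with training labels)
train_images = [np.random.randint(0, 256, (32, 32)) for _ in range(100)]  # stand-in data
train_labels = np.random.randint(0, 2, 100)                               # e.g. indoor / outdoor
X_train = np.stack([extract_features(im) for im in train_images])
clf = LinearSVC().fit(X_train, train_labels)

# Testing: new image -> same features -> trained classifier -> prediction
test_image = np.random.randint(0, 256, (32, 32))
prediction = clf.predict(extract_features(test_image)[None, :])
print("Predicted label:", prediction[0])
```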

  7. Example: Scene Categorization • Is this a kitchen?

  8. Image features • Training pipeline: Training Images → Image Features → Classifier Training (using Training Labels) → Trained Classifier.

  9. General Principles of Representation • Coverage • Ensure that all relevant info is captured • Concision • Minimize number of features without sacrificing coverage • Directness • Ideal features are independently useful for prediction

  10. Image representations • Templates • Intensity, gradients, etc. • Histograms • Color, texture, SIFT descriptors, etc.
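
As an illustration of the histogram representations mentioned above, here is a short sketch that builds a per-channel color histogram feature (the bin count and the random stand-in image are assumptions):

```python
# Sketch of a simple histogram representation (assuming an RGB uint8 image).
import numpy as np

def color_histogram(image, bins=8):
    """Concatenate per-channel intensity histograms into one feature vector."""
    feats = []
    for c in range(3):
        h, _ = np.histogram(image[..., c], bins=bins, range=(0, 256))
        feats.append(h / h.sum())  # normalize so the feature does not depend on image size
    return np.concatenate(feats)

image = np.random.randint(0, 256, (64, 64, 3), dtype=np.uint8)  # stand-in image
print(color_histogram(image).shape)  # (24,) = 3 channels x 8 bins
```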

  11. Classifiers • Training pipeline: Training Images → Image Features → Classifier Training (using Training Labels) → Trained Classifier.

  12. Learning a classifier • Given some set of features with corresponding labels, learn a function to predict the labels from the features. [Figure: training points labeled x and o plotted in a 2-D feature space with axes x1 and x2.]

  13. Many classifiers to choose from • SVM • Neural networks • Naïve Bayes • Bayesian network • Logistic regression • Randomized Forests • Boosted Decision Trees • K-nearest neighbor • RBMs • Etc. Which is the best one?

  14. One way to think about it… • Training labels dictate whether two examples are the same or different, in some sense • Features and distance measures define visual similarity • Classifiers try to learn weights or parameters for features and distance measures so that visual similarity predicts label similarity

  15. Claim: The decision to use machine learning is more important than the choice of a particular learning method. If you hear somebody talking of a specific learning mechanism, be wary (e.g. the YouTube comment "Oooh, we could plug this into a neural network and blah blah blah").

  16. Dimensionality Reduction • PCA, ICA, LLE, Isomap • PCA is the most important technique to know. It takes advantage of correlations in data dimensions to produce the best possible lower dimensional representation, according to reconstruction error. • PCA should be used for dimensionality reduction, not for discovering patterns or making predictions. Don't try to assign semantic meaning to the bases.
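
A small sketch of PCA used purely for dimensionality reduction, as the slide recommends (scikit-learn assumed; the data and number of components are placeholders):

```python
# Minimal PCA sketch: project feature vectors onto the top principal
# components, which minimizes reconstruction error among linear projections.
import numpy as np
from sklearn.decomposition import PCA

X = np.random.randn(500, 128)             # e.g. 500 descriptors of dimension 128
pca = PCA(n_components=16).fit(X)          # learn a 16-dimensional basis
X_low = pca.transform(X)                   # compressed representation
X_rec = pca.inverse_transform(X_low)       # best linear reconstruction from 16 dims
err = np.mean((X - X_rec) ** 2)
print("Reconstruction MSE:", err, "variance kept:", pca.explained_variance_ratio_.sum())
```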

  17. http://fakeisthenewreal.org/reform/

  18. http://fakeisthenewreal.org/reform/

  19. Clustering example: image segmentation Goal: Break up the image into meaningful or perceptually similar regions

  20. Segmentation for feature support • [Figure: two 50x50 image patches.] Slide: Derek Hoiem

  21. Segmentation for efficiency [Felzenszwalb and Huttenlocher 2004] [Shi and Malik 2001] Slide: Derek Hoiem [Hoiem et al. 2005, Mori 2005]

  22. Segmentation as a result [Rother et al. 2004]

  23. Types of segmentations Oversegmentation Undersegmentation Multiple Segmentations

  24. Major processes for segmentation • Bottom-up: group tokens with similar features • Top-down: group tokens that likely belong to the same object [Levin and Weiss 2006]

  25. Application: Interactive segmentation • Goals • User cuts an object to paste into another image • User forms a matte to cope with e.g. hair • weights between 0 and 1 to mix pixels with background • Interactions • mark some foreground, background pixels with strokes • put a box around foreground • Technical problem • allocate pixels to foreground/background class • consistent with interaction information • segments are internally coherent and different from one another
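
One way to realize the "box around foreground" interaction is GrabCut (Rother et al. 2004), available in OpenCV; the sketch below is an assumed setup (the image path and rectangle are placeholders), not the specific pipeline of these slides:

```python
# Sketch of box-based interactive segmentation with OpenCV's GrabCut.
import cv2
import numpy as np

image = cv2.imread("photo.jpg")                    # hypothetical input image
mask = np.zeros(image.shape[:2], np.uint8)
rect = (50, 50, 300, 400)                          # user-drawn box around the foreground
bgd_model = np.zeros((1, 65), np.float64)          # internal background GMM state
fgd_model = np.zeros((1, 65), np.float64)          # internal foreground GMM state

cv2.grabCut(image, mask, rect, bgd_model, fgd_model, 5, cv2.GC_INIT_WITH_RECT)

# Pixels marked (probably) foreground form the binary matte.
fg = np.where((mask == cv2.GC_FGD) | (mask == cv2.GC_PR_FGD), 1, 0).astype(np.uint8)
cut_out = image * fg[:, :, None]
cv2.imwrite("cut_out.png", cut_out)
```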

  26. Application: Interactive segmentation

  27. Application: Interactive segmentation

  28. Application: Interactive segmentation

  29. Application: Forming image regions • Pixels are too small and too detailed a representation for • recognition • establishing correspondences in two images • … • Superpixels • Small groups of pixels that are • clumpy, coarse • similar • a reasonable representation of the underlying pixels
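
A sketch of one common superpixel method, SLIC from scikit-image (the image path and parameters are placeholders; the slides do not prescribe a specific algorithm):

```python
# Sketch of superpixel extraction with SLIC from scikit-image.
from skimage import io, segmentation, color

image = io.imread("scene.jpg")                       # hypothetical RGB image
labels = segmentation.slic(image, n_segments=200, compactness=10, start_label=1)
print("number of superpixels:", labels.max())

# Each superpixel can be summarized by its mean color, giving a coarse
# but faithful representation of the underlying pixels.
mean_color = color.label2rgb(labels, image, kind="avg")
```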

  30. Application: Forming image regions

  31. Application: Forming image regions Superpixels

  32. Clustering Clustering: group together similar points and represent them with a single token Key Challenges: 1) What makes two points/images/patches similar? 2) How do we compute an overall grouping from pairwise similarities? Slide: Derek Hoiem

  33. Why do we cluster? • Summarizing data • Look at large amounts of data • Patch-based compression or denoising • Represent a large continuous vector with the cluster number • Counting • Histograms of texture, color, SIFT vectors • Segmentation • Separate the image into different regions • Prediction • Images in the same cluster may have the same labels Slide: Derek Hoiem

  34. How do we cluster? • K-means • Iteratively re-assign points to the nearest cluster center • Agglomerative clustering • Start with each point as its own cluster and iteratively merge the closest clusters • Divisive clustering • Start with the whole set of points as one cluster and iteratively generate more clusters • Mean-shift clustering • Estimate modes of pdf • Spectral clustering • Split the nodes of a graph whose links carry similarity weights
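
The sketch below runs four of the clustering families listed above on the same synthetic 2-D points, using scikit-learn (the data and parameters are assumptions):

```python
# Sketch comparing several clustering algorithms on synthetic 2-D data.
import numpy as np
from sklearn.cluster import KMeans, AgglomerativeClustering, MeanShift, SpectralClustering

rng = np.random.default_rng(0)
# Three well-separated Gaussian blobs.
X = np.vstack([rng.normal(loc, 0.3, size=(50, 2)) for loc in [(0, 0), (3, 0), (0, 3)]])

print("k-means:      ", KMeans(n_clusters=3, n_init=10).fit_predict(X)[:10])
print("agglomerative:", AgglomerativeClustering(n_clusters=3).fit_predict(X)[:10])
print("mean-shift:   ", MeanShift().fit_predict(X)[:10])
print("spectral:     ", SpectralClustering(n_clusters=3).fit_predict(X)[:10])
```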

  35. Clustering for Summarization • Goal: cluster to minimize intra-class variance (dispersion) while preserving information: c*, δ* = argmin_{c,δ} (1/N) Σ_j Σ_i δ_ij (c_i − x_j)², where c_i is a cluster center, x_j is a data point, and δ_ij indicates whether x_j is assigned to c_i. Slide: Derek Hoiem
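
A small numpy sketch that evaluates this dispersion objective for a fixed assignment (the data, centers, and hard assignments below are placeholders):

```python
# Evaluate the intra-cluster dispersion objective for given centers.
import numpy as np

X = np.random.randn(100, 2)                                   # data points x_j
centers = np.array([[0.0, 0.0], [2.0, 2.0]])                  # cluster centers c_i
# Hard assignments: delta_ij = 1 for the nearest center, 0 otherwise.
assign = np.argmin(((X[:, None, :] - centers) ** 2).sum(-1), axis=1)

dispersion = np.mean(((X - centers[assign]) ** 2).sum(axis=1))
print("mean intra-cluster squared distance:", dispersion)
```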

  36. K-means algorithm 1. Randomly select K centers 2. Assign each point to nearest center 3. Compute new center (mean) for each cluster Illustration: http://en.wikipedia.org/wiki/K-means_clustering

  37. K-means algorithm 1. Randomly select K centers 2. Assign each point to nearest center 3. Compute new center (mean) for each cluster 4. Go back to step 2 and repeat Illustration: http://en.wikipedia.org/wiki/K-means_clustering

  38. K-means 1. Initialize cluster centers c^0; t=0 2. Assign each point to the closest center 3. Update cluster centers as the mean of their assigned points 4. Repeat steps 2-3 until no points are re-assigned (t=t+1) Slide: Derek Hoiem
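
A minimal numpy implementation of these steps (random initialization; the data and K below are placeholders):

```python
# Minimal K-means sketch following the steps on the slide above.
import numpy as np

def kmeans(X, K, iters=100, seed=0):
    rng = np.random.default_rng(seed)
    centers = X[rng.choice(len(X), K, replace=False)]      # 1. initialize centers
    assign = np.zeros(len(X), dtype=int)
    for it in range(iters):
        dists = ((X[:, None, :] - centers) ** 2).sum(-1)
        new_assign = dists.argmin(axis=1)                  # 2. assign points to closest center
        if it > 0 and np.array_equal(new_assign, assign):
            break                                          # 4. stop when no point is re-assigned
        assign = new_assign
        for k in range(K):                                 # 3. update centers as cluster means
            if np.any(assign == k):
                centers[k] = X[assign == k].mean(axis=0)
    return centers, assign

X = np.random.randn(300, 2)
centers, labels = kmeans(X, K=3)
print(centers)
```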

  39. K-means converges to a local minimum

  40. K-means: design choices • Initialization • Randomly select K points as initial cluster centers • Or greedily choose K points to minimize residual • Distance measures • Traditionally Euclidean, could be others • Optimization • Will converge to a local minimum • May want to perform multiple restarts

  41. K-means clustering using intensity or color • [Figure: original image, clusters on intensity, clusters on color.]
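
A sketch of this idea: cluster the pixel colors with K-means and recolor each pixel by its cluster center (scikit-learn and scikit-image assumed; the image path and K are placeholders):

```python
# Segment an image by clustering its pixel colors with K-means.
import numpy as np
from skimage import io
from sklearn.cluster import KMeans

image = io.imread("scene.jpg")                    # hypothetical RGB image, shape (H, W, 3)
pixels = image.reshape(-1, 3).astype(float)
km = KMeans(n_clusters=5, n_init=10).fit(pixels)

# Replace every pixel by its cluster center to visualize the segmentation.
segmented = km.cluster_centers_[km.labels_].reshape(image.shape).astype(np.uint8)
io.imsave("segmented.png", segmented)
```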

  42. How to choose the number of clusters? • Minimum Description Length (MDL) principle for model comparison • Minimize the Schwarz Criterion • also called the Bayesian Information Criterion (BIC)
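
One practical sketch of this idea uses Gaussian mixtures in scikit-learn, which expose BIC directly; this stands in for the general criterion rather than the exact formulation on the slide (the data below is synthetic):

```python
# Choose the number of clusters by minimizing BIC over candidate K.
import numpy as np
from sklearn.mixture import GaussianMixture

rng = np.random.default_rng(0)
X = np.vstack([rng.normal(c, 0.3, size=(100, 2)) for c in [(0, 0), (3, 0), (0, 3)]])

for k in range(1, 7):
    gmm = GaussianMixture(n_components=k, random_state=0).fit(X)
    print(f"K={k}  BIC={gmm.bic(X):.1f}")   # pick the K with the lowest BIC
```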

  43. How to evaluate clusters? • Generative • How well are points reconstructed from the clusters? • Discriminative • How well do the clusters correspond to labels? • Purity • Note: unsupervised clustering does not aim to be discriminative Slide: Derek Hoiem

  44. How to choose the number of clusters? • Validation set • Try different numbers of clusters and look at performance • When building dictionaries (discussed later), more clusters typically work better Slide: Derek Hoiem

  45. K-means demos • Information and visualization http://informationandvisualization.de/blog/kmeans-and-voronoi-tesselation-built-processing/demo • Alessandro Giusti home – Clustering http://www.leet.it/home/lale/joomla/component/option,com_wrapper/Itemid,50/ • University of Leicester http://www.math.le.ac.uk/people/ag153/homepage/KmeansKmedoids/Kmeans_Kmedoids.html

  46. K-Means for Segmentation • Pros • Simple and fast method. • Converges to a local minimum. • Cons • Memory-intensive. • Need to pick K. • Sensitive to initialization. • Sensitive to outliers. • Finds “spherical” clusters.

  47. Building Visual Dictionaries • Sample patches from a database • E.g., 128 dimensional SIFT vectors • Cluster the patches • Cluster centers are the dictionary • Assign a codeword (number) to each new patch, according to the nearest cluster
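
A sketch of the dictionary-building steps with scikit-learn K-means (the random descriptors stand in for 128-dimensional SIFT vectors; the dictionary size is an assumption):

```python
# Build a visual dictionary by clustering patch descriptors, then assign
# codewords to new patches by nearest cluster center.
import numpy as np
from sklearn.cluster import KMeans

descriptors = np.random.randn(10000, 128)                      # patches sampled from a database
dictionary = KMeans(n_clusters=256, n_init=4).fit(descriptors)  # cluster centers = dictionary

# A new image's patches become a histogram over codewords (bag of words).
new_desc = np.random.randn(300, 128)
codewords = dictionary.predict(new_desc)                        # nearest cluster index per patch
bow = np.bincount(codewords, minlength=256).astype(float)
bow /= bow.sum()
print(bow.shape)
```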
