Hierarchical Models of Vision: Machine Learning/Computer Vision Alan Yuille UCLA: Dept. of Statistics. Joint appointments: Computer Science, Psychiatry, Psychology. Dept. of Brain and Cognitive Engineering, Korea University
Structure of Talk • Comments on the relations between Cognitive Science and Machine Learning. • Comments on Cog. Sci., ML, and Neuroscience. • Three related Hierarchical Machine Learning Models. • (I) Convolutional Networks. • (II) Structured Discriminative Models. • (III) Grammars and Compositional Models. • The examples are from vision, but the techniques are generally applicable.
Cognitive Science helps Machine Learning • Cognitive Science is useful to ML because the human visual system has many desirable properties (not present in most ML systems): • (i) flexible, adaptive, and robust; • (ii) capable of learning from limited data, with the ability to transfer; • (iii) able to perform multiple tasks; • (iv) closely coupled to reasoning, language, and other cognitive abilities. • Cognitive Scientists search for fundamental theories, not incremental pragmatic solutions.
Cognitive Science and Machine Learning • Machine Learning is useful to Cog. Sci. because it has experience dealing with complex tasks on huge datasets (e.g., the fundamental problem of vision). • Machine Learning – and Computer Vision – has developed a very large number of mathematical and computational techniques, which seem necessary to deal with the complexities of the world. • Data drives the modeling tools: simple data requires only simple tools, and simple tasks also need only simple tools (a point neglected by CV).
Combining Cognitive and ML • Augmented Reality – we need computer systems that can interact with humans. • How can a visually impaired person best be helped by a ML/CV system? They want to ask the computer questions (who was that person?), i.e., interact with it as if it were human. Turing tests for vision (S. Geman and D. Geman). • Image Analyst (Medicine, Military) – wants a ML system that can reason about images, make analogies to other images, and so on.
Data Set Dilemmas • Too complicated a dataset: requires a lot of engineering to perform well ("neural network tricks"; N students testing 100 x N parameter settings). • Too simple a dataset: results may not generalize to the real world, and work may focus on side issues. • Tyranny of Datasets: you can only evaluate performance on a limited set of tasks (e.g., "object classification" but not "object segmentation", "cat part detection", or "what is the cat doing?").
Datasets and Generalization • Machine Learning methods are tested on large benchmarked datasets. • Two of the applications involve 20,000 and 1,000,000 images. • Critical Issues of Machine Learning: • (I) Learnability: will the results generalize to new datasets? • (II) Inference: can we compute properties fast enough? • Theoretical Results: Probably Approximately Correct (PAC) Theorems.
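To illustrate what such PAC guarantees look like, here is the standard sample-complexity bound for a finite hypothesis class H in the realizable case (a textbook result, not a formula from the talk):

```latex
% With probability at least 1 - \delta, any hypothesis in H that is
% consistent with m i.i.d. training examples has true error at most
% \epsilon, provided
m \;\ge\; \frac{1}{\epsilon}\left(\ln|H| + \ln\frac{1}{\delta}\right)
```

The bound makes the talk's point concrete: generalization to new data can be guaranteed, but the required sample size grows with the richness of the hypothesis class.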
Vision: The Data and the Problem • Complexity, Variability, and Ambiguity of Images. • Enormous range of visual tasks that can be performed. The set of all images is practically infinite. • 30,000 objects, 1,000 scenes. • How can humans interpret images in 150 msec? • Fundamental Problem: complexity.
Neuroscience: Bio-Inspired • Theoretical Models of the Visual Cortex (e.g., T. Poggio) are hierarchical and closely related to convolutional nets. • Generative models (later in this talk) may help explain the increasing evidence of top-down mechanisms. • Behavior-to-Brain: propose models for the visual cortex that can be tested by fMRI, multi-electrodes, and related techniques. • (multi-electrodes T.S. Lee, fMRI D.K. Kersten). • Caveat: real neurons don’t behave like neurons in textbooks… • Conjecture: Structure of the Brain and ML systems is driven by the statistical structure of the environment. The Pattern Theory Manifesto.
Hierarchical Models of Vision • Why Hierarchies? • Bio-inspired: mimics the structure of the human/macaque visual system. • Computer Vision Architectures: low-, middle-, high-level. From ambiguous low-level to unambiguous high-level. • Optimal design: for representing, learning, and retrieving image patterns?
Three Types of Hierarchies: • (I) Convolutional Neural Networks: ImageNet Dataset. • Krizhevsky, Sutskever, and Hinton (2012). • LeCun, Salakhutdinov. • (II) Discriminative Part-Based Models (Felzenszwalb, McAllester, Ramanan 2008; L. Zhu et al. 2010). PASCAL dataset. • (III) Generative Models. Grammars and Compositional Models. (Geman, Mumford, S.C. Zhu, L. Zhu, …).
Example I: Convolutional Nets • Krizhevsky, Sutskever, and Hinton (2012). • Dataset: ImageNet (Fei-Fei Li). • 1,000,000 images. • 1,000 objects. • Task: detect and localize objects.
Example I: Neural Network • Architecture: Neural Network. • Convolutional: each hidden unit applies the same localized linear filter to the input.
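To make the weight-sharing concrete, here is a minimal NumPy sketch (illustrative only, not the AlexNet implementation) of one convolutional filter applied at every location of an image:

```python
import numpy as np

def conv2d_valid(image, kernel):
    """Slide one shared linear filter over the image ("valid" convolution).

    Every output unit applies the *same* kernel weights to its local
    input patch -- the weight sharing that defines a convolutional layer.
    """
    H, W = image.shape
    kh, kw = kernel.shape
    out = np.zeros((H - kh + 1, W - kw + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            patch = image[i:i + kh, j:j + kw]
            out[i, j] = np.sum(patch * kernel)  # same weights at every location
    return out

# A vertical-edge filter responds along the boundary of a dark/bright split.
image = np.zeros((5, 5))
image[:, 3:] = 1.0
edge_filter = np.array([[-1.0, 1.0]])  # 1x2 filter
response = conv2d_valid(image, edge_filter)
```

Because the same filter is reused everywhere, a convolutional layer has far fewer parameters than a fully connected one, which is part of why such networks can be trained on image data at all.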
Example I: Model Details • (Slide shows the architecture diagram of the new model.)
Example I: Learnt Filters • Image features learnt – the usual suspects.
Example I: Conclusion • This convolutional net was the most successful algorithm on the ImageNet Challenge 2012. • It requires a very large amount of data to train. • The devil is in the details ("tricks for neural networks"). • The algorithm was implemented on Graphics Processing Units (GPUs) to deal with the complexity of inference and learning.
Example II: Structured Discriminative Models • Star Models: Felzenszwalb, McAllester, Ramanan 2008. • Objects are made from "parts" (not semantic parts). • Discriminative Models. • Hierarchical variant: L. Zhu, Y. Chen, et al. 2010. • Learning: latent support-vector machines. • Inference: window search plus dynamic programming. • Application: PASCAL object detection challenge. 20,000 images, 20 objects. • Task: identify and localize (bounding box).
Example II: Mixture Models • Each object is represented by six models – to allow for different viewpoints. • Energy function/probabilistic model defined on a hierarchical graph. • Nodes represent parts which can move relative to each other, enabling spatial deformations (parent–child spatial constraints; parts: blue (1), yellow (9), purple (36)). • Constraints on deformations are imposed by potentials on the graph structure. • (Figures: deformations of a horse; deformations of a car.)
Example II: Mixture Models: • Each object is represented by 6 hierarchical models (mixture of models). • These mixture components account for pose/viewpoint changes.
Example II: Features and Potentials • Edge-Like Cues: Histograms of Gradients (HOGs). • Appearance Cues: Bag of Words models (dictionary obtained by clustering SIFT or HOG features). • Learning: (i) weights for the importance of features, (ii) weights for the spatial relations between parts.
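A toy sketch of the HOG idea may help: bin gradient orientations over a patch, weighted by gradient magnitude. (This is an illustration only; real HOG descriptors add cells, blocks, and contrast normalization, and the function name is invented here.)

```python
import numpy as np

def orientation_histogram(patch, n_bins=9):
    """Toy HOG-style descriptor: histogram of gradient orientations
    over a grey-level patch, weighted by gradient magnitude."""
    gy, gx = np.gradient(patch.astype(float))        # row and column gradients
    mag = np.hypot(gx, gy)
    ang = np.mod(np.arctan2(gy, gx), np.pi)          # unsigned orientation in [0, pi)
    bins = np.minimum((ang / np.pi * n_bins).astype(int), n_bins - 1)
    hist = np.zeros(n_bins)
    np.add.at(hist, bins.ravel(), mag.ravel())       # accumulate magnitude per bin
    return hist / (hist.sum() + 1e-8)                # normalize

# A patch whose brightness increases left to right has one dominant orientation.
patch = np.tile(np.arange(8.0), (8, 1))
h = orientation_histogram(patch)
```

The histogram is invariant to small spatial shifts of the edge, which is exactly the robustness the part models exploit.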
Example II: Learning by Latent SVM • The graph structure is known. • The training data is partly supervised: it gives image regions labeled object/non-object. • But you do not know the mixture (viewpoint) component or the positions of the parts. These are hidden variables. • Learning: Latent Support Vector Machine (LSVM). • Learn the weights while simultaneously estimating the hidden variables (part positions, viewpoint).
Example II: Details (1) • Each hierarchy is a 3-layer tree. • Each node represents a part. • Total of 46 nodes: (1 + 9 + 4 x 9). • Each node has a spatial position (parts can "move" or are "active"). • Graph edges from parents to children impose spatial constraints.
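The node count above can be sketched directly: one root (the object), 9 mid-level parts, and 4 subparts per part. (The node names below are illustrative, not from the paper.)

```python
# Build the 3-layer part hierarchy: root -> 9 parts -> 4 subparts each,
# giving 1 + 9 + 36 = 46 nodes in total.

def build_hierarchy(n_parts=9, n_subparts=4):
    tree = {"root": [f"part{i}" for i in range(n_parts)]}
    for i in range(n_parts):
        tree[f"part{i}"] = [f"part{i}.{j}" for j in range(n_subparts)]
    return tree

tree = build_hierarchy()
n_nodes = 1 + sum(len(children) for children in tree.values())
```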
Example II: Details (2) • The object model has variables: 1. p – the positions of the parts. 2. V – the mixture component (e.g., pose). 3. y – whether the object is present or not. 4. w – the model parameters (to be learnt). • Note: during learning the part positions and the pose are unknown – so they are latent variables and will be written as h = (p, V).
Example II: Details (3) • The "energy" of the model is defined to be E(x, p, V; w) = w · Φ(x, p, V), where x is the image in the region. • The object is detected by solving F(x) = max_{p,V} w · Φ(x, p, V). • If F(x) > 0 then we have detected the object. • If so, the maximizing (p, V) specifies the mixture component and the positions of the parts.
Example II: Details (4) • There are three types of potential terms: (1) spatial terms, which specify the distribution on the positions of the parts; (2) data terms for the edges of the object, defined using HOG features; (3) regional appearance data terms, defined by histograms of words (HOWs – using grey SIFT features and K-means).
Example II: Details (5) • Edge-like: Histograms of Oriented Gradients, HOGs (upper row). • Regional: Histograms of Words (bottom row). • Dense sampling: 13,950 HOGs + 27,600 HOWs.
Example II: Details (6) • Detecting an object requires solving max_{p,V} w · Φ(x, p, V) for each image region x. • We solve this by scanning over the subwindows of the image, using dynamic programming to estimate the part positions, and doing exhaustive search over the mixture components.
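The dynamic programming step can be sketched as max-sum message passing on the part tree. The toy below (my own stand-in, not the paper's code) uses a star model with 1-D part positions: each child's message to the root trades appearance score against a quadratic deformation cost.

```python
import numpy as np

def best_score(appearance, deform_weight=1.0):
    """Tree-structured DP for part placement (star model: root + children).

    appearance: dict part -> array of appearance scores over positions.
    Each child pays a quadratic penalty for displacement from the root.
    """
    positions = np.arange(len(appearance["root"]))
    total = appearance["root"].astype(float).copy()
    for part, scores in appearance.items():
        if part == "root":
            continue
        # Message from child to root: for each root position r, the best
        # child position trading appearance against deformation cost.
        msg = np.array([
            np.max(scores - deform_weight * (positions - r) ** 2)
            for r in positions
        ])
        total += msg
    return float(np.max(total))

# Toy appearance scores over three candidate positions.
appearance = {
    "root":  np.array([0.0, 2.0, 0.0]),
    "wheel": np.array([1.0, 0.0, 3.0]),
}
score = best_score(appearance)
```

Because messages are computed child-by-child, the cost is linear in the number of parts rather than exponential in their joint placements, which is what makes window scanning feasible.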
Example II: Details (7) • The input to learning is a set of labeled image regions. • Learning requires us to estimate the parameters w • while simultaneously estimating the hidden variables h = (p, V).
Example II: Details (8) • We use Yu and Joachims' (2009) formulation of latent SVM. • This specifies a non-convex criterion to be minimized, which can be re-expressed as a convex part plus a concave part.
Example II: Details (9) • Yu and Joachims (2009) propose the CCCP algorithm (Yuille and Rangarajan 2001) to minimize this criterion. • This iterates between estimating the hidden variables and the parameters (like the EM algorithm). • We propose a variant – incremental CCCP – which is faster. • Result: our method works well for learning the parameters without complex initialization.
Example II: Details (10) • Iterative Algorithm: • Step 1: fill in the latent positions with the best score (by DP). • Step 2: solve the structural SVM problem using a partial negative training set (incrementally enlarged). • Initialization: • No pretraining (no clustering). • No displacement of the nodes (no deformation). • Pose assignment: maximum overlap. • Simultaneous multi-layer learning.
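The two-step alternation can be sketched on a toy problem (everything below is invented for illustration: the feature map, data, and learning rate are not from the paper). The latent variable h chooses which window of the input is used as the feature; Step 1 imputes the best h for each positive, Step 2 takes hinge-loss subgradient steps with h fixed.

```python
import numpy as np

def phi(x, h):
    """Latent variable h shifts which window of x forms the feature."""
    return x[h:h + 2]

def train(pos, neg, n_latent=3, iters=20, lr=0.1):
    w = np.zeros(2)
    for _ in range(iters):
        # Step 1: impute latent variables for positives (best-scoring shift).
        h_pos = [max(range(n_latent), key=lambda h: w @ phi(x, h)) for x in pos]
        # Step 2: hinge-loss subgradient steps with the latent variables fixed;
        # negatives are scored with their own most-violating shift.
        for x, h in zip(pos, h_pos):
            if w @ phi(x, h) < 1:
                w += lr * phi(x, h)
        for x in neg:
            h = max(range(n_latent), key=lambda h: w @ phi(x, h))
            if w @ phi(x, h) > -1:
                w -= lr * phi(x, h)
    return w

pos = [np.array([0., 0., 2., 2.]), np.array([2., 2., 0., 0.])]
neg = [np.array([-1., -1., -1., -1.])]
w = train(pos, neg)
```

Like EM, each half-step improves the objective for the variables it touches, which is the intuition behind CCCP's convergence to a local optimum.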
Example II: Conclusion • All current methods that perform well on the PASCAL Object Detection Challenge use these types of models. • Performance is fairly good for medium to large objects. Errors are understandable – cat versus dog, car versus train. • But it seems highly unlikely that this is how humans perform these tasks – humans can probably learn from much less data. • The devil is in the details. Small "engineering" changes can yield big improvements. • Improved results come from combining these "top-down" object models with "bottom-up" edge cues: Fidler, Mottaghi, Yuille, Urtasun. CVPR 2013.
Example III: Grammars/Compositional Models • Generative models of objects and scenes. • These models have explicit representations of parts – e.g., they can "parse" objects instead of just detecting them. • Explicit representations give the ability to perform multiple tasks (arguably closer to human cognition). • Part sharing – efficiency of inference and learning. • Adaptive and flexible. Can learn from little data. • Tyranny of Datasets: "will they work on PASCAL?".
Example III: Generative Models • Basic Grammars (Grenander, Fu, Mjolsness, Biederman). • Images are generated from dictionaries of elementary components – with stochastic rules for spatial and structural relations.
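A minimal sketch of such a stochastic grammar (the rules and probabilities below are invented for illustration, not a model from the talk): each non-terminal rewrites into components according to weighted rules, and sampling top-down generates a parse tree whose leaves are elementary image tokens.

```python
import random

# Rules: symbol -> list of (right-hand side, probability).
# Terminals ("edge-token") are the elementary dictionary components.
RULES = {
    "scene":  [(["object"], 0.7), (["object", "object"], 0.3)],
    "object": [(["part", "part"], 0.6), (["part", "part", "part"], 0.4)],
    "part":   [(["edge-token"], 1.0)],
}

def sample(symbol, rng):
    """Recursively expand a symbol into a parse tree [symbol, children]."""
    if symbol not in RULES:                  # terminal symbol
        return symbol
    expansions, weights = zip(*[(rhs, p) for rhs, p in RULES[symbol]])
    rhs = rng.choices(expansions, weights=weights)[0]
    return [symbol, [sample(s, rng) for s in rhs]]

rng = random.Random(0)
parse = sample("scene", rng)
```

Inverting this generative process, i.e. recovering the parse tree that produced an observed image, is exactly the analysis-by-synthesis view described next.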
Example III: Analysis by Synthesis • Analyze an image by inverting image formation. • Inverse problem: determine how the data was generated, how was it caused? • Inverse computer graphics.
Example III: Real Images • Image Parsing: (Z. Tu, X. Chen, A.L. Yuille, and S.C. Zhu 2003). • Learn probabilistic models for the visual patterns that can appear in images. • Interpret/understand an image by decomposing it into its constituent parts. • Inference algorithm: bottom-up and top-down.
Example III: Advantages • Rich Explicit Representations enable: • Understanding of objects, scenes, and events. • Reasoning about functions and roles of objects, goals and intentions of agents, predicting the outcomes of events. SC Zhu – MURI.
Example III: Advantages • Ability to transfer between contexts and generalize or extrapolate (e.g., from Cow to Yak). Reduces the hypothesis space – PAC Theory. • Ability to reason about the system, intervene, do diagnostics. • Allows the system to answer many different questions based on the same underlying knowledge structure. • Scales up to multiple objects by part-sharing.
Example III: Car Detection • Kokkinos and Yuille 2010. A 3-layer model. • Object made from parts – Car = Red-Part AND Blue-Part AND Green-Part. • Parts are made by AND-ing contours – Red-Part = Con-1 AND Con-2 … • These contours correspond to AND-ing tokens extracted from the image. • The model has flexible geometry to deal with different types of cars: an SUV looks different than a Prius. • Parts move relative to the object; contours move relative to the parts. • This spatial variation is quantified by a probability distribution learnt from data.
Example III: Analogy – Building a Puzzle • Bottom-up solution: combine pieces until you build the car. Does not exploit the box's cover. • Top-down solution: try fitting each piece to the box's cover. Most pieces are uniform/irrelevant. • Bottom-up/top-down solution: form car-like structures, but use the cover to suggest combinations. Uses AI from McAllester and Felzenszwalb.