
Image Categorization by Learning and Reasoning with Regions



Presentation Transcript


  1. Image Categorization by Learning and Reasoning with Regions Yixin Chen, University of New Orleans James Z. Wang, The Pennsylvania State University Published in Journal of Machine Learning Research 5 (2004) Presented by: Jianhui Chen 09/26/2006

  2. Contents • Introduction • Image segmentation and representation • An extension of Multiple Instance Learning - Diverse Density SVM (DD-SVM) • Comparison between DD-SVM & MI-SVM • Experimental results

  3. Introduction • Image categorization • Images (a) and (b) are "mountains and glaciers" images. • Images (c), (d) and (e) are "skiing" images. • Images (f) and (g) are "beach" images.

  4. Introduction • A new learning technique for region-based image categorization. • An extension of Multiple-Instance Learning (MIL): DD-SVM.

  5. Image Segmentation and Representation • Basic steps Step 1 - Divide the image into subblocks and extract LUV features. Step 2 - Cluster the subblocks using k-means to form regions. Step 3 - Form feature vectors for the regions (classes).

  6. Image Segmentation and Representation • Step 1: (1) Partition the image into non-overlapping blocks of size 4 x 4 pixels. (2) Extract LUV features from each of the blocks, denoted as L, U, V.

  7. Image Segmentation and Representation • Step 1: (3) Apply the Daubechies-4 wavelet transform and compute one feature from each of the HL, LH, and HH bands, denoted $f_{hl}$, $f_{lh}$, and $f_{hh}$. Given the $2 \times 2$ block of coefficients $\{c_{k,l}, c_{k,l+1}, c_{k+1,l}, c_{k+1,l+1}\}$ in a band, the feature is computed as $f = \sqrt{\tfrac{1}{4}\sum_{i=0}^{1}\sum_{j=0}^{1} c_{k+i,l+j}^2}$. (4) Form the feature vector for each subblock by concatenating the color and wavelet features: $(L, U, V, f_{hl}, f_{lh}, f_{hh})$.
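This per-block texture feature can be sketched in a few lines, assuming the per-band feature is the square root of the average squared wavelet coefficient over the 2 x 2 block (the function name is illustrative, not from the paper):

```python
import math

def block_wavelet_feature(coeffs_2x2):
    """Texture feature for one 4x4-pixel block in one wavelet band.

    coeffs_2x2: the 2x2 block of Daubechies-4 coefficients
    {c_{k,l}, c_{k,l+1}, c_{k+1,l}, c_{k+1,l+1}} that covers the block.
    Returns sqrt of the mean of the squared coefficients.
    """
    vals = [c for row in coeffs_2x2 for c in row]
    return math.sqrt(sum(c * c for c in vals) / len(vals))

# One such feature is computed per block for each of the HL, LH, HH bands:
f_hl = block_wavelet_feature([[1.0, 1.0], [1.0, 1.0]])  # -> 1.0
```

Large coefficient magnitudes in a high-frequency band indicate strong local texture, so each block ends up described by three color values and three texture values.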

  8. Image Segmentation and Representation • Step 2: (1) Apply k-means clustering to the block feature vectors. (2) Each resulting cluster corresponds to one region.
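The clustering step can be sketched with a bare-bones k-means over the per-block feature vectors (a toy illustration only; the number of regions k is taken as given here, and this is not the paper's exact segmentation implementation):

```python
import random

def dist2(a, b):
    """Squared Euclidean distance between two feature vectors."""
    return sum((x - y) ** 2 for x, y in zip(a, b))

def kmeans(points, k, iters=50, seed=0):
    """Toy k-means: assign each block's 6-D feature vector to a cluster.

    Blocks that land in the same cluster form one 'region' of the image.
    Returns (labels, centers).
    """
    rng = random.Random(seed)
    centers = rng.sample(points, k)          # init centers from the data
    labels = [0] * len(points)
    for _ in range(iters):
        # Assignment step: each point goes to its nearest center.
        labels = [min(range(k), key=lambda j: dist2(p, centers[j]))
                  for p in points]
        # Update step: move each center to the mean of its members.
        for j in range(k):
            members = [p for p, l in zip(points, labels) if l == j]
            if members:
                centers[j] = [sum(xs) / len(members)
                              for xs in zip(*members)]
    return labels, centers
```

On well-separated block features, e.g. two groups of 6-D vectors near 0 and near 10, the two groups come out as two regions.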

  9. Image Segmentation and Representation • Step 3: (1) Compute the mean of the block feature vectors in each region. (2) Compute the normalized inertia of orders 1, 2, and 3 for each region; these values describe the shape of region Rj. Together, the mean feature vector and the shape features form the feature vector of region Rj.
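The shape feature can be sketched as follows, assuming the common definition of normalized inertia of order gamma, $l(R, \gamma) = \sum_{p \in R} \|p - \bar{p}\|^{\gamma} / |R|^{1 + \gamma/2}$ with $\bar{p}$ the region centroid (an illustration under that assumption; consult the paper for its exact normalization):

```python
import math

def normalized_inertia(pixels, gamma):
    """Normalized inertia of order gamma for a region.

    pixels: list of (x, y) pixel coordinates belonging to the region.
    Assumed definition (see lead-in):
        l(R, gamma) = sum_p ||p - centroid||^gamma / |R|^(1 + gamma/2)
    The normalization makes the value invariant to region size, so orders
    1, 2, 3 together serve as a scale-robust shape descriptor.
    """
    n = len(pixels)
    cx = sum(x for x, _ in pixels) / n
    cy = sum(y for _, y in pixels) / n
    total = sum(math.hypot(x - cx, y - cy) ** gamma for x, y in pixels)
    return total / n ** (1 + gamma / 2)

# Shape descriptor of a region: orders 1, 2, and 3 stacked together.
region = [(0, 0), (2, 0)]
shape = [normalized_inertia(region, g) for g in (1, 2, 3)]
```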

  10. An Extension of Multiple Instance Learning • Basic idea of DD-SVM: (1) An image is treated as a bag consisting of instances (one instance per region). (2) Each bag is mapped to a point in a new feature space, the bag feature space. (3) Standard SVMs are trained in the bag feature space.

  11. An Extension of Multiple Instance Learning • Diverse-Density SVM (DD-SVM) (1) Maximum margin formulation of MIL. (2) Construct the bag feature space based on Diverse Density. (3) Compute region feature vectors from instance feature vectors. (4) A label is attached to each bag, not to individual instances.

  12. An Extension of Multiple Instance Learning • Objective function for DD-SVM (a standard soft-margin SVM in the bag feature space): $\min_{\mathbf{w}, b, \xi} \tfrac{1}{2}\|\mathbf{w}\|^2 + C \sum_i \xi_i$ subject to $y_i(\mathbf{w} \cdot \phi(B_i) + b) \ge 1 - \xi_i$, $\xi_i \ge 0$, where $\phi(\cdot)$ defines the bag feature space, $K(B_i, B_j) = \phi(B_i) \cdot \phi(B_j)$ is a kernel function, and C controls the trade-off between accuracy and regularization.

  13. An Extension of Multiple Instance Learning • The bag classifier is defined as: $y(B) = \mathrm{sign}\left(\sum_i \alpha_i y_i K(\phi(B_i), \phi(B)) + b\right)$ * Assume the bag feature space is given.

  14. An Extension of Multiple Instance Learning • Constructing a Bag Feature Space (1) Diverse Density (2) Learning Instance Prototypes (3) Computing Bag Features

  15. Constructing a Bag Feature Space • Diverse Density DD(x, w): x is a point in the instance feature space; w is a weight vector; Ni is the number of instances in the i-th bag.
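The DD formula itself was a figure on the slide. As a sketch, here is the standard noisy-or diverse density of Maron and Lozano-Perez, which DD-SVM builds on, with a per-dimension weight vector w; the paper's exact weighted variant may differ in detail:

```python
import math

def weighted_dist2(a, b, w):
    """Weighted squared distance ||a - b||_w^2 = sum_k w_k^2 (a_k - b_k)^2."""
    return sum((wk * (ak - bk)) ** 2 for ak, bk, wk in zip(a, b, w))

def diverse_density(x, w, pos_bags, neg_bags):
    """Noisy-or diverse density at point x with weight vector w.

    pos_bags / neg_bags: lists of bags, each bag a list of instance
    feature vectors.  DD(x, w) is large when x is close to at least one
    instance of EVERY positive bag and far from ALL negative instances.
    """
    dd = 1.0
    for bag in pos_bags:
        # Prob. the positive bag "hits" x: 1 - prod_j (1 - e^{-d_j^2})
        dd *= 1.0 - math.prod(
            1.0 - math.exp(-weighted_dist2(inst, x, w)) for inst in bag)
    for bag in neg_bags:
        # Every instance of a negative bag should be far from x.
        for inst in bag:
            dd *= 1.0 - math.exp(-weighted_dist2(inst, x, w))
    return dd
```

A point that appears in every positive bag but in no negative bag scores high, which is exactly the property the next slide exploits when picking instance prototypes.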

  16. Constructing a Bag Feature Space • Learning Instance Prototypes (1) A large value of DD at a point indicates it may fit better with the instances from positive bags than with those from negative bags. (2) Choose local maximizers of DD as instance prototypes. (3) An instance prototype represents a class of instances that is more likely to appear in positive bags than in negative bags.

  17. Constructing a Bag Feature Space • Learning Instance Prototypes

  18. Constructing a Bag Feature Space

  19. Computing Bag Features (1) Each bag feature (coordinate) is defined by one instance prototype. (2) Its value is the smallest distance between that prototype and any instance in the bag. (3) A bag feature can therefore be viewed as a measure of the degree to which an instance prototype shows up in the bag.
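Given learned instance prototypes, the bag-to-point mapping described above can be sketched as follows (plain Euclidean distance for simplicity; the paper pairs each prototype with a learned weight vector):

```python
import math

def bag_features(bag, prototypes):
    """Map a bag (list of instance vectors) to one point in bag feature
    space: the k-th coordinate is the smallest distance from any instance
    in the bag to the k-th instance prototype."""
    def dist(a, b):
        return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))
    return [min(dist(inst, proto) for inst in bag) for proto in prototypes]

# A prototype that "shows up" in the bag yields a coordinate near 0:
phi = bag_features([[0.0, 0.0], [3.0, 4.0]],        # bag with 2 instances
                   [[0.0, 0.0], [3.0, 0.0]])        # 2 prototypes
# phi[0] == 0.0: the first prototype matches the first instance exactly.
```

Every bag, whatever its number of instances, becomes a fixed-length vector, which is what lets a standard SVM be trained on bags.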

  20. Comparison between DD-SVM & MI-SVM • Learning process of DD-SVM (1) Input is a collection of bags with binary labels. (2) Output is an SVM classifier.

  21. Comparison between DD-SVM & MI-SVM • Learning process of MI-SVM

  22. Comparison between DD-SVM & MI-SVM • Learning process of MI-SVM Input: a collection of labeled bags. Output: an SVM classifier.

  23. Comparison between DD-SVM & MI-SVM • DD-SVM: (1) Negative bag - all instances are negative. (2) Positive bag - at least one instance is positive. • MI-SVM: (1) One instance is selected to represent each positive bag. (2) Train the SVM using all of the negative instances and the selected positive instances.
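The contrast can be made concrete by sketching what each method hands to the SVM trainer on toy bags (hypothetical helper names; in real MI-SVM the positive "witness" instances are re-selected iteratively with the current classifier, which is stubbed here by a fixed scoring function):

```python
def ddsvm_training_set(bags, labels, prototypes):
    """DD-SVM: one training point per bag, namely its bag-feature vector
    of smallest instance-to-prototype distances."""
    def dist(a, b):
        return sum((x - y) ** 2 for x, y in zip(a, b)) ** 0.5
    def phi(bag):
        return [min(dist(inst, p) for inst in bag) for p in prototypes]
    return [(phi(bag), y) for bag, y in zip(bags, labels)]

def misvm_training_set(bags, labels, score):
    """MI-SVM: every instance of each negative bag is a negative example;
    each positive bag contributes only its highest-scoring instance.
    (In the real algorithm, `score` is the current SVM's decision value
    and this selection is iterated until it stabilizes.)"""
    examples = []
    for bag, y in zip(bags, labels):
        if y == +1:
            examples.append((max(bag, key=score), +1))   # witness instance
        else:
            examples.extend((inst, -1) for inst in bag)  # all negatives
    return examples
```

So DD-SVM trains on one point per bag in a new feature space, while MI-SVM trains on (selected) instances in the original instance space.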

  24. Experimental Results • Experimental Setup and Data Set (1) The data set consists of 2000 images from 20 image categories. (2) All images are in JPEG format with size 384 x 256 or 256 x 384. (3) Parameters, e.g. C, are set manually. (4) Comparison among DD-SVM, MI-SVM, and Hist-SVM.

  25. Experimental Results • Categorization Result in terms of accuracy

  26. Experimental Results • Classification result in terms of confusion matrix

  27. Experimental Results All “Beach” images contain mountains or mountain-like regions; all “Mountain and glaciers” images contain regions corresponding to rivers, lakes, or oceans.

  28. Experimental Results • Sensitivity to image segmentation

  29. Experimental Results • Sensitivity to the number of Categories in the Data Set

  30. Experimental Results • Sensitivity to the size and diversity of training images

  31. Thank you !
