
Edge Preserving Spatially Varying Mixtures for Image Segmentation



  1. Edge Preserving Spatially Varying Mixtures for Image Segmentation by Giorgos Sfikas, Christophoros Nikou, Nikolaos Galatsanos (CVPR 2008) Presented by Lihan He, ECE, Duke University, Feb 23, 2009

  2. Outline • Introduction • Edge preserving spatially varying GMM • Inference using MAP-EM • Experimental results • Conclusion

  3. Introduction Image segmentation: clustering pixels or superpixels so that pixels in the same group share common characteristics (same object, similar texture). Two cues matter: • adjacent pixels most likely belong to the same cluster; • edges of objects should be respected. GMM: no spatial prior knowledge is exploited. SVGMM (spatially variant GMM): • spatial smoothness is imposed in the neighborhood of each pixel via a Markov random field; • but edges between textures are not taken into account.

  4. Introduction In this paper: • Hierarchical Bayesian model; • Spatially varying GMM: mixing weights differ from pixel to pixel; • The difference of the mixing weights of two neighboring pixels follows a Student's-t distribution; • The heavy-tailed Student's-t preserves texture edges; • MAP-EM is used for model inference.

  5. St-SVGMM Feature vector for each pixel: $x_n \in \mathbb{R}^D$, $n = 1,\dots,N$. SVGMM: each pixel has its own mixing weights $\pi_n = (\pi_{n1},\dots,\pi_{nK})$, with $\pi_{nj} \ge 0$ and $\sum_{j=1}^{K}\pi_{nj} = 1$. Each pixel $x_n$ has indicator variables $z_n$ (one-of-$K$). Likelihood: $f(x_n) = \sum_{j=1}^{K} \pi_{nj}\,\mathcal{N}(x_n;\mu_j,\Sigma_j)$. Prior: $p(z_{nj}=1) = \pi_{nj}$.
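
A minimal sketch of this likelihood in Python (the function name `svgmm_loglik` and the array layout are illustrative assumptions, not the authors' code):

```python
import numpy as np
from scipy.stats import multivariate_normal

def svgmm_loglik(X, pi, mus, Sigmas):
    """SVGMM log-likelihood with pixel-specific mixing weights.

    X      : (N, D) feature vectors, one per pixel
    pi     : (N, K) mixing weights; each row sums to 1
    mus    : (K, D) component means
    Sigmas : (K, D, D) component covariances
    """
    N, K = pi.shape
    dens = np.column_stack([
        multivariate_normal.pdf(X, mean=mus[j], cov=Sigmas[j])
        for j in range(K)
    ])                                   # (N, K) Gaussian densities
    mix = (pi * dens).sum(axis=1)        # f(x_n) = sum_j pi_nj N(x_n; mu_j, Sigma_j)
    return np.log(mix + 1e-300).sum()
```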

  6. St-SVGMM Prior for the mixing weights $\pi$: $d$ indexes the neighborhood adjacency type ($d=1$: horizontal, $d=2$: vertical); $\gamma_d(n)$ is the set of neighbors of pixel $n$ with respect to the $d$-th adjacency type. Differences of neighboring weights follow Student's-t distributions: $\pi_{nj} - \pi_{kj} \sim \mathrm{St}(0, \beta_{jd}^2, \nu_{jd})$ for $k \in \gamma_d(n)$. $K \times D$ different Student's-t distributions are introduced, with hyperparameters $\beta_{jd}, \nu_{jd}$. Joint prior for $\pi$: $p(\pi) \propto \prod_{d=1}^{D}\prod_{j=1}^{K}\prod_{n=1}^{N}\prod_{k \in \gamma_d(n)} \mathrm{St}(\pi_{nj}-\pi_{kj};\,0,\beta_{jd}^2,\nu_{jd})$.
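
A sketch of evaluating this joint log-prior (up to an additive constant) on an image grid; `weight_diff_logprior` and the (H, W, K) layout are assumptions for illustration:

```python
import numpy as np
from scipy.stats import t as student_t

def weight_diff_logprior(pi, beta, nu):
    """Log of the edge-preserving prior on pi, up to a constant.

    pi   : (H, W, K) mixing weights on the image grid
    beta : (K, 2) scales, nu : (K, 2) degrees of freedom;
           column 0 = horizontal (d=1), column 1 = vertical (d=2)
    """
    diffs = [pi[:, 1:] - pi[:, :-1],    # d=1: horizontal differences
             pi[1:, :] - pi[:-1, :]]    # d=2: vertical differences
    lp = 0.0
    for d, dif in enumerate(diffs):
        for j in range(pi.shape[2]):
            # pi_nj - pi_kj ~ St(0, beta_jd^2, nu_jd)
            lp += student_t.logpdf(dif[..., j], df=nu[j, d],
                                   loc=0.0, scale=beta[j, d]).sum()
    return lp
```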

  7. St-SVGMM The Student's-t distribution can be modeled as a Gaussian scale mixture by introducing a latent variable $u$: $u_{nk}^{jd} \sim \mathrm{Gamma}(\nu_{jd}/2, \nu_{jd}/2)$ and $\pi_{nj} - \pi_{kj} \mid u \sim \mathcal{N}(0, \beta_{jd}^2 / u_{nk}^{jd})$. $u$ plays an important role: large $u$ — neighboring pixels $n$, $k$ belong to the same cluster; small $u$ — $n$, $k$ are at the edge of two clusters. Indices: $n$ – edge location; $k$ $(d)$ – adjacency type (horizontal or vertical); $j$ – cluster index (edges of which cluster).
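
The edge-preserving behavior is easiest to see by sampling from this construction; a small self-contained sketch (names illustrative):

```python
import numpy as np

rng = np.random.default_rng(0)

def sample_t_scale_mixture(beta, nu, size):
    """Student's-t as a Gaussian scale mixture:
    u ~ Gamma(nu/2, rate nu/2), x | u ~ N(0, beta^2 / u)
    implies x ~ St(0, beta^2, nu).
    Large u -> difference shrunk toward 0 (smooth region);
    small u -> large difference tolerated (edge between clusters).
    """
    u = rng.gamma(shape=nu / 2.0, scale=2.0 / nu, size=size)  # rate = nu/2
    x = rng.normal(0.0, beta / np.sqrt(u))
    return x, u

# A small nu gives a heavy tail: mostly near-zero differences with
# occasional large jumps, i.e. smooth regions plus sharp edges.
x, u = sample_t_scale_mixture(beta=0.1, nu=0.5, size=10_000)
```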

  8. St-SVGMM Model summary (hierarchical graphical model relating $x$, $z$, $\pi$, $u$ and the hyperparameters; shown as a figure in the original slides).

  9. Inference MAP-EM algorithm for model inference. Model parameters: $\Psi = \{\mu_j, \Sigma_j, \pi_n, \beta_{jd}, \nu_{jd}\}$. Complete log-likelihood: $\log p(X, Z, U \mid \Psi)$. E-step (update $Z$, $U$): $\mathbb{E}[z_{nj}] = \pi_{nj}\mathcal{N}(x_n;\mu_j,\Sigma_j) \big/ \sum_{l}\pi_{nl}\mathcal{N}(x_n;\mu_l,\Sigma_l)$ and $\mathbb{E}[u_{nk}^{jd}] = (\nu_{jd}+1) \big/ \big(\nu_{jd} + (\pi_{nj}-\pi_{kj})^2/\beta_{jd}^2\big)$.
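
A sketch of these E-step updates: the $\mathbb{E}[z]$ formula is the standard GMM responsibility and the $\mathbb{E}[u]$ formula is the posterior mean of the Gamma scale variable in the t scale mixture; the array layout is an assumption:

```python
import numpy as np
from scipy.stats import multivariate_normal

def e_step(X, pi, mus, Sigmas, diffs, beta, nu):
    """E-step: responsibilities E[z] and scale expectations E[u].

    X     : (N, D) features;  pi : (N, K) per-pixel weights
    diffs : list over adjacency types d of arrays (..., K) holding
            the differences pi_nj - pi_kj for all neighbor pairs
    beta, nu : (K, D_adj) hyperparameters
    """
    K = pi.shape[1]
    dens = np.column_stack([
        multivariate_normal.pdf(X, mean=mus[j], cov=Sigmas[j])
        for j in range(K)
    ])
    num = pi * dens
    z = num / num.sum(axis=1, keepdims=True)      # E[z_nj]

    # E[u] = (nu + 1) / (nu + (pi_nj - pi_kj)^2 / beta^2)
    u = [(nu[:, d] + 1.0) / (nu[:, d] + (diffs[d] / beta[:, d]) ** 2)
         for d in range(len(diffs))]
    return z, u
```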

  10. Inference M-step (update $\mu_j$, $\Sigma_j$, $\pi$, $\beta_{jd}$, $\nu_{jd}$): the Gaussian parameters have the usual weighted closed forms, $\mu_j = \sum_n \mathbb{E}[z_{nj}] x_n \big/ \sum_n \mathbb{E}[z_{nj}]$ and $\Sigma_j = \sum_n \mathbb{E}[z_{nj}] (x_n-\mu_j)(x_n-\mu_j)^T \big/ \sum_n \mathbb{E}[z_{nj}]$; $\pi$, $\beta_{jd}$ and $\nu_{jd}$ are updated by maximizing the MAP objective given $\mathbb{E}[Z]$ and $\mathbb{E}[U]$ (the update equations appeared as figures in the original slides).
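
Only the Gaussian part of the M-step is sketched below, since the remaining update equations are not recoverable from this transcript; the weighted mean/covariance forms are standard:

```python
import numpy as np

def m_step_gaussians(X, z):
    """Closed-form M-step for mu_j, Sigma_j (weighted MLE).

    The remaining updates are only summarized here:
    - pi: maximize the posterior, balancing E[z] against the quadratic
      penalties E[u] (pi_nj - pi_kj)^2 / beta^2, with each pi_n kept
      on the probability simplex;
    - beta: weighted-variance update; nu: one-dimensional root search,
      as is usual in EM for Student's-t models. Exact equations are in
      the paper.
    """
    Nk = z.sum(axis=0)                            # (K,) effective counts
    mus = (z.T @ X) / Nk[:, None]                 # weighted means
    Sigmas = []
    for j in range(z.shape[1]):
        d = X - mus[j]
        Sigmas.append((z[:, j, None] * d).T @ d / Nk[j])
    return mus, np.stack(Sigmas)
```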

  11. Results U-variable maps ($n$ – edge location, $k$ $(d)$ – adjacency type, $j$ – cluster index) for a $K=3$-cluster segmentation: $j=1$: sky; $j=2$: roof & shadows; $j=3$: building; $d=1$: horizontal, $d=2$: vertical. Brighter regions represent lower $u$ values, i.e., edges (maps shown as figures in the original slides).

  12. Results Comparison on 300 images of the Berkeley image database. Statistics on the Rand Index (RI), which measures the consistency between the ground truth and the segmentation map (higher is better), and on the boundary displacement error (BDE), which measures the displacement of segment boundaries with respect to the ground truth (lower is better).
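
For reference, a small sketch of the Rand Index via a contingency table (`rand_index` is an illustrative helper, not the benchmark code; BDE would additionally need boundary extraction and distance transforms):

```python
import numpy as np
from scipy.special import comb

def rand_index(seg, gt):
    """Fraction of pixel pairs on which two labelings agree
    (grouped together in both, or separated in both)."""
    a = np.asarray(seg).ravel()
    b = np.asarray(gt).ravel()
    n = a.size
    _, ia = np.unique(a, return_inverse=True)
    _, ib = np.unique(b, return_inverse=True)
    C = np.zeros((ia.max() + 1, ib.max() + 1))
    np.add.at(C, (ia, ib), 1)                # contingency table
    tp = comb(C, 2).sum()                    # together in both
    same_a = comb(C.sum(axis=1), 2).sum()    # together in seg
    same_b = comb(C.sum(axis=0), 2).sum()    # together in gt
    total = comb(n, 2)
    return (total + 2 * tp - same_a - same_b) / total
```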

  13. Results Segmentation examples: original image and segmentations with $K=5$, $K=10$ and $K=15$ (figures in the original slides).

  14. Results More segmentation examples: original image and segmentations with $K=5$, $K=10$ and $K=15$ (figures in the original slides).

  15. Conclusion • Proposed a GMM-based clustering algorithm for image segmentation; • Used a smoothness prior so that adjacent pixels tend to belong to the same cluster; • Also captured the image edge structure (no smoothness is enforced across segment boundaries); • All required parameters are estimated from the data (no empirical parameter selection is required). • Next: automatically estimating the number of components $K$.
