
Boundary Preserving Dense Local Regions

Jaechul Kim and Kristen Grauman, Univ. of Texas at Austin


Presentation Transcript


  1. Boundary Preserving Dense Local Regions Jaechul Kim and Kristen Grauman Univ. of Texas at Austin

2. Local feature detection
• A crucial building block for many applications: image retrieval, object recognition, image matching
• Key issue: how to detect local regions for feature extraction?

3. Related work
• Interest point detectors, e.g., Matas et al. (BMVC 02), Jurie and Schmid (CVPR 04), Mikolajczyk and Schmid (IJCV 04)
• Dense sampling, e.g., Nowak et al. (ECCV 06)
• Segmented regions and superpixels, e.g., Ren and Malik (ICCV 03), Gu et al. (CVPR 09), Todorovic and Ahuja (CVPR 08), Malisiewicz and Efros (BMVC 07), Levinshtein et al. (ICCV 09)
• Hybrid, e.g., Tuytelaars (CVPR 10), Koniusz and Mikolajczyk (BMVC 09)

4. What makes a good local feature detector?
• Desired properties: repeatable, boundary-preserving, distinctively shaped
• Existing methods lack one or more of these criteria: segments lack repeatability; dense sampling and interest points lack distinctive shape and straddle boundaries

5. Our idea: Boundary Preserving Local Regions (BPLRs)
• Boundary-preserving, dense extraction
• Segmentation-driven feature sampling and linking
→ Repeatable local features capturing objects’ local shapes

6. Approach: Overview
• Sampling elements: initial elements for each segment are sampled based on the distance transform of the segment
• Linking elements: a minimum spanning tree yields a single graph structure reflecting main shapes and segment layout
• Grouping elements: neighboring elements are grouped into a BPLR

7. Approach: Sampling
• Elements are sampled on a dense regular grid over each segment, with scale set by the segment’s distance transform
• Repeating over all segments yields a dense set of sampled elements covering the input image
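The distance-transform-based sampling can be sketched as follows. This is a minimal illustration, not the authors’ implementation: `sample_elements`, the grid step, and the radius-from-distance rule are assumptions.

```python
import numpy as np
from scipy.ndimage import distance_transform_edt

def sample_elements(segment_mask, grid_step=4):
    """Sample 'elements' inside one segment on a dense regular grid.

    The distance transform gives each pixel its distance to the segment
    boundary; using it as the element's scale keeps elements from
    straddling the boundary (small elements near edges, large inside).
    """
    dist = distance_transform_edt(segment_mask)
    ys, xs = np.mgrid[0:segment_mask.shape[0]:grid_step,
                      0:segment_mask.shape[1]:grid_step]
    radii = dist[ys, xs]
    keep = radii > 0  # keep only grid points strictly inside the segment
    centers = np.stack([ys[keep], xs[keep]], axis=1)
    return centers, radii[keep]

# Toy segment: a filled 16x16 square inside a 32x32 image.
mask = np.zeros((32, 32), dtype=bool)
mask[8:24, 8:24] = True
centers, radii = sample_elements(mask)
```

Running the same sampler over every segment of multiple segmentations gives the dense, boundary-respecting element set the slide describes.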

8. Approach: Linking
• A minimum spanning tree is built over the sampled elements’ locations (i.e., the elements’ centers), giving a global linkage structure
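A minimal sketch of the linking step, assuming SciPy’s minimum spanning tree over pairwise Euclidean distances between element centers (`link_elements` is a hypothetical helper):

```python
import numpy as np
from scipy.spatial.distance import pdist, squareform
from scipy.sparse.csgraph import minimum_spanning_tree

def link_elements(centers):
    """Link all elements into one global structure: a minimum spanning
    tree over pairwise Euclidean distances between element centers."""
    dists = squareform(pdist(centers))   # dense symmetric distance matrix
    mst = minimum_spanning_tree(dists)   # sparse result with n-1 edges
    rows, cols = mst.nonzero()
    return [tuple(sorted(e)) for e in zip(rows.tolist(), cols.tolist())]

# Two tight pairs of elements far from each other: the tree links within
# each pair first (cheap edges), then bridges the pairs with one edge.
centers = np.array([[0., 0.], [0., 1.], [5., 5.], [5., 6.]])
edges = link_elements(centers)
```

Because the MST always prefers the cheapest remaining edge, nearby elements (typically from the same segment) get linked before distant ones, which is the behavior slide 9 relies on.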

9. Role of the spanning tree linkage
• A minimum spanning tree prefers to link closer elements
• Due to the distance transform-based sampling, elements from the same segment are more likely to be linked
• Due to multiple segmentations, elements in overlapping segments are more likely to be linked

10. Approach: Grouping
• For a reference element, collect its topological neighbors (elements nearby on the spanning tree) and its Euclidean neighbors (elements nearby in the image)
• The intersection of the topological and Euclidean neighbor sets forms the BPLR; the grouped elements’ locations define the region for the descriptor
• Example detections of BPLRs (subset shown for visibility)
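The grouping step can be sketched in pure Python. This is a simplified reading of the slide, with `group_bplr` and its `hops` and `radius` parameters as assumptions: a BPLR is the intersection of the reference element’s topological neighbors on the spanning tree and its Euclidean neighbors in the image.

```python
import numpy as np
from collections import defaultdict, deque

def group_bplr(ref, centers, tree_edges, hops=3, radius=3.0):
    """Group a reference element with elements that are BOTH topological
    neighbors (within `hops` tree edges) and Euclidean neighbors (within
    `radius` pixels of the reference center)."""
    adj = defaultdict(list)
    for a, b in tree_edges:
        adj[a].append(b)
        adj[b].append(a)
    # Breadth-first search on the spanning tree: topological neighbors.
    topo, frontier = {ref}, deque([(ref, 0)])
    while frontier:
        node, d = frontier.popleft()
        if d == hops:
            continue
        for nb in adj[node]:
            if nb not in topo:
                topo.add(nb)
                frontier.append((nb, d + 1))
    # Euclidean neighbors of the reference element.
    dist = np.linalg.norm(centers - centers[ref], axis=1)
    euclid = set(np.flatnonzero(dist <= radius).tolist())
    return sorted(topo & euclid)

# Element 2 is on the tree path but spatially far from the reference:
# the intersection drops it, keeping the group compact.
centers = np.array([[0., 0.], [1., 0.], [9., 0.], [2., 0.]])
edges = [(0, 1), (1, 2), (2, 3)]
group = group_bplr(0, centers, edges)
```

Using the intersection (rather than either neighbor set alone) is what keeps a BPLR both spatially compact and consistent with the segment layout encoded in the tree.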

11. Example matches of BPLRs (annotated where matched regions leak across the object boundary)

12. Experiments
• 20-200 segments → ~7000 BPLRs in a 400 × 300 image
• 2-5 seconds to extract BPLRs per image
• PHOG + gPb descriptor used
• Baselines: dense sampling (+ SIFT); MSER (+ SIFT) [1]; semi-local regions (+ SIFT) [2,3]; segmented regions (+ PHOG) [4]; superpixels [5]
• Tasks: repeatability, localization, foreground segmentation, object classification
[1] Matas et al., BMVC 02. [2] Quack et al., ICCV 07. [3] Lee and Grauman, IJCV 09. [4] Arbelaez et al., CVPR 09. [5] Ren and Malik, ICCV 03.
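The PHOG part of slide 12’s descriptor can be approximated with a PHOG-style sketch over plain image gradients. This is a simplification: the actual system uses gPb boundary responses rather than the raw gradients here, and `phog_like` with its parameters is a hypothetical helper.

```python
import numpy as np

def phog_like(gray, mask, n_bins=8, levels=2):
    """PHOG-style descriptor: histograms of gradient orientation pooled
    over a spatial pyramid on the region's bounding box, concatenated
    and L1-normalized."""
    gy, gx = np.gradient(gray.astype(float))
    mag = np.hypot(gx, gy) * mask            # zero out pixels outside region
    ang = np.mod(np.arctan2(gy, gx), np.pi)  # unsigned orientation in [0, pi)
    bins = np.minimum((ang / np.pi * n_bins).astype(int), n_bins - 1)
    ys, xs = np.nonzero(mask)
    y0, y1, x0, x1 = ys.min(), ys.max() + 1, xs.min(), xs.max() + 1
    feats = []
    for level in range(levels + 1):          # 1x1, 2x2, 4x4 grids of cells
        k = 2 ** level
        ye = np.linspace(y0, y1, k + 1).astype(int)
        xe = np.linspace(x0, x1, k + 1).astype(int)
        for i in range(k):
            for j in range(k):
                cell = (slice(ye[i], ye[i + 1]), slice(xe[j], xe[j + 1]))
                feats.append(np.bincount(bins[cell].ravel(),
                                         weights=mag[cell].ravel(),
                                         minlength=n_bins))
    f = np.concatenate(feats)
    return f / (f.sum() + 1e-12)

gray = np.arange(256, dtype=float).reshape(16, 16)  # toy constant-gradient image
mask = np.ones((16, 16), dtype=bool)
desc = phog_like(gray, mask)
```

With `n_bins=8` and three pyramid levels (1 + 4 + 16 cells), the descriptor has 168 dimensions; the masked magnitude weighting keeps the histogram boundary-preserving, matching the region-based extraction.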

13. Example feature extractions
• Proposed BPLRs (subset shown for visibility), compared with segmented regions, superpixels, dense sampling, and interest regions (MSERs)

14. Repeatability for object categories
• Metric: Bounding Box Hit Rate vs. False Positive Rate [Quack et al. 2007], computed between train and test images
• Comparison to baseline region detectors on the ETHZ shape classes (Applelogo, Bottle, Giraffe, Mug, Swan)

15. Localization accuracy
• Metric: Bounding Box Overlap Score vs. Recall; the overlap score is computed by projecting the training exemplar’s bounding box into the test image
• Comparison to baseline region detectors on the ETHZ shape classes (Applelogo, Bottle, Giraffe, Mug, Swan)

16. Localization accuracy
• Qualitative examples: a test image alongside database images with the best matches to the test BPLRs

17. Foreground segmentation
• Replacing superpixels with BPLRs in GrabCut segmentation
• Evaluated on the Caltech-28 dataset

18. Object classification
• Nearest-neighbor results on the Caltech-101 benchmark
• Features compared using the same Naive Bayes NN classifier [Boiman et al. 2008]

19. Conclusion
• Dense local detector that preserves object boundaries
• Captures objects’ local shape in a repeatable manner
• Feature sampling and linking driven by segmentation
• Generic bottom-up extraction
Code available: http://vision.cs.utexas.edu/projects/bplr/bplr.html
