
Learning Scale-Invariant Contour Completion


Presentation Transcript


  1. Learning Scale-Invariant Contour Completion Xiaofeng Ren, Charless Fowlkes and Jitendra Malik 1

  2. Abstract We present a model of curvilinear grouping using piecewise linear representations of contours and a conditional random field to capture continuity and the frequency of different junction types. Potential completions are generated by building a constrained Delaunay triangulation (CDT) over the set of contours found by a local edge detector. Maximum likelihood parameters for the model are learned from human labeled groundtruth. Using held out test data, we measure how the model, by incorporating continuity structure, improves boundary detection over the local edge detector. We also compare performance with a baseline local classifier that operates on pairs of edgels. Both algorithms consistently dominate the low-level boundary detector at all thresholds. To our knowledge, this is the first time that curvilinear continuity has been shown quantitatively useful for a large variety of natural images. Better boundary detection has immediate application in the problem of object detection and recognition. 2

  3. Boundary Detection • Edge detection: 20 years after Canny • Pb (Probability of Boundary): learning to combine brightness, color and texture contrasts • There is psychophysical evidence that we may be approaching the limit of local edge detection 3

  4. Curvilinear Continuity • Boundaries are smooth in nature • A number of associated phenomena • Good continuation • Visual completion • Illusory contours • Well studied in human vision • Wertheimer, Kanizsa, von der Heydt, Kellman, Field, Geisler, … • Extensively explored in computer vision • Shashua, Zucker, Mumford, Williams, Jacobs, Elder, Jermyn, Wang, … • Is the net effect of completion positive? Or negative? Lack of quantitative evaluation 4

  5. Scale Invariance • Sources of scale invariance: arbitrary viewing distance; hierarchy of parts • Power laws in natural images • Lots of findings, e.g. in power spectra or wavelet coefficients (Ruderman, Mumford, Simoncelli, …) • Also in boundary contours [Ren and Malik 02] • How to incorporate scale-invariance? 5

  6. A Scale-Invariant Representation • Piecewise linear approximation of low-level contours • recursive splitting based on angle • Constrained Delaunay Triangulation • a variant of the standard Delaunay Triangulation • maximizes the minimum angle (avoids skinny triangles) 6
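
A minimal sketch of this discretization step, assuming the low-level contours arrive as ordered pixel chains. The recursive angle-based splitting follows the description above; the constrained triangulation is delegated to the third-party `triangle` package (a wrapper around Shewchuk's Triangle), which is an assumed dependency rather than necessarily what the authors used, and the function names are illustrative.

```python
import numpy as np
import triangle  # Python bindings for Shewchuk's Triangle (assumed dependency)

def split_polyline(pts, angle_tol=np.deg2rad(15)):
    """Recursively split a pixel chain into a piecewise linear approximation.

    Find the interior point with the largest turn between the chords to the
    two endpoints; if that turn exceeds angle_tol, split there and recurse.
    """
    pts = np.asarray(pts, dtype=float)
    if len(pts) <= 2:
        return [tuple(pts[0]), tuple(pts[-1])]
    a, b = pts[0], pts[-1]
    turns = np.zeros(len(pts))
    for i in range(1, len(pts) - 1):
        u, v = pts[i] - a, b - pts[i]
        nu, nv = np.linalg.norm(u), np.linalg.norm(v)
        if nu > 1e-9 and nv > 1e-9:
            turns[i] = np.arccos(np.clip(u @ v / (nu * nv), -1.0, 1.0))
    i = int(np.argmax(turns))
    if turns[i] <= angle_tol:
        return [tuple(a), tuple(b)]
    left = split_polyline(pts[: i + 1], angle_tol)
    right = split_polyline(pts[i:], angle_tol)
    return left[:-1] + right  # drop the duplicated split point

def build_cdt(vertices, segments):
    """Constrained Delaunay triangulation: the contour segments are forced to
    appear as edges ('p' = triangulate a planar straight line graph)."""
    return triangle.triangulate({'vertices': vertices, 'segments': segments}, 'p')
```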

  7. The CDT Graph • scale-invariant • fast to compute • <1000 edges • completes gaps • little loss of structure 7

  8. No Loss of Structure • Using Phuman, the soft groundtruth label defined on CDT graphs: precision close to 100% • Pb averaged over CDT edges: no worse than the original Pb • Increase in asymptotic recall rate: completion of gradientless contours 8

  9. CDT vs. K-Neighbor Completion An alternative scheme for completion: connect to k-nearest neighbor vertices, subject to visibility CDT achieves higher asymptotic recall rates 9
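
A rough sketch of this baseline scheme, assuming the visibility constraint means that a proposed link may not cross any detected contour segment; that reading, and all helper names below, are illustrative rather than taken from the original implementation.

```python
import numpy as np
from scipy.spatial import cKDTree

def _properly_cross(p, q, a, b):
    """True if segment pq strictly crosses segment ab (degenerate cases ignored)."""
    def orient(u, v, w):
        return np.sign((v[0] - u[0]) * (w[1] - u[1]) - (v[1] - u[1]) * (w[0] - u[0]))
    return orient(p, q, a) * orient(p, q, b) < 0 and orient(a, b, p) * orient(a, b, q) < 0

def knn_completions(vertices, contour_segments, k=5):
    """Link every vertex to its k nearest neighbors, keeping only links that
    do not cross a detected contour segment (the 'visibility' constraint)."""
    vertices = np.asarray(vertices, dtype=float)
    tree = cKDTree(vertices)
    proposals = set()
    for i, v in enumerate(vertices):
        _, nbrs = tree.query(v, k + 1)  # k+1 because the query point itself is returned
        for j in nbrs:
            if j == i:
                continue
            blocked = any(_properly_cross(v, vertices[j], vertices[a], vertices[b])
                          for a, b in contour_segments
                          if i not in (a, b) and j not in (a, b))
            if not blocked:
                proposals.add((min(i, j), max(i, j)))
    return proposals
```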

  10. Inference on the CDT Graph (figure: a binary variable Xe attached to each edge of the CDT graph) • Local inference: classify each Xe independently • Global inference: infer all Xe jointly 10

  11. Baseline Local Model • “Bi-gram” model • “Tri-gram” model: binary classification, (0,0) vs (1,1), for a pair of adjacent edges Xe1, Xe2 • Contrast (PbL) + continuity features, combined with a logistic classifier 11
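
A hedged sketch of a baseline local classifier of this kind. The exact bi-gram/tri-gram feature set is garbled in this transcript, so the two features below (mean Pb under the edge and a continuity angle at its endpoints) are an illustrative assumption, and scikit-learn's logistic regression stands in for whatever solver was actually used.

```python
from sklearn.linear_model import LogisticRegression

def edge_features(mean_pb, continuation_angle):
    """Illustrative per-edge features: contrast (mean Pb of the pixels the CDT
    edge covers) and continuity (turning angle to the best neighboring edge)."""
    return [mean_pb, continuation_angle]

def fit_local_model(X, y):
    """Fit the baseline local classifier on human-labeled CDT edges.

    X : (n_edges, 2) array of contrast/continuity features
    y : (n_edges,)   0/1 groundtruth boundary labels
    """
    return LogisticRegression().fit(X, y)

# At test time the per-edge posterior replaces the raw Pb score:
#   p_boundary = model.predict_proba(X_test)[:, 1]
```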

  12. Global Model w/ Conditional Random Fields • Graphical model with exponential potential functions: edge potentials and junction potentials, each the exponential of a weighted sum of features • Inference with loopy belief propagation: converges in < 10 iterations • Maximum likelihood learning (convex) with gradient descent 12
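
A compressed sketch of the maximum-likelihood learning loop, restricted to edge potentials for brevity (junction potentials would contribute analogous terms). `loopy_bp_marginals` stands in for the belief propagation routine and is assumed, not shown.

```python
import numpy as np

def ml_gradient_step(theta, graphs, loopy_bp_marginals, lr=0.1):
    """One gradient-ascent step of maximum-likelihood training for a CRF with
    exponential potentials exp(theta . phi).

    The log-likelihood gradient is (empirical feature counts) minus (expected
    counts under the model), with expectations taken from the approximate
    marginals returned by loopy belief propagation.

    graphs             : list of (phi, y) per CDT graph, phi of shape
                         (n_edges, n_features), y the 0/1 groundtruth labels
    loopy_bp_marginals : assumed helper returning P(x_e = 1) for every edge
    """
    grad = np.zeros_like(theta)
    for phi, y in graphs:
        q = loopy_bp_marginals(theta, phi)   # model marginals over edge labels
        grad += phi.T @ (y - q)              # empirical minus expected counts
    return theta + lr * grad                 # objective is convex, so plain ascent works
```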

  13. Junctions and Continuity • Junction types (deg_g, deg_c): (0,0), (1,0), (0,2), (1,2) • Continuity term (turning angle) for degree-2 junctions, deg_g + deg_c = 2 13
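
A small sketch of the bookkeeping this implies: counting how many of the switched-on edges at a vertex are gradient (detected) edges versus completed edges, and measuring a turning angle for degree-2 junctions. The names and the exact angle convention are illustrative assumptions.

```python
import numpy as np

def junction_type(incident_edges, x, is_gradient_edge):
    """Junction type (deg_g, deg_c) at a CDT vertex: among the incident edges
    that are switched on (x[e] == 1), count gradient vs. completed edges."""
    on = [e for e in incident_edges if x[e] == 1]
    deg_g = sum(1 for e in on if is_gradient_edge[e])
    return deg_g, len(on) - deg_g

def continuity_angle(vertex_xy, endpoint_a, endpoint_b):
    """Turning angle for a degree-2 junction (deg_g + deg_c = 2): the two 'on'
    edges leave the vertex toward endpoint_a and endpoint_b; a value of 0
    means a perfectly straight continuation."""
    u = np.asarray(endpoint_a, dtype=float) - np.asarray(vertex_xy, dtype=float)
    v = np.asarray(endpoint_b, dtype=float) - np.asarray(vertex_xy, dtype=float)
    cos = u @ v / (np.linalg.norm(u) * np.linalg.norm(v))
    return np.pi - np.arccos(np.clip(cos, -1.0, 1.0))
```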

  14. • Continuity improves boundary detection in both low-recall and high-recall ranges • Global inference helps, mostly in the low-recall/high-precision range • Roughly speaking, CRF > Local > CDT only > Pb 14

  15. 15

  16. 16

  17. Image Pb Local Global 17

  18. Image Pb Local Global 18

  19. Image Pb Local Global 19

  20. Conclusion • Constrained Delaunay Triangulation is a scale-invariant discretization of images with little loss of structure; • Moving from 100,000 pixels to <1000 edges, CDT achieves great statistical and computational efficiency; • Curvilinear Continuity improves boundary detection; • the local model of continuity is simple yet very effective • global inference of continuity further improves performance • Conditional Random Fields w/ loopy belief propagation work well on CDT graphs • Mid-level vision is useful. 20

  21. Thank You 21

  22. 22
