
Fast Cost-volume Filtering For Visual Correspondence and Beyond


Presentation Transcript


  1. Fast Cost-volume Filtering For Visual Correspondence and Beyond Asmaa Hosni, Member, IEEE, Christoph Rhemann, Michael Bleyer, Member, IEEE, Carsten Rother, Member, IEEE, and Margrit Gelautz, Senior Member, IEEE. IEEE Transactions on Pattern Analysis and Machine Intelligence, 2013

  2. Outline • Introduction • Related Work • Stereo Matching • Optical Flow • Interactive Image Segmentation • Method: Cost-Volume Filtering • Applications • Experimental Results • Conclusion

  3. Introduction

  4. Introduction • Many computer vision tasks can be formulated as labeling problems. • Desired solution: a spatially smooth labeling in which label transitions are aligned with color edges => achieved with a very fast edge-preserving filter • A generic and simple framework: • (i) constructing a cost volume • (ii) fast cost-volume filtering • (iii) winner-take-all label selection
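As a rough, hypothetical sketch of this three-step pipeline (not the paper's implementation), the Python function below builds a cost volume from a user-supplied per-label cost function, smooths each cost slice with a user-supplied edge-preserving filter, and selects labels by winner-take-all; `pixel_cost` and `edge_preserving_filter` are placeholder callables.

```python
import numpy as np

def label_image(image, labels, pixel_cost, edge_preserving_filter):
    """Generic labeling: (i) cost volume, (ii) filtering, (iii) winner-take-all.

    pixel_cost(image, l)             -> HxW cost slice for label l (placeholder)
    edge_preserving_filter(slice, I) -> filtered slice guided by image I (placeholder)
    """
    h, w = image.shape[:2]
    cost_volume = np.empty((len(labels), h, w), dtype=np.float32)

    # (i) construct the cost volume: one cost slice per label
    for idx, l in enumerate(labels):
        cost_volume[idx] = pixel_cost(image, l)

    # (ii) filter each slice with an edge-preserving filter guided by the image
    for idx in range(len(labels)):
        cost_volume[idx] = edge_preserving_filter(cost_volume[idx], image)

    # (iii) winner-take-all: pick the label with the lowest filtered cost per pixel
    return np.argmin(cost_volume, axis=0)
```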

  5. Related Work

  6. Related Work--Stereo Matching • Global methods • Energy minimization (GC, BP, DP, cooperative optimization) • Pre-processing (e.g., mean-shift segmentation) • Accurate but slow • Local methods • A local support region with winner-take-all label selection • Fast but inaccurate

  7. Related Work--Optical Flow • Optical flow: the pattern of apparent motion caused by the relative motion between an observer and the scene. • A vector field subject to the Image Brightness Constancy Equation (IBCE) • Applications: motion detection, object segmentation, motion-compensated encoding, stereo disparity measurement, etc.
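For reference, the Image Brightness Constancy Equation states that a scene point keeps its intensity along its motion $(u, v)$; the linearized form below is the standard first-order approximation, quoted here for context rather than taken from the slides.

```latex
% Brightness constancy for a displacement (u, v) between frames t and t+1
I(x, y, t) = I(x + u,\, y + v,\, t + 1)
% First-order Taylor expansion for small displacements (the linearized IBCE)
I_x\, u + I_y\, v + I_t = 0
```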

  8. Related Work--Optical Flow • Temporal Persistence

  9. Related Work--Optical Flow • Each flow vector is a label, and subpixel accuracy further increases the label space • Method: continuous optimization strategies (coarse-to-fine) • SSD • Local convolution with oriented Gaussians [13] • Local convolution with the bilateral filter [14] • Adaptive support weights [15], [16] • Trade off search space (quality) against speed. [13] D. Tschumperlé and R. Deriche, “Vector-Valued Image Regularization with PDE’s: A Common Framework for Different Applications,” CVPR, 2003. [14] J. Xiao, H. Cheng, H. Sawhney, C. Rao, and M. Isnardi, “Bilateral Filtering-Based Optical Flow Estimation with Occlusion Detection,” ECCV, 2006. [15] M. Werlberger, T. Pock, and H. Bischof, “Motion Estimation with Non-Local Total Variation Regularization,” CVPR, 2010. [16] D. Sun, S. Roth, and M. Black, “Secrets of Optical Flow Estimation and Their Principles,” CVPR, 2010.

  10. Related Work--Interactive Image Segmentation • Interactive image segmentation is a binary labeling problem. • Goal: separate the image into foreground and background regions, given some hints by the user. • Methods: • Snakes • Geodesic/morphological operators [8], [21] • Alpha matting [5], [22] [5] K. He, J. Sun, and X. Tang, “Guided Image Filtering,” ECCV, 2010. [8] A. Criminisi, T. Sharp, and C. Rother, “Geodesic Image and Video Editing,” ACM Trans. Graphics, 2010. [21] A. Criminisi, T. Sharp, and A. Blake, “GeoS: Geodesic Image Segmentation,” ECCV, 2008. [22] C. Rhemann, C. Rother, J. Wang, M. Gelautz, P. Kohli, and P. Rott, “A Perceptually Motivated Online Benchmark for Image Matting,” CVPR, 2009.

  11. Cost-volume Filtering

  12. Edge-preserving filtering • Edge-preserving filtering methods • Weighted least squares [Lagendijk et al. 1988] • Anisotropic diffusion [Perona and Malik 1990] • Bilateral filter [Aurich and Weule 95], [Tomasi and Manduchi 98] • Digital TV (Total Variation) filter [Chan et al. 2001]

  13. Bilateral filter [6] [6] K. Yoon and S. Kweon, “Adaptive Support-Weight Approach for Correspondence Search,” IEEE Trans. Pattern Analysis and Machine Intelligence, Apr. 2006.

  14. Bilateral filter • Advantages • Preserves edges in the smoothing process • Simple and intuitive • Non-iterative • Disadvantages • Complexity O(N·r²) • Gradient distortion: preserves edges, but not gradients
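To make the O(N·r²) complexity concrete, here is a minimal brute-force bilateral filter sketch for a grayscale image (illustrative only; real implementations vectorize or approximate this): every one of the N pixels scans its full (2r+1)×(2r+1) window.

```python
import numpy as np

def bilateral_filter_bruteforce(img, r=5, sigma_s=5.0, sigma_c=0.1):
    """Naive bilateral filter on a grayscale float image in [0, 1].
    Complexity is O(N * r^2): every pixel scans its full (2r+1)x(2r+1) window."""
    h, w = img.shape
    out = np.zeros_like(img)
    ys, xs = np.mgrid[-r:r + 1, -r:r + 1]
    spatial = np.exp(-(xs**2 + ys**2) / (2.0 * sigma_s**2))  # fixed spatial kernel
    padded = np.pad(img, r, mode='edge')
    for y in range(h):
        for x in range(w):
            window = padded[y:y + 2 * r + 1, x:x + 2 * r + 1]
            # range kernel: down-weight neighbors with a different intensity
            rng = np.exp(-((window - img[y, x])**2) / (2.0 * sigma_c**2))
            weights = spatial * rng
            out[y, x] = np.sum(weights * window) / np.sum(weights)
    return out
```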

  15. Guided filter [5] [5] K. He, J. Sun, and X. Tang, “Guided Image Filtering,” Proc. European Conf. Computer Vision, 2010.

  16. Guided filter • The guided filter assumes a local linear model between the guidance image and the filter output; the bilateral filter does not have this linear model.
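For a single-channel guidance image $I$ and filter input $p$, the local linear model of the guided filter [5] can be written as below (He et al.'s grayscale formulation; $w_k$ is a square window centered at pixel $k$, $\mu_k$ and $\sigma_k^2$ are the mean and variance of $I$ in $w_k$, and $\bar{p}_k$ is the mean of $p$ in $w_k$).

```latex
q_i = a_k I_i + b_k \quad \forall i \in w_k,
\qquad
a_k = \frac{\tfrac{1}{|w|}\sum_{i \in w_k} I_i\, p_i - \mu_k \bar{p}_k}{\sigma_k^2 + \epsilon},
\qquad
b_k = \bar{p}_k - a_k \mu_k
```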

  17. Guided filter

  18. Bilateral filter vs. Guided filter

  19. Guided filter • Edge-preserving filtering • Non-iterative • O(1) time, fast and non-approximate • No gradient distortion • Keeps the advantages of the bilateral filter while avoiding its disadvantages

  20. Cost-volume Filtering • Filtered cost volume: $C'_{i,l} = \sum_{j} W_{i,j}(I)\, C_{j,l}$ • $C'$: the filtered cost volume • $i$ and $j$: pixel indices; $l$: a label • $W_{i,j}$: the filter weights, which depend on the guidance image $I$; for the guided filter, $W_{i,j}(I) = \frac{1}{|w|^2} \sum_{k:(i,j) \in w_k} \Big( 1 + (I_i - \mu_k)^{\top} (\Sigma_k + \epsilon U)^{-1} (I_j - \mu_k) \Big)$ • $\mu_k$, $\Sigma_k$: the mean vector and covariance matrix of $I$ in the square window $w_k$ of radius $r$, with $|w| = (2r+1)^2$ pixels • $\epsilon$: a smoothness parameter • $U$: identity matrix • Winner-take-all label selection: $f_i = \operatorname{arg\,min}_{l} C'_{i,l}$
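A minimal sketch of this filtering step, assuming a single-channel guidance image for brevity (the paper uses the color guided filter with the full covariance $\Sigma_k$); `scipy.ndimage.uniform_filter` stands in for the box filters that make the run time independent of $r$, and the parameter values are placeholders.

```python
import numpy as np
from scipy.ndimage import uniform_filter

def guided_filter(I, p, r=9, eps=1e-4):
    """Grayscale guided filter (He et al. [5]) of cost slice p guided by image I.
    I, p: 2D float arrays. uniform_filter computes box means, so the cost per
    pixel does not grow with r."""
    size = 2 * r + 1
    mean_I = uniform_filter(I, size)
    mean_p = uniform_filter(p, size)
    corr_Ip = uniform_filter(I * p, size)
    corr_II = uniform_filter(I * I, size)

    var_I = corr_II - mean_I * mean_I          # sigma_k^2
    cov_Ip = corr_Ip - mean_I * mean_p

    a = cov_Ip / (var_I + eps)                 # local linear coefficients
    b = mean_p - a * mean_I

    # average the coefficients of all windows covering each pixel
    mean_a = uniform_filter(a, size)
    mean_b = uniform_filter(b, size)
    return mean_a * I + mean_b

def filter_cost_volume(cost_volume, guidance, r=9, eps=1e-4):
    """Filter every cost slice, then pick the label with the lowest cost (WTA)."""
    filtered = np.stack([guided_filter(guidance, c, r, eps) for c in cost_volume])
    return filtered, np.argmin(filtered, axis=0)
```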

  21. Cost-volume Filtering

  22. Cost-volume Filtering • Zoom of the green line in the input image • Slice of the cost volume for the line (white/black/red: high/low/lowest costs) • The box filter [9] • The joint bilateral filter [6] • The guided filter [5] • Ground-truth labeling [5] K. He, J. Sun, and X. Tang, “Guided Image Filtering,” Proc. European Conf. Computer Vision, 2010. [6] K. Yoon and S. Kweon, “Adaptive Support-Weight Approach for Correspondence Search,” IEEE Trans. Pattern Analysis and Machine Intelligence, Apr. 2006. [9] F. Crow, “Summed-Area Tables for Texture Mapping,” SIGGRAPH, 1984.

  23. Applications

  24. Stereo Matching

  25. Stereo Matching • Cost computation: $C_{i,l} = (1-\alpha)\,\min\big(\lVert I_i - I'_{i-l}\rVert, \tau_1\big) + \alpha\,\min\big(\lvert \nabla_x I_i - \nabla_x I'_{i-l}\rvert, \tau_2\big)$ • $\nabla_x$: grayscale gradient in x direction • $\alpha$: balances the color and gradient terms • $\tau_1$, $\tau_2$: truncation values
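A hedged Python sketch of this truncated color-plus-gradient cost for one disparity hypothesis $l$; the images are assumed to be float RGB in [0, 1], `np.roll` ignores proper border handling, and the parameter values are illustrative placeholders rather than the paper's settings.

```python
import numpy as np

def stereo_cost_slice(left, right, l, alpha=0.9, tau1=0.03, tau2=0.01):
    """Cost C_{i,l} for disparity l: truncated color difference blended with a
    truncated x-gradient difference. left/right are HxWx3 float images in [0,1]."""
    right_shifted = np.roll(right, l, axis=1)          # I'_{i-l}: shift the right image by l

    # truncated absolute color difference (averaged over RGB)
    color_diff = np.mean(np.abs(left - right_shifted), axis=2)
    color_term = np.minimum(color_diff, tau1)

    # truncated difference of grayscale x-gradients
    grad_l = np.gradient(np.mean(left, axis=2), axis=1)
    grad_r = np.gradient(np.mean(right_shifted, axis=2), axis=1)
    grad_term = np.minimum(np.abs(grad_l - grad_r), tau2)

    return (1.0 - alpha) * color_term + alpha * grad_term
```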

  26. Stereo Matching • Cost computation: as on the previous slide • Occlusion detection: left-right consistency check; a disparity map is computed for each view, and pixels whose left and right disparities disagree are invalidated

  27. Stereo Matching • Cost computation and occlusion detection: as on the previous slides • Postprocessing: 1. Scanline-based filling: an invalidated pixel takes the lowest disparity of the spatially closest non-occluded pixels on its scanline 2. Weighted median filtering with bilateral weights $W_{i,j} = \frac{1}{K_i} \exp\!\big(-\lVert x_i - x_j \rVert^2 / \sigma_s^2\big) \exp\!\big(-\lVert I_i - I_j \rVert^2 / \sigma_c^2\big)$ • $\sigma_s$, $\sigma_c$: adjust the spatial and color similarity • $K_i$: normalization factor
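A rough sketch of the occlusion handling described above, assuming a left-right consistency check for the invalidation step and omitting the weighted median smoothing; the one-pixel tolerance is an assumption, not a value from the paper.

```python
import numpy as np

def left_right_check(disp_left, disp_right, tol=1):
    """Invalidate pixels whose left disparity disagrees with the disparity of the
    matching pixel in the right map (occlusions/mismatches). Returns a validity mask."""
    h, w = disp_left.shape
    xs = np.arange(w)[None, :].repeat(h, axis=0)
    match_x = np.clip(xs - disp_left, 0, w - 1).astype(int)
    disp_right_at_match = disp_right[np.arange(h)[:, None], match_x]
    return np.abs(disp_left - disp_right_at_match) <= tol

def scanline_fill(disp, valid):
    """Fill each invalid pixel with the lower of the nearest valid disparities
    to its left and right on the same scanline."""
    filled = disp.astype(float).copy()
    h, w = disp.shape
    for y in range(h):
        for x in np.where(~valid[y])[0]:
            candidates = []
            left = filled[y, :x][valid[y, :x]]
            if left.size:
                candidates.append(left[-1])    # nearest valid pixel to the left
            right = filled[y, x + 1:][valid[y, x + 1:]]
            if right.size:
                candidates.append(right[0])    # nearest valid pixel to the right
            if candidates:
                filled[y, x] = min(candidates)
    return filled
```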

  28. Stereo Matching • Alternative: symmetric cost aggregation • The cost aggregation can be formulated symmetrically for both input images. • Replace the 3×1 vector $I_i$ in (3) with a 6×1 vector whose entries are given by the RGB color channels of both $I_i$ and $I'_{i-l}$.

  29. Stereo Matching • Effect of postprocessing: disparity map with invalidated pixels in red; after scanline-based filling; after median filtering

  30. Optical flow • Cost computation: analogous to the stereo cost (truncated color and gradient differences), with each label $l$ being a 2D flow vector • $\nabla$: grayscale gradients in x and y direction • $\alpha$: balances the color and gradient terms • $\tau_1$, $\tau_2$: truncation values • Postprocessing: median filtering and iteration

  31. Optical flow • Upscaling: to find subpixel-accurate flow vectors, the input images are upscaled • Colors at subpixel positions are filled in by bicubic interpolation.
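A sketch of one way to realize this with OpenCV's bicubic resampling; `cv2.resize` with `INTER_CUBIC` is a standard call, but the surrounding pipeline and the `integer_flow_fn` matcher are hypothetical placeholders, not the paper's code.

```python
import cv2
import numpy as np

def subpixel_flow(frame0, frame1, integer_flow_fn, scale=2):
    """Estimate flow with 1/scale pixel precision: upscale both frames with
    bicubic interpolation, run an integer-displacement matcher on them, and
    scale the resulting vectors back down."""
    up0 = cv2.resize(frame0, None, fx=scale, fy=scale, interpolation=cv2.INTER_CUBIC)
    up1 = cv2.resize(frame1, None, fx=scale, fy=scale, interpolation=cv2.INTER_CUBIC)
    flow_up = integer_flow_fn(up0, up1)          # HxWx2 integer flow on the upscaled grid
    # vectors measured on the fine grid correspond to 1/scale original pixels
    return flow_up[::scale, ::scale] / float(scale)
```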

  32. Interactive Image Segmentation • Labels: foreground F or background B (binary labeling) • Costs are derived from foreground and background color histograms [25] • b(i): the histogram bin of pixel i • Five iterations, as in GrabCut [26] [25] S. Vicente, V. Kolmogorov, and C. Rother, “Joint Optimization of Segmentation and Appearance Models,” Proc. IEEE Int’l Conf. Computer Vision, 2009. [26] C. Rother, V. Kolmogorov, and A. Blake, “GrabCut: Interactive Foreground Extraction Using Iterated Graph Cuts,” Proc. ACM Siggraph, 2004.
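A hypothetical sketch of histogram-based costs for the two labels; it assumes the cost of assigning a pixel to F or B is the negative log-probability of its color bin under the corresponding histogram, which is a common choice but not necessarily the exact cost used in the paper.

```python
import numpy as np

def histogram_costs(image_bins, fg_hist, bg_hist, eps=1e-6):
    """Binary cost volume for segmentation. image_bins holds the color-histogram
    bin b(i) of every pixel; fg_hist / bg_hist are normalized color histograms
    estimated from user scribbles. Cost = -log probability (an assumption, see text)."""
    cost_fg = -np.log(fg_hist[image_bins] + eps)   # cost of labeling a pixel foreground
    cost_bg = -np.log(bg_hist[image_bins] + eps)   # cost of labeling a pixel background
    return np.stack([cost_fg, cost_bg])            # 2 x H x W cost volume
```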

  33. Experimental Results

  34. Experimental Results • Device: an Intel Core 2 Quad 2.4 GHz PC with an NVIDIA GeForce GTX 480 graphics card (1.5 GB memory) • Parameter settings: one parameter is chosen according to the signal-to-noise ratio of an image • 0.008 for stereo • 0.016 for optical flow • Source: Middlebury benchmark, http://vision.middlebury.edu/stereo/

  35. Stereo Matching

  36. Stereo Matching

  37. Stereo Matching

  38. Stereo Matching • Disparity maps without occlusion filling and postprocessing (invalidated pixels are black)

  39. Stereo Matching [27] A. Hosni, M. Bleyer, M. Gelautz, and C. Rhemann, “Local Stereo Matching Using Geodesic Support Weights,” Int’l Conf. Image Processing, 2009. [6] K. Yoon and S. Kweon, “Adaptive Support-Weight Approach for Correspondence Search,” PAMI, 2006. [7] C. Richardt, D. Orr, I. Davies, A. Criminisi, and N. Dodgson, “Real-Time Spatiotemporal Stereo Matching Using the Dual-Cross-Bilateral Grid,” ECCV, 2010.

  40. Stereo Matching

  41. Stereo Matching

  42. Stereo Matching • Million Disparity Estimations per second: $\mathrm{MDE/s} = (W \cdot H \cdot D \cdot \mathrm{FPS}) / 10^6$ • W: image width • H: image height • D: number of disparity levels • FPS: number of frames per second • A larger MDE/s number means a better-performing system.
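A worked example with made-up numbers (not results from the paper) to illustrate the measure:

```python
# Illustrative only: a 640x480 stereo pair, 60 disparity levels, processed at 10 fps.
W, H, D, FPS = 640, 480, 60, 10
mde_per_s = W * H * D * FPS / 1e6   # = 184.32 million disparity estimations per second
print(mde_per_s)
```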

  43. Stereo Matching

  44. Stereo Matching

  45. Stereo Matching Table 2. Rankings and run times for selected local stereo methods. Run times in the table are averaged over the four Middlebury test images. *The run time of [15] was reported before the left-right consistency check in the corresponding paper; hence, for fairness, we have multiplied the reported time by a factor of 2.

  46. Stereo Matching

  47. Optical flow

  48. Optical flow The respective average endpoint error (AEE) and average angular error (AAE) are given in parentheses (AEE/AAE).
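For reference, these two standard flow error measures are defined as below, averaged over all $N$ pixels, with estimate $(u_i, v_i)$ and ground truth $(u_i^{GT}, v_i^{GT})$.

```latex
\mathrm{AEE} = \frac{1}{N}\sum_{i} \sqrt{(u_i - u_i^{GT})^2 + (v_i - v_i^{GT})^2}
\qquad
\mathrm{AAE} = \frac{1}{N}\sum_{i} \arccos\!\left(
  \frac{u_i u_i^{GT} + v_i v_i^{GT} + 1}
       {\sqrt{u_i^2 + v_i^2 + 1}\,\sqrt{(u_i^{GT})^2 + (v_i^{GT})^2 + 1}}\right)
```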

  49. Optical flow Comparison on two sequences with thin structures, where many competitors fail to preserve flow discontinuities. (Colors in the second row from the top are boosted for better visualization.)

  50. Optical flow • Large-displacement flow • (b)-(e) and (l)-(o): Motion magnitude for different methods. • (f), (p): Our flow vectors, with the color coding as in the Middlebury benchmark. • (h)-(k): Backward warping results using the flow of different methods; the tip of the foot is correctly recovered by our method. • Occluded regions cannot be handled by any method.
