
Introduction to Computer Vision Stereo: Multiple Views and Depth Estimation

Learn about stereo vision and structure from motion in computer vision, and how multiple views can help in estimating 3D shape and depth. Explore concepts such as optical flow, shading, motion parallax, occlusion, and more.


Presentation Transcript


  1. CSE 185 Introduction to Computer Vision Stereo

  2. Multiple views • Taken at the same time or sequentially in time • stereo vision • structure from motion • optical flow

  3. Why multiple views? • Structure and depth are inherently ambiguous from single views.

  4. Why multiple views? • Structure and depth are inherently ambiguous from single views. [Figure: two scene points P1 and P2 project through the optical center to the same image point, P1’ = P2’]

  5. Shape from X • What cues help us to perceive 3D shape and depth? • Many factors • Shading • Motion • Occlusion • Focus • Texture • Shadow • Specularity • …

  6. Shading For static objects and a known light source, estimate surface normals

  7. Focus/defocus Images from the same point of view, different camera parameters. [Figure: far-focused and near-focused images; estimated depth map]

  8. Texture Estimate surface orientation

  9. Perspective effects Perspective, relative size, occlusion, and texture gradients contribute to the 3D appearance of the scene

  10. Motion Motion parallax: as the viewpoint moves side to side, objects at a distance appear to move more slowly than those close to the camera

  11. Occlusion and focus

  12. Estimating scene shape “Shape from X”: shading, texture, focus, motion… Stereo: shape from “motion” between two views. Main idea: infer the 3D shape of a scene from two (or more) images taken from different viewpoints. [Figure: a scene point projects through each optical center onto each image plane]

  13. Outline • Human stereopsis • Stereograms • Epipolar geometry and the epipolar constraint • Case example with parallel optical axes • General case with calibrated cameras

  14. Human eye Rough analogy with the human visual system: Pupil/iris – controls the amount of light passing through the lens. Retina – contains the sensor cells where the image is formed. Fovea – region with the highest concentration of cones.

  15. Human stereopsis: disparity Human eyes fixate on a point in space – they rotate so that the corresponding images form at the centers of the foveae.

  16. Human stereopsis: disparity Disparity occurs when the eyes fixate on one object; other objects appear at different visual angles

  17. Random dot stereograms Julesz 1960: Do we identify local brightness patterns before fusion (a monocular process) or after (binocular)? To test: a pair of synthetic images obtained by randomly spraying black dots on white objects

  18. Random dot stereograms

  19. Random dot stereograms 1. Create an image of suitable size, fill it with random dots, and duplicate the image. 2. Select a region in one image. 3. Shift this region horizontally by a small amount. The stereogram is complete.
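The three steps above can be sketched in a few lines of NumPy. This is an illustrative sketch, not code from the slides; the function name, image size, region, and shift are all hypothetical choices:

```python
import numpy as np

def random_dot_stereogram(height=200, width=200,
                          region=(80, 80, 120, 120), shift=6,
                          density=0.5, seed=0):
    """Build a left/right random-dot stereogram pair (illustrative sketch).

    region: (row0, col0, row1, col1) of the patch that will appear to float
    in depth; shift: its horizontal disparity in pixels.
    """
    rng = np.random.default_rng(seed)
    # Step 1: random dots, then duplicate the image.
    left = (rng.random((height, width)) < density).astype(np.uint8)
    right = left.copy()
    # Steps 2-3: select a region and shift it horizontally in one copy.
    r0, c0, r1, c1 = region
    right[r0:r1, c0 - shift:c1 - shift] = left[r0:r1, c0:c1]
    # Fill the uncovered strip with fresh random dots so it stays unmatched.
    right[r0:r1, c1 - shift:c1] = (rng.random((r1 - r0, shift)) < density)
    return left, right
```

Viewed cross-eyed (or in a stereo viewer), the shifted patch appears at a different depth than the background, even though each image alone looks like pure noise.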

  20. Random dot stereograms When viewed monocularly, they appear random; when viewed stereoscopically, we see 3D structure. Conclusion: human binocular fusion is not directly associated with the physical retinas; it must involve the central nervous system. An imaginary “cyclopean retina” combines the left and right image stimuli as a single unit. High-level scene understanding is not required for stereo

  21. Stereo photograph and viewer Invented by Sir Charles Wheatstone, 1838. Take two pictures of the same subject from two slightly different viewpoints and display them so that each eye sees only one of the images. Image from fisher-price.com

  22. http://www.johnsonshawmuseum.org

  23. http://www.johnsonshawmuseum.org

  24. Public Library, Stereoscopic Looking Room, Chicago, by Phillips, 1923

  25. Present stereo images on the screen by simply putting the right and left images in an animated GIF http://www.well.com/~jimg/stereo/stereo_list.html

  26. Autostereograms Exploit disparity as a depth cue using a single image (single-image random dot stereogram, single-image stereogram)

  27. Autostereogram Designed to create the visual illusion of a 3D scene using smooth gradients

  28. Estimating depth with stereo Stereo: shape from “motion” between two views. We’ll need to consider: information on camera pose (“calibration”) and image point correspondences. [Figure: scene point, image planes, optical centers]

  29. Stereo vision Two cameras taking simultaneous views, or a single moving camera and a static scene

  30. Camera parameters Extrinsic parameters: camera frame 1 → camera frame 2. Intrinsic parameters: image coordinates relative to the camera → pixel coordinates. • Extrinsic params: rotation matrix and translation vector • Intrinsic params: focal length, pixel sizes (mm), image center point, radial distortion parameters We’ll assume for now that these parameters are given and fixed.
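As a concrete illustration, the intrinsics can be collected into a matrix K and the extrinsics into a rotation R and translation t, applied as below (radial distortion omitted). The numeric values — focal lengths, principal point, baseline — are hypothetical placeholders, not parameters from the slides:

```python
import numpy as np

# Hypothetical intrinsic parameters: focal lengths (pixels), principal point.
fx, fy = 800.0, 800.0
cx, cy = 320.0, 240.0
K = np.array([[fx, 0.0, cx],
              [0.0, fy, cy],
              [0.0, 0.0, 1.0]])

# Hypothetical extrinsic parameters mapping camera frame 1 to camera frame 2:
# identity rotation (parallel axes) plus a small translation along x.
R = np.eye(3)
t = np.array([0.1, 0.0, 0.0])

def project(K, R, t, X):
    """Project a 3D point X (given in camera frame 1) to pixels in camera 2."""
    Xc = R @ X + t          # rigid transform into camera frame 2
    x = K @ Xc              # apply the intrinsics
    return x[:2] / x[2]     # perspective divide
```

The same `project` call with R = I and t = 0 gives the point's pixel coordinates in camera 1, which is the sense in which the two parameter sets factor the full projection.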

  31. Outline • Human stereopsis • Stereograms • Epipolar geometry and the epipolar constraint • Case example with parallel optical axes • General case with calibrated cameras

  32. Geometry for a stereo system • First, assuming parallel optical axes, known camera parameters (i.e., calibrated cameras):

  33. [Figure: world point P at depth Z; image points pl (left) and pr (right); focal length f; optical centers Ol (left) and Or (right) separated by the baseline]

  34. Geometry for a stereo system Assume parallel optical axes and known camera parameters (i.e., calibrated cameras). What is the expression for Z? Similar triangles (pl, P, pr) and (Ol, P, Or) give (B − (xl − xr)) / (Z − f) = B / Z, so Z = fB / (xl − xr), where d = xl − xr is the disparity.
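The relation Z = fB/d is a one-liner, but the units deserve a comment: with f and the disparity both in pixels, Z comes out in whatever units the baseline B is measured in. A minimal sketch (function name and argument conventions are illustrative):

```python
def depth_from_disparity(f, B, xl, xr):
    """Z = f * B / (xl - xr), the similar-triangles relation for a
    parallel-axis stereo rig.

    f: focal length in pixels; B: baseline; xl, xr: x-coordinates of the
    matched point in the left and right images (pixels). Z is returned in
    the units of B.
    """
    d = xl - xr  # disparity; positive for a point in front of both cameras
    if d <= 0:
        raise ValueError("non-positive disparity: point at or beyond infinity")
    return f * B / d
```

Note the inverse relationship: disparity is large for near points and shrinks toward zero as Z grows, which is why stereo depth resolution degrades with distance.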

  35. Depth from disparity Image I(x,y), image I′(x′,y′), and disparity map D(x,y), with (x′, y′) = (x + D(x,y), y). So if we could find the corresponding points in two images, we could estimate relative depth…

  36. Basic stereo matching algorithm • If necessary, rectify the two stereo images to transform epipolar lines into scanlines • For each pixel x in the first image • Find the corresponding epipolar scanline in the right image • Examine all pixels on the scanline and pick the best match x′ • Compute the disparity x − x′ and set depth(x) = fB/(x − x′)
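The loop above can be sketched as naive SSD block matching over rectified grayscale images. This is an unoptimized, illustrative version (function and parameter names are hypothetical); real implementations vectorize the cost computation and add the constraints discussed later:

```python
import numpy as np

def block_match(left, right, window=5, max_disp=32):
    """Naive scanline block matching: for each left pixel, pick the
    horizontal shift d minimizing the SSD between windows.

    Assumes rectified float grayscale arrays of equal shape; returns an
    integer disparity map (0 where no match is attempted near borders).
    """
    h, w = left.shape
    r = window // 2
    disp = np.zeros((h, w), dtype=int)
    for y in range(r, h - r):
        for x in range(r + max_disp, w - r):
            ref = left[y - r:y + r + 1, x - r:x + r + 1]
            # Candidate windows lie to the LEFT in the right image,
            # since disparity x - x' is positive for points in front.
            costs = [np.sum((ref - right[y - r:y + r + 1,
                                         x - d - r:x - d + r + 1]) ** 2)
                     for d in range(max_disp)]
            disp[y, x] = int(np.argmin(costs))
    return disp
```

Depth then follows per pixel as fB / disp wherever disp is positive.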

  37. Correspondence search • Slide a window along the right scanline and compare the contents of that window with the reference window in the left image • Matching cost: SSD or normalized correlation [Figure: left/right images, scanline, matching-cost curve]
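The two matching costs named on the slide can be written directly. One practical difference worth showing: normalized correlation is invariant to affine intensity changes between the views (gain and offset), while SSD is not. A sketch with illustrative names:

```python
import numpy as np

def ssd(a, b):
    """Sum of squared differences between two patches: lower is better."""
    return np.sum((a.astype(float) - b.astype(float)) ** 2)

def ncc(a, b):
    """Normalized cross-correlation in [-1, 1]: higher is better.

    Subtracting the means and dividing by the norms makes the score
    invariant to b' = alpha * b + beta with alpha > 0.
    """
    a = a.astype(float) - a.mean()
    b = b.astype(float) - b.mean()
    denom = np.linalg.norm(a) * np.linalg.norm(b)
    return float(np.sum(a * b) / denom) if denom > 0 else 0.0
```

With SSD one takes the arg-min along the scanline; with NCC the arg-max.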

  38. Correspondence search [Figure: left/right images with scanline; SSD matching cost]

  39. Correspondence search [Figure: left/right images with scanline; normalized-correlation matching cost]

  40. Effect of window size (W = 3 vs. W = 20) • Smaller window: + more detail, − more noise • Larger window: + smoother disparity maps, − less detail

  41. Failures of correspondence search Occlusions and repetition; textureless surfaces; non-Lambertian surfaces and specularities

  42. Results with window search [Figure: input data, window-based matching result, ground truth]

  43. Beyond window-based matching • So far, matches are computed independently for each point • What constraints or priors can we add?

  44. Summary • Depth from stereo: the main idea is to triangulate from corresponding image points • Epipolar geometry is defined by the two cameras • We’ve assumed known extrinsic parameters relating their poses • The epipolar constraint limits where points from one view can be imaged in the other • Makes the search for correspondences quicker • Terms: epipole, epipolar plane / lines, disparity, rectification, intrinsic/extrinsic parameters, essential matrix, baseline
