
High-Accuracy Stereo Depth Maps Using Structured Light by: D. Scharstein & R. Szeliski

Presentation Transcript


  1. High-Accuracy Stereo Depth Maps Using Structured Light by: D. Scharstein & R. Szeliski. Presented by: Ali Agha, March 02, 2009

  2. Outline • Stereo vision overview • Motivation & Contribution • Structured light & method overview • Related work • Disparity computation • Results • Conclusion • Future work

  3. STEREO VISION • When 3D information of a scene is needed

  4. Depth from Disparity • Disparity d_RL = (x_R − x_L) • (x_R − x_L) / f = b / z, so depth z = f · b / d_RL
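
As a quick illustration of the relation above, here is a minimal sketch that converts a disparity map to depth with z = f · b / d_RL. The function name, the handling of zero disparity, and the example numbers are illustrative assumptions, not taken from the paper.

```python
import numpy as np

def depth_from_disparity(disparity, f, b):
    """Depth from disparity: z = f * b / d_RL (f in pixels, b in metres)."""
    d = np.asarray(disparity, dtype=float)
    z = np.full_like(d, np.inf)        # zero disparity -> point at infinity
    valid = d > 0
    z[valid] = f * b / d[valid]
    return z

# Example: f = 800 px, b = 0.16 m, d = 32 px  ->  z = 4.0 m
print(depth_from_disparity([[32.0]], f=800.0, b=0.16))
```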

  5. Motivation of the presented paper • “A taxonomy and evaluation of dense two-frame stereo correspondence algorithms”. Intl. J. Comp. Vis., 2002. • http://www.middlebury.edu/stereo/ • Example data sets: Venus, Tsukuba

  6. Motivation of the presented paper • Need for more challenging scenes • Need for accurate ground-truth information

  7. Contributions of this work • A method for acquiring high-complexity stereo image pairs with pixel-accurate correspondence information • Does not require calibration of the light sources • Higher resolution than range sensors

  8. Process Pipeline • This method uses structured light and consists of the following stages: • Acquire all desired views under all illuminations. • Rectify the images. • Decode the light patterns at each pixel to compute correspondences. • Compute the view and illumination disparities and combine them.

  9. Structured light • Structured-light techniques rely on projecting one or more special light patterns onto a scene, usually in order to directly acquire a range map of the scene. (Image: http://en.wikipedia.org/wiki/File:1-stripesx7.svg)

  10. Structured light • A pair of cameras and one or more light projectors are used. (Image: http://en.wikipedia.org/wiki/File:1-stripesx7.svg)

  11. Related Work in Decoding light patterns J. Batlle, E. Mouaddib, and J. Salvi. Recent progress in coded structured light as a technique to solve the correspondence problem: a survey. Pat. Recog., 31(7):963–982, 1998.

  12. Related work – CODED STRUCTURED LIGHT TECHNIQUES • Posdamer & Altschuler, 1981-82-87

  13. Related work – CODED STRUCTURED LIGHT TECHNIQUES • Inokuchi, Sato and Matsuda, 1984 • 8-bit temporally binary-coded pattern projection • 8-bit temporally Gray-coded pattern projection

  14. Gray Code • Using such binary images requires ⌈log2(n)⌉ patterns to distinguish among n locations.
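
A minimal sketch of the Gray-code bookkeeping implied here: ⌈log2(n)⌉ stripe patterns distinguish n locations, and consecutive Gray codes differ in exactly one bit. The helper names are illustrative only.

```python
import math

def num_patterns(n):
    """Number of binary stripe patterns needed to distinguish n locations."""
    return math.ceil(math.log2(n))

def gray_encode(i):
    """Standard reflected binary (Gray) code: consecutive codes differ in one bit."""
    return i ^ (i >> 1)

def gray_decode(g):
    """Invert the Gray code back to the stripe index."""
    i = 0
    while g:
        i ^= g
        g >>= 1
    return i

# Example: 1024 projector columns need 10 patterns; round-trip check.
print(num_patterns(1024))                                           # 10
print(all(gray_decode(gray_encode(i)) == i for i in range(1024)))   # True
```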

  15. Decoding the light patterns • Threshold each pixel against the average of an all-white and an all-black reference image • In practice, the only reliable way is to project both the code pattern and its inverse. • For surfaces with widely varying reflection properties, use two different exposure times (0.5 and 0.1 sec.). • If the largest difference is still below a threshold, the pixel is labeled “unknown”
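
A hedged sketch of the per-bit decoding described on this slide: each code pattern is compared against the image of its inverse, the sign of the difference gives the bit, and pixels whose difference stays below a threshold are marked unknown. The threshold value, array layout, and function names are assumptions rather than the authors' exact parameters; the resulting bit string would still be Gray-decoded as in the previous sketch.

```python
import numpy as np

UNKNOWN = -1  # marker for pixels whose code could not be decoded reliably

def decode_code_images(pattern_imgs, inverse_imgs, thresh=10.0):
    """pattern_imgs / inverse_imgs: one float image per code bit, MSB first."""
    h, w = pattern_imgs[0].shape
    code = np.zeros((h, w), dtype=np.int32)
    known = np.ones((h, w), dtype=bool)
    for img, inv in zip(pattern_imgs, inverse_imgs):
        diff = img.astype(float) - inv.astype(float)
        bit = (diff > 0).astype(np.int32)     # pattern brighter than its inverse -> bit 1
        known &= np.abs(diff) >= thresh       # small difference -> unreliable pixel
        code = (code << 1) | bit
    code[~known] = UNKNOWN
    return code
```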

  16. Disparity computation • View disparities • Illumination disparities • Definitions: • Views – the images taken by the cameras • Illuminations – the structured light patterns projected onto the scene.

  17. View disparities • Assuming rectified views leads to a simple 1D search along the scanline • Practical issues: • Occlusion • Unknown code values (due to shadows or reflections) • A perfect matching code value may not exist (interpolation errors) • Several perfect matching code values may exist (limited resolution)

  18. View disparities • The first problem (partial occlusion) is unavoidable • The number of unknown code values can be reduced by using more than one illumination source • As a final consistency check, we establish the disparities d_LR and d_RL independently and cross-check them for consistency.
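
A rough sketch of the 1D scanline search just described: for each left pixel we look for a right pixel on the same scanline carrying the same decoded illumination code, and keep the disparity only when the match is unique. The sign convention (right match at x − d, d ≥ 0), the search range, and the names are assumptions; the paper also handles fractional matches by interpolation, which is omitted here.

```python
import numpy as np

UNKNOWN = -1

def view_disparities(code_L, code_R, max_disp=100):
    """code_L, code_R: (h, w) or (h, w, 2) arrays of decoded illumination codes."""
    h, w = code_L.shape[:2]
    d_LR = np.full((h, w), UNKNOWN, dtype=np.int32)
    for y in range(h):
        for x in range(w):
            c = code_L[y, x]
            if np.any(c == UNKNOWN):           # undecoded pixel: skip
                continue
            matches = [d for d in range(max_disp + 1)
                       if x - d >= 0 and np.array_equal(code_R[y, x - d], c)]
            if len(matches) == 1:              # keep unique matches only
                d_LR[y, x] = matches[0]
    return d_LR
```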

  19. View disparities • [Figure: the scene under structured-light illumination and the recovered view-disparity map]

  20. Illumination disparities • Disparity between the cameras and the illumination sources. • The difference in our case is that we can register these illumination disparities with our rectified view disparities d_LR without the need to explicitly calibrate the illumination sources (video projectors).

  21. Illumination disparities • Relationship between the left view L and illumination source 0. • Each pixel whose view disparity has been established can be considered a (homogeneous) 3D scene point S = [x, y, d, 1]^T with projective depth d = d_LR(x, y). • The pixel's illumination disparity (u_0L, v_0L) satisfies P = M_0L S, where P = [u_0L, v_0L, 1]^T and M_0L is a 3×4 projection matrix.

  22. Practical Issues • A small number of pixels with large disparity errors can strongly affect the least-squares fit. • Outlier detection by iterating the above process. • Only those pixels with low residual errors are selected as input to the next iteration.
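
A hedged sketch of the fit with outlier rejection described on these two slides, assuming a standard DLT formulation for the 3×4 matrix M_0L that maps S = [x, y, d, 1]^T to P ~ [u_0L, v_0L, 1]^T. The number of iterations and the fraction of low-residual points kept in each round are assumed values, not the authors'.

```python
import numpy as np

def fit_projection(S, uv, iters=3, keep_frac=0.8):
    """S: (N, 4) homogeneous points [x, y, d, 1]; uv: (N, 2) observed (u0L, v0L)."""
    idx = np.arange(len(S))
    M = None
    for _ in range(iters):
        A = []
        for (x, y, d, w), (u, v) in zip(S[idx], uv[idx]):
            A.append([x, y, d, w, 0, 0, 0, 0, -u*x, -u*y, -u*d, -u*w])
            A.append([0, 0, 0, 0, x, y, d, w, -v*x, -v*y, -v*d, -v*w])
        _, _, Vt = np.linalg.svd(np.asarray(A))
        M = Vt[-1].reshape(3, 4)
        # keep only the points with the smallest reprojection residuals
        p = (M @ S.T).T
        proj = p[:, :2] / p[:, 2:3]
        res = np.linalg.norm(proj - uv, axis=1)
        idx = np.argsort(res)[: int(keep_frac * len(S))]
    return M
```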

  23. Illumination disparities • Given the projection matrix M_0L, we can now solve the equation for d_LR at all pixels • Note that these disparities are available for all points illuminated by source 0, even those that are not visible from the right camera.
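
One way to read this step, assuming the decoded projector column coordinate u_0L is used: with M_0L known, the projection equation gives one linear equation in the unknown disparity d at each pixel, which can be solved in closed form. The function below is an illustrative sketch, not the authors' code.

```python
import numpy as np

def disparity_from_code(M, x, y, u):
    """Solve u = (m00*x + m01*y + m02*d + m03) / (m20*x + m21*y + m22*d + m23) for d."""
    m = np.asarray(M, dtype=float)
    num = m[0, 0]*x + m[0, 1]*y + m[0, 3] - u * (m[2, 0]*x + m[2, 1]*y + m[2, 3])
    den = u * m[2, 2] - m[0, 2]
    return num / den if abs(den) > 1e-12 else float("nan")
```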

  24. Combining the disparity estimates • Remaining task is to combine the 2N + 2 disparity maps. • Create combined maps for each of L and R separately • Whenever there is a majority of values within close range, we use the average • otherwise, the pixel is labeled unknown. • L and R maps are checked for consistency: for unoccluded pixels, d_LR(x, y) = −d_RL(x + d_LR(x, y), y)
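
A sketch of the combination and cross-check steps, assuming NaN marks unknown pixels: per pixel, if a majority of the candidate disparities agree within a small tolerance we keep their average, otherwise the pixel stays unknown; the final cross-check enforces d_LR(x, y) = −d_RL(x + d_LR(x, y), y) up to a tolerance. The tolerances and loop structure are assumptions.

```python
import numpy as np

def combine(maps, tol=1.0):
    """maps: list of (h, w) disparity arrays, NaN where unknown."""
    stack = np.stack(maps).astype(float)
    h, w = stack.shape[1:]
    out = np.full((h, w), np.nan)
    for y in range(h):
        for x in range(w):
            vals = stack[:, y, x]
            vals = vals[~np.isnan(vals)]
            if vals.size == 0:
                continue
            med = np.median(vals)
            close = vals[np.abs(vals - med) <= tol]
            if 2 * close.size > vals.size:        # a majority within close range
                out[y, x] = close.mean()
    return out

def cross_check(d_LR, d_RL, tol=1.0):
    """Keep d_LR(x, y) only where it is consistent with -d_RL(x + d_LR(x, y), y)."""
    h, w = d_LR.shape
    ok = np.full((h, w), np.nan)
    for y in range(h):
        for x in range(w):
            d = d_LR[y, x]
            if np.isnan(d):
                continue
            xr = int(round(x + d))
            if 0 <= xr < w and not np.isnan(d_RL[y, xr]) and abs(d + d_RL[y, xr]) <= tol:
                ok[y, x] = d
    return ok
```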

  25. Combined disparity • Most stereo implementations work with much smaller image sizes. So, we downsample the images and disparity maps to quarter size (460 × 384). • Note that for the downsampled images, we now have disparities with quarter-pixel accuracy.
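
A simple illustration of the downsampling step: averaging 4×4 blocks of the full-resolution integer disparities and dividing by 4 yields quarter-size maps with quarter-pixel precision. The exact resampling filter and the handling of unknown pixels are assumptions, not taken from the paper.

```python
import numpy as np

def downsample_disparity(d, factor=4):
    """Average factor x factor blocks and rescale disparities to the new image size."""
    h, w = d.shape
    h2, w2 = h // factor, w // factor
    blocks = d[:h2 * factor, :w2 * factor].reshape(h2, factor, w2, factor)
    return blocks.mean(axis=(1, 3)) / factor   # disparities scale with the image width
```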

  26. Unknown Disparities • A remaining issue is that of holes, i.e., unknown disparity values • Small holes can be filled by interpolation • Large holes may remain in areas where no illumination codes were available to begin with. • Two main sources: • surfaces that show very low reflection • areas that are shadowed under all illuminations.
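
A minimal hole-filling sketch for the small holes: unknown pixels (NaN) whose neighbourhood contains enough known disparities are filled with the neighbourhood median, while large holes, where few neighbours are known, are left untouched as the slide says. The window size and the "enough" fraction are assumed parameters.

```python
import numpy as np

def fill_small_holes(d, win=5, min_known=0.5):
    """d: (h, w) disparity map with NaN where unknown."""
    out = d.copy()
    h, w = d.shape
    r = win // 2
    for y in range(h):
        for x in range(w):
            if not np.isnan(d[y, x]):
                continue
            patch = d[max(0, y - r):y + r + 1, max(0, x - r):x + r + 1]
            known = patch[~np.isnan(patch)]
            if known.size >= min_known * patch.size:   # enough support: fill
                out[y, x] = np.median(known)
    return out
```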

  27. Results • Two different scenes, Cones and Teddy.

  28. Experiments • The experimental setup consists of a single digital camera (Canon G1) translating on a linear stage, and one or two light projectors illuminating the scene from different directions.

  29. Results

  30. Verification • To verify that the stereo data sets are useful for evaluating stereo matching algorithms, several of the algorithms from the Middlebury Stereo Page have been run on the new images.

  31. Conclusion • A new methodology to acquire highly precise and reliable ground-truth disparity measurements • Camera-projector disparities, which can be used as an auxiliary source of information to increase the reliability of correspondences and to fill in missing data.

  32. Considerations for Future work • Exploiting the method in navigation • The field of view is limited by the range of the light projector • Investigate the number of projected patterns, which directly affects the speed of the method • Operation in daylight or in dark places • Invisible light

  33. Thank you! Questions?

  36. Related work – CODED STRUCTURED LIGHT TECHNIQUES • Sato, Yamamoto and Inokuchi, 1986-87 • Proposed to use a Liquid Crystal Device (LCD), which allows an increased number of columns to be projected with high accuracy. • The system also improves the coding speed compared with a slide projector, since the LCD can be electronically controlled.

  37. If an object has high textural contrast or any highly reflective surface regions, then some pattern segmentation errors can be produced. – Solution? • A problem of a light projector is sometimes the heat it irradiates onto the scene

  38. Related work – CODED STRUCTURED LIGHT TECHNIQUES • Hattori and Sato, 1995 • Replace the light projector with a semiconductor laser, which gives high-power illumination with low heat irradiation. • The proposed system is named Cubiscope. [Figure: The Cubiscope system]

  39. Related work – Carrihill-Hummel • Look at the notes

  40. Related work – Boyer-Kak • Colour-coded patterns

  41. Related work – Le Moigne-Waxman • Non-coded grid patterns

  42. Related work – Morita-Yajima-Sakata

  43. Related work – Vuylsteke-Oosterlinck
