
Presentation Transcript


  1. Introduction
  • 3D scene flow is the 3D motion field of points in the world; structure is the depth of the scene.
  • Motivation of our work: numerous applications, including intelligent robots, human-computer interfaces, surveillance systems, dynamic rendering, and dynamic scene interpretation.
  • Challenges: absence of correspondences, image noise, structure ambiguities, occlusion, etc.

  2. System Block Diagram
  Cameras 1…N → Image Sequences 1…N → Optical Flow (per sequence) → 3D Affine Model (with stereo and regularization constraints) → 3D Scene Flow, 3D Correspondences, and Dense Scene Structure.

  3. Multiple Camera Geometry
  • A set of N cameras provides N images. A 3D point in the world is projected to an image point in each view by the camera projection relation.
  • Normally, one pair is used as the basic stereo pair. All cameras are pre-calibrated.
  • Given an image point and its disparity, we can back-project it to the 3D world.
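The back-projection step can be sketched for a rectified stereo pair, assuming the standard pinhole relation Z = f*B/d; the calibration numbers below are illustrative placeholders, not values from the slides.

```python
import numpy as np

# Hypothetical calibration for the basic stereo pair (illustrative values only).
FOCAL_PX = 500.0        # focal length in pixels
BASELINE_M = 0.12       # stereo baseline in metres
CX, CY = 160.0, 120.0   # principal point

def back_project(u, v, disparity):
    """Back-project pixel (u, v) with known disparity to 3D camera coordinates."""
    z = FOCAL_PX * BASELINE_M / disparity   # depth from the stereo relation Z = f*B/d
    x = (u - CX) * z / FOCAL_PX
    y = (v - CY) * z / FOCAL_PX
    return np.array([x, y, z])

point = back_project(260.0, 120.0, 25.0)
```

Once a pixel is lifted to 3D this way, the same point can be reprojected into the other calibrated views to enforce the stereo constraints described later.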

  4. Local Motion Model Selection
  [Figure: 3D affine motion of a local region in camera i, between frame t and frame t+1]

  5. Local Motion Model Selection
  • To avoid overfitting and ensure convergence in each local region, we assume the motion over S consecutive frames is similar over time, differing only by a scaling factor.
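One plausible reading of this scaled affine model (the slides do not give the exact parameterization) is that frame k receives a fraction s of the full affine displacement of the region:

```python
import numpy as np

# Sketch of a local 3D affine motion model with a per-frame scaling factor.
# A (3x3) and T (3,) are the affine parameters of one local region; s scales
# the displacement so that consecutive frames share one model. The
# parameterization below is an assumption for illustration.
def apply_affine_motion(X, A, T, s):
    """Move 3D points X (N,3) by a scaled affine motion: X' = X + s*(A @ X + T - X)."""
    displacement = (A @ X.T).T + T - X   # full per-point displacement field
    return X + s * displacement

A = np.array([[1.0, 0.01, 0.0],
              [0.0, 1.0,  0.0],
              [0.0, 0.0,  1.0]])
T = np.array([0.1, 0.0, 0.0])
X = np.array([[1.0, 2.0, 3.0]])
X_half = apply_affine_motion(X, A, T, s=0.5)   # half of a full frame-to-frame step
```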

  6. Motion Model Fitting
  • Eliminate translation unknowns to avoid trivial solutions.
  • The remaining unknowns in each local region are estimated by non-linear model fitting using the Levenberg-Marquardt (LM) algorithm.
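A toy Levenberg-Marquardt loop in numpy illustrates the damped-least-squares idea behind this fitting step; it fits a 2D affine map to point correspondences as a stand-in, not the authors' actual EOF.

```python
import numpy as np

def residuals(theta, P, Q):
    """Residual of the toy model q = A p + t, with theta = [A.ravel(), t]."""
    A = theta[:4].reshape(2, 2)
    t = theta[4:]
    return ((P @ A.T + t) - Q).ravel()

def numeric_jacobian(theta, P, Q, eps=1e-6):
    r0 = residuals(theta, P, Q)
    J = np.zeros((r0.size, theta.size))
    for i in range(theta.size):
        d = np.zeros_like(theta); d[i] = eps
        J[:, i] = (residuals(theta + d, P, Q) - r0) / eps
    return J

def levenberg_marquardt(theta, P, Q, lam=1e-3, iters=50):
    for _ in range(iters):
        r = residuals(theta, P, Q)
        J = numeric_jacobian(theta, P, Q)
        H = J.T @ J + lam * np.eye(theta.size)   # damped normal equations
        step = np.linalg.solve(H, -J.T @ r)
        if np.linalg.norm(step) < 1e-10:
            break
        new_theta = theta + step
        if np.sum(residuals(new_theta, P, Q) ** 2) < np.sum(r ** 2):
            theta, lam = new_theta, lam * 0.5    # accept step, relax damping
        else:
            lam *= 10.0                          # reject step, increase damping
    return theta

rng = np.random.default_rng(0)
P = rng.normal(size=(20, 2))
A_true = np.array([[1.1, 0.2], [-0.1, 0.9]]); t_true = np.array([0.3, -0.2])
Q = P @ A_true.T + t_true
theta = levenberg_marquardt(np.array([1.0, 0.0, 0.0, 1.0, 0.0, 0.0]), P, Q)
```

The adaptive damping factor is what distinguishes LM from plain Gauss-Newton: large damping behaves like gradient descent, small damping like Gauss-Newton.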

  7. Available Local Constraints

  8. Constraint Discussion
  • The EOF function is defined from all the available constraints.
  • Optical flow constraints: the projected 2D motion of the 3D affine motion should be compatible with the optical flow.
  • Stereo constraints: the projections of the same 3D scene point onto different image planes should have similar intensity patterns. Cross-correlation is used to measure this similarity.
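A common form of cross-correlation for this purpose is zero-mean normalized cross-correlation; a minimal sketch, with patch extraction omitted:

```python
import numpy as np

# Zero-mean normalized cross-correlation between two intensity patches,
# the kind of similarity score the stereo constraint relies on.
def ncc(patch_a, patch_b):
    a = patch_a - patch_a.mean()
    b = patch_b - patch_b.mean()
    denom = np.sqrt((a * a).sum() * (b * b).sum())
    return float((a * b).sum() / denom)

p = np.array([[1.0, 2.0], [3.0, 4.0]])
score_same = ncc(p, p)            # identical patches correlate perfectly
score_scaled = ncc(p, 2 * p + 5)  # NCC is invariant to gain and bias changes
```

The gain/bias invariance is what makes NCC attractive across cameras with different exposure or response curves.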

  9. EOF Function
  • A 3D scene point is projected onto the image planes of the N cameras; the intensity patterns around the projected locations should be similar.

  10. EOF Function
  • The EOF used in local model fitting combines these constraints; the LM algorithm is then used to minimize the EOF function.

  11. Regularization Constraints
  • To avoid overfitting, a penalty on large motion is added to the EOF function and applied in every iteration.
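How such a motion penalty enters the objective can be sketched as follows; the weight lambda_reg is an illustrative assumption, since the slides do not give its value:

```python
import numpy as np

# Sketch of adding a large-motion penalty to the data term of an EOF-style
# objective. lambda_reg is a hypothetical weight for illustration.
def regularized_eof(data_residuals, motion_params, lambda_reg=0.1):
    data_term = np.sum(data_residuals ** 2)
    penalty = lambda_reg * np.sum(motion_params ** 2)  # discourages large motion
    return data_term + penalty

value = regularized_eof(np.array([0.1, -0.2]), np.array([1.0, 0.5, 0.0]))
```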

  12. Initial Guesses
  • The unknown vector needs to be initialized; we assume small motion between two adjacent frames.
  • The initial structure (depth) values can be computed by a stereo algorithm.

  13. Complete Recursive Algorithm
  1. Initialize the unknown vector.
  2. On the first pass, carry out affine model fitting in each local region using the LM algorithm without the smoothness constraint; on subsequent passes, add the smoothness constraint to the EOF function and repeat the per-region affine model fitting.
  3. If the regularization constraints fall below a threshold, or the maximum number of iterations has been exceeded, end the algorithm; else go to step 2.
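The control flow of this recursive algorithm can be sketched with stubbed-out fitting steps; fit_without_smoothness and fit_with_smoothness are placeholders for the per-region LM fits, not the authors' functions.

```python
# Control-flow sketch of the recursive algorithm; the fitting functions are
# dummies that just shrink the parameters, standing in for per-region LM fits.
MAX_ITERATIONS = 20
THRESHOLD = 1e-3

def fit_without_smoothness(params):
    return [0.9 * p for p in params]   # placeholder: LM fit, smoothness off

def fit_with_smoothness(params):
    return [0.9 * p for p in params]   # placeholder: LM fit plus smoothness term

def recursive_recovery(params):
    first_pass = True
    for _ in range(MAX_ITERATIONS):
        if first_pass:
            params = fit_without_smoothness(params)   # step 2, first pass
            first_pass = False
        else:
            params = fit_with_smoothness(params)      # step 2, later passes
        penalty = sum(p * p for p in params)          # stand-in regularization measure
        if penalty < THRESHOLD:                       # step 3 stopping test
            break
    return params

final = recursive_recovery([1.0, 1.0])
```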

  14. Integrated 3D Scene Flow and Structure Recovery Experiments on Synthetic Data

  15. Integrated 3D Scene Flow and Structure Recovered Motion Fields

  16. Integrated 3D Scene Flow and Structure Ground Truth Validation

  17. Integrated 3D Scene Flow and Structure Experiments on Real Data

  18. Integrated 3D Scene Flow and Structure Recovered Motion Fields

  19. Experimental Results of Rule-Based Stereo Segmentation Map Top View Left View Right View

  20. Experimental Results of Rule-Based Stereo Initial Sparse Disparity Map Result After Applied Rule 1 and 2

  21. Experimental Results of Rule-Based Stereo Result by Using Our Method Result by Using A Direct Method

  22. Experimental Results of Rule-Based Stereo Confidence Map Occlusion Map

  23. Experimental Results of Sequential Formulation • Sample input images (only reference views are shown). Time t Time t+1

  24. Experimental Results of Sequential Formulation • Disparity results. Reference View Disparity Result

  25. Experimental Results of Sequential Formulation • Scene flow results: z motion of the scene flow; x-y projection of the scene flow.

  26. Experimental Results of Integrated Formulation • Disparity results. Reference View Disparity Result

  27. Experimental Results of Integrated Formulation • Scene flow results: z motion of the scene flow; x-y projection of the scene flow.

  28. Scheme Overview
  • Local motion analysis module: even segmentation of the 2D image sequence, followed by local nonrigid motion tracking in each region.
  • Global motion analysis module: global regularization under global constraints.
  • Outputs: structure, nonrigid motion, and 3D correspondences.

  29. Local Affine Motion Model
  • The affine motion model is assumed to remain the same for a short period of time.
  • A scaling factor is incorporated in order to compensate for possible temporal deviations.

  30. Local EOF Function
  [Figure: reference frame (i) and frame (i+1)]
  • The Levenberg-Marquardt method is used to perform the EOF minimization.
  • Unknowns include the affine parameters and the scaling factors.

  31. Cloud Image Acquisition
  • GOES-8 and GOES-9 are focused on clouds.
  • GOES-9 provides one view approximately every minute; GOES-8 provides one view approximately every 15 minutes.
  • Both GOES-8 and GOES-9 have five multi-spectral channels.

  32. Experiments
  • Experiments were performed on the GOES image sequences of Hurricane Luis, from 09-06-95 at 1023 UTC to 09-06-95 at 2226 UTC.

  33. Experiments (cont.) • Although the initial mean errors are very large, they decrease very quickly after the global fluid constraints are applied. Stable results are achieved at the end of the iterations.

  34. Experiments on Simulation Images

  35. Results Validation

  36. Experiments on Real Images

  37. Reconstruction Results
  [Videos: JeabMin_Tracking, Jeab_render, Lin, Lin_render, Qian, Qian_render, Ye, Ye_render]

  38. Results Validation (mean errors: 0.47006 and 0.527872)

  39. Wave Tank Experiment: Experimental Setup
  • A stereoscopic camera was used to record video sequences of ice forming in the CRREL wave tank.
  • Camera details: 15 fps B/W images at 320x240 pixel resolution; 12 cm baseline with 255-pixel focal length; mounted on a platform ~0.8 m above the surface.
  • Multiple film segments were captured at various stages of ice formation.
  • Several marker types (buoys, sprinkles) were placed on the surface at various times.

  40. Wave Tank Results Experiments Performed • Visualization via Anaglyphs • Ice Bucket – 3D images of small ice surfaces • Wave Tank - 3D images of ice in CRREL wave tank • Analysis • Ice Bucket - Surface reconstruction of bench-top ice • Wave Tank - Surface reconstruction of ice in CRREL wave tank

  41. Visualizations: Steps to Creating an Anaglyph
  • Separate the color channels (RGB) of the left and right images.
  • For each pixel in the anaglyph: take the Red value from the left image; take the Green and Blue values from the right image.
  • View the constructed image with filtered glasses.
  [Diagram: (R1, G1, B1) from the left image and (R2, G2, B2) from the right image combine into the anaglyph pixel (R1, G2, B2)]
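The per-pixel recipe above translates directly to array operations; left_rgb and right_rgb are assumed to be H x W x 3 RGB arrays.

```python
import numpy as np

# Build an anaglyph: the red channel comes from the left view, the green and
# blue channels from the right view.
def make_anaglyph(left_rgb, right_rgb):
    anaglyph = right_rgb.copy()            # start with G2, B2 (and R2) from the right
    anaglyph[..., 0] = left_rgb[..., 0]    # overwrite red with R1 from the left
    return anaglyph

left = np.zeros((2, 2, 3), dtype=np.uint8);  left[..., 0] = 200
right = np.zeros((2, 2, 3), dtype=np.uint8); right[..., 1] = 90; right[..., 2] = 40
result = make_anaglyph(left, right)
```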

  42. Visualizations Ice Bucket Anaglyphs Ice pieces in small bucket Camera ~0.4 m from surface

  43. Visualizations Wave Tank Anaglyphs • Wave tank motion • Surface mostly solid • Frames pre-aligned

  44. Pre-study Examples Without calibration balls With calibration balls

  45. Stereo Analysis: Ice Bucket Experiment
  • Photographs taken in the lab of ice in a shallow bucket, under ambient lighting, with a stereo camera.
  • Correspondences determined manually: matching points were hand selected.
  • Determining matches in specular areas is still difficult.

  46. Stereo Results: Nearest Neighbor Surface
  • Depths calculated at the given correspondence points.
  • All other points assigned the depth of the nearest known point.
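A minimal sketch of the nearest-neighbor surface just described: every grid point takes the depth of the closest correspondence point.

```python
import numpy as np

# Nearest-neighbor depth surface over a grid_h x grid_w pixel grid, given
# sparse (x, y) correspondence locations and their depths.
def nearest_neighbor_surface(points, depths, grid_h, grid_w):
    ys, xs = np.mgrid[0:grid_h, 0:grid_w]
    grid = np.stack([xs.ravel(), ys.ravel()], axis=1).astype(float)
    d2 = ((grid[:, None, :] - points[None, :, :]) ** 2).sum(axis=2)
    nearest = d2.argmin(axis=1)            # index of the closest known point
    return depths[nearest].reshape(grid_h, grid_w)

pts = np.array([[0.0, 0.0], [3.0, 3.0]])   # (x, y) correspondence locations
dep = np.array([1.0, 5.0])                 # depths at those points
surface = nearest_neighbor_surface(pts, dep, 4, 4)
```

The result is a piecewise-constant (Voronoi-cell) surface, which is why the thin-plate-spline variant on the next slide gives a visibly smoother reconstruction.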

  47. Stereo Results: Thin Plate Spline Surface
  • Depths calculated at the given correspondence points.
  • All other points interpolated by the thin plate spline.

  48. Current Results: Wave Tank
  • Photographs taken at the CRREL wave tank; no special lighting used; camera mounted above the tank, facing down.
  • Initial correspondences determined manually: matching points were hand selected.
  • Tank walls and the camera support provide context.

  49. Current Results: Wave Tank (Thin Plate Spline Surface)
  • Depths calculated at the given correspondence points.
  • All other points interpolated from the smoothing spline.

  50. Stereo Analysis Algorithm: Thin Plate Spline Surface with Iterative Warping
  1. Manually determine a set of correspondences.
  2. Generate a disparity surface using thin plate splines.
  3. Warp the left image to the right image via the disparity surface.
  4. Fill in any gaps in the warped image.
  5. Obtain dense stereo between the right and warped left images.
  6. Update the disparity surface from the calculated dense stereo.
  7. Iterate back to step 3 until the two images converge.
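Step 2 above, fitting a surface through sparse correspondences with thin plate splines, can be sketched in numpy. This is the standard TPS interpolant f(x, y) = a0 + a1*x + a2*y + sum_i w_i * U(r_i) with U(r) = r^2 log r, not the authors' exact implementation.

```python
import numpy as np

def _U(r):
    """TPS radial basis U(r) = r^2 log r, with U(0) = 0."""
    with np.errstate(divide="ignore", invalid="ignore"):
        out = r * r * np.log(r)
    return np.where(r > 0, out, 0.0)

def fit_tps(points, values):
    """Solve the TPS interpolation system for radial weights w and affine part a."""
    n = len(points)
    r = np.linalg.norm(points[:, None, :] - points[None, :, :], axis=2)
    K = _U(r)
    P = np.hstack([np.ones((n, 1)), points])   # [1, x, y] polynomial block
    A = np.zeros((n + 3, n + 3))
    A[:n, :n] = K; A[:n, n:] = P; A[n:, :n] = P.T
    b = np.concatenate([values, np.zeros(3)])  # side conditions: P^T w = 0
    coeffs = np.linalg.solve(A, b)
    return coeffs[:n], coeffs[n:]

def eval_tps(points, w, a, query):
    """Evaluate the fitted spline at query (m, 2) locations."""
    r = np.linalg.norm(query[:, None, :] - points[None, :, :], axis=2)
    return _U(r) @ w + a[0] + query @ a[1:]

pts = np.array([[0.0, 0.0], [1.0, 0.0], [0.0, 1.0], [1.0, 1.0]])
depths = np.array([0.0, 1.0, 1.0, 2.0])
w, a = fit_tps(pts, depths)
fitted = eval_tps(pts, w, a, pts)   # a TPS reproduces the control depths exactly
```

Evaluating the fitted spline on the full pixel grid yields the dense disparity surface that the warping loop in steps 3-7 then refines.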
