
What Does the Scene Look Like From a Scene Point?

M. Irani, T. Hassner, and P. Anandan. ECCV 2002.

Donald Tanguay. August 7, 2002.


Overview

  • Categorization of novel view synthesis

  • Outline of approach

  • Planar parallax formulation

  • Synthesizing the virtual view

  • Practical simplification

  • Results

  • Assessment


Novel View Synthesis

Breakdown into 3 categories:

  • 3D reconstruction

  • View transfer

  • Sampling methods


3D Reconstruction

  • Fully reconstruct scene, then render view

  • Geometric error criteria do not translate well to errors in novel view

  • Reconstruction and rendering occur in different coordinate systems

  • Problems amplified with novel viewpoints significantly different from real cameras


“View Transfer”

  • For example: 2 images, dense correspondence, and trifocal tensor

  • Avoids reconstruction

  • Errors in correspondence

  • Synthesis uses a forward-warping step, which leaves holes at surface discontinuities that must be filled

  • Problems amplified by severe changes in viewpoint


Sampling Methods

  • E.g., lightfield and lumigraph

  • Avoid reconstruction and correspondence

  • Require very large sampling of view-space

  • Data acquisition is problematic

  • Space-time costs are impractical


Features of Their Method

  • Avoids reconstruction and correspondence

  • Backward (“inverse”) warping avoids holes

  • Optimizes errors in coordinate system of novel view

  • Handles significantly different viewpoints

  • Small number of input images (~10)


Typical Scenario

Choose a scene point V from which to look.


Imaging Geometry

The black point in each image is the virtual epipole (the image of the selected center of projection V).


Color Consistency Test

If the projections were aligned, matching would determine the correct color.

  • However:

  • Only one correspondence is known – the virtual epipole

  • All other correspondences are warped because the lines are in different coordinate systems.


Overview of Approach

  • Choose virtual viewpoint V (a scene point)

  • For each pixel in the virtual image:

    • Calculate the line of sight L

    • Map images of L into a common coordinate system

    • Stack the colorings of L for comparison

    • Select the first consistent color as the color of the pixel


Mapping onto a Common Coordinate System

One camera is selected as the reference camera R. Projections of L in all other cameras will be mapped into R's image.


Imaging Camera 1

Transform the line in C1 into R by the homography induced by the ground plane.
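To make this step concrete, here is a minimal sketch (not the authors' code) of warping a camera's image into R's frame, assuming the plane-induced 3×3 homography H_i is already available, e.g., estimated from four or more ground-plane point correspondences:

```python
import cv2
import numpy as np

# Minimal sketch: warp camera Ci's image into reference camera R's frame via
# the ground-plane homography H_i (assumed known, e.g., estimated with
# cv2.findHomography from ground-plane point correspondences).
def align_to_reference(image_i: np.ndarray, H_i: np.ndarray,
                       ref_height: int, ref_width: int) -> np.ndarray:
    # True ground-plane pixels land on their R positions; off-plane pixels
    # are displaced (the planar-parallax residual the method exploits).
    return cv2.warpPerspective(image_i, H_i, (ref_width, ref_height))
```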


Imaging Camera 3

Geometrically, the homography displaces each pixel in Ci as though the corresponding 3D point were on the ground plane. The “piercing point” always maps to the same point in R.


Pencil of Lines in Reference Camera

After plane alignment, the lines in the reference camera fan from the imaged piercing point to the virtual epipoles.


Projective Geometry Review

  • Homography:

    • A.k.a. collineation or projective transformation

    • In P2: a 3×3 matrix with 8 degrees of freedom

  • Points and lines:

    • Point x lies on line l ⇔ xᵀl = 0

    • Intersection of lines l and m is the point p = l × m

    • Line joining points p and q is the line l = p × q (checked numerically below)
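As a quick numeric check of these identities (illustration only, using homogeneous coordinates in NumPy):

```python
import numpy as np

p = np.array([1.0, 2.0, 1.0])    # the point (1, 2)
q = np.array([3.0, 4.0, 1.0])    # the point (3, 4)

l = np.cross(p, q)               # line joining p and q
assert abs(p @ l) < 1e-9 and abs(q @ l) < 1e-9   # both points lie on l

m = np.array([1.0, 0.0, -2.0])   # the line x = 2
x = np.cross(l, m)               # intersection of l and m
print(x / x[2])                  # [2. 3. 1.], i.e., the point (2, 3)
```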


Line Configuration

In R’s image plane, what is the relationship between the blue and red lines?


Line Alignment


Given the real epipoles ei and the virtual epipoles vi: for any axis point pV, Mi is the projective transformation that brings each line li into alignment with lR.
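One way to realize such a map in code: a 1D projective transformation of a line is a 2×2 homogeneous matrix with three degrees of freedom, so three point correspondences along the two lines (e.g., the piercing point, the virtual epipole, and the real epipole) determine it. A minimal sketch under an assumed scalar parameterization of each line (not the paper's notation):

```python
import numpy as np

def line_homography_1d(src_t, dst_t):
    """2x2 map M with M [t,1]^T ~ [s,1]^T for three correspondences t -> s."""
    # Each correspondence gives one linear constraint: (a t + b) - s (c t + d) = 0.
    rows = [[t, 1.0, -s * t, -s] for t, s in zip(src_t, dst_t)]
    _, _, Vt = np.linalg.svd(np.array(rows))
    a, b, c, d = Vt[-1]                      # null-space solution, up to scale
    return np.array([[a, b], [c, d]])

def apply_1d(M, t):
    u, v = M @ np.array([t, 1.0])
    return u / v

M = line_homography_1d([0.0, 1.0, 3.0], [0.0, 2.0, 4.0])
print(apply_1d(M, 1.0))                      # 2.0, as prescribed
```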


Virtual View

Hsyn is the homography between the synthesized view and the reference view R.

  • Position is fixed by the virtual epipoles

  • Free parameters (can be specified in Hsyn; one possible composition is sketched after this list):

    • Orientation (look direction)

    • Intrinsic parameters (e.g., zoom)
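For intuition, a hedged sketch (not the paper's construction): a generic plane-induced homography between two views has the form H = K2 (R - t n^T / d) K1^-1, where n, d define the plane and t is the translation between the camera centers. With the virtual camera's position fixed by the epipoles, the remaining freedom in Hsyn corresponds to choosing the rotation (look direction) and the intrinsics (zoom). All inputs below are assumptions:

```python
import numpy as np

# Generic plane-induced homography H = K2 (R - t n^T / d) inv(K1).
# All arguments are assumed inputs, not quantities recovered by the paper.
def plane_induced_homography(K2, R, t, n, d, K1):
    return K2 @ (R - np.outer(t, n) / d) @ np.linalg.inv(K1)
```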


Virtual Epipoles

In an uncalibrated setting, the position of the virtual camera can be specified in several ways:

  • Manually pinpoint the same scene point in all cameras.

  • Pinpoint it in two images and geometrically infer it in the others using “trifocal constraints.”

  • Pinpoint it in one image and use correlation techniques to find its correspondences in the others.



Algorithm Outline

For each pixel p in the synthesized image (a code sketch follows this list):

  • Find the imaged piercing point pV = Hsyn·p.

  • Align all imaged lines of sight using the line-to-line transformations Mi.

  • Find the first color-consistent column.

  • Assign pixel this color.
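A minimal sketch of steps 1-2 (hypothetical helper names and shapes, not the authors' implementation):

```python
import numpy as np

def imaged_piercing_point(H_syn: np.ndarray, p) -> np.ndarray:
    """Step 1: p_V = Hsyn . p, dehomogenized to pixel coordinates in R."""
    x, y, w = H_syn @ np.array([p[0], p[1], 1.0])
    return np.array([x / w, y / w])

def sample_line(image: np.ndarray, a: np.ndarray, b: np.ndarray,
                num: int = 100) -> np.ndarray:
    """Step 2 helper: nearest-neighbor colors along the segment a -> b."""
    ts = np.linspace(0.0, 1.0, num)[:, None]
    pts = np.rint(a * (1.0 - ts) + b * ts).astype(int)
    ys = pts[:, 1].clip(0, image.shape[0] - 1)
    xs = pts[:, 0].clip(0, image.shape[1] - 1)
    return image[ys, xs]                     # (num, 3) colors along the line
```

Sampling each aligned line from the piercing point toward its virtual epipole produces the stacked colorings compared in steps 3-4.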


Color Consistency

  • Assume Lambertian objects.

  • A is an (n+1)×3 matrix holding one column of colors in YIQ color space.

  • λ is the maximal eigenvalue of the covariance matrix of A.

  • Select the first column with λ under a threshold.

  • Paint with the median color of that column (sketched in code below).
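A runnable sketch of the test (assumed data layout, not the authors' code): colorings[k] holds the (n+1)×3 stack of YIQ colors observed at the k-th position along the line of sight, ordered near-to-far from the ground plane:

```python
import numpy as np

def first_consistent_color(colorings: np.ndarray, threshold: float):
    """colorings: (num_positions, n+1, 3). Returns a YIQ triple or None."""
    for column in colorings:                 # scan near-to-far
        cov = np.cov(column, rowvar=False)   # 3x3 covariance of the colors
        lam = np.linalg.eigvalsh(cov)[-1]    # maximal eigenvalue
        if lam < threshold:
            return np.median(column, axis=0) # paint with the median color
    return None                              # no consistent column found
```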


Important Details

  • Local “smoothing”: They prefer color consistent columns whose 3D position is spatially consistent with that of neighboring pixels.

  • Uniform regions: They flag used pixels in source images to prevent their repeated matching.

  • Pixel scanning order: They evaluate physical points closer to the ground plane first, then farther ones.

  • Ground subtraction: Except for the piercing point, they remove the ground plane's color from each coloring stack.


Practical Simplification

  • Cameras are coplanar

  • Real epipoles lie on a line in R

  • Rectifying R into the “nadir view” makes the line of epipoles go to infinity

  • The Mi line-to-line transformations become affine: simple linear stretching of the lines (see the sketch below)
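Under this simplification, aligning a line reduces to 1D linear interpolation. A sketch with the affine coefficients a, b assumed known per camera:

```python
import numpy as np

def stretch_line(colors_i: np.ndarray, a: float, b: float, num: int):
    """colors_i: (len_i, 3) samples along l_i; returns (num, 3) on l_R's grid."""
    t_src = a * np.arange(num) + b           # l_R coordinate -> l_i coordinate
    grid = np.arange(colors_i.shape[0])
    return np.stack([np.interp(t_src, grid, colors_i[:, ch])
                     for ch in range(3)], axis=1)
```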


Synthetic Scene

  • Extreme change in viewpoint

  • Objects are visible through the gate in the virtual view, while the source images show only the floor through the gate



Folder Scene

  • Off-the-shelf digital camera, constant-height tripod

  • Triangle occludes distant folder

  • 11 images used for (e)


Puppet Scene

  • Green smear on lower left of (e): “This floor region was not visible in any of the input images.”

  • 9 images used for (e)


Assessment

  • Interesting use of projective, epipolar geometry

  • Needs only weak calibration

  • Needs failure analysis

  • How to define Hsyn?

  • Explicit notion of visibility could help

  • Manual selection among source images?

  • Observation: no occlusions in source imagery – hmm…

