Three Dimensional Model Construction for Visualization

Avideh Zakhor

Video and Image Processing Lab

University of California at Berkeley

[email protected]


Outline

  • Goals and objectives

  • Previous work by PI

  • Directions for future work


Goals and Objectives

  • Develop a framework for fast, automatic and accurate 3D model construction for objects, scenes, rooms, buildings (interior and exterior), urban areas, and cities.

  • Models must be easy to compute, compact to represent, and suitable for high-quality view synthesis and visualization.

  • Applications: Virtual or augmented reality fly-throughs.


Previous Work on Scene Modeling

  • Full/Assisted 3-D Modeling: Kanade et al.; Koch et al.; Becker & Bove; Debevec et al.; Faugeras et al.; Malik & Yu.

  • Mosaics and Panoramas: Szeliski & Kang; McMillan & Bishop; Shum & Szeliski.

  • Layered/LDI Representations: Wang & Adelson; Sawhney & Ayer; Weiss; Baker et al.

  • View Interpolation/IBR/Light Fields: Chen & Williams; Chang & Zakhor; Laveau & Faugeras; Seitz & Dyer; Levoy & Hanrahan.


Previous Work on Building Models

  • Nevatia (USC): multi-sensor integration

  • Teller (MIT): spherical mosaics on a wheelchair-sized rover, known 6-DOF pose

  • Van Gool (Belgium): roof detection from aerial photographs

  • Peter Allen (Columbia): images and laser range finders; view/sensor planning.

  • Faugeras (INRIA)


Previous Work on City Modeling

  • Planet 9:

    • Combines ground photographs with existing city maps manually.

  • UCLA Urban Simulation Team:

    • Uses MultiGen to create models from aerial photographs, together with ground video for texture mapping.

  • Bath and London models by Univ. of Bath:

    • Combines aerial photographs with existing maps.

  • All approaches are slow and labor-intensive.


Work at VIP lab at UCB

Scene modeling and reconstruction.


Multi-Valued Representation: MVR

  • Level k has k occluding surfaces

  • Form a multi-valued array of depth and intensity (sketched below)
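
A minimal sketch of how such a multi-valued array might be stored, assuming a fixed maximum number of levels per pixel; this is an illustrative layout, not the data structure used in the original work:

```python
import numpy as np

class MVR:
    """Multi-valued representation: per-pixel (depth, intensity) samples
    for a reference view, one entry per occlusion level (illustrative)."""

    def __init__(self, height, width, max_levels=2):
        # depth[k, y, x] and intensity[k, y, x] hold the level-k sample;
        # NaN marks pixels with fewer than k+1 recovered surfaces.
        self.depth = np.full((max_levels, height, width), np.nan)
        self.intensity = np.full((max_levels, height, width), np.nan)

    def add_sample(self, level, y, x, d, i):
        self.depth[level, y, x] = d
        self.intensity[level, y, x] = i

    def levels_at(self, y, x):
        """Number of stored surfaces along the ray through pixel (x, y)."""
        return int(np.count_nonzero(~np.isnan(self.depth[:, y, x])))
```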



Imaging Geometry (1)

  • Planar translation
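
As a trivial illustration of this capture geometry (an assumed setup, not the original rig), camera centers can be laid out on a regular grid in a plane while keeping a fixed orientation:

```python
import numpy as np

def planar_translation_poses(nx, nz, spacing):
    """Camera centers on a regular nx-by-nz grid in the XZ plane,
    all sharing the same (identity) orientation -- illustrative only."""
    poses = []
    for ix in range(nx):
        for iz in range(nz):
            center = np.array([ix * spacing, 0.0, iz * spacing])
            R = np.eye(3)       # orientation stays fixed while translating
            t = -R @ center     # convention: x_cam = R @ x_world + t
            poses.append((R, t))
    return poses
```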


Imaging Geometry (2)

  • Circular/orbital motion
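
For illustration, the sketch below parameterizes an orbital capture path: camera centers on a circle of radius r, each looking at the scene center. The conventions (Y up, optical axis as the third rotation row) are assumptions, not taken from the original setup:

```python
import numpy as np

def orbital_pose(theta, radius):
    """Pose of a camera on a circle of the given radius in the XZ plane,
    looking at the origin (assumed convention: x_cam = R @ x_world + t)."""
    center = np.array([radius * np.sin(theta), 0.0, radius * np.cos(theta)])
    forward = -center / np.linalg.norm(center)   # optical axis toward origin
    up = np.array([0.0, 1.0, 0.0])
    right = np.cross(up, forward)
    right /= np.linalg.norm(right)
    true_up = np.cross(forward, right)
    R = np.stack([right, true_up, forward])      # rows: camera axes in world coords
    t = -R @ center
    return R, t
```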


Dense Depth Estimation

  • Estimate camera motion

  • Compute depth maps to build MVRs

    • Low-contrast regions are problematic for dense depth estimation.

    • Enforce spatial coherence to achieve realistic, high-quality visualization.


Block Diagram for Dense Depth Estimation

  • Planar approximation of depth for low-contrast regions (sketched below).
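
A hedged sketch of the idea behind this step: flag low-contrast regions by thresholding local intensity variance, then replace the depth inside such a region with a least-squares plane fitted to its reliable estimates. The window size and threshold are illustrative values, not those of the original system:

```python
import numpy as np
from scipy.ndimage import uniform_filter

def low_contrast_mask(image, win=7, var_thresh=25.0):
    """Mark pixels whose local intensity variance falls below a threshold."""
    img = image.astype(np.float64)
    mean = uniform_filter(img, win)
    mean_sq = uniform_filter(img ** 2, win)
    return (mean_sq - mean ** 2) < var_thresh

def fit_depth_plane(depth, region_mask):
    """Fit z = a*x + b*y + c to the reliable depths inside a low-contrast
    region and use the plane to fill that region."""
    ys, xs = np.nonzero(region_mask)
    valid = ~np.isnan(depth[ys, xs])
    A = np.column_stack([xs[valid], ys[valid], np.ones(int(valid.sum()))])
    a, b, c = np.linalg.lstsq(A, depth[ys, xs][valid], rcond=None)[0]
    filled = depth.copy()
    filled[ys, xs] = a * xs + b * ys + c
    return filled
```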


Original Sequences

“Mug” sequence

(13 frames)

“Teabox” sequence

(102 frames)


Low-Contrast Regions

  • Complete tracking

Mug sequence

Tea-box sequence


Multiframe Depth Estimation

Apply an iterative estimation algorithm to enforce piecewise smoothness without smoothing over depth discontinuities.
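
One generic way to realize such discontinuity-preserving smoothing is iterative neighborhood averaging that ignores neighbors across large depth jumps; the sketch below is an illustration under assumed parameters, not the algorithm used in the original work:

```python
import numpy as np

def piecewise_smooth(depth, edge_thresh=0.05, lam=0.2, iters=50):
    """Iteratively pull each depth toward the mean of its 4-neighbours,
    but only over neighbours whose depth difference is below edge_thresh,
    so discontinuities are preserved (borders wrap for simplicity)."""
    d = depth.astype(np.float64).copy()
    for _ in range(iters):
        update = np.zeros_like(d)
        count = np.zeros_like(d)
        for shift in ((0, 1), (0, -1), (1, 0), (-1, 0)):
            nb = np.roll(d, shift, axis=(0, 1))
            keep = np.abs(nb - d) < edge_thresh   # skip across discontinuities
            update += np.where(keep, nb - d, 0.0)
            count += keep
        d += lam * update / np.maximum(count, 1.0)
    return d
```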


Multiframe Depth Estimation

Mug and Tea-box results: multiframe stereo + low-contrast processing + piecewise smoothing.


Multivalued Representation

  • Project depths to reference coordinates
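
The projection step can be sketched with a standard pinhole model: back-project each pixel using its estimated depth, move the 3-D point into the reference camera frame with the relative pose (R, t), and re-project with the intrinsics K. All symbols here are assumed inputs from the motion- and depth-estimation stages:

```python
import numpy as np

def project_to_reference(depth, K, R, t):
    """Back-project pixels of one frame with their depths and re-project
    them into the reference view (pinhole model; R, t map the source
    camera frame to the reference camera frame)."""
    h, w = depth.shape
    ys, xs = np.mgrid[0:h, 0:w]
    pix = np.stack([xs.ravel(), ys.ravel(), np.ones(h * w)])  # homogeneous pixels
    rays = np.linalg.inv(K) @ pix                             # unit-depth rays
    pts = rays * depth.ravel()                                # 3-D points, source frame
    pts_ref = R @ pts + t.reshape(3, 1)                       # into reference frame
    proj = K @ pts_ref
    u, v = proj[0] / proj[2], proj[1] / proj[2]               # reference pixel coords
    return u.reshape(h, w), v.reshape(h, w), pts_ref[2].reshape(h, w)
```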


Results (1)

  • Mug sequence

Multivalued representation for frame 4

(Level 0)


Results

  • Mug sequence

Multivalued representation for frame 4

(Level 1)


Results

  • Mug sequence

Multivalued representation for frame 4

(Combining Levels 0 and 1)


Results

  • Mug sequence

Reconstructed sequence

Arbitrary flythrough


Results (2)

  • Teabox sequence

Multivalued representation for frame 22

(Intensity, Level 0)


Results

  • Teabox sequence

Multivalued representation for frame 22

(Depth, Level 0)


Results

  • Teabox sequence

Multivalued representation for frame 22

(Intensity, Level 1)


Results

  • Teabox sequence

Multivalued representation for frame 22

(Depth, Level 1)


Results

  • Teabox sequence

Multivalued representation for frame 22

(Intensity, combining Levels 0 and 1)


Results

  • Teabox sequence

Multivalued representation for frame 22

(Depth, combining Levels 0 and 1)


Results

  • Teabox sequence

Multivalued representation for frame 86

(Intensity, Level 0)


Results

  • Teabox sequence

Multivalued representation for frame 86

(Depth, Level 0)


Results

  • Teabox sequence

Multivalued representation for frame 86

(Intensity, Level 1)


Results

  • Teabox sequence

Multivalued representation for frame 86

(Depth, Level 1)


Results

  • Teabox sequence

Multivalued representation for frame 86

(Intensity, combining Levels 0 and 1)


Results

  • Teabox sequence

Multivalued representation for frame 86

(Depth, combining Levels 0 and 1)


Multiple MVRs

  • Perform view interpolation with many MVRs
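
A hedged sketch of one way to combine several MVRs at render time: warp each MVR into the requested viewpoint (e.g., with the projection above) and blend the resulting images, weighting each source by how close its reference view is to the target view. The inverse-distance weighting and NaN-hole convention are illustrative, not the scheme used in the original work:

```python
import numpy as np

def blend_views(renders, ref_positions, target_position, eps=1e-6):
    """Blend per-MVR renderings (each H x W x 3, NaN where the MVR has no
    data) using inverse-distance weights between reference and target views."""
    weights = [1.0 / (np.linalg.norm(p - target_position) + eps)
               for p in ref_positions]
    acc = np.zeros_like(renders[0], dtype=np.float64)
    wsum = np.zeros(renders[0].shape[:2])
    for img, w in zip(renders, weights):
        valid = ~np.isnan(img[..., 0])          # pixels this MVR can fill
        acc[valid] += w * img[valid]
        wsum[valid] += w
    out = np.full_like(acc, np.nan)
    covered = wsum > 0
    out[covered] = acc[covered] / wsum[covered][:, None]
    return out
```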


Results: Multiple MVRs

  • Teabox sequence

Reconstructed sequence from MVR86

Reconstructed sequence from MVR22


Results: Multiple MVRs

Reconstructed sequence

Arbitrary flyaround


Extensions

  • Complex scenes with many “levels” (e.g., trees and leaves) are difficult to model with MVR.

  • It is difficult to ensure realistic visualization from all angles; the capture process must be planned carefully.

  • Tradeoff between CG polygon modeling and IBR:

    • Use both in real visualization databases.

    • Build polygon models from MVR.


Issues for model construction

  • Choice of geometry for obtaining data

  • Choice of imaging technology.

  • Choice of representation.

  • Choice of models.

  • Dealing with time-varying scenes.


Extensions

  • So far, we have addressed the “outside-in” problem:

    • The camera looks inward to “scan” the object.

  • Future work will focus on the “inside-out” problem:

    • Modeling a room or office.

    • Modeling the exterior or interior of a building.

    • Modeling an urban environment, e.g., a city.


Strategy

  • Use:

    • Range sensors, position sensors (GPS), gyros (orientation), an omnidirectional camera, and video.

    • Existing datasets as a priori information: 3D CAD models, digital elevation maps (DEM), DTED, city maps, and architectural drawings.


Modeling interior of buildings

  • Leverage existing work in the computer graphics group at UCB:

    • A 3D model of Soda Hall is available from the “Soda Walkthrough” project.

    • The 3D model was built from architectural drawings.

    • Use additional video and laser range-finder input to:

      • Enhance the details of the 3D model (furniture, etc.).

      • Add texture maps for photo-realistic walk-throughs.


City Modeling

  • Develop a framework for modeling parts of the city of San Francisco:

    • Use aerial photographs provided by Space Imaging Corp. (1 ft resolution).

    • Use digitized city maps.

    • Use a ground data-collection vehicle to collect range and intensity video from a panoramic camera, annotated with 6-DOF pose parameters.

    • Derive data-fusion algorithms to process the above in a fast, automated, and accurate fashion.


Requirements

  • Automation (little or no interaction needed from human operators)

  • Speed: must scale to large areas and large data sets.

  • Accuracy

  • Robustness to location of data collection.

  • Ease of data collection.

  • Representation suitable for hierarchical visualization databases.


Relationship to others

  • USC: accurate tracking and registration algorithms needed for model construction.

  • Syracuse: uncertainty processing and data fusion for model construction.

  • Georgia Tech: how to combine CG polygonal model building with IBR models in a visualization database? How can visualization databases handle photo-realistic rendering?


Conclusions

  • Fast, accurate and automatic model construction is essential to mobile augmented reality systems.

  • Our goal is to provide photo-realistic rendering of objects, scenes, buildings, and cities to enable visualization, navigation, and interaction.

