
Presentation Transcript



Next Generation 4D Distributed Modeling and Visualization

Automated 3D Model Construction for Urban Environments

Christian Frueh John Flynn Avideh Zakhor

University of California, Berkeley

June 13, 2002



Presentation Overview

  • Introduction

  • Ground based modeling

    • Mesh processing

  • Airborne modeling

    • Aerial photos

    • Airborne laser scans

  • 3D Model Fusion

  • Rendering

  • Conclusion and Future Work


Introduction

Goal: generate a 3D model of a city for virtual walk-, drive-, and fly-thrus and simulations

  • Fast
  • Automated
  • Photorealistic

Needed:

  • For fly-thru: 3D model of terrain and building tops & sides, coarse resolution
  • For walk/drive-thru: 3D model of street scenery & building façades, highly detailed


Introduction

  • Airborne modeling: laser scans and images from a plane → 3D model of terrain and building tops
  • Ground based modeling: laser scans and images from an acquisition vehicle → 3D model of building façades
  • Fusion of the two → complete 3D city model


Airborne Modeling

Goal: acquisition of terrain shape and top-view building geometry

Available data:

  • Aerial photos
  • Airborne laser scans

Texture: from aerial photos

Geometry: two approaches:

  I) Stereo matching of photos
  II) Airborne laser scans


Airborne Modeling

Approach I: Stereo Matching (last year)

Stereo photo pairs from urban areas, ~60% overlap

Semi-automatic:

  • Manual: segmentation
  • Automated: camera parameter computation, matching, distortion reduction, model generation



Stereo Matching

Stereo pair from downtown Berkeley and the estimated disparity after removing perspective distortions
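The disparity estimation can be illustrated with a toy 1-D sum-of-squared-differences block matcher (a sketch only, not the presenters' implementation; real stereo matching operates on rectified 2-D image pairs after distortion removal):

```python
# Toy 1-D block matcher: for each position in the left scanline, find the
# shift (disparity) into the right scanline that minimizes the
# sum-of-squared-differences over a small window.

def disparity_1d(left, right, window=1, max_disp=4):
    n = len(left)
    disp = [0] * n
    for x in range(window, n - window):
        best_ssd, best_d = float("inf"), 0
        for d in range(max_disp + 1):
            if x - d - window < 0:
                break  # window would fall off the right scanline
            ssd = sum((left[x + k] - right[x - d + k]) ** 2
                      for k in range(-window, window + 1))
            if ssd < best_ssd:
                best_ssd, best_d = ssd, d
        disp[x] = best_d
    return disp

# A step edge shifted by 2 pixels between the two views:
left = [0, 0, 0, 9, 9, 9, 9, 0, 0, 0]
right = [0, 9, 9, 9, 9, 0, 0, 0, 0, 0]
disp = disparity_1d(left, right)
```

At the edge pixels the matcher recovers the true shift of 2; inside the uniform region the match is ambiguous, which is one reason practical matchers add regularization.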



Stereo Matching Results

Downtown Oakland


Airborne Modeling

Approach II: Airborne Laser Scans

Scanning the city from a plane:

  • Resolution: 1 scan point/m²
  • Berkeley: 40 million scan points
  • Output: point cloud


Airborne Laser Scans

  • Re-sampling point cloud
  • Sorting into grid
  • Filling holes

Result: map-like height field, usable for:

  • Monte Carlo Localization
  • Mesh generation
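A minimal sketch of this re-sampling step (the cell size, the highest-return-per-cell rule, and the one-pass neighbor-mean hole fill are assumptions for illustration):

```python
# Re-sample an airborne point cloud (x, y, z) into a regular height grid:
# keep the highest z per cell, then fill empty cells ("holes") with the
# mean of their filled 4-neighbors.

def to_height_field(points, cell=1.0, nx=4, ny=4):
    grid = [[None] * nx for _ in range(ny)]
    for x, y, z in points:
        i, j = int(y // cell), int(x // cell)
        if 0 <= i < ny and 0 <= j < nx:
            if grid[i][j] is None or z > grid[i][j]:
                grid[i][j] = z  # keep the highest return per cell
    # one pass of neighbor-mean hole filling
    for i in range(ny):
        for j in range(nx):
            if grid[i][j] is None:
                nb = [grid[a][b] for a, b in
                      ((i - 1, j), (i + 1, j), (i, j - 1), (i, j + 1))
                      if 0 <= a < ny and 0 <= b < nx and grid[a][b] is not None]
                if nb:
                    grid[i][j] = sum(nb) / len(nb)
    return grid

pts = [(0.5, 0.5, 10.0), (1.5, 0.5, 12.0), (0.5, 1.5, 8.0)]
grid = to_height_field(pts, cell=1.0, nx=2, ny=2)
```

The empty cell at grid position (1, 1) is filled from its two occupied neighbors.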


Textured Mesh Generation

1. Connect grid vertices into a mesh
2. Apply Q-slim simplification
3. Texture mapping (semi-automatic):

  • Manual selection of a few correspondence points: ~10 minutes for all of Berkeley
  • Automated camera pose estimation
  • Automated computation of texture for the mesh
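The first step, connecting grid vertices into triangles, can be sketched as follows, assuming row-major vertex indexing of the height grid (hole handling and the Q-slim pass are omitted):

```python
# Turn an ny x nx height grid into a triangle mesh by splitting every
# grid cell into two triangles. Vertices are indexed row-major, so the
# vertex at row i, column j has index i * nx + j.

def grid_to_triangles(nx, ny):
    tris = []
    for i in range(ny - 1):
        for j in range(nx - 1):
            v00 = i * nx + j        # top-left corner of the cell
            v01 = v00 + 1           # top-right
            v10 = v00 + nx          # bottom-left
            v11 = v10 + 1           # bottom-right
            tris.append((v00, v10, v01))  # first triangle of the cell
            tris.append((v01, v10, v11))  # second triangle
    return tris
```

A 3 x 3 grid has 2 x 2 cells and therefore 8 triangles.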



Airborne Model

East Berkeley campus with campanile



Airborne Model

Downtown Berkeley

http://www-video.eecs.berkeley.edu/~frueh/3d/airborne/


Ground Based Modeling

[Diagram: acquisition truck driving past buildings; vehicle-relative coordinates (u, v), world coordinates (x, y, z), 2D laser scanners on a rack]

Goal: acquisition of highly detailed 3D building façade models

Scanning setup:

  • Vertical 2D laser scanner for geometry capture
  • Horizontal scanner for pose estimation

Acquisition vehicle: truck with a rack carrying:

  • 2 fast 2D laser scanners
  • A digital camera


Scan Matching & Initial Path Computation

[Diagram: consecutive horizontal scans at t = t0 and t = t1, matched to give relative steps (u_i, v_i, φ_i) that accumulate into a path (x_i, y_i, θ_i)]

Horizontal laser scans:

  • Continuously captured during vehicle motion
  • Overlap

Relative position estimation by scan-to-scan matching:

  • Translation (u, v)
  • Rotation φ

Adding up the relative steps (u_i, v_i, φ_i) yields a 3 DOF pose path (x_i, y_i, θ_i): (x, y, yaw)
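Chaining the relative scan-matching steps into a global path can be sketched as follows (accumulation only; the actual system later corrects this dead-reckoned path with Monte Carlo Localization):

```python
import math

# Chain relative scan-matching steps (u, v, phi), each expressed in the
# previous scan's frame, into a global 3 DOF path (x, y, theta).

def accumulate(steps, x=0.0, y=0.0, theta=0.0):
    path = [(x, y, theta)]
    for u, v, phi in steps:
        # rotate the relative translation into the world frame, then add
        x += u * math.cos(theta) - v * math.sin(theta)
        y += u * math.sin(theta) + v * math.cos(theta)
        theta += phi
        path.append((x, y, theta))
    return path

# Drive 1 m forward while turning 90 degrees, then 1 m forward again:
path = accumulate([(1.0, 0.0, math.pi / 2), (1.0, 0.0, 0.0)])
```

The vehicle ends up one meter forward and one meter to the side of its start.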



6 DOF Pose Estimation From Images

  • Scan matching cannot estimate vertical motion

    • Small bumps and rolls

    • Slopes in hill areas

  • Full 6 DOF pose of the vehicle is important; affects:

    • Future processing of the 3D and intensity data

    • Texture mapping of the resulting 3D models

  • Extend initial 3 DOF pose by deriving missing 3 DOF (z, pitch, roll) from images



6 DOF Pose Estimation From Images

Central idea: photo-consistency

  • Each 3D scan point can be projected into images using initial 3 DOF pose

  • If pose estimate is correct, point should appear the same in all images

  • Use discrepancies in projected position of 3D points within multiple images to solve for the full pose
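The projection underlying the photo-consistency check can be sketched with a simple pinhole model; the focal length, principal point, and yaw-only rotation are illustrative assumptions, not the calibrated camera of the actual system:

```python
import math

# Project a 3D scan point into an image given a camera position and yaw.
# Convention (assumed): the camera looks along its +x axis, z is up.
# If the pose is right, the same point lands on consistent pixels in
# every image; discrepancies drive the 6 DOF pose refinement.

def project(point, cam_pos, yaw, f=500.0, cx=320.0, cy=240.0):
    dx, dy, dz = (p - c for p, c in zip(point, cam_pos))
    # world -> camera frame (rotation about the vertical axis only)
    xc = math.cos(yaw) * dx + math.sin(yaw) * dy   # depth along view axis
    yc = -math.sin(yaw) * dx + math.cos(yaw) * dy  # lateral offset
    zc = dz
    if xc <= 0:
        return None  # point is behind the camera
    return (cx - f * yc / xc, cy - f * zc / xc)
```

A point straight ahead of the camera projects to the principal point (cx, cy).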


6 DOF Pose Estimation – Algorithm

  • 3 DOF pose from scan matching as the initial estimate
  • Project scan points into both images
  • If not consistent, use image correlation to find the correct projection
  • RANSAC used for robustness



6 DOF Pose Estimation – Results

with 3 DOF pose

with 6 DOF pose



6 DOF Pose Estimation – Results



Monte Carlo Localization (1)

Previously: Global 3 DOF pose correction using aerial photography

a) path before MCL correction

b) path after MCL correction

After correction, points fit to edges of aerial image


Monte Carlo Localization (2)

Extend MCL to work with airborne laser data and 6 DOF pose

Now: no perspective shifts of building tops, no shadow lines

  • Fewer particles necessary, increased computation speed
  • Significantly higher accuracy near high buildings and tree areas

Use terrain shape to estimate the z coordinate of the truck

  • Corrects the additional DOF of the vehicle pose (z, pitch, roll)
  • Modeling not restricted to flat areas



Monte Carlo Localization (3)

Track global 3D position of vehicle to correct relative 6 DOF motion estimates

Resulting corrected path overlaid with airborne laser height field
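The particle-filter idea behind MCL can be sketched in one dimension (a toy illustration: the map here is just a single wall at x = 10, whereas the actual system matches scans against aerial imagery or the airborne height field):

```python
import math
import random

# Toy 1-D Monte Carlo Localization step: particles track the vehicle
# position, the map predicts the range reading (distance to a wall),
# and particles that agree with the measurement are favored when
# resampling. All numbers are illustrative assumptions.

def mcl_step(particles, motion, measured_range, wall=10.0,
             motion_noise=0.2, meas_sigma=0.5, seed=0):
    rng = random.Random(seed)
    # 1. motion update: move every particle, add noise
    moved = [x + motion + rng.gauss(0.0, motion_noise) for x in particles]
    # 2. weighting: compare expected range (wall - x) with the measurement
    weights = [math.exp(-((wall - x) - measured_range) ** 2
                        / (2 * meas_sigma ** 2)) for x in moved]
    # 3. resampling: draw particles proportionally to their weights
    return rng.choices(moved, weights=weights, k=len(moved))

particles = [0.5 * i for i in range(20)]  # uniform guess over 0..9.5
particles = mcl_step(particles, motion=1.0, measured_range=6.0)
estimate = sum(particles) / len(particles)  # true position is x = 4
```

After a single update the particle mean already concentrates near the pose that explains the measurement.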


Path Segmentation

  • Segment path into quasi-linear pieces
  • Cut path at curves and empty areas
  • Remove redundant segments

Driving path: 24 mins, 6,769 meters; vertical scans: 107,082; scan points: ~15 million

Too large to process as one block!
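The "cut at curves" rule can be sketched as follows; the 15-degree threshold and the (x, y, heading) pose format are illustrative assumptions:

```python
import math

# Cut a driven path into quasi-linear segments whenever the heading
# changes by more than a threshold between consecutive poses.
# (Angle wrap-around is ignored for brevity.)

def segment_path(path, max_turn=math.radians(15)):
    segments, current = [], [path[0]]
    for (x0, y0, t0), (x1, y1, t1) in zip(path, path[1:]):
        if abs(t1 - t0) > max_turn:   # sharp turn: close this segment
            segments.append(current)
            current = []
        current.append((x1, y1, t1))
    segments.append(current)
    return segments

# Straight run, then a 90-degree turn:
path = [(0, 0, 0.0), (1, 0, 0.0), (2, 0, 0.0),
        (2, 1, math.pi / 2), (2, 2, math.pi / 2)]
segs = segment_path(path)
```

The turn splits the path into two quasi-linear pieces.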



Path Segmentation

Resulting path segments overlaid with edges of airborne laser height map



Simple Mesh Generation


Simple Mesh Generation

Point cloud → triangulate → mesh

Problems:

  • Side views look “noisy”
  • Partially captured foreground objects
  • Erroneous scan points due to glass reflection

Remove foreground: extract façades


Façade Extraction and Processing (1)

1. Transform path segment into a depth image: depth value s_{n,v} for each scan point P_{n,v}

2. Histogram analysis over vertical scans: estimate the main depth (the façade) and a split depth at a local minimum in front of it

[Diagram: scanner geometry and per-scan depth histogram with main depth, split depth, local minimum, ground points]


Façade Extraction and Processing (2)

3. Separate depth image into 2 layers:

  • Foreground = trees, cars, etc.
  • Background = building façades
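The histogram analysis and layer split can be sketched over the depth values of one scan (bin size and the fixed margin between main depth and split depth are illustrative assumptions; the slides locate the split at a local minimum of the histogram):

```python
from collections import Counter

# Histogram the depth values of one vertical scan, take the dominant bin
# as the facade ("main depth"), and put a split depth in front of it:
# everything nearer is foreground (trees, cars), the rest is background.

def split_depths(depths, bin_size=1.0, margin=2.0):
    bins = Counter(int(d // bin_size) for d in depths)
    main_bin = max(bins, key=bins.get)          # most frequent depth bin
    main_depth = (main_bin + 0.5) * bin_size    # bin center
    split = main_depth - margin                 # nearer than this = foreground
    fg = [d for d in depths if d < split]
    bg = [d for d in depths if d >= split]
    return main_depth, fg, bg

# Five facade hits around 10 m, two foreground hits around 3 m:
main_depth, fg, bg = split_depths([10.1, 10.3, 10.2, 10.4, 3.0, 3.2, 10.0])
```

The dominant bin wins even though foreground points are closer to the scanner.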



Façade Extraction and Processing (3)

4. Process background layer:

  • Detect and remove invalid scan points

  • Fill areas occluded by foreground objects by extending geometry from boundaries

    • Horizontal, vertical, planar interpolation, RANSAC

  • Apply segmentation

  • Remove isolated segments

  • Fill remaining holes in large segments

  • Final result: “clean” background layer
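The occlusion filling of step 4 can be sketched for the 1-D case as linear interpolation across invalid entries in one row of the depth image (vertical and planar interpolation and the RANSAC variant work analogously):

```python
# Fill holes (None entries) in one row of the background depth layer by
# linearly interpolating between the hole's two boundary values.
# Holes touching the row ends are left unfilled.

def fill_row(row):
    out = list(row)
    i = 0
    while i < len(out):
        if out[i] is None:
            j = i
            while j < len(out) and out[j] is None:
                j += 1                          # find the end of the hole
            if 0 < i and j < len(out):          # both boundaries exist
                a, b = out[i - 1], out[j]
                for k in range(i, j):
                    out[k] = a + (b - a) * (k - i + 1) / (j - i + 1)
            i = j
        else:
            i += 1
    return out
```

A two-pixel hole between depths 10 and 13 is filled with 11 and 12.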



Façade Extraction – Examples (1)

with processing

without processing



Façade Extraction – Examples (2)

without processing

with processing



Façade Extraction – Examples (3)

without processing

with processing


Façade Processing



Foreground Removal



Mesh Generation

Downtown Berkeley



Automatic Texture Mapping (1)

Camera calibrated and synchronized with laser scanners

Transformation matrix between camera image and laser scan vertices can be computed

1. Project geometry into images

2. Mark occluding foreground objects in image

3. For each background triangle:

Search pictures in which triangle is not occluded, and texture with corresponding picture area


Automatic Texture Mapping (2)

Efficient representation: texture atlas

  • Copy the texture of all triangles into one “mosaic” image
  • Typical texture reduction: factor 8–12
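The atlas construction can be sketched with a simple shelf-packing scheme (patches simplified to axis-aligned rectangles; the real atlas packs per-triangle texture patches):

```python
# Pack texture patches (simplified to w x h rectangles) into one atlas
# image using shelf packing: place patches left-to-right in rows
# ("shelves"), starting a new shelf when the current row is full.

def shelf_pack(rects, atlas_w):
    """Return (x, y) offsets for each rect and the atlas height used."""
    x = y = shelf_h = 0
    placed = []
    for w, h in rects:
        if x + w > atlas_w:          # row full: open a new shelf below
            x, y = 0, y + shelf_h
            shelf_h = 0
        placed.append((x, y))
        x += w
        shelf_h = max(shelf_h, h)    # shelf is as tall as its tallest patch
    return placed, y + shelf_h

placed, height = shelf_pack([(4, 2), (4, 3), (4, 2)], atlas_w=8)
```

Three patches in an 8-wide atlas: two on the first shelf, one on a second shelf below.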


Automatic Texture Mapping (3)

Large foreground objects: some of the filled-in triangles are not visible in any image → “texture holes” in the atlas

Texture synthesis (preliminary):

  • Mark holes corresponding to non-textured triangles in the atlas
  • Search the image for areas matching the hole boundaries
  • Fill the hole by copying the missing pixels from these areas


Automatic Texture Mapping (4)

Texture holes marked → texture holes filled


Automatic Texture Mapping (5)



Ground Based Modeling - Results

Façade models of downtown Berkeley



Ground Based Modeling - Results

Façade models of downtown Berkeley


Model Fusion

Goal: fuse the ground based and airborne models into one single model

  • Registration of models
  • Combining the registered meshes


Registration of Models

Which model to use where?

Models are already registered with each other via Monte Carlo Localization!



Preparing Ground Based Models

Intersect path segments with each other; remove degenerate, redundant triangles in overlapping areas

original mesh

redundant triangles removed



Preparing Airborne Model

Ground based model has 5-10 times higher resolution

  • Remove facades in airborne model where ground based geometry is available

  • Add ground based façades

  • Fill remaining gaps with a “blend mesh” to hide model transitions



Preparing Airborne Model

Initial airborne model



Preparing Airborne Model

Remove facades where ground based geometry is available



Combining Models

Add ground based façade models



Combining Models

Fill remaining gaps with a “blend mesh” to hide model transitions



Model Fusion - Results



Rendering

Ground based models:

  • Up to 270,000 triangles, 20 MB texture per path segment

  • 4.28 million triangles, 348 MB texture for 4 downtown blocks

  • Difficult to render interactively!

  • Subdivide model and create multiple level-of-details (LOD)

  • Generate scene graph, decide which LOD to render when
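The scene-graph decision "which LOD to render when" is typically distance-based; a minimal sketch follows (the distance thresholds are illustrative assumptions):

```python
# Pick a level of detail for a sub-mesh from the viewer's distance:
# LOD 0 (full detail) when close, higher LOD numbers (coarser meshes,
# subsampled textures) as the viewer moves away.

def pick_lod(distance, thresholds=(50.0, 200.0)):
    for lod, limit in enumerate(thresholds):
        if distance < limit:
            return lod
    return len(thresholds)   # farther than every threshold: coarsest LOD
```

A scene-graph traversal would call this per sub-mesh each frame and draw only the selected LOD.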


Multiple LODs for Façade Meshes

From the highest LOD, generate a lower LOD by:

  • Qslim mesh simplification
  • Texture subsampling

Lower LOD: 10% of the original geometry, 25% of the original texture


Façade Model Subdivision for Rendering

Subdivide the 2 highest LODs of the façade meshes along cut planes

[Diagram: scene graph hierarchy with the global scene, path segments, sub-scenes, and submeshes at LODs 0–2]



Interactive Rendering

Downtown blocks: interactive rendering with a web-based browser!



Future Work

  • Resolution enhancement and post-processing of LIDAR data

  • Devise new data acquisition system and algorithms to capture

    • both sides of street simultaneously

    • texture for upper parts of tall buildings

  • Include foreground objects in model

  • Add temporal component to dynamically update models

  • Compact representation

  • Interactive rendering

