Inverse Depth Parameterization for Monocular SLAM
Vision Seminar

2009. 3. 25 (Wed)

Young Ki Baik

Computer Vision Lab.



References

  • Inverse Depth Parameterization for Monocular SLAM

    • J. Civera, A. J. Davison, J. M. M. Montiel (IEEE Trans. on Robotics, 2008)

  • Inverse Depth to Depth Conversion for Monocular SLAM

    • J. Civera, A. J. Davison, J. M. M. Montiel (ICRA, 2007)

  • Unified Inverse Depth Parameterization for Monocular SLAM

    • J. M. M. Montiel, J. Civera, A. J. Davison (RSS, 2006)


Outline

  • What is SLAM?

  • What is Visual SLAM?

  • Overall process of SLAM

  • An issue with the map

  • Inverse depth parameterization

  • Conclusion


What is SLAM?

  • SLAM: Simultaneous Localization and Mapping

    is a technique used by robots and autonomous vehicles to build up a map within an unknown environment while at the same time keeping track of their current position.

(Figure: a robot asks "Where am I?" while making observations and building a map.)


What is SLAM?

  • SLAM: Simultaneous Localization and Mapping

    typically relies on recursive Bayesian estimation techniques such as Kalman filters and particle filters (sequential Monte Carlo methods).



What is Visual SLAM?

  • SLAM: Simultaneous Localization and Mapping

    can use many different types of sensors to acquire observation data for building the map, such as laser rangefinders, sonar sensors, and cameras.

  • Visual SLAM

    • is SLAM that uses cameras as the sensor.


Why Visual SLAM?

  • Vision data provides richer, more meaningful information (such as color, texture, and shape) than other sensors.


Overall process of Visual SLAM

  • Initialization

  • Prediction

  • Measurement

  • Update

  • Map management

    (These blocks form a loop: after map management the filter returns to prediction for the next frame. A minimal sketch of the loop follows below.)
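To make the flow concrete, below is a minimal, schematic sketch of an EKF-based visual SLAM loop. All class and method names (EKFVisualSLAM, predict, measure, update, manage_map) are hypothetical placeholders, not the API of any particular system; the bodies are stubs that only indicate where each block's work would go.

```python
import numpy as np

class EKFVisualSLAM:
    """Schematic EKF-based visual SLAM (hypothetical interface, not a real library)."""

    def __init__(self):
        # Initialization: camera state (e.g. position, orientation, velocities)
        # plus any initially known landmarks, together with its covariance.
        self.x = np.zeros(13)
        self.P = np.eye(13) * 1e-4

    def predict(self, dt):
        # Prediction: propagate the camera state with a motion model,
        # x <- f(x, dt), P <- F P F^T + Q  (details omitted in this sketch).
        pass

    def measure(self, image):
        # Measurement: project map features into the image and match them.
        return []  # list of (feature_id, observed_pixel)

    def update(self, matches):
        # Update: standard EKF correction with the matched observations.
        pass

    def manage_map(self, image):
        # Map management: initialize new landmarks, delete badly tracked ones.
        pass


def run(slam, frames, dt=1.0 / 30.0):
    for image in frames:
        slam.predict(dt)
        matches = slam.measure(image)
        slam.update(matches)
        slam.manage_map(image)
```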


Visual SLAM

  • DEMO: MonoSLAM


Problems

  • Proposal

  • Data association

  • Filter

  • Map management

  • Real-time


What is the map of visual SLAM?

  • Map (landmarks: LM)

    Li = (yi, Yi)T + patch

    • yi : 3D position of the landmark
    • Yi : 3x3 covariance matrix of the landmark position

  • Robot (or camera)

    C = (r, q)T

    • r : 3D position
    • q : 3D orientation


What is the map of visual SLAM?

  • Robot and map together form the state, e.g.

    C6D = (r, q)T,  L1 = (y1, Y1)T,  L2 = (y2, Y2)T

    (A minimal data-structure sketch follows below.)
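As a rough illustration, the state on this slide could be held in code roughly as follows. This is a sketch with hypothetical names, assuming numpy; the quaternion layout and patch size are arbitrary choices.

```python
import numpy as np
from dataclasses import dataclass, field

@dataclass
class CameraState:
    r: np.ndarray                      # 3D position r
    q: np.ndarray                      # orientation q (here: quaternion, 4 values)

@dataclass
class Landmark:
    y: np.ndarray                      # 3D position y of the landmark
    Y: np.ndarray                      # 3x3 covariance matrix of the position
    patch: np.ndarray                  # small image patch used for matching

@dataclass
class SlamMap:
    camera: CameraState
    landmarks: list = field(default_factory=list)

# Example: one camera and two landmarks, mirroring C6D, L1, L2 on the slide.
world = SlamMap(
    camera=CameraState(r=np.zeros(3), q=np.array([1.0, 0.0, 0.0, 0.0])),
    landmarks=[
        Landmark(y=np.array([1.0, 0.0, 4.0]), Y=np.eye(3) * 0.1, patch=np.zeros((11, 11))),
        Landmark(y=np.array([-2.0, 1.0, 6.0]), Y=np.eye(3) * 0.1, patch=np.zeros((11, 11))),
    ],
)
```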


How can we obtain initial LM info.?

  • Binocular camera case

    3D landmarks can be reconstructed directly from a stereo pair, since a binocular camera provides parallax between its two views.

    C6D = (r, q)T,  L = (y, Y)T

    Parallax: the angle between the rays that observe the same point from different viewpoints.


How can we obtain initial LM info.?

  • Monocular camera case

    Can 3D landmarks be reconstructed directly with a monocular camera? A single view gives only a bearing, so the depth along the ray is unknown.

    C6D = (r, q)T,  L = (y, Y)T = ?


How can we obtain initial LM info.?

  • Delayed Initialization of LM location

    • A batch update [Dean 2000, Bailey 2003]

      - A large baseline ensures high parallax.

  • But we cannot always expect a large baseline → the problem is the distance from the camera to the LM.


How can we obtain initial LM info.?

  • Delayed Initialization of LM location

    • Gaussian Sum Filter [Kwok 2005, Sola 2005]

      - Initialize multiple predefined hypotheses at various depths.

  • Prune the hypotheses that are not re-observed in subsequent images.

  • → Covers only the predefined depth range.

  • → Cannot cover very distant depths.

  • → Cannot cover low-parallax cases.


How can we obtain initial LM info.?

  • Undelayed Initialization of LM

    • Inverse Depth Parameterization [Montiel 2006~2008]

      - Initialize a ray from the very first observation.

  • Encode the unknown depth along the ray by its inverse and update its uncertainty in the filter.

  • → It can cover depths up to infinity.


How can we obtain initial LM info.?

  • Undelayed Initialization of LM

    • Inverse Depth Parameterization [Montiel 2006~2008]

      • Contributions

        • Initializes landmarks immediately (undelayed)

        • Covers landmark depths up to infinity

        • Covers low-parallax cases


Inverse Depth Parameterization

  • Overview

    LXYZ = (X, Y, Z)T = (x, y, z)T + (1/ρ) m(θ, ф)

    • (x, y, z)T : position of the camera when the feature was first observed
    • m(θ, ф) : unit ray direction, coded by azimuth θ and elevation ф in the world frame W
    • ρ : inverse depth along the ray, 1/ρ = d
    • C6D = (rwc, qwc)T : current camera pose, with position rwc in the world frame W

    (Figure: the point lies at depth d = 1/ρ along the ray m; α is the parallax angle at the current camera C.)


Inverse Depth Parameterization

  • Definition (point parameterization)

    • X-Y-Z point parameterization

      LXYZ = (X, Y, Z)T = (x, y, z)T + (1/ρ) m(θ, ф)

      m(θ, ф) = ( cosф sinθ, -sinф, cosф cosθ )T

    • Inverse depth point parameterization

      LIDP = (x, y, z, θ, ф, ρ)T

    (A minimal conversion sketch follows below.)
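A minimal sketch of this parameterization in code, assuming numpy and the angle convention written above (function names are illustrative):

```python
import numpy as np

def m(theta, phi):
    """Unit ray direction for azimuth theta and elevation phi (convention of this slide)."""
    return np.array([np.cos(phi) * np.sin(theta),
                     -np.sin(phi),
                     np.cos(phi) * np.cos(theta)])

def idp_to_xyz(L_idp):
    """Convert an inverse depth point (x, y, z, theta, phi, rho) to a 3D world point."""
    x, y, z, theta, phi, rho = L_idp
    return np.array([x, y, z]) + (1.0 / rho) * m(theta, phi)

# Example: a point first seen from the origin, straight down the optical axis, 5 m away.
L_idp = np.array([0.0, 0.0, 0.0, 0.0, 0.0, 0.2])   # rho = 0.2  ->  depth d = 5
print(idp_to_xyz(L_idp))                            # [0. 0. 5.]
```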


Inverse Depth Parameterization

  • Definition (measurement equation)

    • X-Y-Z system

      hC = hXYZ = Rcw [ (X, Y, Z)T - rwc ]

    • Inverse depth system

      hC = hρ = Rcw [ ρ( (x, y, z)T - rwc ) + m(θ, ф) ]

      where LXYZ = (X, Y, Z)T = (x, y, z)T + (1/ρ) m(θ, ф)

    • Projection into the image:

      (u, v)T = ( u0 - fx hxC / hzC ,  v0 - fy hyC / hzC )T

    hρ is only proportional to the camera-frame point, but the scale cancels in the projection, so the inverse depth form can be used safely even for points at infinity (ρ = 0). (A projection sketch follows below.)
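A sketch of the inverse depth measurement equation, assuming numpy; Rcw is the world-to-camera rotation, r_wc the camera position in the world, and the intrinsics u0, v0, fx, fy are placeholder values:

```python
import numpy as np

def m(theta, phi):
    # Unit ray direction (same convention as the parameterization slide).
    return np.array([np.cos(phi) * np.sin(theta), -np.sin(phi), np.cos(phi) * np.cos(theta)])

def h_inverse_depth(L_idp, R_cw, r_wc):
    """Camera-frame direction h^C of an inverse depth point (defined up to scale)."""
    x, y, z, theta, phi, rho = L_idp
    return R_cw @ (rho * (np.array([x, y, z]) - r_wc) + m(theta, phi))

def project(h_c, u0=320.0, v0=240.0, fx=500.0, fy=500.0):
    """Pinhole projection as written on this slide; the scale of h_c cancels here."""
    hx, hy, hz = h_c
    return np.array([u0 - fx * hx / hz, v0 - fy * hy / hz])

# A point at infinity (rho = 0) still projects to a finite pixel.
L_far = np.array([0.0, 0.0, 0.0, 0.1, 0.0, 0.0])
print(project(h_inverse_depth(L_far, np.eye(3), np.zeros(3))))
```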


Inverse Depth Parameterization

  • Initialization of LM using IDP

    A new feature is anchored at the current camera position, i.e. (x, y, z)T = r, so with camera C = (r, q)T:

      LIDP = (x, y, z, θ, ф, ρ)T = (r, θ, ф, ρ)T


Inverse Depth Parameterization

  • Initialization of LM using IDP

    Given the first observation of the feature, expressed as the normalized ray (u', v', 1)T, and the camera C = (r, q)T:

      hw = Rwc (u', v', 1)T              (back-projected ray in the world frame)

      θ = arctan2( hxw, hzw )

      ф = arctan2( -hyw, sqrt( hxw^2 + hzw^2 ) )

      LIDP = (r, θ, ф, ρ0)T,  ρ0 = 0.1 (or another constant prior on the inverse depth)

    (A minimal initialization sketch follows below.)
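A minimal sketch of this initialization, assuming numpy; arctan2 implements the two-argument arctan above, and the observation is assumed to already be converted to normalized camera coordinates (u', v', 1):

```python
import numpy as np

def init_inverse_depth_feature(r_wc, R_wc, u_n, v_n, rho0=0.1):
    """Build L_IDP = (x, y, z, theta, phi, rho) from the first observation of a feature.

    r_wc : camera position in the world, R_wc : camera-to-world rotation,
    (u_n, v_n) : observation in normalized camera coordinates,
    rho0 : constant prior on the inverse depth (0.1 on this slide).
    """
    h_w = R_wc @ np.array([u_n, v_n, 1.0])              # back-projected ray, world frame
    theta = np.arctan2(h_w[0], h_w[2])                   # azimuth
    phi = np.arctan2(-h_w[1], np.hypot(h_w[0], h_w[2]))  # elevation
    return np.concatenate([r_wc, [theta, phi, rho0]])

# Example: camera at the origin, identity orientation, feature at the image center.
print(init_inverse_depth_feature(np.zeros(3), np.eye(3), 0.0, 0.0))
# -> [0. 0. 0. 0. 0. 0.1]
```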


Inverse Depth Parameterization

  • Initialization of LM using IDP

    Updating the state covariance matrix: the covariance of the augmented state combines

    • the current state covariance,
    • the image measurement covariance, and
    • the variance assigned to the inverse depth prior ρ0,

    propagated through the Jacobian of the initialization function. (A sketch follows below.)
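A rough sketch of that covariance augmentation under the usual EKF linearization, assuming numpy/scipy. The Jacobian J is left as an argument because its exact entries depend on the state layout; the block-diagonal stacking of the three covariance sources is the part illustrated here.

```python
import numpy as np
from scipy.linalg import block_diag

def augment_covariance(P, R_img, sigma_rho, J):
    """New state covariance after adding one inverse depth feature.

    P         : current state covariance (n x n)
    R_img     : 2x2 image measurement covariance
    sigma_rho : standard deviation of the inverse depth prior rho0
    J         : Jacobian of [old state; new feature] w.r.t. [old state; (u, v); rho0],
                shape (n + 6) x (n + 3); its entries are not derived in this sketch.
    """
    P_stacked = block_diag(P, R_img, np.array([[sigma_rho ** 2]]))
    return J @ P_stacked @ J.T

# Toy example with a 13-dimensional camera state and an arbitrary Jacobian,
# just to show the shapes involved.
n = 13
J = np.hstack([np.vstack([np.eye(n), np.zeros((6, n))]),      # old state passes through
               np.vstack([np.zeros((n, 3)), np.random.randn(6, 3)])])
P_new = augment_covariance(np.eye(n) * 1e-3, np.eye(2), 1.0, J)
print(P_new.shape)   # (19, 19)
```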


Inverse Depth Parameterization

  • Switching from inverse depth to X-Y-Z

    Once a feature's depth is well constrained (sufficient parallax), its 6-D inverse depth encoding can be converted back to a 3-D point to keep the state small:

      LIDP → LXYZ = (X, Y, Z)T = (x, y, z)T + (1/ρ) m(θ, ф)

      PIDP → PXYZ = J PIDP JT,  J = ∂LXYZ / ∂LIDP

    (A minimal conversion sketch follows below.)
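A minimal sketch of this switch, assuming numpy. For simplicity it evaluates the Jacobian numerically rather than analytically, and the criterion that decides when a feature is ready to switch is not modeled here.

```python
import numpy as np

def m(theta, phi):
    return np.array([np.cos(phi) * np.sin(theta), -np.sin(phi), np.cos(phi) * np.cos(theta)])

def idp_to_xyz(L_idp):
    x, y, z, theta, phi, rho = L_idp
    return np.array([x, y, z]) + (1.0 / rho) * m(theta, phi)

def numerical_jacobian(f, x, eps=1e-6):
    """Forward-difference Jacobian of f at x (stand-in for the analytic Jacobian)."""
    f0 = f(x)
    J = np.zeros((f0.size, x.size))
    for i in range(x.size):
        dx = np.zeros_like(x)
        dx[i] = eps
        J[:, i] = (f(x + dx) - f0) / eps
    return J

def switch_to_xyz(L_idp, P_idp):
    """Convert the 6-D inverse depth feature and its 6x6 covariance to 3-D form."""
    J = numerical_jacobian(idp_to_xyz, L_idp)      # 3x6
    return idp_to_xyz(L_idp), J @ P_idp @ J.T      # 3-vector, 3x3 covariance

L_idp = np.array([0.0, 0.0, 0.0, 0.0, 0.0, 0.2])
L_xyz, P_xyz = switch_to_xyz(L_idp, np.eye(6) * 1e-4)
print(L_xyz, P_xyz.shape)
```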


Inverse Depth Parameterization

  • Demo

    • Monocular SLAM based on EKF


Inverse Depth Parameterization

  • Demo

    • Monocular SLAM based on PF with OIF


Conclusion

  • Pros.

    • IDP is robust for monocular SLAM.

      • Undelayed LM initialization

      • Handles any point in the scene, close or distant, even at "infinity"

      • Deals simultaneously with low- and high-parallax cases

  • Cons.

    • IDP requires a 6-D vector per landmark

      → This doubles the size of the map part of the state vector compared with 3-D X-Y-Z points.


Q & A
