
Marginal Particle and Multirobot SLAM: SLAM = ‘Simultaneous Localization and Mapping’

By Marc Sobel

(Includes references to Brian Clipp, Comp 790-072 Robotics)


The SLAM Problem

  • Given

    • Robot controls

    • Nearby measurements

  • Estimate

    • Robot state (position, orientation)

    • Map of world features


SLAM Applications

Indoors

Undersea

Underground

Space

Images – Probabilistic Robotics


Outline

  • Sensors

  • SLAM

  • Full vs. Online SLAM

  • Marginal SLAM

  • Multirobot marginal SLAM

  • Example Algorithms

    • Extended Kalman Filter (EKF) SLAM

    • FastSLAM (particle filter)


Types of Sensors

  • Odometry

  • Laser Ranging and Detection (LIDAR)

  • Acoustic (sonar, ultrasonic)

  • Radar

  • Vision (monocular, stereo etc.)

  • GPS

  • Gyroscopes, Accelerometers (Inertial Navigation)

  • Etc.


Sensor Characteristics

  • Noise

  • Dimensionality of Output

    • LIDAR: 3D point

    • Vision: bearing only (2D ray in space)

  • Range

  • Frame of Reference

    • Most in robot frame (Vision, LIDAR, etc.)

    • GPS in an Earth-centered coordinate frame

    • Accelerometers/Gyros in inertial coordinate frame


A Probabilistic Approach

  • Notation:
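
The notation itself is a reconstruction here (an assumption, chosen to be consistent with the symbols used on the later slides):

    x_t — robot pose (position and orientation) at time t
    u_t — control / odometry input at time t
    z_t — (scan) measurement at time t
    m — map features (e.g., landmark locations L_1, …, L_N)
    x_{1:t}, z_{1:t}, u_{1:t} — the corresponding histories up to time t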


Full vs. Online Classical SLAM

  • Full SLAM estimates the entire robot trajectory up to time t, together with the map, given the measurements and odometry:

  • Online SLAM estimates only the current pose (and the map) at time t:
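
In the standard probabilistic formulation these posteriors are (a reconstruction, not necessarily the exact expressions on the original slide):

    Full SLAM:   p(x_{1:t}, m | z_{1:t}, u_{1:t})

    Online SLAM: p(x_t, m | z_{1:t}, u_{1:t}) = ∫ … ∫ p(x_{1:t}, m | z_{1:t}, u_{1:t}) dx_1 … dx_{t-1}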


Full vs. Online SLAM

Full SLAM

Online SLAM


Classical FastSLAM and EKF SLAM

  • Robot environment:

  • (1) N distances: m_t = {d(x_t, L_1), …, d(x_t, L_N)}; m_t collects the distances from the landmarks at time t.

  • (2) Robot pose at time t: x_t.

  • (3) (Scan) measurements at time t: z_t.

  • Goal: determine the poses x_{1:T} given the scans z_{1:T}, the odometry u_{1:T}, and the map measurements m.


EKF SLAM (Extended Kalman Filter)

  • As the state vector evolves, the robot pose moves according to the motion function g(u_t, x_t). This can be linearized into a Kalman filter (see the sketch after this list).

  • The Jacobian J depends on the translational and rotational velocity. This allows us to treat the motion, and hence the distances, as Gaussian, so we can calculate the mean μ and covariance matrix Σ for the particle x_t at time t.
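
A standard EKF-style sketch of this linearization, expanding g about the previous mean μ_{t-1} (the motion-noise covariance symbol R_t is an assumption, not taken from the slide):

    x_t ≈ g(u_t, μ_{t-1}) + J (x_{t-1} − μ_{t-1}),   J = ∂g(u_t, x)/∂x evaluated at x = μ_{t-1}

    μ_t = g(u_t, μ_{t-1}),   Σ_t = J Σ_{t-1} J^T + R_t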


Outline of EKF SLAM

  • From what precedes, we assume that the map vectors m (which measure distances from the landmarks) are independent multivariate normal. Hence we now have:


Conditional Independence

  • For constructing the weights associated with the classical FastSLAM algorithm, under moderate assumptions, we get:

  • We use the notation:

  • and calculate that:
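
The missing expressions are presumably of the standard FastSLAM form (an assumption, not necessarily the slide's exact notation): conditioned on the robot path, the landmark estimates are independent,

    p(m | x_{1:t}, z_{1:t}) = ∏_{k=1}^{N} p(L_k | x_{1:t}, z_{1:t}),

and, when the motion model is used as the proposal, the particle weights reduce to

    w_t^{(i)} ∝ p(z_t | x_t^{(i)}, m^{(i)}).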


Problems with EKF SLAM

  • Uses uni-modal Gaussians to model non-Gaussian probability density functions


Particle Filters (without EKF)

  • The EKF approach depends on the odometry (the u's) and motion-model (the g's) assumptions in a very non-robust way, and it cannot represent multimodality in the motion model. In place of this, we can use particle filters, which represent the posterior by particles without committing to these parametric assumptions.


Particle Filters (an alternative)

  • Represent the probability distribution as a set of discrete particles that occupy the state space


Particle Filters

  • For constructing the weights associated with the classical FastSLAM algorithm, under moderate assumptions, we get the following (for x's simulated from the proposal q):
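
A standard form of these sequential importance weights (a reconstruction of the missing expression):

    w_t^{(i)} ∝ w_{t-1}^{(i)} · p(z_t | x_t^{(i)}, m) p(x_t^{(i)} | x_{t-1}^{(i)}, u_t) / q(x_t^{(i)} | x_{t-1}^{(i)}, z_t)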


Resampling

  • Assign each particle a weight depending on how well its estimate of the state agrees with the measurements

  • Randomly draw particles from the previous distribution based on these weights, creating a new distribution (a code sketch follows below)
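
A minimal sketch of one common way to implement this step (systematic resampling) in Python with NumPy; the function name and interface are illustrative, not taken from the slides:

```python
import numpy as np

def systematic_resample(particles, weights, rng=None):
    """Draw an equally weighted particle set in proportion to `weights`."""
    rng = rng or np.random.default_rng()
    n = len(weights)
    # One random offset, then n evenly spaced points in [0, 1).
    positions = (rng.random() + np.arange(n)) / n
    cumulative = np.cumsum(weights)
    cumulative[-1] = 1.0  # guard against floating-point round-off
    indexes = np.searchsorted(cumulative, positions)
    return particles[indexes], np.full(n, 1.0 / n)
```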


Particle Filter Update Cycle

  • Generate new particle distribution

  • For each particle

    • Compare particle’s prediction of measurements with actual measurements

    • Particles whose predictions match the measurements are given a high weight

  • Resample particles based on weight (see the sketch after this list)
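
A minimal sketch of this cycle, assuming a user-supplied motion sampler and measurement likelihood (the names `motion_sample` and `measurement_likelihood`, and the overall interface, are illustrative assumptions; `systematic_resample` is the sketch from the Resampling slide above):

```python
import numpy as np

def particle_filter_step(particles, weights, control, measurement,
                         motion_sample, measurement_likelihood, rng=None):
    """One predict / weight / resample cycle over a particle set."""
    rng = rng or np.random.default_rng()

    # Generate the new particle distribution by sampling the motion model.
    particles = np.array([motion_sample(p, control, rng) for p in particles])

    # Weight each particle by how well it predicts the actual measurement.
    weights = weights * np.array(
        [measurement_likelihood(measurement, p) for p in particles])
    weights = weights / weights.sum()

    # Resample based on the weights (e.g., with systematic_resample above).
    return systematic_resample(particles, weights, rng)
```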


Particle Filter Advantages

  • Can represent multi-modal distributions


Problems with Particle Filters

  • Degeneracy: as time evolves, the particles (which represent entire state histories) grow in dimensionality. Since there is error at each time point, this typically leads to vanishingly small inter-particle variation relative to intra-particle variation.

  • We frequently require estimates of the ‘marginal’ rather than ‘conditional’ particle distribution.

  • Particle filters do not provide good methods for estimating fixed (map) features.


Marginal versus Nonmarginal Particle Filters

  • Marginal particle filters attempt to update the X's using their marginal (posterior) distribution rather than their conditional (posterior) distribution. The update weights take the general form:
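
A standard form of the marginal-filter weight for a particle x_t^{(i)} drawn from a proposal q (a reconstruction, not necessarily the slide's exact notation):

    w_t^{(i)} ∝ p(z_t | x_t^{(i)}) · [ Σ_j w_{t-1}^{(j)} p(x_t^{(i)} | x_{t-1}^{(j)}) ] / q(x_t^{(i)})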


Marginal Particle Update

  • We want to update by using the old weights rather than conditioning on the old particles.
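
Concretely, the old weights enter through the standard mixture approximation of the predictive (marginal) distribution, stated here for concreteness:

    p(x_t | z_{1:t-1}) ≈ Σ_j w_{t-1}^{(j)} p(x_t | x_{t-1}^{(j)})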


Marginal Particle Filters

  • We specify the proposal distribution ‘q’ via:
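
A common choice, presumably the one intended here (the formula itself is missing), is a mixture proposal built from the previous weighted particles:

    q(x_t) = Σ_j w_{t-1}^{(j)} q(x_t | x_{t-1}^{(j)}, z_t)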


Marginal Particle Algorithm

  • (1) Sample the new particles x_t^{(i)} from the proposal distribution q.

  • (2) Calculate the importance weights:
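
A minimal sketch of these two steps for a one-dimensional state, assuming Gaussian transition and proposal kernels and a user-supplied measurement likelihood (all names, kernels, and parameters here are illustrative assumptions, not the slide's specification):

```python
import numpy as np
from scipy.stats import norm

def marginal_pf_step(particles, weights, measurement,
                     trans_std, prop_std, meas_lik, rng=None):
    """One marginal-particle-filter update for a scalar state."""
    rng = rng or np.random.default_rng()
    n = len(particles)

    # (1) Sample from the mixture proposal
    #     q(x_t) = sum_j w_{t-1}^(j) q(x_t | x_{t-1}^(j), z_t).
    ancestors = rng.choice(n, size=n, p=weights)
    new_particles = particles[ancestors] + prop_std * rng.standard_normal(n)

    # (2) Importance weights against the marginal predictive distribution:
    #     w_t^(i) ~ p(z_t|x_t^(i)) * sum_j w_{t-1}^(j) p(x_t^(i)|x_{t-1}^(j)) / q(x_t^(i)).
    pred = np.array([np.dot(weights, norm.pdf(x, loc=particles, scale=trans_std))
                     for x in new_particles])
    prop = np.array([np.dot(weights, norm.pdf(x, loc=particles, scale=prop_std))
                     for x in new_particles])
    lik = np.array([meas_lik(measurement, x) for x in new_particles])
    new_weights = lik * pred / prop
    return new_particles, new_weights / new_weights.sum()
```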


Updating Map Features in the Marginal Model

  • Up to now, we haven't assumed any map features. Let θ = {θ_t} denote, for example, the distances of the robot from the given landmarks at time t. We then write

  • for the probability associated with the scan z_t given the positions x_{1:t}.

  • We'd like to update θ. This should be based not on the first of the two gradients sketched below, but rather on the second.
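
The two gradient expressions themselves are reconstructions (an assumption, consistent with the marginal-likelihood argument on the following slides); presumably the contrast is between the per-path score and the marginal score:

    (1)  ∇_θ log p(z_t | x_{1:t}, θ)    (conditional on a particular sampled path)

    (2)  ∇_θ log p(z_{1:t} | θ)         (marginal, with the path integrated out)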


Taking the ‘Right’ Derivative

  • The first gradient is highly non-robust; we are essentially taking derivatives of noise.

  • By contrast, the second gradient is robust and represents the ‘right’ derivative.


Estimating the Gradient of a Map

  • We have that,
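
One natural reading of the missing identity (an assumption, chosen so that the ‘first term’ and ‘second term’ on the next two slides have referents) is the score decomposition

    ∇_θ log p(z_{1:t} | θ) = ∇_θ log p(z_t | z_{1:t-1}, θ) + ∇_θ log p(z_{1:t-1} | θ)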


Simplification

  • We can then show for the first term that:


Simplification II

  • For the second term, we convert it into a discrete sum by defining ‘derivative weights’

  • and combining them with the standard weights.


Estimating the Gradient

  • We can further write that:


Gradient (continued)

  • We can therefore update the gradient weights via:


Parameter Updates

  • We update θ by:
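
A standard update of this kind is a stochastic gradient-ascent step on the estimated marginal log-likelihood (the step-size symbol γ_t is assumed, not taken from the slide):

    θ_t = θ_{t-1} + γ_t ∇̂_θ log p(z_{1:t} | θ) evaluated at θ = θ_{t-1}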


Normalization

  • The β’s are normalized differently from the w’s. In effect we put:

  • And then compute that:


Weight Updates


The Bayesian Viewpoint

  • Retain a posterior sample of θ at time t-1.

  • Call this θ_{t-1}^{(i)} (i = 1, …, I)

  • At time t, update this sample:


Multi-Robot Models

  • Write for the poses and scan statistics for the r robots.

  • At each time point the needed weights have r indices:

  • We also need to update the derivative weights – the derivative is now a matrix derivative.


Multi-Robot SLAM

  • The parameter θ is now a matrix, with time indexing the rows and the robot index indexing the columns. Updates depend on derivatives with respect to each time point and with respect to each robot.

