Fast Iterative Alignment of Pose Graphs with Poor Initial Estimates


Presentation Transcript


  1. Fast Iterative Alignment of Pose Graphs with Poor Initial Estimates Edwin Olson (eolson@mit.edu), John Leonard (jleonard@mit.edu), Seth Teller (teller@csail.mit.edu) http://rvsn.csail.mit.edu/graphoptim

  2. Where are we going?

  3. Problem summary (figure legend: feature, pose, constraint) • Robot moves around the environment. Poses are connected by constraints (odometry). • Constraint = rigid-body transformation + covariance matrix

  4. Problem summary (figure legend: feature, pose, constraint) • Re-observing features allows constraints between non-successive poses
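For concreteness, each such constraint can be stored as a relative rigid-body transform together with its covariance. The sketch below is illustrative only (hypothetical Python names, not the authors' code):

```python
import numpy as np
from dataclasses import dataclass

@dataclass
class Constraint:
    """A rigid-body constraint between pose i and pose j (2D case)."""
    i: int            # index of the earlier pose
    j: int            # index of the later pose
    z: np.ndarray     # measured relative transform [dx, dy, dtheta]
    cov: np.ndarray   # 3x3 covariance of that measurement

# Odometry links successive poses; re-observing a feature adds a
# constraint between non-successive poses (a loop closure).
odom = Constraint(0, 1, np.array([1.0, 0.0, 0.05]),
                  np.diag([0.01, 0.01, 0.005]))
loop = Constraint(0, 120, np.array([0.2, -0.1, 3.1]),
                  np.diag([0.05, 0.05, 0.02]))
```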

  5. Problem summary • Goal: find the arrangement of poses and features that best satisfies the constraints (e.g., maximizes the probability). • (Figure: an improbable initial configuration with poorly satisfied constraints vs. the maximum-likelihood configuration)
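Written out (a standard formulation for Gaussian constraint noise; the slides state it only in words), the maximum-likelihood configuration minimizes the summed Mahalanobis error of the constraints:

```latex
% x: stacked poses (and features); for constraint c between poses a_c and b_c:
%   f_c(x) is the predicted relative transform, z_c the measured one,
%   and \Sigma_c its covariance.
x^{*} = \arg\min_{x} \sum_{c} \bigl(f_c(x) - z_c\bigr)^{\top} \Sigma_c^{-1} \bigl(f_c(x) - z_c\bigr)
```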

  6. Why is this hard? • Huge number of unknowns • |x| = 10³ is a small problem • Non-linear constraints • Function of robot orientation • Lousy initial estimates • Odometry! • (Today, we’ll ignore SLAM’s other difficult sub-problems: data association, exploration) • (Figure: cost surface from a laser-scan matching problem; many local maxima/minima)

  7. Our Work • We present an algorithm for optimizing pose graphs • Very fast • Estimates non-linear maximum likelihood • Unlike EKF, EIF and friends • Works well even with poor initial estimates • Under 100 lines of code

  8. A peek at results • Noisy (simulated) input: 3500 poses, 3499 temporal constraints, 2100 spatial constraints • (Figure panels: ground truth; Gauss-Seidel, 60 sec; Multi-Level Relaxation, 8.6 sec; our method, 2.8 sec)

  9. Our Method: Overview • Marginalize out features • Use “incremental” state space • Enforce constraints serially • Linearize constraint to global incremental constraint • Taking smaller steps as t→∞

  10. Marginalizing Features • Marginalize features, leaving only trajectory • Problem now involves only inter-pose constraints • Then compute features (trivial, as in FastSLAM)
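One standard way to carry out this step is a Schur complement on the joint information matrix; the sketch below assumes that formulation and may differ in detail from the paper's procedure:

```python
import numpy as np

def marginalize_features(Lambda, n_pose):
    """Remove feature variables from a joint information matrix.

    Lambda is ordered as [pose block | feature block]; n_pose is the number
    of pose rows/columns. The Schur complement returns the information
    matrix over poses only; its new off-diagonal fill-in corresponds to the
    extra pose-pose edges discussed on the next slide.
    """
    Lpp = Lambda[:n_pose, :n_pose]
    Lpf = Lambda[:n_pose, n_pose:]
    Lff = Lambda[n_pose:, n_pose:]
    return Lpp - Lpf @ np.linalg.solve(Lff, Lpf.T)
```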

  11. Marginalization: Cons • Con: More edges in graph • Feature with N observations leads to O(N²) edges • Slower/harder to solve • (Information matrix less sparse)

  12. Marginalization: Not so bad? • Pro: Pose-pose edges better model data inter-dependence • “This cloud of points matches this cloud of points” • Individual point associations are not independent. • More correct to use a single “lumped” constraint • Bonus: Makes it easier to undo bad data associations later • (Figure: observations at t=5 vs. t=500; individual data associations vs. a single lumped constraint)

  13. Iterative Optimization • Do forever: • Pick a constraint • Descend in direction of constraint’s gradient • Scale gradient magnitude by alpha/iteration • Clamp step size • iteration++ • alpha/iteration→0 as t→∞ • Robustness to local concavities • Hop around the state space, “stick” in the best one • Good solution very fast, “perfect” solution only as t→∞
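Schematically, the loop might look like the sketch below (residual, jacobian, and info are hypothetical per-constraint accessors, and the step shown is a plain scaled gradient step rather than the paper's exact update rule):

```python
import numpy as np

def optimize(x, constraints, iters=1000, alpha=1.0, max_step=0.5):
    """Serially enforce constraints with a decaying, clamped gradient step."""
    for t in range(1, iters + 1):
        for c in constraints:
            r = c.residual(x)                  # how badly constraint c is violated
            g = c.jacobian(x).T @ c.info @ r   # gradient of c's Mahalanobis error
            step = -(alpha / t) * g            # scale decays as alpha/iteration
            np.clip(step, -max_step, max_step, out=step)  # clamp the step size
            x = x + step                       # descend along this constraint only
    return x
```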

  14. Importance of state space • Choice of state representation affects gradient • Pick a state space that: • Is computationally efficient • Interacts well with optimization algorithm • (Figure: a chain of poses with a constraint and its error)

  15. Previous work: global state space • Global state: x = [x_0 y_0 θ_0 x_1 y_1 θ_1 x_2 y_2 θ_2 …]^T • Constraint between poses a and b = f(x_a, y_a, θ_a, x_b, y_b, θ_b) • Gradient = 0 for all but poses a and b • Slow convergence • Error for some links can go up dramatically • (Figure: pose chain; the step direction moves only the two poses involved in the constraint)

  16. State Space: Relative/Incremental • Robot motion is cumulative • Adjustment of one pose affects neighbors • Each constraint affects multiple nodes • Faster convergence • Error decreases more predictably • (Figure: pose chain; the step direction adjusts every pose spanned by the constraint)

  17. Relative versus Incremental • Relative (prior work): • State vector: rigid-body transformations, P_3 = P_0 T_1 T_2 T_3 • More realistic • Projective • Small adjustments can have large effects • Instability • O(N) time to compute update • Incremental (contribution): • State vector: global-relative motions, P_3 = P_0 + D_1 + D_2 + D_3 • Less realistic (inexact) • Behaves linearly • Small adjustments have small effects • Good convergence properties • O(log N) to compute update
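A toy illustration of the incremental bookkeeping, reduced to a single translational coordinate and ignoring orientation (not the paper's implementation):

```python
import numpy as np

# Incremental state: pose_k = pose_0 + D_1 + ... + D_k, i.e. a prefix sum.
D = np.array([1.0, 1.0, 1.0, 1.0])             # increments D_1..D_4
poses = np.concatenate(([0.0], np.cumsum(D)))  # global poses P_0..P_4

# Enforcing a constraint between P_1 and P_4 spreads a correction over
# D_2..D_4; every affected pose moves by a bounded amount, which is why
# the error changes predictably (unlike the relative/projective scheme,
# where changing T_1 swings everything downstream).
correction = 0.3
D[1:4] += correction / 3.0
poses = np.concatenate(([0.0], np.cumsum(D)))  # P_4 - P_1 has grown by 0.3
```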

  18. Algorithm Complexity • For N poses, M constraints: • Memory: O(N+M) • 1M poses, 2M constraints → 176MB • EKF/EIF: O(N²) • Time: • Impose constraint: O(log N) • (using incremental state space and tree-based data structure) • One full iteration: O(M log N) • Convergence: hard to make guarantees (nonlinear problem) • Δx bounded at each time step, but ∫ Δx dt might not be • In practice, SLAM problems fairly well behaved after a few iterations
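The O(log N) figure comes from keeping the increments in a tree, so that applying a constraint (a range update over increments) and reading a pose (a prefix sum) each take O(log N). The paper uses its own balanced-tree structure; the sketch below gets the same asymptotics with a standard Fenwick (binary indexed) tree, one instance per state dimension:

```python
class PrefixTree:
    """Range-add / prefix-sum over N increments in O(log N) per operation."""

    def __init__(self, n):
        self.n = n
        self.b1 = [0.0] * (n + 2)   # two Fenwick arrays implement range updates
        self.b2 = [0.0] * (n + 2)

    def _add(self, b, i, v):
        while i <= self.n:
            b[i] += v
            i += i & -i

    def _sum(self, b, i):
        s = 0.0
        while i > 0:
            s += b[i]
            i -= i & -i
        return s

    def range_add(self, lo, hi, v):
        """Add v to every increment D_lo .. D_hi (one constraint's correction)."""
        self._add(self.b1, lo, v)
        self._add(self.b1, hi + 1, -v)
        self._add(self.b2, lo, v * (lo - 1))
        self._add(self.b2, hi + 1, -v * hi)

    def pose(self, k):
        """Prefix sum D_1 + ... + D_k: pose k relative to pose 0."""
        return self._sum(self.b1, k) * k - self._sum(self.b2, k)

# Usage: a 1,000-pose trajectory, x coordinate only.
tree = PrefixTree(1000)
tree.range_add(2, 1000, 0.01)   # spread a correction over increments 2..1000
x_500 = tree.pose(500)          # global x of pose 500, computed in O(log N)
```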

  19. Gauss-Seidel Relaxation (exponential time scale)

  20. Our algorithm (slowed down by ~4x)

  21. Multi-Level Relaxation • Optimization result after 8.6 seconds • Converges to (very) good result in 30 minutes • Thanks to Udo Frese for running this data set

  22. Killian Court (Real Data) • Laser-scan derived open-loop estimate

  23. Continuing work… • Several 3D implementations under way • Both with and without full-rank constraints • Incremental (not batch) version • Adaptive learning rate • Reports of success with our algorithm from other universities…

  24. Questions? • Updated paper, movies, source code: • http://rvsn.csail.mit.edu/graphoptim
