
Introduction to Simultaneous Localization and Mapping



  1. Introduction to Simultaneous Localization and Mapping

  2. Overview • Definition of SLAM • Why it's a hard problem • Introduction to Kalman Filters • Applying Kalman Filters to Solve SLAM Problems • Visualizations of Solutions • A Nod to Other Approaches

  3. Definition – Localization and Mapping • Simultaneous Localization and Mapping (SLAM) asks • Where am I in the world reference frame? • What are the obstacles and features of the world? • These problems are the Localization and Mapping problems • Localization : given a map, observations, and actions, determine the location of the robot • Mapping: given a trajectory and observations, determine a map of the environment

  4. Odometry • The method of determining location solely from the internal sensors of the robot – no map! • Integrate actions to get the new state (a minimal sketch follows below). • Single-disk shaft encoder: a perforated disk is mounted on the shaft and placed between the emitter–detector pair. As the shaft rotates, the holes in the disk chop the light beam.
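
A minimal dead-reckoning sketch for a differential-drive robot; the encoder resolution, wheel radius, wheel base, and tick readings below are illustrative assumptions, not values from the presentation.

```python
import math

TICKS_PER_REV = 512    # encoder holes per wheel revolution (assumed)
WHEEL_RADIUS = 0.05    # wheel radius in meters (assumed)
WHEEL_BASE = 0.30      # distance between the wheels in meters (assumed)

def integrate_odometry(pose, left_ticks, right_ticks):
    """Integrate one step of encoder ticks into a new (x, y, theta) pose."""
    x, y, theta = pose
    dist_per_tick = 2 * math.pi * WHEEL_RADIUS / TICKS_PER_REV
    d_left = left_ticks * dist_per_tick        # distance rolled by left wheel
    d_right = right_ticks * dist_per_tick      # distance rolled by right wheel
    d_center = (d_left + d_right) / 2          # forward motion of robot center
    d_theta = (d_right - d_left) / WHEEL_BASE  # change in heading
    # Midpoint approximation of the arc traveled this step.
    x += d_center * math.cos(theta + d_theta / 2)
    y += d_center * math.sin(theta + d_theta / 2)
    return (x, y, theta + d_theta)

pose = (0.0, 0.0, 0.0)
for l, r in [(100, 100), (100, 120), (100, 120)]:  # made-up tick readings
    pose = integrate_odometry(pose, l, r)
print(pose)  # any encoder noise is integrated too, so error accumulates
```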

  5. Uncertainty from Noise • However, our calculated trajectory is inaccurate due to noise • Sources include miscalibration, vibration, wheel slippage, and geometric differences • The uncertainty grows with time.

  6. Localization With Landmarks • We can use observations of landmarks to reduce the uncertainty in our pose estimates.

  7. Mapping • The complementary problem to localization: we know • our position, x[k], • our history of positions, X[0:k-1], and • noisy observations of the environment, Z[0:k], and • we want to make a map of the environment, M.

  8. What is a map? • Set of features with relationships. • Features • Occupancy grid • Lines • Anything identifiable by the robot • Relations • Rigid-body transformation • Topological

  9. SLAM Combines Both • Unknown map, unknown pose: neither the map nor the location is known.

  10. Formal Problem Statement • What is P(x[k], m | Z[0:k], U[0:k], x[0])? • Problem Quantities • x[k]: location of the vehicle at time k • u[k]: a control vector applied at k-1 to drive the vehicle from x[k-1] to x[k] • m[i]: location of the ith landmark • z[k]: observation of a landmark taken at time k • X[0:k]: history of states {x[0], x[1], x[2], ..., x[k]} • U[0:k]: history of control inputs {u[1], u[2], u[3], ..., u[k]} • m: set of all landmarks • Z[0:k]: history of all observations {z[1], z[2], ..., z[k]}

  11. The “Holy Grail” • With SLAM, robots gain a major step toward autonomy. They can be placed at an unknown location, with unknown surroundings, and they can work out what's around them and where they are.

  12. Difficulties • All our knowledge is probabilistic • We have a “chicken and egg” problem: • If we knew the map, we could calculate our location • If we knew our location, we could calculate the map

  13. Kalman Filter: Overview • The Kalman Filter (KF) is a recursive algorithm to estimate the state of a process based on • A model of how the process changes, • A model of what observations we expect, and • Discrete, noisy observations of the process. • Predictions of the state of the process are made based on a model. • Observations are made and our estimate is updated given the new observation.

  14. Foundations of Kalman Filter • The purpose of the KF is to calculate the distribution of the state at time k, given noisy observations. • However, filtering continuous distributions generates state distributions whose representation grows without bound. • An exception to this “rule” is the Gaussian distribution, which only needs a mean and variance to define the distribution.

  15. Linear Gaussian Distributions • Under the continuous operations needed for KFs • Conditioning • Bayes' rule • If our prior distribution is a Gaussian, and our transition models are linear Gaussian, then our calculated distribution is also Gaussian.

  16. Conditioning • We can find P(Y) = sum of P(Y, z) over all z. • Example: P(y, z) = 0.4, P(y, not z) = 0.2, so P(y) = 0.6 • And P(Y, z) = P(Y|z)P(z) from the product rule. • So P(Y) = sum of P(Y|z)P(z) • For continuous distributions, this becomes P(Y) = integral of P(Y|z)P(z) over all z, which is where we get the prediction P(x[k+1]) = integral of P(x[k+1] | x[k]) P(x[k]) over all x[k] • But if our P(x[k]) is only true given e[0:k], so is our result: P(x[k+1] | e[0:k]) = integral of P(x[k+1] | x[k]) P(x[k] | e[0:k]) over all x[k].

  17. Bayes' Rule • We know that P(a and b) = P(a|b)P(b) = P(b and a) = P(b|a)P(a) • Therefore, P(b|a) = P(a|b)P(b)/P(a) • With additional evidence, this becomes P(b|a, e) = P(a|b, e)P(b|e)/P(a|e) • Notice that this is a probability distribution, and all values will sum to one – P(a|e) just acts as a normalizer. • This yields the update P(x[k+1] | e[0:k+1]) = α P(e[k+1] | x[k+1]) P(x[k+1] | e[0:k]).

  18. KF Recursion • We want P(x[k+1] | e[0:k+1]), and we have P(x[k] | e[0:k]). • Note that by using conditioning, we can calculate the prediction P(x[k+1] | e[0:k]) as long as we know P(x[k+1] | x[k]), which is referred to as the motion model. • Also, by the generalized Bayes' rule, we have P(x[k+1] | e[0:k+1]) = α P(e[k+1] | x[k+1]) P(x[k+1] | e[0:k]), assuming we know P(e[k+1] | x[k+1]), the sensor model.

  19. Process Motion Model • The model of how the process changes is referred to as the motion model. A new state of the process is assumed to be a linear function of the previous state and the control input, with Gaussian noise, p(w) ~ N(0, Q), added: x[k] = Ax[k-1] + Bu[k-1] + w[k-1], where A and B are matrices capturing our process model.

  20. Process Sensor Model • The sensor model is also required to be linear Gaussian: z[k] = Hx[k] + v[k], where H is a matrix that relates the state to the observation and p(v) ~ N(0, R).

  21. Covariance • The multi-dimensional analog of variance: the expected squared difference between elements and the mean. It quantifies the uncertainty of our estimate.

  22. Kalman Filter Algorithm • The Kalman Filter operates on Gaussian distributions, and so only needs to calculate a mean, x, and a covariance, C. • Prediction step: x[k+1|k] = Ax[k] + Bu[k], C[k+1|k] = AC[k]Aᵀ + Q = cov(Ax[k], Ax[k]) + Q • Update step: x[k+1|k+1] = x[k+1|k] + K(z[k+1] - Hx[k+1|k]), C[k+1|k+1] = (I - KH)C[k+1|k] • A minimal sketch of the two steps follows below.
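
A minimal NumPy sketch of these two steps; the function names are ours, and the noise is assumed additive as in the models above.

```python
import numpy as np

def kf_predict(x, C, A, B, u, Q):
    """Prediction: propagate the mean and covariance through the motion model."""
    x_pred = A @ x + B @ u
    C_pred = A @ C @ A.T + Q
    return x_pred, C_pred

def kf_update(x_pred, C_pred, z, H, R):
    """Update: correct the prediction with an observation z."""
    S = H @ C_pred @ H.T + R              # innovation covariance
    K = C_pred @ H.T @ np.linalg.inv(S)   # Kalman gain (see the next slide)
    x = x_pred + K @ (z - H @ x_pred)
    C = (np.eye(len(x_pred)) - K @ H) @ C_pred
    return x, C
```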

  23. K, Kalman Gain Factor • K is a measure of how to weight the prediction and the observation • A high value of K results in the measurement being trusted • A low value of K results in the prediction being trusted • K is based on the covariances: K = C[k+1|k]Hᵀ(HC[k+1|k]Hᵀ + R)⁻¹ • As R -> 0, K -> H⁻¹ (trust the measurement) • As C[k+1|k] -> 0, K -> 0 (trust the prediction)

  24. Initial State • With the previous formulas, we can calculate an update. What's our initial state? • We need x[0], C[0], R, Q. • x[0] is our initial guess at the process state • C[0] is how certain we are of our guess • R is the noise in the sensor model (measured) • Q is the noise in the process model (estimated)

  25. Quick Example • Estimate a slightly variable 1D signal from noisy observations: x[k+1] = x[k] + w, z[k] = x[k] + v • Therefore, A = 1, B = 0, H = 1. • Let's assume Q = 0.00001, R = 0.01, x[0] = 0, and C[0] = 1. • Q and R often need to be tuned for performance. • A runnable version follows below.
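
A runnable version of this example, reusing the kf_predict/kf_update sketch from slide 22; the hidden signal value and random seed are made up for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)
A = np.array([[1.0]]); B = np.array([[0.0]]); H = np.array([[1.0]])
Q = np.array([[1e-5]]); R = np.array([[1e-2]])
x = np.array([0.0]); C = np.array([[1.0]])   # x[0] = 0, C[0] = 1

true_x = -0.4   # hidden signal value to estimate (assumed for the demo)
for _ in range(50):
    z = np.array([true_x + rng.normal(0.0, np.sqrt(R[0, 0]))])  # noisy reading
    x, C = kf_predict(x, C, A, B, np.array([0.0]), Q)
    x, C = kf_update(x, C, z, H, R)
print(x, C)  # the mean approaches true_x and the covariance shrinks
```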

  26. Result

  27. Probabilistic SLAM • Is SLAM a Kalman Filter problem? • We want P(x[k], m | Z[0:k], U[0:k], x[0]), and we can assume we know the previous state, P(x[k-1], m | Z[0:k-1], U[0:k-1], x[0]). • Conditioning on the motion gets us the predicted next state. • And the generalized Bayes' rule integrates z[k]. • So we need the motion model P(x[k] | x[k-1], u[k]) and the sensor model P(z[k] | x[k], m).

  28. Kalman Formulation for SLAM • The motion model P(x[k] | x[k-1], u[k]) looks exactly like the motion model from the Kalman Filter. • And if we incorporate the landmark locations into our system state, x[k], then P(z[k] | x[k], m) is the same as the sensor model. • We assume landmarks don't move, so p[k+1] = p[k]. • Our observations are of landmarks relative to the robot, z[k] = Hp·p - Hr·r[k] + w, where Hp·p - Hr·r[k] = Hx[k] and r is the robot pose.

  29. SLAM Example • Suppose we have a robot moving in 1D that moves at a certain velocity every timestep, with noise added to the velocity: r[k+1] = r[k] + v[k]Δt, v[k+1] = v[k] + w. • Also assume the landmarks don't move: p[k+1] = p[k]. • This provides our motion model.

  30. SLAM Example • Further, the observations we expect are of the landmark relative to the robot: z[k] = p - r[k] + v. • Thus we have A and H (sketched below for state (r, v, p)); x[0], R, Q, and C[0] must be given, and B is [0].
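
A sketch of those matrices under the assumed state ordering (r, v, p): robot position, robot velocity, one landmark. The timestep and noise magnitudes are illustrative.

```python
import numpy as np

dt = 1.0  # timestep (assumed)

# State x = [r, v, p]: robot position, robot velocity, landmark position.
A = np.array([[1.0, dt,  0.0],   # r[k+1] = r[k] + v[k]*dt
              [0.0, 1.0, 0.0],   # v[k+1] = v[k] (process noise enters here)
              [0.0, 0.0, 1.0]])  # p[k+1] = p[k] (landmarks are static)
B = np.zeros((3, 1))             # no control input in this example

H = np.array([[-1.0, 0.0, 1.0]]) # z[k] = p - r[k]: landmark relative to robot

Q = np.diag([0.0, 1e-3, 0.0])    # motion noise only on velocity (assumed)
R = np.array([[1e-2]])           # sensor noise (assumed)
```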

  31. Adding Landmarks • As more landmarks are discovered, they are added to the state. • All the problem matrices are updated and their dimension is increased, as in the sketch below.
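
A simplified sketch of that augmentation; a full implementation would also set the cross-covariances between the new landmark, the robot pose, and the other landmarks, which this omits.

```python
import numpy as np

def add_landmark(x, C, p_init, var_init=1e6):
    """Append a newly discovered landmark estimate p_init to the state.

    var_init is a large initial variance: the new landmark is still
    almost unknown, so the filter should trust observations of it."""
    n = len(x)
    x_new = np.append(x, p_init)
    C_new = np.zeros((n + 1, n + 1))
    C_new[:n, :n] = C          # existing state keeps its correlations
    C_new[n, n] = var_init     # new landmark starts highly uncertain
    return x_new, C_new
```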

  32. Why this Works • Kalman Filters have been proven to solve the SLAM problem by Dissanayake et al. • This is because the correlation between landmark estimates monotonically increases • Thus, the relative positions of the landmarks become well known, and a map of the landmarks relative to each other becomes well known. • Thus, the solution converges.

  33. Limitations of Kalman Filter • A unimodal distribution cannot account for “choice.” • Computation is O(n²), where n is the size of the state vector. • Landmark association and loop closure. • Requires a linear process model for the system. • This final weakness is addressed by the Extended Kalman Filter (EKF).

  34. Extended Kalman Filter • The assumption of linear motion and sensor models is a serious limitation. • What if we let either of these models be non-linear? x[k+1] = f(x[k], u[k], w[k]), z[k] = h(x[k], v[k]), where f and h can be non-linear and v and w are zero-mean Gaussian noise.

  35. EKF – Mean Calculation • Like the Kalman Filter, the EKF works by calculating a new distribution mean and covariance given the models and observations. • The means are propagated by the models, without the noise: x[k+1] = f(x[k], u[k], 0), z[k] = h(x[k], 0)

  36. EKF – Covariances • The covariance estimates are based on a linearization of the models using Jacobians: C[k+1|k] = F[k]C[k]F[k]ᵀ + W[k]Q[k]W[k]ᵀ, where C is the covariance matrix, F[k] is the Jacobian of f with respect to the state variables, W[k] is the Jacobian of f with respect to the noise parameters, and Q is the covariance of the motion model's noise.

  37. EKF – Gain • Finally, we need a new form for K: K[k] = C[k+1|k]J[k]ᵀ(J[k]C[k+1|k]J[k]ᵀ + V[k]R[k]V[k]ᵀ)⁻¹, where C is our covariance, J is the Jacobian of h, the sensor model, with respect to the state variables, V is the Jacobian of the sensor model with respect to its noise, and R is the covariance of the noise in the sensor model.

  38. EKF Example • A car model requires a non-linear motion model (a sketch follows below): dx = v * cos(theta), dy = v * sin(theta), dtheta = (v * tan(phi)) / L
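
A sketch of this motion model and the state Jacobian an EKF would linearize with; the wheelbase L and timestep are assumed values.

```python
import numpy as np

L = 2.5  # wheelbase in meters (assumed)

def f(state, u, dt=0.1):
    """Bicycle motion model: state (x, y, theta), control (v, phi)."""
    x, y, theta = state
    v, phi = u
    return np.array([x + v * np.cos(theta) * dt,
                     y + v * np.sin(theta) * dt,
                     theta + (v * np.tan(phi) / L) * dt])

def F_jacobian(state, u, dt=0.1):
    """Jacobian of f with respect to the state, for the covariance update."""
    _, _, theta = state
    v, _ = u
    return np.array([[1.0, 0.0, -v * np.sin(theta) * dt],
                     [0.0, 1.0,  v * np.cos(theta) * dt],
                     [0.0, 0.0,  1.0]])
```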

  39. Example Motion Model • Fixed steering angle, fixed velocity

  40. Example Observation Model • Let's assume our observations are taken from an overhead camera: z[k] = x[k] + v[k], where v[k] is noise.

  41. Prediction Step • Prediction of the mean is calculated from the motion model: x[k+1|k] = f(x[k], u[k], 0) • Jacobians are needed to calculate the new covariance: C[k+1|k] = F[k]C[k]F[k]ᵀ + W[k]Q[k]W[k]ᵀ

  42. Update Step • The predicted measurement is given by the sensor model: z[k+1|k] = h(x[k+1|k], 0) • The final estimate is x[k+1] = x[k+1|k] + K(z[k+1] - z[k+1|k]) • This requires the gain, K[k] = C[k+1|k]J[k]ᵀ(J[k]C[k+1|k]J[k]ᵀ + V[k]R[k]V[k]ᵀ)⁻¹

  43. Update Step cont. • The gain requires two Jacobians: K[k] = C[k+1|k]J[k]ᵀ(J[k]C[k+1|k]J[k]ᵀ + V[k]R[k]V[k]ᵀ)⁻¹ • All that's left is the final covariance: C[k+1] = (I - K[k]J[k])C[k+1|k] • A sketch of the full cycle appears below.
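
Putting slides 41-43 together, a minimal sketch of one EKF cycle; it assumes additive noise, so the noise Jacobians W and V reduce to the identity and Q and R are used directly.

```python
import numpy as np

def ekf_step(x, C, u, z, f, h, F, J, Q, R):
    """One EKF cycle. f/h are the motion and sensor models;
    F/J are their Jacobians with respect to the state."""
    # Prediction: mean through the non-linear model,
    # covariance through its linearization.
    x_pred = f(x, u)
    Fk = F(x, u)
    C_pred = Fk @ C @ Fk.T + Q
    # Update: gain from the linearized sensor model.
    Jk = J(x_pred)
    K = C_pred @ Jk.T @ np.linalg.inv(Jk @ C_pred @ Jk.T + R)
    x_new = x_pred + K @ (z - h(x_pred))
    C_new = (np.eye(len(x)) - K @ Jk) @ C_pred
    return x_new, C_new
```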

  44. Result • With blue as the true trajectory, red as the observation, and green as the estimate.

  45. EKF Issues • Landmark association and loop closure. • A unimodal distribution cannot account for “choice.” • Computation is O(n²), where n is the size of the state vector.

  46. Landmark Association • How can the robot tell if it's looking at landmark m[i] or m[j]? • What if your landmarks are just laser ranges?

  47. Constructing lines from Scans

  48. Scan Matching • The robot scans, moves, and scans again. • Short-term motion noise means the same features must be re-recognized and the two scans aligned.

  49. Iterative Closest Point (ICP) • Inputs: two point clouds • Output: a refined transformation • Method: • 1. Associate points by the nearest-neighbor criterion. • 2. Estimate the transformation parameters using a mean-square cost function. • 3. Transform the points using the estimated parameters. • 4. Iterate (re-associate the points and so on). • A minimal sketch follows below.
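
A minimal 2D point-to-point ICP sketch of those four steps; it assumes the clouds fully overlap and does no outlier rejection, which real scan matchers need.

```python
import numpy as np

def icp_2d(source, target, iterations=20):
    """Align source to target; returns total rotation R and translation t."""
    R_total, t_total = np.eye(2), np.zeros(2)
    src = source.copy()
    for _ in range(iterations):
        # 1. Associate each source point with its nearest target point.
        d = np.linalg.norm(src[:, None, :] - target[None, :, :], axis=2)
        matches = target[np.argmin(d, axis=1)]
        # 2. Estimate the rigid transform minimizing mean-square error (SVD).
        src_c, m_c = src.mean(axis=0), matches.mean(axis=0)
        U, _, Vt = np.linalg.svd((src - src_c).T @ (matches - m_c))
        R = Vt.T @ U.T
        if np.linalg.det(R) < 0:   # guard against a reflection solution
            Vt[-1] *= -1
            R = Vt.T @ U.T
        t = m_c - R @ src_c
        # 3. Transform the points, then 4. iterate with the new association.
        src = src @ R.T + t
        R_total, t_total = R @ R_total, R @ t_total + t
    return R_total, t_total
```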
