
Probability, Gaussians and Estimation




  1. Probability, Gaussians and Estimation
  David Johnson

  2. Basic Problem
  • Approaches so far: robot state is a point in state space, q = (x, y, vx, vy, heading)
  • This state is estimated from measurements
  • External (GPS, beacons, vision)
  • Internal (odometry, gyros)
  • Measurements are not exact, and errors can accumulate
  • How to “clean” measurements? Filtering
  • How to combine measurements? Estimation

  3. Problems to solve
  • Localization: given a map, where am I?
  • Mapping: given my position, build a map
  • SLAM (Simultaneous Localization and Mapping): drop down a robot, then build a map and localize within it at the same time

  4. Approach
  • Treat state variables as probabilities
  • Combine measurements weighted by reliability
  • Use filtering to improve the estimate of the state

  5. Example - Triangulation
  • Time of flight from beacons gives distance
  • A single distance constrains the robot to a circle
  • Two beacons narrow the solution to the intersection points of two circles

  6. Noise in Measurements
  • Uncertainty in measurement
  • Can be reported as plus/minus some value
  • Creates solution regions rather than points
  • Even this is a simplification: measurements follow a distribution

  7. Coin Flip
  • F = (heads, tails)
  • Discrete distribution
  • The probabilities of a coin flip being heads or tails sum to 0.5 + 0.5 = 1
  • But what about continuous distributions?
  • The probability that someone in the room is exactly 2 meters tall is infinitesimal
  • Talk about the probability of intervals instead

  8. Continuous distributions
  • Probability density function f(x)
  • Find the probability of a measurement falling within an interval
  • What is f(x) for a uniform distribution over the range [u, v]?
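For the uniform question above, the density must be constant and integrate to 1 over [u, v], which fixes the answer the slide asks for:

```latex
f(x) = \begin{cases} \dfrac{1}{v-u}, & u \le x \le v \\[4pt] 0, & \text{otherwise} \end{cases}
\qquad
\int_u^v \frac{1}{v-u}\,dx = 1
```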

  9. Gaussian distributions
  • Bell-shaped f(x)
  • Can assume most measurements with noise follow a Gaussian distribution
  • Why? The Central Limit Theorem
  • Applet
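The Central Limit Theorem behind this assumption can be illustrated numerically: summing many independent uniform samples produces an approximately Gaussian distribution. A minimal sketch (the 12-uniform construction and sample counts are illustrative choices, not from the slides):

```python
import random
import statistics

random.seed(0)

# Sum 12 uniform [0,1) samples and subtract 6: each sum has mean 0 and
# variance 12 * (1/12) = 1, so the sums approximate a standard normal.
samples = [sum(random.random() for _ in range(12)) - 6 for _ in range(20000)]

mean = statistics.mean(samples)
std = statistics.stdev(samples)
print(f"mean ≈ {mean:.3f}, std ≈ {std:.3f}")  # close to 0 and 1
```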

  10. Gaussian Definition
  • Univariate, $p(x) \sim N(\mu, \sigma^2)$:
  $$p(x) = \frac{1}{\sqrt{2\pi}\,\sigma}\, e^{-\frac{(x-\mu)^2}{2\sigma^2}}$$
  • Multivariate, $p(\mathbf{x}) \sim N(\boldsymbol{\mu}, \Sigma)$:
  $$p(\mathbf{x}) = \frac{1}{(2\pi)^{d/2}\,|\Sigma|^{1/2}}\, e^{-\frac{1}{2}(\mathbf{x}-\boldsymbol{\mu})^{\mathsf{T}} \Sigma^{-1} (\mathbf{x}-\boldsymbol{\mu})}$$

  11. The Mean of a Continuous Distribution
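The formula on this slide was an image and did not survive the transcript; it is presumably the standard definition, alongside its discrete counterpart:

```latex
\text{Discrete: } \mu = \sum_i p_i\, x_i
\qquad
\text{Continuous: } \mu = E[x] = \int_{-\infty}^{\infty} x\, f(x)\, dx
```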

  12. Discrete Variance vs Continuous • Discrete • Continuous
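The two definitions this slide compares were images; the standard forms, in the same notation as the mean, are presumably:

```latex
\text{Discrete: } \sigma^2 = \sum_i p_i\, (x_i - \mu)^2
\qquad
\text{Continuous: } \sigma^2 = \int_{-\infty}^{\infty} (x - \mu)^2 f(x)\, dx
```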

  13. Gaussians
  • Gaussians are completely described by their mean and variance
  • A non-zero mean implies a bias in the measurement
  • Zero-mean noise can be removed by filtering

  14. Properties of Gaussians
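The properties themselves were on an image slide; the standard facts likely shown (an assumption) are closure under linear transformation and under sums of independent Gaussians:

```latex
X \sim N(\mu, \sigma^2) \;\Rightarrow\; aX + b \sim N(a\mu + b,\; a^2\sigma^2)
```
```latex
X_1 \sim N(\mu_1, \sigma_1^2),\; X_2 \sim N(\mu_2, \sigma_2^2) \text{ independent}
\;\Rightarrow\; X_1 + X_2 \sim N(\mu_1 + \mu_2,\; \sigma_1^2 + \sigma_2^2)
```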

  15. Filtering
  • Gaussian noise = N(0, σ²)
  • Make repeated measurements
  • Histogram the samples
  • Find the peak – that is the mean
  • Easy!
  • What is the size of the whiteboard in meters (1 decimal place precision)?
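The repeated-measurement idea can be sketched as follows (the true width of 1.2 m and the noise level are made-up illustration values, not from the slides):

```python
import random
import statistics

random.seed(1)

TRUE_WIDTH = 1.2   # hypothetical whiteboard width in meters
NOISE_STD = 0.05   # hypothetical zero-mean Gaussian measurement noise

# Repeated noisy measurements of the same static quantity.
measurements = [random.gauss(TRUE_WIDTH, NOISE_STD) for _ in range(1000)]

# With zero-mean noise, the sample mean converges on the true value.
estimate = statistics.mean(measurements)
print(f"estimate ≈ {estimate:.3f} m")
```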

  16. Non-static situation
  • What happens when the state evolves?
  • Can’t repeat measurements
  • Moving average filter
  • Introduces lag into the system!
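The lag a moving-average filter introduces is easy to see on a steadily changing signal; a minimal sketch (the window size and the ramp input are illustrative choices):

```python
from collections import deque

def moving_average(signal, window):
    """Average of the last `window` samples at each step."""
    buf = deque(maxlen=window)
    out = []
    for x in signal:
        buf.append(x)
        out.append(sum(buf) / len(buf))
    return out

# A linearly increasing state, e.g. position under constant velocity.
ramp = [float(t) for t in range(20)]
filtered = moving_average(ramp, window=5)

# The filter output trails the true ramp by (window - 1) / 2 = 2 steps.
print(ramp[-1], filtered[-1])  # 19.0 vs 17.0
```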

  17. Use a state model
  • Estimate position from measurements
  • Measure velocity as well
  • Evolve position from velocity
  • Incorporate the evolved state into position measurements
  • Need to combine multiple, uncertain measurements

  18. Back to the non-evolving case
  • Two different processes measure the same thing
  • Want to combine them into one better measurement
  • Estimation

  19. Estimation
  • What is meant by estimation?
  • [Block diagram: a stochastic process emits data + noise; the measurements z feed an estimator H, which outputs the estimate ŷ.]

  20. A Least-Squares Approach
  • We want to fuse these measurements to obtain a new estimate for the range
  • Using a weighted least-squares approach, we can write down the resulting sum-of-squares error
  • Minimizing this error with respect to the estimate yields the fused value

  21. A Least-Squares Approach
  • Rearranging, we have the estimate as a weighted combination of the measurements
  • If we choose each weight to be the inverse of that measurement’s variance
  • we obtain the variance-weighted fusion rule
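The equations on these two slides were images; a reconstruction of the standard weighted least-squares fusion of two scalar measurements z₁, z₂ with variances σ₁², σ₂² (the symbols are assumptions):

```latex
J = w_1 (\hat{x} - z_1)^2 + w_2 (\hat{x} - z_2)^2,
\qquad
\frac{\partial J}{\partial \hat{x}} = 0
\;\Rightarrow\;
\hat{x} = \frac{w_1 z_1 + w_2 z_2}{w_1 + w_2}
```

Choosing $w_i = 1/\sigma_i^2$ gives:

```latex
\hat{x} = \frac{\sigma_2^2\, z_1 + \sigma_1^2\, z_2}{\sigma_1^2 + \sigma_2^2}
```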

  22. A Least-Squares Approach
  • For merging Gaussian distributions, the update rule has the same form
  • Exercise: show this for N(0, a) and N(0, b)
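Merging (multiplying and renormalizing) two Gaussians over the same variable gives another Gaussian; for the N(0, a), N(0, b) exercise the result is a standard identity, reconstructed here since the slide equation was an image:

```latex
N(0, a) \cdot N(0, b) \;\propto\; N\!\left(0,\; \frac{ab}{a+b}\right)
```

and in general:

```latex
N(\mu_1, \sigma_1^2) \cdot N(\mu_2, \sigma_2^2) \;\propto\;
N\!\left(\frac{\sigma_2^2 \mu_1 + \sigma_1^2 \mu_2}{\sigma_1^2 + \sigma_2^2},\;
\frac{\sigma_1^2 \sigma_2^2}{\sigma_1^2 + \sigma_2^2}\right)
```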

  23. A Least-Squares Approach – Kalman Gain
  • The fused estimate can be rewritten in an incremental form
  • If we think of this as adding a new measurement to our current estimate of the state, the correction is scaled by a gain
  • For merging Gaussian distributions, the update rule takes the same measurement-update form
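The incremental form the slide refers to, reconstructed in the usual scalar notation (x̂₁, σ₁² the current estimate; z₂, σ₂² the new measurement):

```latex
K = \frac{\sigma_1^2}{\sigma_1^2 + \sigma_2^2},
\qquad
\hat{x} = \hat{x}_1 + K\,(z_2 - \hat{x}_1),
\qquad
\sigma^2 = (1 - K)\,\sigma_1^2
```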

  24. What happens when you move? (derivation)

  25. Moving
  • As you move, uncertainty grows
  • Need to make new measurements
  • Combine measurements using the Kalman gain

  26. The Kalman Filter
  “an optimal recursive data processing algorithm”
  • OPTIMAL:
  • Linear dynamics
  • Measurements linear with respect to the state
  • Errors in sensors and dynamics must be zero-mean (unbiased) white Gaussian
  • RECURSIVE:
  • Does not require all previous data
  • Incoming measurements ‘modify’ the current estimate
  • DATA PROCESSING ALGORITHM:
  • The Kalman filter is essentially a technique of estimation given a system model and concurrent measurements (not a function of frequency)

  27. The Discrete Kalman Filter
  • Estimate the state of a discrete-time controlled process governed by a linear stochastic difference equation, with a linear measurement
  • The random variables wk and vk represent the process and measurement noise (respectively). They are assumed to be independent (of each other), white, and with normal probability distributions
  • In practice, the process noise covariance and measurement noise covariance matrices might change with each time step or measurement
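The difference equation and measurement model were images in the transcript; a reconstruction in the standard discrete Kalman filter notation (the symbol choices are an assumption):

```latex
x_k = A x_{k-1} + B u_{k-1} + w_{k-1},
\qquad
z_k = H x_k + v_k
```
```latex
p(w) \sim N(0, Q), \qquad p(v) \sim N(0, R)
```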

  28. The Discrete Kalman Filter
  • First part – model forecast: prediction
  • State prediction (“prior” estimate) from the state transition and the control signal
  • Error covariance prediction using the process noise covariance
  • Prediction is based only on the model of the system dynamics
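The prediction equations this slide annotates, reconstructed in the same assumed notation:

```latex
\hat{x}_k^- = A \hat{x}_{k-1} + B u_{k-1},
\qquad
P_k^- = A P_{k-1} A^{\mathsf{T}} + Q
```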

  29. The Discrete Kalman Filter
  • Second part – measurement update: correction
  • State correction: the difference between the actual measurement and the predicted measurement from the “prior” state prediction, weighted by the Kalman gain, produces the “posterior” estimate
  • Update the error covariance matrix (posterior)

  30. The Discrete Kalman Filter
  • The Kalman gain, K: “Do I trust my model or my measurements?”
  • Roughly, K is the variance of the predicted state divided by the variance of the predicted plus measured states
  • H is the measurement sensitivity matrix; R is the measurement noise covariance
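The correction equations the last two slides describe, reconstructed in the same assumed notation (the slide equations were images):

```latex
K_k = P_k^- H^{\mathsf{T}} \left(H P_k^- H^{\mathsf{T}} + R\right)^{-1}
```
```latex
\hat{x}_k = \hat{x}_k^- + K_k \left(z_k - H \hat{x}_k^-\right),
\qquad
P_k = (I - K_k H)\, P_k^-
```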

  31. Estimate a constant voltage
  • Measurements have noise
  • The update step uses a constant-state model
  • The measurement step incorporates each new reading into the estimate
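The constant-voltage example can be sketched end-to-end as a scalar Kalman filter (the true voltage, noise levels, and initial guesses below are illustrative assumptions, not values from the slides):

```python
import random

random.seed(2)

TRUE_VOLTAGE = 0.5   # hypothetical constant voltage to estimate
R = 0.1 ** 2         # measurement noise variance
Q = 1e-5             # small process noise variance

x_hat = 0.0          # initial state estimate
P = 1.0              # initial error covariance (deliberately uncertain)

for _ in range(200):
    # Predict: constant model (A = 1, no control); covariance grows by Q.
    x_prior = x_hat
    P_prior = P + Q

    # Measure: z = x + v, with v ~ N(0, R).
    z = random.gauss(TRUE_VOLTAGE, 0.1)

    # Correct: the Kalman gain weighs prediction against measurement.
    K = P_prior / (P_prior + R)
    x_hat = x_prior + K * (z - x_prior)
    P = (1 - K) * P_prior

print(f"estimate ≈ {x_hat:.3f} V, variance ≈ {P:.5f}")
```

Note how K starts near 1 (the uncertain initial estimate trusts the measurements) and shrinks as P converges, so later readings nudge the estimate only slightly.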

  32. Results

  33. Variance

  34. Parameter tuning

  35. More tuning
