1. Kalman Filters and Dynamic Bayesian Networks Markoviana Reading Group
Srinivas Vadrevu
Arizona State University
2. Markoviana Reading Group: Week3 Outline Introduction
Gaussian Distribution
Introduction
Examples (Linear and Multivariate)
Kalman Filters
General Properties
Updating Gaussian Distributions
One-dimensional Example
Notes about general case
Applicability of Kalman Filtering
Dynamic Bayesian Networks (DBNs)
Introduction
DBNs and HMMs
DBNs and Kalman Filters
Constructing DBNs
3. Markoviana Reading Group: Week3 HMMs and Kalman Filters Hidden Markov Models (HMMs)
Discrete State Variables
Used to model sequence of events
Kalman Filters
Continuous State Variables, with Gaussian Distribution
Used to model noisy continuous observations
Examples
Predict the motion of a bird through dense jungle foliage at dusk
Predict the trajectory of a missile from intermittent radar observations
4. Markoviana Reading Group: Week3 Gaussian (Normal) Distribution Central Limit Theorem: the sum of n statistically independent random variables converges, as n → ∞, toward the Gaussian distribution
Unlike the binomial and Poisson distributions, the Gaussian is a continuous distribution:
p(y) = (1 / √(2πσ²)) exp(−(y − μ)² / (2σ²))
μ = mean of the distribution (which coincides with its mode and median)
σ² = variance of the distribution
y is a continuous variable (−∞ < y < ∞)
The Gaussian distribution is fully defined by its mean and variance
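A minimal Python sketch of this density (the values of μ and σ² below are arbitrary illustrations):

```python
import math

def gaussian_pdf(y, mu, sigma2):
    """Density of a Gaussian with mean mu and variance sigma2, evaluated at y."""
    return math.exp(-(y - mu) ** 2 / (2 * sigma2)) / math.sqrt(2 * math.pi * sigma2)

# The density is symmetric about the mean: both calls print ~0.24197.
print(gaussian_pdf(1.0, mu=0.0, sigma2=1.0))
print(gaussian_pdf(-1.0, mu=0.0, sigma2=1.0))
```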
5. Markoviana Reading Group: Week3 Gaussian Distribution: Examples Linear Gaussian Distribution
Mean μ and variance σ²
Multivariate Gaussian Distribution
For 3 random variables
Mean: μ = [m1 m2 m3]
Covariance matrix:
Σ = [ v11 v12 v13 ]
    [ v21 v22 v23 ]
    [ v31 v32 v33 ]
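A quick sketch of evaluating such a density with SciPy; the mean vector and (symmetric, positive-definite) covariance matrix below are arbitrary stand-ins for [m1 m2 m3] and the vij:

```python
import numpy as np
from scipy.stats import multivariate_normal

# Hypothetical 3-variable example, mirroring the mean/covariance layout above.
mu = np.array([0.0, 1.0, -1.0])
Sigma = np.array([[1.0, 0.2, 0.1],
                  [0.2, 2.0, 0.3],
                  [0.1, 0.3, 1.5]])

# Density at the mean, which is also the mode of the distribution.
print(multivariate_normal.pdf(mu, mean=mu, cov=Sigma))
```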
6. Markoviana Reading Group: Week3 Kalman Filters: General Properties Estimate the state and its covariance at any time t, given observations x1:T = {x1, …, xT}
E.g., Estimate the state (location and velocity) of airplane and its uncertainty, given some measurements from an array of sensors
The probability of interest is P(yt | x1:T)
Filtering the state: T = current time t
Predicting the state: T < current time t
Smoothing the state: T > current time t
9. Markoviana Reading Group: Week3 Gaussian Noise & Example The next state is a linear function of the current state, plus some Gaussian noise. Following Russell & Norvig (Δ is the time step, σx² the transition noise variance):
Position update: Xt+Δ = Xt + Ẋ·Δ
Gaussian noise: P(Xt+Δ = xt+Δ | Xt = xt, Ẋt = ẋt) = N(xt + ẋtΔ, σx²)(xt+Δ)
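A small simulation sketch of this transition model (the step size Δ and noise level are arbitrary choices):

```python
import random

def next_position(x, x_dot, delta=1.0, sigma_x=0.5):
    """Next state: a linear function of the current state plus Gaussian noise."""
    return x + x_dot * delta + random.gauss(0.0, sigma_x)

# Simulate a few steps of the noisy random walk at constant velocity 1.0.
x = 0.0
for t in range(5):
    x = next_position(x, x_dot=1.0)
    print(t + 1, round(x, 3))
```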
10. Markoviana Reading Group: Week3 Updating Gaussian Distributions Linear Gaussian family of distributions remains closed under standard Bayesian network operations
One-step predicted distribution
Current distribution P(Xt|e1:t) is Gaussian
Transition model P(Xt+1|xt) is linear Gaussian
The updated distribution
Predicted distribution P(Xt+1|e1:t) is Gaussian
Sensor model P(et+1|Xt+1) is linear Gaussian
Filtering and prediction (from Section 15.2 of Russell & Norvig), as written out below
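The recursion in Section 15.2, with α a normalizing constant, is

P(Xt+1 | e1:t+1) = α P(et+1 | Xt+1) ∫ P(Xt+1 | xt) P(xt | e1:t) dxt

For linear Gaussian models every factor is Gaussian, so the integral is again Gaussian and the recursion can be carried out in closed form.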
21. Markoviana Reading Group: Week3 One-dimensional Example Update Rule (derivations from Russell & Norvig)
Compute the new mean and variance from the previous mean and variance
Variance update is independent of the observation
Another variation of the update rule (from Max Welling, Caltech)
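For reference, the Russell & Norvig form of the one-dimensional update rule (zt+1 is the new observation; σx² and σz² are the transition and sensor noise variances):

μt+1 = ((σt² + σx²) zt+1 + σz² μt) / (σt² + σx² + σz²)
σt+1² = (σt² + σx²) σz² / (σt² + σx² + σz²)

Note that the new variance σt+1² does not involve zt+1, which is the observation-independence property noted above.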
22. Markoviana Reading Group: Week3 The General Case Multivariate Gaussian Distribution
Exponent is a quadratic function of the random variables xi in x
Temporal model with Kalman filtering
F: linear transition model
H: linear sensor model
Σx: transition noise covariance
Σz: sensor noise covariance
Update equations for mean and covariance (see below)
Kt+1: the Kalman gain matrix
Fμt: the predicted state at t+1
HFμt: the predicted observation
zt+1 − HFμt: the error in the predicted observation
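Written out (as in Russell & Norvig), the update equations are

μt+1 = Fμt + Kt+1 (zt+1 − HFμt)
Σt+1 = (I − Kt+1 H)(F Σt Fᵀ + Σx)
Kt+1 = (F Σt Fᵀ + Σx) Hᵀ (H (F Σt Fᵀ + Σx) Hᵀ + Σz)⁻¹

A minimal NumPy sketch of one such predict/update step; the 1-D random-walk model at the bottom is an arbitrary illustration:

```python
import numpy as np

def kalman_step(mu, Sigma, z, F, H, Sigma_x, Sigma_z):
    """One Kalman filter step: predict with the transition model, then
    correct the prediction with the new observation z."""
    mu_pred = F @ mu                         # F*mu_t, the predicted state
    Sigma_pred = F @ Sigma @ F.T + Sigma_x   # predicted covariance
    S = H @ Sigma_pred @ H.T + Sigma_z       # innovation covariance
    K = Sigma_pred @ H.T @ np.linalg.inv(S)  # Kalman gain K_{t+1}
    mu_new = mu_pred + K @ (z - H @ mu_pred) # correct by the observation error
    Sigma_new = (np.eye(len(mu)) - K @ H) @ Sigma_pred
    return mu_new, Sigma_new

# Illustrative 1-D random walk observed through a noisy position sensor.
F = np.array([[1.0]]); H = np.array([[1.0]])
Sigma_x = np.array([[0.1]]); Sigma_z = np.array([[0.5]])
mu, Sigma = np.array([0.0]), np.array([[1.0]])
mu, Sigma = kalman_step(mu, Sigma, np.array([0.7]), F, H, Sigma_x, Sigma_z)
print(mu, Sigma)
```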
23. Markoviana Reading Group: Week3 Illustration
24. Markoviana Reading Group: Week3 Applicability of Kalman Filtering Popular applications
Navigation, guidance, radar tracking, sonar ranging, satellite orbit computation, stock price prediction, the landing of the Eagle on the Moon, gyroscopes in airplanes, etc.
Extended Kalman Filters (EKFs) can handle nonlinearities in the system model
Model the system as locally linear in xt in the region of xt = μt (see the sketch after this list)
Works well for smooth, well-behaved systems
Switching Kalman Filters: multiple Kalman filters in parallel, each using different model of the system
A weighted sum of predictions used
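To make the local-linearization idea concrete: for a nonlinear transition model xt+1 = f(xt) + noise, the EKF uses a first-order Taylor expansion around the current mean estimate,

f(xt) ≈ f(μt) + F(μt)(xt − μt), with F(μt) = ∂f/∂x evaluated at x = μt,

and then runs the ordinary Kalman update with this local F. The approximation degrades when f is strongly nonlinear within the region covered by the current state uncertainty, hence "works well for smooth, well-behaved systems".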
25. Markoviana Reading Group: Week3 Applicability of Kalman Filters
26. Markoviana Reading Group: Week3 Dynamic Bayesian Networks Directed graphical models of stochastic processes
Extend HMMs by representing hidden (and observed) state in terms of state variables, with possible complex interdependencies
Any number of state variables and evidence variables
Dynamic or temporal Bayesian network? "Dynamic" means the network models a dynamic system, not that the network itself changes:
Model structure does not change over time
Parameters do not change over time
Extra hidden nodes can be added (mixture of models)
2TBN (two-slice temporal Bayesian network)
Structure is replicated from slice to slice
Stationary First-Order Markov process
27. Markoviana Reading Group: Week3 DBNs and HMMs HMM as a DBN
Single state variable and single evidence variable
Discrete variable DBN as an HMM
Combine all state variables in DBN into a single state variable (with all possible values of individual state variables)
Efficient representation: with 20 boolean state variables, each with at most 3 parents, the DBN needs 20 × 2³ = 160 probabilities, whereas the equivalent HMM transition matrix needs roughly a trillion (2²⁰ × 2²⁰ ≈ 10¹²)
Analogous to Ordinary Bayesian Networks vs Fully Tabulated Joint Distributions
28. Markoviana Reading Group: Week3 DBNs and Kalman Filters Kalman filter as a DBN
Continuous variables and linear Gaussian conditional distributions
DBN as a Kalman Filter
Not possible in general
A DBN allows arbitrary distributions, while a Kalman filter is restricted to linear Gaussian models
Lost keys example
29. Markoviana Reading Group: Week3 Constructing DBNs Required information
Prior distributions over state variables P(X0)
The transition model P(Xt+1|Xt)
The sensor model P(Et|Xt)
Intra-Slice topology
Inter-Slice topology (2TBN assumption)
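As a concrete illustration of these three pieces, here is a minimal Python sketch of the umbrella world from Russell & Norvig (one boolean state variable Rain, one evidence variable Umbrella), together with one exact filtering step on the resulting 2TBN:

```python
prior = {True: 0.5, False: 0.5}        # P(Rain_0)

transition = {                         # P(Rain_{t+1} | Rain_t)
    True:  {True: 0.7, False: 0.3},
    False: {True: 0.3, False: 0.7},
}

sensor = {                             # P(Umbrella_t | Rain_t)
    True:  {True: 0.9, False: 0.1},
    False: {True: 0.2, False: 0.8},
}

def filter_step(belief, evidence):
    """One exact filtering step: predict via the transition model,
    then weight by the sensor model and normalize."""
    predicted = {r1: sum(belief[r0] * transition[r0][r1] for r0 in belief)
                 for r1 in (True, False)}
    unnorm = {r: predicted[r] * sensor[r][evidence] for r in predicted}
    total = sum(unnorm.values())
    return {r: p / total for r, p in unnorm.items()}

belief = filter_step(prior, True)  # umbrella observed on day 1
print(belief)                      # P(Rain_1 | Umbrella_1 = true) ≈ 0.818
```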