

  1. CS 440 / ECE 448: Introduction to Artificial Intelligence
  Spring 2010, Lecture #23
  Instructor: Eyal Amir
  Grad TAs: Wen Pu, Yonatan Bisk
  Undergrad TAs: Sam Johnson, Nikhil Johri

  2. Today & Thursday
  • Time and uncertainty
  • Inference: filtering, prediction, smoothing
  • Hidden Markov Models (HMMs)
    • Model
    • Exact Reasoning

  3. Time and Uncertainty
  • Standard Bayes net model:
    • Static situation
    • Fixed (finite) set of random variables
    • Graphical structure and conditional independence
  • In many systems, data arrives sequentially
  • Dynamic Bayes nets (DBNs) and HMMs model processes that evolve over time

  4. Example (Robot Position)
  [Figure: a DBN over three time slices with position nodes Pos1, Pos2, Pos3, velocity nodes Vel1, Vel2, Vel3, and sensor nodes Sensor1, Sensor2, Sensor3]

  5. Robot Position (With Observations)
  [Figure: the same DBN with two sensor streams, Sens.A1, Sens.A2, Sens.A3 and Sens.B1, Sens.B2, Sens.B3, observing the position and velocity nodes]

  6. Inference Problem
  • State of the system at time t: X_t
  • Probability distribution over states: P(X_t | X_0, ..., X_{t-1})
  • Conditioning on the entire history requires a lot of parameters

  7. Solution (Part 1)
  • Problem: P(X_t | X_0, ..., X_{t-1}) conditions on a history that grows with t
  • Solution: Markov assumption
    • Assume X_t is independent of X_0, ..., X_{t-2} given X_{t-1}
    • State variables are expressive enough to summarize all relevant information about the past
  • Therefore: P(X_t | X_0, ..., X_{t-1}) = P(X_t | X_{t-1})

  8. Solution (Part 2)
  • Problem: what if all the transition models P(X_t | X_{t-1}) are different at each step?
  • Solution:
    • Assume all the P(X_t | X_{t-1}) are the same
    • The process is then time-invariant, or stationary
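As a back-of-the-envelope illustration of what stationarity buys (my calculation, not from the slides): a conditional table P(X_t | X_{t-1}) over a k-valued state has k(k-1) free parameters, so over T time steps:

```latex
\underbrace{(T-1)\,k(k-1)}_{\text{non-stationary: one CPT per step}}
\qquad \text{vs.} \qquad
\underbrace{k(k-1)}_{\text{stationary: one shared CPT}}
```

The stationary model's size is independent of the sequence length.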

  9. Inference in Robot Position DBN
  • Compute the distribution over true position and velocity, given a sequence of sensor values
  • Belief state: probability distribution over the possible states at each time step
  • Update the belief state when a new set of sensor readings arrives

  10. Example
  • The first-order Markov assumption is not exactly true in the real world

  11. Example
  • Possible fixes:
    • Increase the order of the Markov process
    • Augment the state, e.g., add Temp, Pressure, or Battery to position and velocity

  12. Today
  • Time and uncertainty
  • Inference: filtering, prediction, smoothing
  • Hidden Markov Models (HMMs)
    • Model
    • Exact Reasoning
  • Dynamic Bayesian Networks
    • Model
    • Exact Reasoning

  13. Inference Tasks
  • Filtering: P(X_t | e_{1:t})
    • Belief state: probability of the current state given the evidence
  • Prediction: P(X_{t+k} | e_{1:t}) for k > 0
    • Like filtering, but without the new evidence
  • Smoothing: P(X_k | e_{1:t}) for 0 <= k < t
    • Better estimate of past states
  • Most likely explanation: argmax over x_{1:t} of P(x_{1:t} | e_{1:t})
    • The scenario that best explains the evidence

  14. Filtering (Forward Algorithm)
  [Figure: chain X_{t-1} -> X_t -> X_{t+1} with evidence nodes E_{t-1}, E_t, E_{t+1}]
  • Predict: P(X_{t+1} | e_{1:t}) = sum over x_t of P(X_{t+1} | x_t) P(x_t | e_{1:t})
  • Update: P(X_{t+1} | e_{1:t+1}) is proportional to P(e_{t+1} | X_{t+1}) P(X_{t+1} | e_{1:t})
  • Recursive step: each new belief state is computed from the previous one
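A minimal sketch of this predict/update recursion in Python; the function names are mine, and the two-state transition and sensor tables follow the standard rain/umbrella textbook example rather than anything on the slides:

```python
# Forward (filtering) step for a discrete-state model.
# trans[i][j] = P(X_{t+1}=j | X_t=i); sensor[j][e] = P(E=e | X=j).

def predict(belief, trans):
    """P(X_{t+1} | e_{1:t}) = sum_x P(X_{t+1} | x) P(x | e_{1:t})."""
    n = len(trans)
    return [sum(belief[i] * trans[i][j] for i in range(n)) for j in range(n)]

def update(predicted, sensor, evidence):
    """P(X_{t+1} | e_{1:t+1}) is proportional to P(e | X) * predicted."""
    unnorm = [sensor[j][evidence] * predicted[j] for j in range(len(predicted))]
    z = sum(unnorm)
    return [p / z for p in unnorm]

# Illustrative two-state model (state 0 = rain, state 1 = no rain).
trans = [[0.7, 0.3], [0.3, 0.7]]      # P(Rain_{t+1} | Rain_t)
sensor = [[0.9, 0.1], [0.2, 0.8]]     # P(Umbrella | Rain)
belief = [0.5, 0.5]                   # prior P(X_0)
for e in [0, 0]:                      # observe the umbrella twice (e = 0)
    belief = update(predict(belief, trans), sensor, e)
print(belief)                         # approx [0.883, 0.117]
```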

  15. Example

  16. Smoothing (Forward-Backward)
  • P(X_k | e_{1:t}) is proportional to P(X_k | e_{1:k}) * P(e_{k+1:t} | X_k)
    = (forward message) * (backward message)

  17. Smoothing: Backward Step
  • P(e_{k+1:t} | X_k) = sum over x_{k+1} of P(e_{k+1} | x_{k+1}) P(e_{k+2:t} | x_{k+1}) P(x_{k+1} | X_k)
  • Computed recursively, running backward from time t
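A matching sketch of the backward step and the smoothing combination, under the same illustrative conventions (trans and sensor as in the filtering sketch above):

```python
# Backward step and smoothing for the same discrete model as above.
# b_k(i) = P(e_{k+1:t} | X_k = i); names are illustrative.

def backward(b_next, trans, sensor, evidence):
    """b_k(i) = sum_j P(e_{k+1} | j) * b_{k+1}(j) * P(j | i)."""
    n = len(trans)
    return [sum(sensor[j][evidence] * b_next[j] * trans[i][j]
                for j in range(n))
            for i in range(n)]

def smooth(forward_msg, backward_msg):
    """P(X_k | e_{1:t}) is proportional to forward_k * backward_k."""
    unnorm = [f * b for f, b in zip(forward_msg, backward_msg)]
    z = sum(unnorm)
    return [p / z for p in unnorm]

# The recursion starts from b_t = [1, 1] (no evidence after time t)
# and runs backward, consuming evidence e_{k+1} at each step.
```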

  18. Most Likely Explanation
  • Finding the most likely path
  [Figure: chain X_{t-1} -> X_t -> X_{t+1} with evidence E_{t-1}, E_t, E_{t+1}]
  • m_{1:t}(x_t) = max over x_{1:t-1} of P(x_{1:t-1}, x_t, e_{1:t}): the most likely path to x_t
  • Plus one more update:
    m_{1:t+1}(x_{t+1}) = P(e_{t+1} | x_{t+1}) * max over x_t of [ P(x_{t+1} | x_t) m_{1:t}(x_t) ]

  19. Most Likely Explanation
  • Finding the most likely path
  [Figure: the same chain, annotated with the recursion above]
  • This algorithm is called the Viterbi algorithm
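A compact sketch of the full Viterbi recursion with back-pointers; the names and conventions are the same illustrative ones as in the filtering sketch, not code from the course:

```python
# Viterbi: most likely hidden-state sequence for a discrete HMM.
# trans[i][j] = P(X_{t+1}=j | X_t=i); sensor[j][e] = P(e | X=j).

def viterbi(prior, trans, sensor, evidence_seq):
    n = len(prior)
    # m[j] = max over partial paths ending in j of P(path, e_{1:k})
    m = [prior[j] * sensor[j][evidence_seq[0]] for j in range(n)]
    backptrs = []
    for e in evidence_seq[1:]:
        prev, new_m = [], []
        for j in range(n):
            best = max(range(n), key=lambda i: m[i] * trans[i][j])
            prev.append(best)
            new_m.append(m[best] * trans[best][j] * sensor[j][e])
        backptrs.append(prev)
        m = new_m
    # Follow back-pointers from the best final state.
    state = max(range(n), key=lambda j: m[j])
    path = [state]
    for prev in reversed(backptrs):
        state = prev[state]
        path.append(state)
    return list(reversed(path))
```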

  20.–24. Viterbi (Example)
  [Figures: a worked Viterbi example traced step by step across five slides]

  25. Today
  • Time and uncertainty
  • Inference: filtering, prediction, smoothing, MLE
  • Hidden Markov Models (HMMs)
    • Model
    • Exact Reasoning
  • Dynamic Bayesian Networks
    • Model
    • Exact Reasoning

  26. Hidden Markov Model (HMM)
  [Figure: chain of hidden states X1 -> X2 -> X3 emitting observations Y1, Y2, Y3]
  • Hidden states X: the "true" state, e.g., phones/words in speech
  • Observations Y: noisy evidence, e.g., the acoustic signal
  • A sparse transition matrix gives a sparse graph
  • In matrix form: a transition matrix for the dynamics, plus a diagonal matrix for each observation

  27. Forwards Algorithm for HMMs (Matrix Form)
  • Predict: f_{t+1|t} = A^T f_{t|t}, where A is the transition matrix
  • Update: f_{t+1|t+1} is proportional to O_{t+1} f_{t+1|t}, where O_{t+1} is the diagonal observation matrix for evidence e_{t+1}
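A sketch of one forward step in this matrix-vector form using numpy; A, O, and f match the slide's notation, while the function itself is an illustrative reconstruction:

```python
import numpy as np

def forward_step(f, A, obs_probs, e):
    """One forward step: f' is proportional to O_e @ A.T @ f.
    A[i, j] = P(X_{t+1}=j | X_t=i); obs_probs[j, e] = P(e | X=j)."""
    O = np.diag(obs_probs[:, e])   # diagonal observation matrix for evidence e
    f = O @ A.T @ f                # predict with A.T, then weight by likelihoods
    return f / f.sum()             # normalize

# Example with the same illustrative two-state model as before:
A = np.array([[0.7, 0.3], [0.3, 0.7]])
obs_probs = np.array([[0.9, 0.1], [0.2, 0.8]])
f = np.array([0.5, 0.5])
f = forward_step(f, A, obs_probs, 0)   # approx [0.818, 0.182]
```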

  28. Message Passing View of the Forwards Algorithm
  [Figure: chain X_{t-1} -> X_t -> X_{t+1} with observations Y_{t-1}, Y_t, Y_{t+1}; the prediction message a_{t|t-1} flows along the chain and is combined with the local evidence to produce the belief messages b_t, b_{t+1}]

  29. Forwards-Backwards Algorithm
  [Figure: the same chain, now with the forward message a_{t|t-1} and a backward message combined at each node to give the smoothed belief b_t]

  30. If We Have Time…
  • Time and uncertainty
  • Inference: filtering, prediction, smoothing
  • Hidden Markov Models (HMMs)
    • Model
    • Exact Reasoning
  • Dynamic Bayesian Networks
    • Model
    • Exact Reasoning

  31. Dynamic Bayesian Network
  • A DBN is like a two-time-slice BN (2-TBN)
  • It uses the first-order Markov assumption
  [Figure: a standard BN for time 0 and a standard BN for time 1, linked by inter-slice arcs]

  32. Dynamic Bayesian Network
  • Basic idea: copy the state and evidence variables for each time step
    • X_t: set of unobservable (hidden) variables (e.g., Pos, Vel)
    • E_t: set of observable (evidence) variables (e.g., Sens.A, Sens.B)
  • Note: time is discrete

  33. Example

  34. Inference in DBN
  • Unroll: copy the network once per time step, then run standard BN inference in the resulting unrolled BN
  • Not efficient: the cost depends on the sequence length

  35. DBN Representation: DelC
  [Figure: two-slice DBN over RHM, M, T, L, CR, RHC, with a per-variable factor on each inter-slice arc]

  fRHM(RHMt, RHMt+1):
    RHMt   RHM(t+1)=T   RHM(t+1)=F
    T      1.0          0.0
    F      0.0          1.0

  fT(Tt, Tt+1):
    Tt     T(t+1)=T   T(t+1)=F
    T      0.91       0.09
    F      0.0        1.0

  fCR(Lt, CRt, RHCt, CRt+1):
    Lt  CRt  RHCt   CR(t+1)=T   CR(t+1)=F
    O   T    T      0.2         0.8
    E   T    T      1.0         0.0
    O   F    T      0.0         1.0
    E   F    T      0.0         1.0
    O   T    F      1.0         0.0
    E   T    F      1.0         0.0
    O   F    F      0.0         1.0
    E   F    F      0.0         1.0

  36. Benefits of DBN Representation
  [Figure: the same two-slice DBN; fL(Lt, Lt+1) is a 160x160 location-transition table, e.g.:
     Lt \ L(t+1)   s1    s2    ...   s160
     s1            0.9   0.05  ...   0.0
     s2            0.0   0.20  ...   0.1
     ...
     s160          0.1   0.0   ...   0.0 ]

  Pr(RHMt+1, Mt+1, Tt+1, Lt+1, CRt+1, RHCt+1 | RHMt, Mt, Tt, Lt, CRt, RHCt)
    = fRHM(RHMt, RHMt+1) * fM(Mt, Mt+1) * fT(Tt, Tt+1) * fL(Lt, Lt+1)
      * fCR(Lt, CRt, RHCt, CRt+1) * fRHC(RHCt, RHCt+1)

  • Only a few parameters, vs. 25,440 for the full transition matrix
  • Removes the global exponential dependence
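A small sketch of why the factored form is cheap to evaluate: the joint transition probability is just the product of the per-variable factors, so the full transition matrix is never built. Only f_T is filled in, from the table on slide 35; the surrounding machinery and the state representation (dicts keyed by variable name) are illustrative:

```python
# Factored DBN transition: P(next | prev) = product of per-variable factors.

def f_T(prev, nxt):
    """fT(Tt, Tt+1), taken from the table on slide 35."""
    table = {(True, True): 0.91, (True, False): 0.09,
             (False, True): 0.0, (False, False): 1.0}
    return table[(prev["T"], nxt["T"])]

def transition_prob(factors, prev, nxt):
    """Multiply the factors instead of indexing one huge matrix."""
    p = 1.0
    for f in factors:
        p *= f(prev, nxt)
    return p

# With all six factors supplied (the others are hypothetical stand-ins here),
# the 25,440-entry transition matrix is never materialized:
# transition_prob([f_RHM, f_M, f_T, f_L, f_CR, f_RHC], prev_state, next_state)
```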
