
Probabilistic Reasoning Over Time Using Hidden Markov Models


Presentation Transcript


  1. Probabilistic Reasoning Over Time Using Hidden Markov Models Minmin Chen

  2. Contents • Sections 15.1–15.3

  3. Time and Uncertainty • Agent: security guard at some secret underground installation • Observation: does the director come in with an umbrella? • State: rain or not • The state is not fully observable; the agent only gets a noisy sensor reading at each time step

  4. Time and Uncertainty • Observation: • Measured heart rate • Electrocardiogram (ECG) • Patient's activity • State: • Atrial fibrillation? • Tachycardia? • Bradycardia? • Again, the state is not fully observable; the sensors are noisy and readings arrive over time

  5. States and Observations • Unobservable state variable: Xt • Observable evidence variable: Et • Example 1: for each day t, the evidence is the umbrella Ut and the state is rain Rt • U1, U2, U3, … • R1, R2, R3, … • Example 2: for each recording • Et = {Measured_heart_rate_t, ECG_t, Activity_t} • Xt = {AF_t, Tachycardia_t, Bradycardia_t}

  6. Assumption 1: Stationary Process • Changing world, unchanged laws • The conditional distributions governing the process (e.g., P(Xt | Xt-1)) remain the same for different t

  7. Assumption 2: Markov Process • The current state depends only on a finite history of previous states • First-order Markov process: P(Xt | X0:t-1) = P(Xt | Xt-1) • Specified by the set of states, the transition probability matrix, and the initial distribution
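As a concrete illustration of these three ingredients, here is a minimal Python sketch of the Markov chain behind the umbrella example used later in the deck (variable names are mine; the numbers are the ones that appear on slides 17-19):

```python
# Markov-chain part of the umbrella example used later in these slides.
# States are ordered [Rain = true, Rain = false].
states = [True, False]

prior = [0.5, 0.5]       # initial distribution P(R0)

T = [[0.7, 0.3],         # transition probability matrix:
     [0.3, 0.7]]         # T[i][j] = P(R_t+1 = states[j] | R_t = states[i])
```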

  8. Assumption 3: Restriction to the Parents of Evidence • The evidence variable at time t only depends on the current state: P(Et | X0:t, E1:t-1) = P(Et | Xt)

  9. Hidden Markov Model • Hidden state sequence: …, Rt-1, Rt, Rt+1, … • Evidence sequence: …, Ut-1, Ut, Ut+1, … • Each Rt depends only on Rt-1, and each Ut depends only on Rt

  10. Joint Distribution of HMMs • By the chain rule and the conditional independence (Markov and sensor) assumptions: P(X0:t, E1:t) = P(X0) · ∏i=1..t P(Xi | Xi-1) P(Ei | Xi)
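To make the factorization concrete, the following Python sketch (mine, not from the slides) scores one particular rain/umbrella sequence under the umbrella model; treating R0 as true is an assumption made only for the example:

```python
# Joint distribution of an HMM:
#   P(X0:t, E1:t) = P(X0) * prod_{i=1..t} P(Xi | Xi-1) * P(Ei | Xi)
prior = {True: 0.5, False: 0.5}                    # P(R0)
T = {True: {True: 0.7, False: 0.3},                # P(R_t | R_t-1)
     False: {True: 0.3, False: 0.7}}
O = {True: {True: 0.9, False: 0.1},                # P(U_t | R_t)
     False: {True: 0.2, False: 0.8}}

def joint(rains, umbrellas):
    """P(R0 = rains[0], R1..Rt = rains[1:], U1..Ut = umbrellas)."""
    p = prior[rains[0]]
    for i in range(1, len(rains)):
        p *= T[rains[i - 1]][rains[i]] * O[rains[i]][umbrellas[i - 1]]
    return p

# The rain/umbrella sequence from slide 11 (days 1-5), with R0 assumed true.
print(joint([True, True, True, False, True, True],
            [True, True, False, True, True]))
```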

  11. Example • DAY: 1 2 3 4 5 • Umbrella: true true false true true • Rain: true true false true true

  12. Example

  13. How True Are These Assumptions? • It depends on the problem domain • To overcome violations of the assumptions: • Increase the order of the Markov process • Increase the set of state variables

  14. Inference in Temporal Models • Filtering: the posterior distribution over the current state, given all evidence to date • Prediction: the posterior distribution over a future state, given all evidence to date • Smoothing: the posterior distribution over a past state, given all evidence to date • Most likely explanation: the sequence of states most likely to have generated the observed evidence

  15. Filtering & Prediction • Prediction: P(Xt+1 | e1:t) = Σxt P(Xt+1 | xt) P(xt | e1:t)   (transition model × posterior distribution at time t) • Filtering: P(Xt+1 | e1:t+1) = α P(et+1 | Xt+1) P(Xt+1 | e1:t)   (sensor model × one-step prediction)

  16. Proof
  • P(Xt+1 | e1:t+1) = P(Xt+1 | e1:t, et+1)
  • = α P(et+1 | Xt+1, e1:t) P(Xt+1 | e1:t)   (Bayes rule, chain rule)
  • = α P(et+1 | Xt+1) P(Xt+1 | e1:t)   (conditional independence of the evidence)
  • = α P(et+1 | Xt+1) Σxt P(Xt+1 | xt, e1:t) P(xt | e1:t)   (marginal probability, chain rule)
  • = α P(et+1 | Xt+1) Σxt P(Xt+1 | xt) P(xt | e1:t)   (conditional independence: Markov assumption)
  • This recursive update is the forward algorithm: f1:t+1 = α FORWARD(f1:t, et+1)
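The recursion above translates directly into code. A minimal Python sketch (my own rendering, not the slides' code); with the umbrella parameters it reproduces the filtered values 0.818 and 0.883 shown on the next three slides:

```python
# Forward algorithm (filtering) for the umbrella HMM.
# States ordered [Rain = true, Rain = false]; evidence ordered [umbrella, no umbrella].
prior = [0.5, 0.5]
T = [[0.7, 0.3], [0.3, 0.7]]   # T[i][j] = P(R_t+1 = j | R_t = i)
O = [[0.9, 0.1], [0.2, 0.8]]   # O[i][e] = P(U_t = e | R_t = i)

def forward(evidence):
    """Filtering: returns P(X_k | e_1:k) for every prefix of the evidence."""
    f, history = prior[:], []
    for e in evidence:
        # Prediction: P(X_t+1 | e_1:t) = sum_x P(X_t+1 | x_t) P(x_t | e_1:t)
        pred = [sum(T[i][j] * f[i] for i in range(2)) for j in range(2)]
        # Update with the sensor model, then normalize (the alpha factor)
        f = [O[j][0 if e else 1] * pred[j] for j in range(2)]
        z = sum(f)
        f = [p / z for p in f]
        history.append(f)
    return history

print(forward([True, True]))   # -> [[0.818..., 0.181...], [0.883..., 0.116...]]
```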

  17. Interpretation & Example • [Figure: umbrella example, first forward step. Prior P(R0) = <0.5, 0.5>; transition P(Rt+1 = true | Rt) = 0.7 (rain) / 0.3 (no rain); sensor P(Ut = true | Rt) = 0.9 (rain) / 0.2 (no rain). Observing U1 = true gives the unnormalized message <0.45, 0.1>.]

  18. Interpretation & Example • [Figure: second forward step. Normalizing gives P(R1 | u1) = <0.818, 0.182>; the one-step prediction is P(R2 | u1) = <0.627, 0.373>; multiplying by the sensor model for U2 = true gives the unnormalized message <0.565, 0.075>.]

  19. Interpretation & Example • [Figure: forward step completed. Normalizing gives P(R2 | u1, u2) = <0.883, 0.117>.]

  20. Likelihood of Evidence Sequence • The likelihood of the evidence sequence is P(e1:t) • The forward algorithm, run without normalization, computes ℓ1:t(Xt) = P(Xt, e1:t); summing out the final state gives the likelihood P(e1:t) = Σxt ℓ1:t(xt)
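A small sketch of this idea (mine, using the same umbrella parameters): run the forward recursion without normalizing and sum the final message.

```python
# Likelihood of an evidence sequence via the unnormalized forward message.
prior = [0.5, 0.5]
T = [[0.7, 0.3], [0.3, 0.7]]
O = [[0.9, 0.1], [0.2, 0.8]]

def likelihood(evidence):
    """P(e_1:t): propagate l_1:t(X_t) = P(X_t, e_1:t), then sum out X_t."""
    l = prior[:]
    for e in evidence:
        pred = [sum(T[i][j] * l[i] for i in range(2)) for j in range(2)]
        l = [O[j][0 if e else 1] * pred[j] for j in range(2)]
    return sum(l)

print(likelihood([True, True]))   # P(u1 = true, u2 = true) ≈ 0.3515
```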

  21. Smoothing
  • P(Xk | e1:t) = P(Xk | e1:k, ek+1:t)   (divide the evidence)
  • = α P(Xk | e1:k) P(ek+1:t | Xk, e1:k)   (Bayes rule, chain rule)
  • = α P(Xk | e1:k) P(ek+1:t | Xk)   (conditional independence)
  • = α f1:k × bk+1:t   (forward message × backward message)

  22. Intuition • The backward message at time k, bk+1:t = P(ek+1:t | Xk), is obtained from the backward message at time k+1 by multiplying in the sensor model for ek+1 and summing over Xk+1 with the transition model

  23. Backward
  • bk+1:t(Xk) = P(ek+1:t | Xk)
  • = Σxk+1 P(ek+1:t | Xk, xk+1) P(xk+1 | Xk)   (marginal probability, chain rule)
  • = Σxk+1 P(ek+1:t | xk+1) P(xk+1 | Xk)   (conditional independence)
  • = Σxk+1 P(ek+1 | xk+1) P(ek+2:t | xk+1) P(xk+1 | Xk)   (conditional independence of the evidence)
  • This recursion is the backward algorithm: bk+1:t = BACKWARD(bk+2:t, ek+1)
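Combining the forward messages with this backward recursion gives the forward-backward smoothing procedure. A minimal Python sketch (my own code); for the umbrella example it reproduces the backward message <0.69, 0.41> and the smoothed estimate <0.883, 0.117> shown on the next slide:

```python
# Forward-backward smoothing for the umbrella HMM.
prior = [0.5, 0.5]
T = [[0.7, 0.3], [0.3, 0.7]]
O = [[0.9, 0.1], [0.2, 0.8]]

def forward_backward(evidence):
    """Smoothing: P(X_k | e_1:t) for every k, via forward and backward messages."""
    n = len(evidence)
    # Forward pass: fs[k] = P(X_k+1 | e_1:k+1)
    f, fs = prior[:], []
    for e in evidence:
        pred = [sum(T[i][j] * f[i] for i in range(2)) for j in range(2)]
        f = [O[j][0 if e else 1] * pred[j] for j in range(2)]
        z = sum(f)
        f = [p / z for p in f]
        fs.append(f)
    # Backward pass: b = P(e_k+2:t | X_k+1), starting from an all-ones message
    b, smoothed = [1.0, 1.0], [None] * n
    for k in range(n - 1, -1, -1):
        s = [fs[k][i] * b[i] for i in range(2)]
        z = sum(s)
        smoothed[k] = [p / z for p in s]
        e = evidence[k]
        b = [sum(O[j][0 if e else 1] * b[j] * T[i][j] for j in range(2))
             for i in range(2)]
    return smoothed

print(forward_backward([True, True]))  # -> [[0.883..., 0.116...], [0.883..., 0.116...]]
```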

  24. Interpretation & Example • [Figure: smoothing the umbrella example at k = 1. Forward message f1:1 = <0.818, 0.182>; backward message b2:2 = <0.69, 0.41>, computed from b3:2 = <1, 1>; combining and normalizing gives P(R1 | u1, u2) = <0.883, 0.117>.]

  25. Finding the Most Likely Sequence • [Figure: trellis of possible state sequences for the umbrella example; each day Rt can be true or false, and we seek the single most likely path through the trellis.]

  26. Finding the Most Likely Sequence • Enumeration • Enumerate all possible state sequences • Compute the joint distribution of each and pick the sequence with the maximum value • Problem: the number of state sequences grows exponentially with the length of the sequence • Smoothing • Calculate the posterior distribution for each time step k • At each step k, pick the state with the maximum posterior probability • Combine these states to form a sequence • Problem: the posteriors are computed for each step independently, so the sequence of individually most likely states is generally not the most likely state sequence

  27. Viterbi Algorithm
  • Umbrella:              true   true   false  true   true
  • m1:t (Rain = true):    .8182  .5155  .0361  .0334  .0210
  • m1:t (Rain = false):   .1818  .0491  .1237  .0173  .0024

  28. Proof
  • max over x1..xt of P(x1, …, xt, Xt+1 | e1:t+1)
  • = α P(et+1 | Xt+1) max over xt of [ P(Xt+1 | xt) max over x1..xt-1 of P(x1, …, xt-1, xt | e1:t) ]
  • (obtained by dividing the evidence into e1:t and et+1, then applying Bayes rule, the chain rule, and the conditional independence assumptions)
  • The recursion runs forward like filtering, but with the summation over xt replaced by a maximization
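A short Viterbi sketch (my own code, following the recursion above); on the evidence sequence true, true, false, true, true it reproduces the trellis values on slide 27 and recovers the most likely path true, true, false, true, true:

```python
# Viterbi algorithm for the umbrella HMM.
prior = [0.5, 0.5]
T = [[0.7, 0.3], [0.3, 0.7]]
O = [[0.9, 0.1], [0.2, 0.8]]

def viterbi(evidence):
    """Most likely state sequence X_1:t given e_1:t (max-product forward pass)."""
    # Initialize with the filtered distribution for the first observation.
    e = evidence[0]
    m = [O[j][0 if e else 1] * prior[j] for j in range(2)]
    z = sum(m)
    m = [p / z for p in m]              # ≈ [0.8182, 0.1818]
    back = []                           # back-pointers for path recovery
    for e in evidence[1:]:
        prev, new_m = [], []
        for j in range(2):
            # Best predecessor state for reaching state j
            best_i = max(range(2), key=lambda i: T[i][j] * m[i])
            prev.append(best_i)
            new_m.append(O[j][0 if e else 1] * T[best_i][j] * m[best_i])
        back.append(prev)
        m = new_m
    # Follow back-pointers from the best final state to recover the path.
    state = max(range(2), key=lambda j: m[j])
    path = [state]
    for prev in reversed(back):
        state = prev[state]
        path.append(state)
    return [s == 0 for s in reversed(path)]   # True = rain

print(viterbi([True, True, False, True, True]))  # -> [True, True, False, True, True]
```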
