
Hidden Markov Model Lecture #6


Presentation Transcript


  1. Hidden Markov Model, Lecture #6.

  2. Reminder: Finite State Markov Chain • An integer-time stochastic process, consisting of a domain D of m states {1,…,m} and • An m-dimensional initial distribution vector (p(1),…,p(m)). • An m×m transition probabilities matrix M = (ast). • For each integer L, a Markov Chain assigns probability to sequences (x1…xL) over D (i.e., xi ∈ D) as follows: p(x1,…,xL) = p(x1)·ax1x2·ax2x3·…·axL-1xL. Similarly, (X1,…,Xi,…) is a sequence of probability distributions over D.
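As a small illustration of how such a chain assigns probability to a sequence, here is a minimal sketch (the names p0 and M are illustrative, not taken from the lecture):

```python
import numpy as np

def markov_chain_prob(x, p0, M):
    """Probability a Markov chain assigns to a sequence x of state indices.
    p0: initial distribution over the m states; M: m x m transition matrix (a_st)."""
    prob = p0[x[0]]
    for prev, cur in zip(x[:-1], x[1:]):
        prob *= M[prev, cur]          # multiply by a_{x_{i-1} x_i}
    return prob
```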

  3. Ergodic Markov Chains [diagram: a four-state chain over {A, B, C, D} with transition probabilities 0.95, 0.2, 0.5, 0.2, 0.3, 0.05, 0.8, 1] • A Markov chain is ergodic if: • All states are recurrent (i.e., the graph is strongly connected) • It is not periodic • The Fundamental Theorem of Finite-state Markov Chains: • If a Markov Chain is ergodic, then • It has a unique stationary distribution vector V > 0, which is an eigenvector of the transition matrix. • The distributions Xi converge to V as i → ∞.
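The convergence claim can be checked numerically: repeatedly multiplying any initial distribution by the transition matrix approaches the stationary vector V. A minimal sketch, with illustrative names only:

```python
import numpy as np

def stationary_distribution(M, tol=1e-12, max_iter=10_000):
    """Approximate the stationary vector V of an ergodic chain by power iteration."""
    v = np.full(M.shape[0], 1.0 / M.shape[0])    # start from the uniform distribution
    for _ in range(max_iter):
        v_next = v @ M                           # X_{i+1} = X_i M
        if np.abs(v_next - v).max() < tol:
            break
        v = v_next
    return v_next
```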

  4. Use of Markov Chains: Sequences with CpG Islands Recall from last class: In human genomes the pair CG often transforms to (methyl-C)G, which often transforms to TG. Hence the pair CG appears less often than expected from the independent frequencies of C and G alone. For biological reasons, this process is sometimes suppressed in short stretches of the genome, such as in the start regions of many genes. These areas are called CpG islands (the "p" denotes the phosphodiester bond between the C and the G).

  5. Modeling sequences with CpG Islands The "+" model: Use transition matrix A+ = (a+st), where a+st = (the probability that t follows s in a CpG island). The "-" model: Use transition matrix A- = (a-st), where a-st = (the probability that t follows s in a non-CpG island).
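For the classification question from last class (is a given short sequence a CpG island?), a common use of the two matrices is to compare the log-likelihood of the sequence under the "+" and "-" models. A hedged sketch, with A_plus and A_minus as illustrative names for A+ and A-:

```python
import numpy as np

def log_odds(x, A_plus, A_minus):
    """Log-likelihood ratio of a nucleotide index sequence x under '+' vs '-'.
    A positive score favours the CpG-island model."""
    score = 0.0
    for s, t in zip(x[:-1], x[1:]):
        score += np.log(A_plus[s, t]) - np.log(A_minus[s, t])
    return score
```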

  6. Question 2: Finding CpG Islands Given a long genomic string with possible CpG islands, we define a Markov Chain over 8 states A+, C+, G+, T+, A-, C-, G-, T-, all interconnected (hence it is ergodic). The problem is that we don't know the sequence of states which are traversed, but just the sequence of letters. Therefore we use here a Hidden Markov Model.

  7. Hidden Markov Model [diagram: hidden states S1, S2, …, SL-1, SL linked by transitions M, each emitting an observation x1, x2, …, xL-1, xL] A Markov chain (s1,…,sL), and for each state s and each symbol x we have p(Xi=x|Si=s). Application in communication: the message sent is (s1,…,sm) but we receive (x1,…,xm). Compute the most likely message sent. Application in speech recognition: the word said is (s1,…,sm) but we recorded (x1,…,xm). Compute the most likely word said.

  8. Hidden Markov Model Notations: Markov Chain transition probabilities: p(Si+1=t | Si=s) = ast Emission probabilities: p(Xi=b | Si=s) = es(b) For Markov Chains we know: p(s1,…,sL) = p(s1)·as1s2·…·asL-1sL. What is p(s,x) = p(s1,…,sL;x1,…,xL)?

  9. Hidden Markov Model p(Xi=b | Si=s) = es(b) means that the probability of xi depends only on the state si. Formally, this is equivalent to the conditional independence assumption: p(Xi=xi|x1,..,xi-1,xi+1,..,xL,s1,..,si,..,sL) = esi(xi) Thus p(s,x) = p(s1,…,sL;x1,…,xL) = p(s1)·es1(x1)·as1s2·es2(x2)·…·asL-1sL·esL(xL).
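A minimal sketch of this joint probability for a given state path s and observation sequence x (p0, a and e are illustrative names for the initial distribution, transition matrix and emission table):

```python
import numpy as np

def joint_prob(s, x, p0, a, e):
    """p(s, x) = p(s1) e_s1(x1) * prod_{i>1} a_{s_{i-1} s_i} e_{s_i}(x_i)."""
    prob = p0[s[0]] * e[s[0], x[0]]
    for i in range(1, len(s)):
        prob *= a[s[i - 1], s[i]] * e[s[i], x[i]]
    return prob
```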

  10. Hidden Markov Model Exercise: Using the definition of conditional probability, P(X|Y) = P(X,Y)/P(Y), prove formally that the equality p(Xi=xi|x1,..,xi-1,xi+1,..,xL,s1,..,si,..,sL) = esi(xi) implies that for any Y ⊆ {x1,..,xi-1,xi+1,..,xL,s1,..,si,..,sL} such that si is in Y, it holds that: p(Xi=xi|Y) = esi(xi)

  11. Hidden Markov Model for CpG Islands The states: Domain(Si) = {+,-} × {A,C,T,G} (8 values). In this representation P(xi|si) = 0 or 1, depending on whether xi is consistent with si. E.g. xi=G is consistent with si=(+,G) and with si=(-,G) but not with any other state si. The query of interest: find (s1*,…,sL*) = argmax(s1,…,sL) p(s1,…,sL | x1,…,xL).
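A sketch of the emission structure described here: each of the 8 states (sign, letter) emits its own letter with probability 1 and anything else with probability 0 (illustrative construction):

```python
import numpy as np

LETTERS = ["A", "C", "G", "T"]
STATES = [(sign, b) for sign in "+-" for b in LETTERS]   # the 8 states (+/-, letter)

# e[state, letter] = 1 if the observed letter matches the state's letter, else 0
e = np.array([[1.0 if b == letter else 0.0 for letter in LETTERS]
              for _, b in STATES])
```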

  12. Hidden Markov Model Questions: Given the "visible" sequence x=(x1,…,xL), find: • A most probable (hidden) path. • The probability of x. • For each i = 1,..,L, and for each state k, p(si=k | x)

  13. 1. Most Probable State Path First Question: Given an output sequence x = (x1,…,xL), a most probable path s* = (s1*,…,sL*) is one which maximizes p(s|x).

  14. Most Probable Path (cont.) Since p(s|x) = p(s,x)/p(x) and p(x) does not depend on s, it suffices to find an s which maximizes p(s,x).

  15. Viterbi's algorithm for most probable path The task: compute s* = argmaxs p(s,x). Let the states be {1,…,m}. Idea: for i=1,…,L and for each state l, compute: vl(i) = the probability p(s1,..,si-1,si=l;x1,..,xi) of a most probable path up to i which ends in state l.

  16. Viterbi's algorithm for most probable path vl(i) = the probability p(s1,..,si-1,si=l;x1,..,xi) of a most probable path up to i which ends in state l. Exercise: show that for i = 1,…,L and for each state l: vl(i) = el(xi)·maxk {vk(i-1)·akl}

  17. Viterbi's algorithm We add the special initial state 0. Initialization: v0(0) = 1, vk(0) = 0 for k > 0. For i=1 to L do for each state l: vl(i) = el(xi)·maxk {vk(i-1)·akl} ptri(l) = argmaxk {vk(i-1)·akl} [storing the previous state for reconstructing the path] Termination / result: p(s1*,…,sL*;x1,…,xL) = maxk {vk(L)}.
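A minimal sketch of this recursion in Python (the initial distribution p0, transition matrix a and emission table e are illustrative names; the special state 0 is folded into p0):

```python
import numpy as np

def viterbi(x, p0, a, e):
    """Most probable state path for an observation sequence x of symbol indices."""
    m, L = a.shape[0], len(x)
    v = np.zeros((L, m))                      # v[i, l] = v_l(i)
    ptr = np.zeros((L, m), dtype=int)         # ptr[i, l] = argmax_k v_k(i-1) a_kl
    v[0] = p0 * e[:, x[0]]                    # initialization
    for i in range(1, L):
        scores = v[i - 1][:, None] * a        # scores[k, l] = v_k(i-1) * a_kl
        ptr[i] = scores.argmax(axis=0)
        v[i] = e[:, x[i]] * scores.max(axis=0)
    path = [int(v[-1].argmax())]              # termination: best final state
    for i in range(L - 1, 0, -1):             # traceback via the stored pointers
        path.append(int(ptr[i][path[-1]]))
    return list(reversed(path)), v[-1].max()  # s*, p(s*, x)
```

In practice log-probabilities are used to avoid numerical underflow on long sequences; the sketch above keeps the slides' formulation.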

  18. 2. Computing p(x) Given an output sequence x = (x1,…,xL), compute the probability that this sequence was generated: p(x) = Σs p(s,x), the summation taken over all state-paths s generating x.

  19. Forward algorithm for computing p(x) The task: compute p(x) = p(x1,…,xL). Idea: for i=1,…,L and for each state l, compute: fl(i) = p(x1,…,xi;si=l), the probability of all the paths which emit (x1,..,xi) and end in state si=l. Use the recursive formula: fl(i) = el(xi)·Σk fk(i-1)·akl

  20. Forward algorithm for computing p(x) Similar to Viterbi's algorithm: Initialization: f0(0) := 1, fk(0) := 0 for k > 0. For i=1 to L do for each state l: fl(i) = el(xi)·Σk fk(i-1)·akl Result: p(x1,…,xL) = Σk fk(L).
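The corresponding sketch of the forward recursion, reusing the illustrative p0, a, e from the Viterbi sketch (the max is simply replaced by a sum):

```python
import numpy as np

def forward(x, p0, a, e):
    """Forward table f[i, l] = p(x1..xi, si = l); returns the table and p(x)."""
    m, L = a.shape[0], len(x)
    f = np.zeros((L, m))
    f[0] = p0 * e[:, x[0]]                    # initialization
    for i in range(1, L):
        f[i] = e[:, x[i]] * (f[i - 1] @ a)    # f_l(i) = e_l(x_i) * sum_k f_k(i-1) a_kl
    return f, f[-1].sum()                     # p(x) = sum_k f_k(L)
```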

  21. 3. The distribution of Si, given x Given an output sequence x = (x1,…,xL), compute for each i=1,…,L and for each state k the probability that si = k. This helps to answer queries like: what is the probability that si is in a CpG island, etc.

  22. Solution in two stages 1. For a single i and each state k, compute p(si=k | x1,…,xL). 2. Do the same computation for every i = 1,..,L but without repeating the first task L times.

  23. Computing for a single i: p(si=k | x1,…,xL) = p(x1,…,xL, si=k) / p(x1,…,xL), so it suffices to compute p(x1,…,xL, si) for each value of si.

  24. Decomposing the computation P(x1,…,xL,si) = P(x1,…,xi,si)·P(xi+1,…,xL | x1,…,xi,si) (by the equality p(A,B) = p(A)·p(B|A)). P(x1,…,xi,si) = fsi(i) ≡ F(si), so we are left with the task of computing P(xi+1,…,xL | x1,…,xi,si) ≡ B(si).

  25. Decomposing the computation Exercise: Show from the definitions of Markov Chain and Hidden Markov Chain that: P(xi+1,…,xL | x1,…,xi,si) = P(xi+1,…,xL | si) Denote P(xi+1,…,xL | si) ≡ B(si).

  26. Decomposing the computation Summary: P(x1,…,xL,si) = P(x1,…,xi,si)·P(xi+1,…,xL | x1,…,xi,si) = P(x1,…,xi,si)·P(xi+1,…,xL | si) ≡ F(si)·B(si) The second equality holds because {xi+1,…,xL} and {x1,…,xi} are independent given si – by the Exercise.

  27. F(si): The Forward algorithm Initialization: F(0) = 1. For i=1 to L do for each value of si: F(si) = esi(xi)·Σsi-1 F(si-1)·asi-1si The algorithm computes F(si) = P(x1,…,xi,si) for i=1,…,L (namely, considering the evidence up to time slot i).

  28. B(si): The backward algorithm The task: compute B(si) = P(xi+1,…,xL|si) for i=L-1,…,1 (namely, considering the evidence after time slot i). {First step, step L-1: compute B(sL-1).} B(sL-1) = P(xL|sL-1) = ΣsL P(xL,sL|sL-1) = ΣsL P(sL|sL-1)·P(xL|sL) {Step i: compute B(si) from B(si+1).} B(si) = P(xi+1,…,xL|si) = Σsi+1 P(si+1|si)·P(xi+1|si+1)·P(xi+2,…,xL|si+1) = Σsi+1 asi si+1·esi+1(xi+1)·B(si+1)
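A matching sketch of the backward recursion (same illustrative a and e as above):

```python
import numpy as np

def backward(x, a, e):
    """Backward table b[i, l] = p(x_{i+1}..x_L | s_i = l)."""
    m, L = a.shape[0], len(x)
    b = np.zeros((L, m))
    b[L - 1] = 1.0                                  # B(sL) = 1 by convention
    for i in range(L - 2, -1, -1):
        b[i] = a @ (e[:, x[i + 1]] * b[i + 1])      # sum over s_{i+1} of a * e * B
    return b
```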

  29. The combined answer 1. To compute the probability that Si=si given {x1,…,xL}: run the forward algorithm to compute F(si) = P(x1,…,xi,si), run the backward algorithm to compute B(si) = P(xi+1,…,xL|si); the product F(si)·B(si) = P(x1,…,xL,si) is the answer (for every possible value si), and dividing by p(x) = Σsi F(si)·B(si) gives P(si | x1,…,xL). 2. To compute these probabilities for every i, simply run the forward and backward algorithms once, storing F(si) and B(si) for every i (and every value of si), then compute F(si)·B(si) for every i.
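Putting the sketches together, the posterior p(si=k | x) for every position and state using one pass each of forward and backward (reusing the functions sketched above):

```python
def posterior(x, p0, a, e):
    """p(si = k | x) for every position i and state k, as an L x m array."""
    f, px = forward(x, p0, a, e)      # F(si) for all i, and p(x)
    b = backward(x, a, e)             # B(si) for all i
    return (f * b) / px               # F(si)·B(si) / p(x)
```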

  30. Time and Space Complexity of the forward/backward algorithms Time complexity is O(m²L), where m is the number of states: it is linear in the length of the chain, provided the number of states is a constant. Space complexity is O(mL) for the tables F(si) and B(si) (plus O(m²) for the transition matrix), again linear in the length of the chain.
