## Hidden Markov Models (David Meir Blei, November 1, 1999)

What is an HMM?

- A graphical model: circles indicate states; arrows indicate probabilistic dependencies between states.
- Green circles are hidden states, each dependent only on the previous state: "The past is independent of the future given the present."

HMM Formalism

An HMM is specified by a tuple {S, K, P, A, B}:

- S = {s1 … sN} are the values for the hidden states.
- K = {k1 … kM} are the values for the observations.
- P = {pi} are the initial state probabilities.
- A = {aij} are the state transition probabilities.
- B = {bik} are the observation (emission) probabilities.

(Slide figure: a trellis of hidden states S over time, each emitting an observation K.)

(Slide figure: the same trellis annotated with transition probabilities A between successive hidden states S and emission probabilities B from each state to its observation K.)
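Concretely, the tuple {S, K, P, A, B} is just a set of probability tables. A minimal sketch in Python; the state names, observation names, and all numbers below are made-up illustrations, not from the presentation:

```python
# A minimal HMM container: the tuple {S, K, P, A, B} as plain tables.
# All names and numbers are illustrative assumptions.

S = ["rainy", "sunny"]           # hidden state values s1..sN
K = ["umbrella", "no-umbrella"]  # observation values k1..kM

P = [0.5, 0.5]                   # P[i]    = initial probability of state i
A = [[0.7, 0.3],                 # A[i][j] = P(next state j | current state i)
     [0.4, 0.6]]
B = [[0.9, 0.1],                 # B[i][k] = P(observation k | state i)
     [0.2, 0.8]]

# Each row of A and B is a conditional distribution, so it must sum to 1.
for row in A + B:
    assert abs(sum(row) - 1.0) < 1e-9
```

Rows of A are indexed by the current state; rows of B by the state doing the emitting.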

Inference in an HMM

Three fundamental questions:

- Compute the probability of a given observation sequence.
- Given an observation sequence, compute the most likely hidden state sequence.
- Given an observation sequence and a set of possible models, which model most closely fits the data?

(Slide figure: trellis of hidden states x_{t-1}, x_t, x_{t+1}, …, x_T over observations o_1, …, o_T.)

Forward Procedure

- The special structure gives us an efficient solution using dynamic programming.
- Intuition: the probability of the first t observations is the same for all possible (t+1)-length state sequences.
- Define the forward variable α_i(t) = P(o_1 … o_t, x_t = i): the joint probability of the first t observations and being in state i at time t.
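The forward recursion can be sketched directly; `alpha[t][i]` plays the role of α_i(t). The two-state probability tables are illustrative assumptions, not from the slides:

```python
# Forward procedure: alpha[t][i] = P(o_1..o_t, x_t = i).
# Toy model with made-up numbers.
P = [0.5, 0.5]                # initial state probabilities
A = [[0.7, 0.3], [0.4, 0.6]]  # transition probabilities a_ij
B = [[0.9, 0.1], [0.2, 0.8]]  # emission probabilities b_ik

def forward(obs):
    """Return alpha as a list (over time) of per-state probabilities."""
    n = len(P)
    # Base case: alpha_i(1) = P_i * b_i(o_1).
    alpha = [[P[i] * B[i][obs[0]] for i in range(n)]]
    # Induction: alpha_j(t+1) = (sum_i alpha_i(t) a_ij) * b_j(o_{t+1}).
    for o in obs[1:]:
        prev = alpha[-1]
        alpha.append([
            sum(prev[i] * A[i][j] for i in range(n)) * B[j][o]
            for j in range(n)
        ])
    return alpha

obs = [0, 0, 1]           # indices into the observation alphabet
alpha = forward(obs)
prob = sum(alpha[-1])     # P(o_1..o_T): marginalize over the final state
```

The total cost is O(T·N²), versus O(N^T) for naive enumeration of state sequences.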

Backward Procedure

(Slide figure: trellis x_1, …, x_T over observations o_1, …, o_T.)

- Define the backward variable β_i(t) = P(o_{t+1} … o_T | x_t = i): the probability of the rest of the observations given the current state.
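Symmetrically, β_i(t) can be computed right to left. A sketch using the same illustrative toy tables (assumed numbers, not from the slides); as a consistency check, the backward pass recovers the same sequence probability as the forward pass:

```python
# Backward procedure: beta[t][i] = P(o_{t+1}..o_T | x_t = i).
# Same made-up toy model as before.
P = [0.5, 0.5]
A = [[0.7, 0.3], [0.4, 0.6]]
B = [[0.9, 0.1], [0.2, 0.8]]

def backward(obs):
    n = len(P)
    beta = [[1.0] * n]  # base case: beta_i(T) = 1
    # Induction: beta_i(t) = sum_j a_ij * b_j(o_{t+1}) * beta_j(t+1).
    for o in reversed(obs[1:]):
        nxt = beta[0]
        beta.insert(0, [
            sum(A[i][j] * B[j][o] * nxt[j] for j in range(n))
            for i in range(n)
        ])
    return beta

obs = [0, 0, 1]
beta = backward(obs)
# P(o_1..o_T) = sum_i P_i * b_i(o_1) * beta_i(1), matching the forward pass.
prob = sum(P[i] * B[i][obs[0]] * beta[0][i] for i in range(len(P)))
```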

Best State Sequence

- Find the state sequence that best explains the observations: the Viterbi algorithm.

Viterbi Algorithm

- Define δ_j(t): the probability of the state sequence which maximizes the probability of seeing the observations to time t-1, landing in state j, and seeing the observation at time t.
- Recursively fill in δ for each time step, then compute the most likely state sequence by working backwards through the recorded argmax choices.
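The recursion plus backtracking can be sketched as follows, again on the same made-up toy model (`delta` and `psi` are the conventional names for the best-path probabilities and the recorded argmax choices):

```python
# Viterbi: delta[t][j] = max over state prefixes of
# P(o_1..o_t, x_1..x_{t-1}, x_t = j); psi records argmaxes for backtracking.
P = [0.5, 0.5]
A = [[0.7, 0.3], [0.4, 0.6]]
B = [[0.9, 0.1], [0.2, 0.8]]

def viterbi(obs):
    n = len(P)
    delta = [[P[i] * B[i][obs[0]] for i in range(n)]]
    psi = []
    for o in obs[1:]:
        prev = delta[-1]
        step, back = [], []
        for j in range(n):
            # Best predecessor of state j: max instead of the forward sum.
            cand = [prev[i] * A[i][j] for i in range(n)]
            best = max(range(n), key=cand.__getitem__)
            back.append(best)
            step.append(cand[best] * B[j][o])
        delta.append(step)
        psi.append(back)
    # Backtrack from the best final state.
    state = max(range(n), key=delta[-1].__getitem__)
    path = [state]
    for back in reversed(psi):
        state = back[state]
        path.insert(0, state)
    return path

print(viterbi([0, 0, 1]))  # → [0, 0, 1]
```

Note the only change from the forward procedure is replacing the sum over predecessors with a max, plus remembering which predecessor won.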

Parameter Estimation

(Slide figure: trellis with transition arcs A and emission arcs B over observations o_1, …, o_T.)

- Given an observation sequence, find the model that is most likely to produce that sequence.
- There is no analytic method.
- Instead, given a model and an observation sequence, iteratively update the model parameters to better fit the observations.

Parameter Estimation (continued)

- The probability of traversing an arc from state i to state j at time t, given the observations (p_t(i, j) in the usual notation).
- The probability of being in state i at time t (γ_i(t)).
- Now we can compute the new estimates of the model parameters from these quantities.
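One re-estimation step can be sketched end to end: run the forward and backward passes, form the arc probabilities p_t(i, j) and state probabilities γ_i(t), and renormalize. This follows the standard Baum-Welch presentation; all tables are illustrative assumptions, and the variable names `xi` and `gamma` are conventional rather than from the slides:

```python
# One Baum-Welch re-estimation step for a made-up 2-state, 2-symbol HMM.
# xi[t][i][j] is the arc probability p_t(i, j); gamma[t][i] = gamma_i(t).
P = [0.5, 0.5]
A = [[0.7, 0.3], [0.4, 0.6]]
B = [[0.9, 0.1], [0.2, 0.8]]
N, M = 2, 2

def reestimate(obs):
    T = len(obs)
    # Forward pass: alpha[t][i] = P(o_1..o_t, x_t = i).
    alpha = [[P[i] * B[i][obs[0]] for i in range(N)]]
    for o in obs[1:]:
        alpha.append([sum(alpha[-1][i] * A[i][j] for i in range(N)) * B[j][o]
                      for j in range(N)])
    # Backward pass: beta[t][i] = P(o_{t+1}..o_T | x_t = i).
    beta = [[1.0] * N]
    for o in reversed(obs[1:]):
        beta.insert(0, [sum(A[i][j] * B[j][o] * beta[0][j] for j in range(N))
                        for i in range(N)])
    prob = sum(alpha[-1])  # P(o_1..o_T)
    # Arc probabilities: p_t(i,j) = alpha_i(t) a_ij b_j(o_{t+1}) beta_j(t+1) / P(O).
    xi = [[[alpha[t][i] * A[i][j] * B[j][obs[t + 1]] * beta[t + 1][j] / prob
            for j in range(N)] for i in range(N)] for t in range(T - 1)]
    # State probabilities: gamma_i(t) = alpha_i(t) beta_i(t) / P(O).
    gamma = [[alpha[t][i] * beta[t][i] / prob for i in range(N)]
             for t in range(T)]
    # Re-estimates: expected counts, renormalized.
    new_P = gamma[0]
    new_A = [[sum(xi[t][i][j] for t in range(T - 1)) /
              sum(gamma[t][i] for t in range(T - 1))
              for j in range(N)] for i in range(N)]
    new_B = [[sum(gamma[t][i] for t in range(T) if obs[t] == k) /
              sum(gamma[t][i] for t in range(T))
              for k in range(M)] for i in range(N)]
    return new_P, new_A, new_B

new_P, new_A, new_B = reestimate([0, 0, 1, 0])
# Each re-estimated row is still a probability distribution.
```

Iterating this step is the EM algorithm for HMMs; each iteration is guaranteed not to decrease the likelihood of the observations.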

The Most Important Thing

We can use the special structure of this model to do a lot of neat math and solve problems that are otherwise not solvable.
