Natural Language Processing Toward HMMs

Presentation Transcript


  1. Natural Language Processing Toward HMMs • Meeting 10, Oct 2, 2012 • Rodney Nielsen • Several of these slides were borrowed or adapted from James Martin

  2. Penn TreeBank POS Tagset

  3. POS Tagging • Words often have more than one POS: back • The back door = JJ • On my back = NN • Win the voters back = RB • Promised to back the bill = VB • The POS tagging problem is to determine the POS tag for a particular instance of a word. • These examples are from Dekang Lin.

  4. How Hard is POS Tagging? Measuring Ambiguity

  5. Two Methods for POS Tagging • Rule-based tagging • (ENGTWOL; Section 5.4) • Stochastic • Probabilistic sequence models • HMM (Hidden Markov Model) tagging • MEMMs (Maximum Entropy Markov Models)

  6. Hidden Markov Model Tagging • Using an HMM to do POS tagging is a special case of Bayesian inference • Foundational work in computational linguistics • It is also related to the “noisy channel” model that’s the basis for ASR, OCR and MT

  7. POS Tagging as Sequence Classification • We are given a sentence (an “observation” or “sequence of observations”) • Secretariat is expected to race tomorrow • What is the best sequence of tags that corresponds to this sequence of observations? • Probabilistic view • Consider all possible sequences of tags • Out of this universe of sequences, choose the tag sequence which is most probable given the observation sequence of n words w1…wn.

  8. Two Views • Decoding view • Consider all possible sequences of tags • Out of this universe of sequences, choose the tag sequence which is most probable given the observation sequence of n words w1…wn. • Generative view • This sequence of words must have resulted from some hidden process • A sequence of tags (states) each of which emitted a word

  9. Getting to HMMs • We want, out of all sequences of n tags t1…tn, the single tag sequence such that P(t1…tn|w1…wn) is highest. • The hat ^ means “our estimate of the best one” • argmax_x f(x) means “the x such that f(x) is maximized”
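In symbols, the goal this slide states (a reconstruction; the slide's original equation image is not part of the transcript):

```latex
\hat{t}_1^n = \operatorname*{argmax}_{t_1^n} P(t_1^n \mid w_1^n)
```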

  10. Getting to HMMs • This equation is guaranteed to give us the best tag sequence • But how to make it operational? How to compute this value? • Intuition of Bayesian inference: • Use Bayes rule to transform this equation into a set of other probabilities that are easier to compute

  11. Using Bayes Rule • Know this.
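The Bayes rule step this slide points to, reconstructed from the surrounding text: the denominator P(w1…wn) is the same for every candidate tag sequence, so it can be dropped inside the argmax.

```latex
\hat{t}_1^n
  = \operatorname*{argmax}_{t_1^n} P(t_1^n \mid w_1^n)
  = \operatorname*{argmax}_{t_1^n} \frac{P(w_1^n \mid t_1^n)\, P(t_1^n)}{P(w_1^n)}
  = \operatorname*{argmax}_{t_1^n} P(w_1^n \mid t_1^n)\, P(t_1^n)
```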

  12. Likelihood and Prior
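The likelihood and prior on this slide are simplified with the two standard HMM assumptions (each word depends only on its own tag; each tag depends only on the previous tag); the slide's equation image is missing from the transcript, so this is a reconstruction:

```latex
P(w_1^n \mid t_1^n) \approx \prod_{i=1}^{n} P(w_i \mid t_i)
\qquad
P(t_1^n) \approx \prod_{i=1}^{n} P(t_i \mid t_{i-1})
```

so the tagger chooses the tag sequence maximizing the product of P(wi|ti) · P(ti|ti-1) over all positions i.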

  13. Two Kinds of Probabilities • Tag transition probabilities P(ti|ti-1) • Determiners likely to precede adjs and nouns • That/DT flight/NN • The/DT yellow/JJ hat/NN • So we expect P(NN|DT) and P(JJ|DT) to be high • But P(DT|JJ) to be low • Compute P(NN|DT) by counting in a labeled corpus:
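The counting estimate this slide sets up (the formula image is not in the transcript; this is the standard maximum likelihood estimate from a tagged corpus):

```latex
P(t_i \mid t_{i-1}) = \frac{C(t_{i-1}, t_i)}{C(t_{i-1})}
\qquad\text{e.g.}\qquad
P(\mathrm{NN} \mid \mathrm{DT}) = \frac{C(\mathrm{DT},\ \mathrm{NN})}{C(\mathrm{DT})}
```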

  14. Two Kinds of Probabilities • Word likelihood probabilities P(wi|ti) • VBZ (3sg Pres verb) likely to be “is” • Compute P(is|VBZ) by counting in a labeled corpus:
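And the corresponding estimate for the word likelihoods (again a reconstruction of the missing formula):

```latex
P(w_i \mid t_i) = \frac{C(t_i, w_i)}{C(t_i)}
\qquad\text{e.g.}\qquad
P(\textit{is} \mid \mathrm{VBZ}) = \frac{C(\mathrm{VBZ},\ \textit{is})}{C(\mathrm{VBZ})}
```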

  15. Example: The Verb “race” • Secretariat/NNP is/VBZ expected/VBN to/TO race/VB tomorrow/NR • People/NNS continue/VB to/TO inquire/VB the/DT reason/NN for/IN the/DT race/NN for/IN outer/JJ space/NN • How do we pick the right tag?

  16. Disambiguating “race”

  17. Example • P(NN|TO) = .00047 • P(VB|TO) = .83 • P(race|NN) = .00057 • P(race|VB) = .00012 • P(NR|VB) = .0027 • P(NR|NN) = .0012 • P(VB|TO)P(NR|VB)P(race|VB) = .00000027 • P(NN|TO)P(NR|NN)P(race|NN)=.00000000032 • So we (correctly) choose the verb reading
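A quick check of the arithmetic on this slide (a minimal sketch; the probability values are the ones printed on the slide):

```python
# Probabilities from the slide, estimated from a tagged corpus.
p_nn_given_to = 0.00047    # P(NN | TO)
p_vb_given_to = 0.83       # P(VB | TO)
p_race_given_nn = 0.00057  # P(race | NN)
p_race_given_vb = 0.00012  # P(race | VB)
p_nr_given_vb = 0.0027     # P(NR | VB)
p_nr_given_nn = 0.0012     # P(NR | NN)

# Score each reading of "race" in "... to race tomorrow".
verb_score = p_vb_given_to * p_nr_given_vb * p_race_given_vb
noun_score = p_nn_given_to * p_nr_given_nn * p_race_given_nn

print(f"VB reading: {verb_score:.2e}")  # ~2.7e-07
print(f"NN reading: {noun_score:.2e}")  # ~3.2e-10
print("Chosen tag:", "VB" if verb_score > noun_score else "NN")
```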

  18. Hidden Markov Models • What we’ve described with these two kinds of probabilities is a Hidden Markov Model (HMM) • This is a generative model. • There is a hidden underlying generator of observable events • The hidden generator can be modeled as a set of states • We want to infer the underlying state sequence from the observed events

  19. Definitions • A weighted finite-state automaton adds probabilities to the arcs • The probabilities on the arcs leaving any state must sum to one • A Markov chain is a special case of a weighted automaton in which the input sequence uniquely determines which states the automaton will go through • Markov chains can’t represent inherently ambiguous problems • Useful for assigning probabilities to unambiguous sequences

  20. Markov Chain for Weather

  21. Markov Chain for Words

  22. Markov Chain: “First-Order Observable Markov Model” • A set of states • Q = q1, q2, …, qN; the state at time t is qt • Transition probabilities: • a set of probabilities A = a01, a02, …, an1, …, ann • Each aij represents the probability of transitioning from state i to state j • The set of these is the transition probability matrix A • The current state depends only on the previous state

  23. Markov Chain for Weather • What is the probability of 4 consecutive rainy days? • Sequence is rainy-rainy-rainy-rainy • That is, state sequence is 3-3-3-3 • P(3,3,3,3) = π3 · a33 · a33 · a33
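A minimal sketch of this computation. The actual initial and transition probabilities live in the chain diagram on the slide, which is not part of the transcript, so the numbers below are placeholders:

```python
import numpy as np

states = ["sunny", "cloudy", "rainy"]   # states 1, 2, 3
pi = np.array([0.5, 0.3, 0.2])          # assumed initial probabilities
A = np.array([[0.6, 0.3, 0.1],          # assumed transitions: A[i, j] = P(state j | state i)
              [0.3, 0.4, 0.3],
              [0.2, 0.3, 0.5]])

# P(3,3,3,3) = pi_3 * a_33 * a_33 * a_33
r = states.index("rainy")
print(pi[r] * A[r, r] ** 3)             # 0.2 * 0.5**3 = 0.025 with these placeholder numbers
```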

  24. HMM for Ice Cream • You are a climatologist in the year 2799 studying global warming • You can’t find any records of the weather in Baltimore, MD for the summer of 2007 • But you find Jason Eisner’s diary, which lists how many ice creams Jason ate every day that summer • Your job: figure out how hot it was each day

  25. Hidden Markov Model • For Markov chains, the output symbols are the same as the states. • See hot weather: we’re in state hot • But in part-of-speech tagging (and many other tasks) the symbols do not uniquely determine the states, nor do the states uniquely determine the symbols. • The output symbols are words • But the hidden states are part-of-speech tags • A Hidden Markov Model is an extension of a Markov chain in which the output symbols are not the same as the states. • This means we don’t necessarily know which state we are in.

  26. Hidden Markov Models • States Q = q1, q2…qN • Observations O = o1, o2…oN • Each observation is a symbol from a vocabulary V = {v1, v2, …, vV} • Transition probabilities • Transition probability matrix A = {aij} • Observation likelihoods • Output probability matrix B = {bi(k)} • Special initial probability vector π

  27. Eisner Task • Given • Ice Cream Observation Sequence: 1,2,3,2,2,2,3… • Produce: • Weather Sequence: H,C,H,H,H,C,H…

  28. HMM for Ice Cream

  29. Ice Cream HMM • Let’s just do 1 3 1 as the observation sequence • How many underlying state (hot/cold) sequences are there? • How do you pick the right one? • HHH HHC HCH HCC CCC CCH CHC CHH • Argmax P(sequence | 1 3 1)

  30. Ice Cream HMM • Let’s just do 1 sequence: CHC • Cold as the initial state: P(Cold|Start) = .2 • Observing a 1 on a cold day: P(1|Cold) = .5 • Hot as the next state: P(Hot|Cold) = .4 • Observing a 3 on a hot day: P(3|Hot) = .4 • Cold as the next state: P(Cold|Hot) = .3 • Observing a 1 on a cold day: P(1|Cold) = .5 • Total: .2 × .5 × .4 × .4 × .3 × .5 = .0024
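The same trace as code (a minimal sketch using only the probabilities shown on the slide; it scores this one state sequence, not the full argmax over all eight):

```python
# Joint probability of the state sequence C, H, C with observations 1, 3, 1.
p = 1.0
p *= 0.2  # P(Cold | Start)
p *= 0.5  # P(1 | Cold)
p *= 0.4  # P(Hot | Cold)
p *= 0.4  # P(3 | Hot)
p *= 0.3  # P(Cold | Hot)
p *= 0.5  # P(1 | Cold)
print(round(p, 4))  # 0.0024
```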

  31. POS Transition Probabilities

  32. Observation Likelihoods

  33. Decoding • Ok, now we have a complete model that can give us what we need. Recall that we need to get the tag sequence that maximizes P(t1…tn|w1…wn). • We could just enumerate all paths given the input and use the model to assign probabilities to each. • If there are 36 tags in the Penn set • And the average sentence is 23 words... • How many tag sequences do we have to enumerate to argmax over? 36^23 • Not a good idea.

  34. Decoding • Ok, now we have a complete model that can give us what we need. Recall that we need to get the tag sequence that maximizes P(t1…tn|w1…wn). • We could just enumerate all paths given the input and use the model to assign probabilities to each. • Dynamic programming • (last seen in Ch. 3 with minimum edit distance)

  35. Intuition • You’re interested in the shortest distance from here to Moab • Consider a possible location on the way to Moab, say Vail. • What do you need to know about all the possible ways to get to Vail (on the way to Moab)?

  36. Intuition • Consider a state sequence (tag sequence) that ends at position j with a particular tag T. • The probability of that tag sequence can be factored into • The probability of the BEST tag sequence up through j-1 • Multiplied by the transition probability from the tag at the end of the j-1 sequence to T • And the observation probability of the word given tag T.
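Written as the standard Viterbi recurrence (a reconstruction using the notation from the HMM definition slide; v_t(j) is the probability of the best path that accounts for the first t observations and ends in state j):

```latex
v_t(j) = \max_{i=1}^{N} \; v_{t-1}(i)\, a_{ij}\, b_j(o_t)
```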

  37. The Viterbi Algorithm

  38. Viterbi Example

  39. Viterbi Summary • Create an array • With columns corresponding to inputs • Rows corresponding to possible states • Sweep through the array in one pass, filling the columns left to right using our transition probs and observation probs • The dynamic programming key is that we need only store the MAX prob path to each cell (not all paths).
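A minimal Viterbi sketch matching this summary: one array with a column per observation and a row per state, filled left to right. It reuses the ice cream HMM; only some of the numbers below appear on the slides, the rest are assumptions for the sake of a runnable example:

```python
import numpy as np

states = ["H", "C"]                      # hidden states: Hot, Cold
pi = np.array([0.8, 0.2])                # P(state | Start); the 0.2 is from the slides
A = np.array([[0.7, 0.3],                # A[i, j] = P(state j | state i)
              [0.4, 0.6]])
B = np.array([[0.2, 0.4, 0.4],           # B[i, k] = P(k+1 ice creams | state i)
              [0.5, 0.4, 0.1]])

def viterbi(obs):
    """Most probable state sequence for a list of observations (1-3 ice creams)."""
    T, N = len(obs), len(states)
    v = np.zeros((N, T))                 # v[i, t]: best path probability ending in state i at time t
    back = np.zeros((N, T), dtype=int)   # backpointers for recovering the best path
    v[:, 0] = pi * B[:, obs[0] - 1]
    for t in range(1, T):
        for j in range(N):
            scores = v[:, t - 1] * A[:, j] * B[j, obs[t] - 1]
            back[j, t] = np.argmax(scores)
            v[j, t] = scores[back[j, t]]
    path = [int(np.argmax(v[:, -1]))]    # best final state, then follow backpointers
    for t in range(T - 1, 0, -1):
        path.append(back[path[-1], t])
    return [states[i] for i in reversed(path)]

print(viterbi([1, 3, 1]))                # ['H', 'H', 'C'] with these numbers
```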

  40. Evaluation • So once you have your POS tagger running how do you evaluate it? • Overall error rate with respect to a gold-standard test set. • Error rates on particular tags • Error rates on particular words • Tag confusions...

  41. Error Analysis • Look at a confusion matrix • See what errors are causing problems • Noun (NN) vs Proper Noun (NNP) vs Adj (JJ) • Preterite (VBD) vs Participle (VBN) vs Adjective (JJ)

  42. Evaluation • The result is compared with a manually coded “Gold Standard” • Typically accuracy reaches 96-97% • This may be compared with the result for a baseline tagger (one that uses no context). • Important: 100% is impossible even for human annotators.

  43. Project Discussion

  44. Hidden Markov Models • States Q = q1, q2…qN • Observations O = o1, o2…oN • Each observation is a symbol from a vocabulary V = {v1, v2, …, vV} • Transition probabilities • Transition probability matrix A = {aij} • Observation likelihoods • Output probability matrix B = {bi(k)} • Special initial probability vector π

  45. 3 Problems • Given this framework there are 3 problems that we can pose to an HMM • Given an observation sequence and a model, what is the probability of that sequence? • Given an observation sequence and a model, what is the most likely state sequence? • Given an observation sequence, infer the best model parameters for a skeletal model

  46. Problem 1 • The probability of a sequence given a model... • Used in model development... How do I know if some change I made to the model is making it better? • And in classification tasks • Word spotting in ASR, language identification, speaker identification, author identification, etc. • Train one HMM model per class • Given an observation, pass it to each model and compute P(seq|model).
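A sketch of how this is usually computed with the forward algorithm, which sums over all hidden state sequences instead of maximizing (reusing the pi, A, B arrays assumed in the Viterbi sketch above):

```python
def forward(obs, pi, A, B):
    """Return P(obs | model), summing over all hidden state sequences."""
    alpha = pi * B[:, obs[0] - 1]          # alpha[i]: P(o_1..o_t, state i at time t)
    for o in obs[1:]:
        alpha = (alpha @ A) * B[:, o - 1]  # sum over previous states, then emit
    return alpha.sum()

# Classification use: score the same observation under each class's trained HMM
# and pick the model with the highest P(seq | model).
print(forward([1, 3, 1], pi, A, B))
```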

  47. Problem 2 • Most probable state sequence given a model and an observation sequence • Typically used in tagging problems, where the tags correspond to hidden states • As we’ll see, almost any problem can be cast as a sequence labeling problem • Viterbi solves Problem 2

  48. Problem 3 • Infer the best model parameters, given a skeletal model and an observation sequence... • That is, fill in the A and B tables with the right numbers... • The numbers that make the observation sequence most likely • Useful for getting an HMM without having to hire annotators...

  49. ended here
