
Part of Speech Tagging with MaxEnt Re-ranked Hidden Markov Model




  1. Part of Speech Tagging with MaxEnt Re-ranked Hidden Markov Model • Brian Highfill

  2. Part of Speech Tagging • Train a model on a set of hand-tagged sentences • Find the best sequence of POS tags for a new sentence • Generative models • Hidden Markov Model (HMM) • Discriminative models • Maximum Entropy Markov Model (MEMM) • Brown Corpus • ~57,000 tagged sentences • 87 tags (reduced to 45 for Penn Treebank tagging) • ~300 tags including compound tags • e.g. that_DT fire's_NN+BEZ too_QL big_JJ ._. • "fire's" = fire_NN is_BEZ
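For concreteness, here is a minimal sketch (not from the slides) of loading the hand-tagged Brown Corpus with NLTK. The `universal` tagset shown is NLTK's coarse 12-tag mapping, used here only as a stand-in for the 45-tag Penn Treebank reduction the slide mentions:

```python
import nltk
from nltk.corpus import brown

nltk.download("brown")             # fetch the tagged corpus on first run
nltk.download("universal_tagset")  # needed for the coarse tagset mapping

# Sentences as (word, tag) pairs under the original Brown tagset
tagged = brown.tagged_sents()
print(len(tagged))    # roughly 57,000 sentences
print(tagged[0][:3])  # [('The', 'AT'), ('Fulton', 'NP-TL'), ('County', 'NN-TL')]

# The same sentences mapped onto a smaller tagset
reduced = brown.tagged_sents(tagset="universal")
```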

  3. Hidden Markov Models • Set of hidden states (POS tags) • Set of observations (word tokens) • Each word depends ONLY on its current tag • HMM parameters • Transition probabilities: P(ti | t0…ti-1) = P(ti | ti-1) • Observation probabilities: P(wi | t0…tn, w0…wi-1) = P(wi | ti) • Initial tag distribution: P(t0)
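All three parameter tables can be estimated by relative-frequency counting over the tagged corpus. A minimal sketch, assuming `tagged` is the list of (word, tag) sentences loaded above; a real tagger would also smooth these counts:

```python
from collections import Counter, defaultdict

transition = defaultdict(Counter)  # transition[t_prev][t] ~ P(t | t_prev)
emission = defaultdict(Counter)    # emission[t][w]        ~ P(w | t)
initial = Counter()                # initial[t]            ~ P(t0)

for sent in tagged:
    prev = None
    for word, tag in sent:
        emission[tag][word] += 1
        if prev is None:
            initial[tag] += 1
        else:
            transition[prev][tag] += 1
        prev = tag

def normalize(counter):
    """Turn raw counts into a relative-frequency distribution."""
    total = sum(counter.values())
    return {k: v / total for k, v in counter.items()}

P_init = normalize(initial)
P_trans = {t: normalize(c) for t, c in transition.items()}
P_emit = {t: normalize(c) for t, c in emission.items()}
```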

  4. HMM Best Tag Sequence • For an HMM, the Viterbi algorithm finds the most probable tagging for a new sentence • For re-ranking later, we want not just the single best tagging but the k best taggings for each sentence
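A minimal Viterbi sketch in log space, assuming the P_init/P_trans/P_emit dictionaries from the counting step above; the `floor` constant is a crude stand-in for proper smoothing of unseen words and transitions:

```python
import math

def viterbi(words, tags, P_init, P_trans, P_emit, floor=1e-12):
    """Most probable tag sequence for `words` under the HMM."""
    def lp(table, key):
        return math.log(table.get(key, floor))

    # best[i][t] = (score of best path ending in tag t at word i, backpointer)
    best = [{t: (lp(P_init, t) + lp(P_emit.get(t, {}), words[0]), None)
             for t in tags}]
    for i in range(1, len(words)):
        col = {}
        for t in tags:
            prev_t, score = max(
                ((p, best[i - 1][p][0] + lp(P_trans.get(p, {}), t))
                 for p in tags),
                key=lambda x: x[1])
            col[t] = (score + lp(P_emit.get(t, {}), words[i]), prev_t)
        best.append(col)

    # Follow backpointers from the best final tag
    tag = max(best[-1], key=lambda t: best[-1][t][0])
    path = [tag]
    for i in range(len(words) - 1, 0, -1):
        tag = best[i][tag][1]
        path.append(tag)
    return list(reversed(path))
```

For example, `viterbi("the dog barks".split(), list(P_emit), P_init, P_trans, P_emit)` would return one tag per word.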

  5. HMM Beam Search • Step 1 • Enumerate all possible tags for the first word • Step 2 • Evaluate each tagging using the trained HMM; keep only the k best (one-word taggings) • Step 3 • For each of the k taggings from the previous step, enumerate all possible tags for the second word • Step 4 • Evaluate each two-word tagging and discard all but the k best • Repeat Steps 3-4 for the remaining words in the sentence
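These steps map directly onto a few lines of Python. A minimal sketch under the same assumptions as the Viterbi code (the P_* tables from the counting step, `floor` in place of real smoothing):

```python
import math

def beam_search(words, tags, P_init, P_trans, P_emit, k=10, floor=1e-12):
    """k-best taggings of `words` by beam search over the HMM."""
    def lp(table, key):
        return math.log(table.get(key, floor))

    # Steps 1-2: score every possible tag for the first word, keep the k best
    beam = [((t,), lp(P_init, t) + lp(P_emit.get(t, {}), words[0]))
            for t in tags]
    beam = sorted(beam, key=lambda x: x[1], reverse=True)[:k]

    # Steps 3-4, repeated: extend each surviving tagging by one word,
    # rescore with the HMM, and again keep only the k best
    for word in words[1:]:
        extended = [(seq + (t,),
                     score + lp(P_trans.get(seq[-1], {}), t)
                           + lp(P_emit.get(t, {}), word))
                    for seq, score in beam for t in tags]
        beam = sorted(extended, key=lambda x: x[1], reverse=True)[:k]
    return beam  # (tag sequence, log probability) pairs, best first
```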

  6. MaxEnt Re-ranking • After beam search, we have the k "best" taggings for our sentence • Use the trained MaxEnt model to select the most probable sequence of tags
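A minimal sketch of the re-ranking step. It assumes `maxent` is a trained NLTK MaxentClassifier (hypothetical; the slides do not name an implementation), `candidates` is the list returned by `beam_search`, and `extract_features` is a featureset function like the one sketched after the feature list on the next slide:

```python
import math

def rerank(words, candidates, maxent, extract_features):
    """Return the candidate tagging the MaxEnt model scores highest."""
    def maxent_logprob(tags):
        total = 0.0
        for i, tag in enumerate(tags):
            prev = tags[i - 1] if i > 0 else "<s>"  # sentence-start marker
            dist = maxent.prob_classify(extract_features(words, i, prev))
            total += math.log(max(dist.prob(tag), 1e-12))  # guard log(0)
        return total

    return max((seq for seq, _ in candidates), key=maxent_logprob)
```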

  7. Results • Features used: • Current word • Previous tag • <Word AND previous tag> • Word contains a numeral • Word ends in "-ing" • "-ness" • "-ity" • "-ed" • "-able" • "-s" • "-ion" • "-al" • "-ive" • "-ly" • Word is capitalized • Word is hyphenated • Word is all uppercase • Word is all uppercase with a numeral • Word is capitalized and a word ending in "Co." or "Inc." is found within 3 words ahead
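These templates translate into a featureset function. A sketch in the dict-of-features form NLTK's MaxentClassifier expects; the exact templates and feature names in the slides may differ:

```python
SUFFIXES = ("ing", "ness", "ity", "ed", "able", "s", "ion", "al", "ive", "ly")

def extract_features(words, i, prev_tag):
    """Featureset for position i of `words`, given the previous tag."""
    w = words[i]
    feats = {
        "word": w.lower(),
        "prev_tag": prev_tag,
        "word+prev_tag": w.lower() + "|" + prev_tag,
        "has_digit": any(c.isdigit() for c in w),
        "capitalized": w[:1].isupper(),
        "hyphenated": "-" in w,
        "all_upper": w.isupper(),
        "upper_with_digit": w.isupper() and any(c.isdigit() for c in w),
        # Capitalized word with "Co." or "Inc." within the next 3 words
        "likely_company": w[:1].isupper()
            and any(x in ("Co.", "Inc.") for x in words[i + 1:i + 4]),
    }
    for suf in SUFFIXES:
        feats["suffix_" + suf] = w.lower().endswith(suf)
    return feats
```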

  8. Results • Baseline "most frequent class" tagger: 73.41% (24% on <UNK>) • HMM Viterbi tagger: 92.96% (32.76% on <UNK>)
