
I256: Applied Natural Language Processing

  1. I256: Applied Natural Language Processing Marti Hearst Sept 20, 2006

  2. Tagging methods
  • Hand-coded
  • Statistical taggers
  • Brill (transformation-based) tagger

  3. The nltk_lite tag package
  Types of taggers:
  • tag.Default()
  • tag.Regexp()
  • tag.Affix()
  • tag.Unigram()
  • tag.Bigram()
  • tag.Trigram()
  Actions:
  • tag.tag()
  • tag.tagsents()
  • tag.untag()
  • tag.train()
  • tag.accuracy()
  • tag.tag2tuple()
  • tag.string2words()
  • tag.string2tags()
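As a quick orientation, here is a minimal sketch of how these pieces fit together. It uses only names from the list above plus the Brown corpus reader that appears later in these slides; the exact constructor and function signatures are assumptions and may differ across nltk_lite releases.

    # Minimal workflow sketch for the nltk_lite tag package
    # (signatures assumed; they may vary by release).
    from nltk_lite import tag
    from nltk_lite.corpora import brown   # corpus reader used on later slides

    unigram = tag.Unigram()
    unigram.train(brown.tagged('a'))      # train on tagged sentences

    tokens = tag.string2words('the cat sat on the mat')
    print(list(unigram.tag(tokens)))      # e.g. [('the', 'at'), ...]
    print(tag.accuracy(unigram, brown.tagged('b')))  # held-out accuracy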

  4. Hand-coded Tagger • Make up some regexp rules that make use of morphology
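The code from this slide is missing from the transcript, so here is one plausible hand-coded tagger of this kind. It assumes tag.Regexp accepts an ordered list of (pattern, tag) pairs, as the equivalent class does in later NLTK releases; the patterns and lowercase Brown-style tags below are illustrative.

    # Hand-coded tagger sketch: morphology-based regexp rules, tried in
    # order, with a catch-all noun rule last.
    from nltk_lite import tag

    patterns = [
        (r'.*ing$', 'vbg'),                 # gerunds: racing, running
        (r'.*ed$', 'vbd'),                  # simple past: raced
        (r'.*es$', 'vbz'),                  # 3rd singular present: races
        (r'.*ly$', 'rb'),                   # adverbs: quickly
        (r".*'s$", 'nn$'),                  # possessive nouns
        (r'^-?[0-9]+(\.[0-9]+)?$', 'cd'),   # cardinal numbers
        (r'.*', 'nn'),                      # default: noun
    ]
    regexp_tagger = tag.Regexp(patterns)
    print(list(regexp_tagger.tag('the horse raced quickly'.split())))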

  5. Compare to Brown tags

  6. Training and Testing of Learning Algorithms
  • Algorithms that “learn” from data see a set of examples and try to generalize from them.
  • Training set:
    • The examples the algorithm is trained on
  • Test set:
    • Also called held-out data or unseen data
    • Use this for evaluating your algorithm
    • Must be kept separate from the training set
      • Otherwise, you cheated!
  • “Gold” standard:
    • A test set that a community has agreed on and uses as a common benchmark.

  7. Cross-Validation of Learning Algorithms
  • Cross-validation set:
    • Part of the training set.
    • Used for tuning parameters of the algorithm without “polluting” (tuning to) the test data.
  • You can train on x% of the training data, then cross-validate on the remaining (100-x)%
    • E.g., train on 90% of the training data, cross-validate (test) on the remaining 10%
    • Repeat several times with different splits
  • This allows you to choose the best settings, which you then use on the real test set.
  • You should only evaluate on the test set at the very end, after you’ve made your algorithm as good as possible on the cross-validation set.
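A sketch of the split-and-repeat procedure described above, in plain Python. The names train_tagger and score are hypothetical stand-ins for whatever training and evaluation routines you are tuning.

    # Repeated cross-validation sketch: average a tagger's score over
    # several different 90/10 splits of the training data.
    import random

    def cross_validate(tagged_sents, train_tagger, score, folds=10):
        data = list(tagged_sents)
        random.shuffle(data)
        size = len(data) // folds
        scores = []
        for i in range(folds):
            held_out = data[i * size:(i + 1) * size]   # the 10% slice
            training = data[:i * size] + data[(i + 1) * size:]
            tagger = train_tagger(training)
            scores.append(score(tagger, held_out))
        return sum(scores) / len(scores)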

  8. Strong Baselines
  • When designing NLP algorithms, you need to evaluate them by comparing to others.
  • Baseline algorithm:
    • An algorithm that is relatively simple but can be expected to do well
    • Should get the best score possible by doing the somewhat obvious thing

  9. A Tagging Baseline
  • Find the most likely tag for the most frequent words
    • Frequent words are ambiguous
    • You’re likely to see frequent words in any collection: you will always see “to” but might not see “armadillo”
  • How to do this?
    • First find the most frequent words and their most likely tags in the training data
    • Then train a tagger that looks up these results in a table
  • Note that the tag.Lookup() tagger type is not defined in this version of nltk_lite, so we’ll write our own.

  10. Find the most frequent words and their tags
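The original slide showed code that the transcript has lost; a plausible reconstruction follows. It assumes nltk_lite’s FreqDist and ConditionalFreqDist counters with inc(), max(), and sorted_samples(), as used in the nltk_lite tutorials of this era; details may differ by release.

    # Count how often each word occurs, and how often each word carries
    # each tag; then keep the most likely tag for the top 100 words.
    from nltk_lite.corpora import brown
    from nltk_lite.probability import FreqDist, ConditionalFreqDist

    word_freqs = FreqDist()
    word_tags = ConditionalFreqDist()
    for sent in brown.tagged('a'):
        for (word, t) in sent:
            word_freqs.inc(word)
            word_tags[word].inc(t)

    # word -> most likely tag, for the 100 most frequent words
    table = {}
    for word in word_freqs.sorted_samples()[:100]:
        table[word] = word_tags[word].max()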

  11. Subclassing a Python Class
  • The Lookup module isn’t in our version of nltk_lite
  • Let’s make a subclass of the tag.Unigram class that has this functionality.

  12. Details from the Unigram class

  13. Define our own tagger class
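Slides 12-14 showed the class definition itself, which the transcript has lost. Since tag.Unigram’s internals are version-specific, here is a self-contained stand-in with the intended behavior rather than the actual subclass from the slides: look frequent words up in the table built on slide 10, and give everything else a default tag.

    # Stand-in for the Lookup tagger from the slides: a table of
    # frequent words, with a default tag for everything else.
    class Lookup:
        def __init__(self, table, default='nn'):
            self._table = table      # word -> most likely tag
            self._default = default  # tag for words not in the table

        def tag(self, tokens):
            for word in tokens:
                yield (word, self._table.get(word, self._default))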

  14. Use our own tagger class
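Continuing from the sketches above (the Lookup class and the table from slide 10), here is a usage sketch: tag a sentence, then score the baseline by hand on held-out Brown data. The accuracy loop is hand-rolled to avoid assuming tag.accuracy’s exact interface.

    # Tag a sentence, then measure accuracy on held-out data.
    from nltk_lite.corpora import brown

    lookup = Lookup(table)
    print(list(lookup.tag('the jury said it did'.split())))

    right = total = 0
    for sent in brown.tagged('b'):
        words = [w for (w, t) in sent]
        for ((w, guess), (w2, gold)) in zip(lookup.tag(words), sent):
            right += (guess == gold)
            total += 1
    print(right / float(total))   # baseline accuracy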

  15. N-Grams
  • The N stands for how many terms are used:
    • Unigram: 1 term (0th order)
    • Bigram: 2 terms (1st order)
    • Trigram: 3 terms (2nd order)
    • Usually we don’t go beyond this
  • You can use different kinds of terms, e.g.:
    • Character-based n-grams
    • Word-based n-grams
    • POS-based n-grams
  • Ordering:
    • Often adjacent, but not required
  • We use n-grams to help determine the context in which some linguistic phenomenon happens.
    • E.g., look at the words before and after a period to see whether it marks the end of a sentence.

  16. Tagging with lexical frequencies
  • Secretariat/NNP is/VBZ expected/VBN to/TO race/VB tomorrow/NN
  • People/NNS continue/VBP to/TO inquire/VB the/DT reason/NN for/IN the/DT race/NN for/IN outer/JJ space/NN
  • Problem: assign a tag to “race” given its lexical frequency
  • Solution: choose whichever of the following is greater:
    • P(race|VB)
    • P(race|NN)
  Modified from Massimo Poesio’s lecture
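These two quantities can be estimated directly from tagged-corpus counts. A sketch, conditioning a frequency distribution on the tag; it carries over the ConditionalFreqDist assumption from the earlier sketch, and assumes FreqDist.freq() returns a relative frequency. The lowercase tags follow nltk_lite’s Brown conventions (the slides’ VB/NN correspond to vb/nn).

    # Estimate P(race|vb) and P(race|nn) from section 'a' of Brown.
    from nltk_lite.corpora import brown
    from nltk_lite.probability import ConditionalFreqDist

    tag_words = ConditionalFreqDist()
    for sent in brown.tagged('a'):
        for (word, t) in sent:
            tag_words[t].inc(word)

    print(tag_words['vb'].freq('race'))   # P(race|vb)
    print(tag_words['nn'].freq('race'))   # P(race|nn)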

  17. Unigram Tagger
  • Train on a set of sentences, keeping track of how many times each word is seen with each tag.
  • After training, associate with each word its most likely tag.
  • Problem: many words are never seen in the training data.
  • Solution: have a default tag to “back off” to.

  18. Unigram tagger with Backoff
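This slide showed code; a reconstruction follows. Whether nltk_lite’s tagger constructors accept a backoff argument varies by release, so the fallback is wired by hand here: any token the unigram tagger returns as None gets the default ‘nn’ tag.

    # Unigram tagger with a hand-wired backoff to the default 'nn' tag.
    from nltk_lite import tag
    from nltk_lite.corpora import brown

    unigram = tag.Unigram()
    unigram.train(brown.tagged('a'))

    def tag_with_backoff(tokens):
        for (word, t) in unigram.tag(tokens):
            # unigram taggers return None for unseen words
            yield (word, t if t is not None else 'nn')

    print(list(tag_with_backoff('the armadillo raced home'.split())))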

  19. What’s wrong with unigram?
  • The most frequent tag isn’t always right!
  • Need to take the context into account:
    • Which sense of “to” is being used?
    • Which sense of “like” is being used?

  20. N-gram tagger
  • Uses the preceding N-1 predicted tags
  • Also uses the unigram estimate for the current word

  21. N-gram taggers in nltk_lite
  • Constructs a frequency distribution describing how often each word is tagged with each tag in different contexts.
    • The context consists of the word to be tagged and the tags of the n-1 previous words.
  • After training, tags each word by assigning it the tag with the maximum frequency given its context.
  • Assigns the tag “None” to any word that appears in a context the tagger has never seen.
  • Tuning parameters:
    • “cutoff” is the minimal number of times that a context must have been seen in training in order to be incorporated into the statistics
    • The default cutoff is 1
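A sketch of these knobs in use. The keyword name “cutoff” comes from the slide, but passing it to the constructor like this is an assumption about the exact signature.

    # Bigram tagger with a raised cutoff, so a (previous-tag, word)
    # context needs more training evidence before it is used.
    from nltk_lite import tag
    from nltk_lite.corpora import brown

    bigram = tag.Bigram(cutoff=2)
    bigram.train(brown.tagged('a'))

    # Words in contexts the tagger never saw come back tagged None.
    print(list(bigram.tag('the race for outer space'.split())))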

  22. Bigram Tagging
  • For tagging, in addition to considering the token’s type, the context also considers the tags of the n preceding tokens:
    • What is the most likely tag for word n, given word n and tag n-1?
  • The tagger picks the tag which is most likely for that context.
  Modified from Diane Litman’s version of Steve Bird’s notes

  23. Reading the Bigram table
  [Table from the original slide: each row pairs the current word and the previously seen tag with the predicted POS.]

  24. Combining Taggers using Backoff
  • Use more accurate algorithms when we can; back off to wider coverage when needed:
    • Try tagging the token with the 1st-order (bigram) tagger.
    • If the 1st-order tagger is unable to find a tag for the token, try finding a tag with the 0th-order (unigram) tagger.
    • If the 0th-order tagger is also unable to find a tag, use the default tagger.
  • Important point: bigram and trigram taggers use the previous tag context to assign new tags. If they see a tag of “None” in the previous context, they will output “None” too.
  Modified from Diane Litman’s version of Steve Bird’s notes
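A sketch of this cascade. Since constructor-level backoff support varies across nltk_lite releases, the chain is wired by hand: each stage only fills in the tokens the previous stage left as None.

    # Backoff cascade sketch: bigram -> unigram -> default 'nn'.
    from nltk_lite import tag
    from nltk_lite.corpora import brown

    training = list(brown.tagged('a'))
    bigram, unigram = tag.Bigram(), tag.Unigram()
    bigram.train(training)
    unigram.train(training)

    def backoff_tag(tokens):
        tagged = list(bigram.tag(tokens))
        for i, (word, t) in enumerate(tagged):
            if t is None:                          # bigram had no data
                t = list(unigram.tag([word]))[0][1]
            if t is None:                          # unigram had none either
                t = 'nn'
            tagged[i] = (word, t)
        return tagged

    print(backoff_tag('the armadillo raced home'.split()))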

  25. Demonstrating the n-gram taggers
  • Trained on brown.tagged(‘a’), tested on brown.tagged(‘b’)
  • Backs off to a default of ‘nn’
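A reconstruction of the demo set-up (the output tables on the next slide are lost from the transcript). The tag.accuracy interface is assumed from the function list on slide 3; the ‘nn’ backoff would be wired as in the earlier sketches.

    # Train unigram/bigram/trigram taggers on section 'a' of Brown and
    # compare their accuracy on section 'b'.
    from nltk_lite import tag
    from nltk_lite.corpora import brown

    training = list(brown.tagged('a'))
    for tagger in (tag.Unigram(), tag.Bigram(), tag.Trigram()):
        tagger.train(training)
        print(tagger, tag.accuracy(tagger, brown.tagged('b')))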

  26. Demonstrating the n-gram taggers

  27. Combining Taggers
  • The bigram backoff tagger did worse than the unigram! Why?
  • Why does it get better again with trigrams?
  • How can we improve these scores?

  28. Rule-Based Tagger
  • The linguistic complaint:
    • Where is the linguistic knowledge of a tagger?
    • It’s just a massive table of numbers.
    • Aren’t there any linguistic insights that could emerge from the data?
  • We could instead use handcrafted sets of rules to tag input sentences; for example, if a word follows a determiner, tag it as a noun.
  Modified from Diane Litman’s version of Steve Bird’s notes

  29. The Brill tagger
  • An example of transformation-based learning
  • Basic idea: do a quick job first (using frequency), then revise it using contextual rules.
    • Painting metaphor from the readings
  • Very popular (freely available, works fairly well)
  • A supervised method: requires a tagged corpus
  Slide modified from Massimo Poesio’s

  30. Brill Tagging: In more detail
  • Start with simple (less accurate) rules… then learn better ones from the tagged corpus:
    • Tag each word initially with its most likely POS
    • Examine the set of transformations to see which most improves the tagging decisions compared to the tagged corpus
    • Re-tag the corpus using the best transformation
    • Repeat until, e.g., performance no longer improves
  • Result: a tagging procedure (an ordered list of transformations) which can be applied to new, untagged text
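A toy sketch of this learning loop in plain Python, not the actual Brill implementation: the corpus is flattened to one tag sequence, and transformations are simple “retag A to B after previous tag C” triples, chosen greedily by how many errors they fix against the gold tags.

    # Toy transformation-based learning loop.
    def errors(guess, gold):
        return sum(g != t for (g, t) in zip(guess, gold))

    def apply_rule(tags, rule):
        (old, new, prev) = rule
        out = list(tags)
        for i in range(1, len(out)):
            if out[i] == old and out[i - 1] == prev:
                out[i] = new
        return out

    def learn(initial_tags, gold_tags, candidate_rules, max_rules=10):
        tags, learned = list(initial_tags), []
        while len(learned) < max_rules:
            best = min(candidate_rules,
                       key=lambda r: errors(apply_rule(tags, r), gold_tags))
            if errors(apply_rule(tags, best), gold_tags) >= errors(tags, gold_tags):
                break                      # no rule improves performance
            tags = apply_rule(tags, best)  # re-tag with the best rule
            learned.append(best)
        return learned                     # ordered list of transformations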

  31. An example
  • Examples:
    • They are expected to race tomorrow.
    • The race for outer space.
  • Tagging algorithm:
    • Tag all uses of “race” as NN (the most likely tag in the Brown corpus):
      • They are expected to race/NN tomorrow
      • the race/NN for outer space
    • Use a transformation rule to replace the tag NN with VB for all uses of “race” preceded by the tag TO:
      • They are expected to race/VB tomorrow
      • the race/NN for outer space
  Slide modified from Massimo Poesio’s
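The single transformation from this slide, applied with the toy apply_rule helper from the previous sketch (the initial tags are illustrative Brown-style tags):

    # NN -> VB when the previous tag is TO, as in "to race/NN tomorrow".
    sent = [('They', 'PPSS'), ('are', 'BER'), ('expected', 'VBN'),
            ('to', 'TO'), ('race', 'NN'), ('tomorrow', 'NN')]
    tags = [t for (w, t) in sent]
    fixed = apply_rule(tags, ('NN', 'VB', 'TO'))
    print(list(zip([w for (w, t) in sent], fixed)))
    # 'race' becomes VB; 'tomorrow' (not preceded by TO) stays NN.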

  32. Example Rule Transformations

  33. Sample Final Rules

  34. Error Analysis
  • To improve your algorithm, examine where it fails on the cross-validation set.
  • It’s often useful to characterize in detail which examples it fails on and which it succeeds on.
  • Make fixes, then re-train on the training set, again using cross-validation.

  35. Error Analysis

  36. Error Analysis

  37. Assignment + Next Time
  • I’ve posted an assignment, due in a week.
  • Work in pairs, but only if you genuinely work together.
    • In your writeup, make clear who did what, and what you did together.
  • Next week: shallow parsing
