I256: Applied Natural Language Processing

Marti Hearst

Sept 20, 2006


Tagging methods

  • Hand-coded

  • Statistical taggers

  • Brill (transformation-based) tagger


nltk_lite tag package

Types of taggers:

tag.Default()

tag.Regexp()

tag.Affix()

tag.Unigram()

tag.Bigram()

tag.Trigram()

Actions:

tag.tag()

tag.tagsents()

tag.untag()

tag.train()

tag.accuracy()

tag.tag2tuple()

tag.string2words()

tag.string2tags()



Hand-coded Tagger

  • Make up some regexp rules that make use of morphology
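A minimal sketch of such a hand-coded tagger in plain Python. The rule set and tag names here are invented for illustration; this is not the nltk_lite regexp tagger itself, just the same idea:

```python
import re

# Hand-coded rules based on morphology: suffix patterns mapped to tags.
# Order matters -- the first matching rule wins.
RULES = [
    (r'.*ing$', 'VBG'),   # gerunds: running, racing
    (r'.*ed$',  'VBD'),   # simple past: walked, raced
    (r'.*ly$',  'RB'),    # adverbs: quickly
    (r'.*s$',   'NNS'),   # plural nouns: dogs
    (r'^\d+$',  'CD'),    # cardinal numbers: 42
]

def regexp_tag(word, default='NN'):
    """Return the tag of the first rule that matches, else a default."""
    for pattern, tag in RULES:
        if re.match(pattern, word):
            return tag
    return default
```

Such a tagger is cheap to build but brittle: every rule is a guess about morphology, and words that match no rule fall through to the default.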



Training and Testing of Learning Algorithms

  • Algorithms that “learn” from data see a set of examples and try to generalize from them.

  • Training set:

    • Examples trained on

  • Test set:

    • Also called held-out data and unseen data

    • Use this for evaluating your algorithm

    • Must be separate from the training set

      • Otherwise, you cheated!

  • “Gold” standard

    • A test set that a community has agreed on and uses as a common benchmark.
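The train/test separation the slide insists on can be sketched in a few lines (function name is illustrative):

```python
def train_test_split(examples, test_fraction=0.1):
    """Hold out the final test_fraction of the examples as unseen test data.
    The two sets must never overlap -- otherwise, you cheated!"""
    cut = int(len(examples) * (1 - test_fraction))
    return examples[:cut], examples[cut:]
```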


Cross-Validation of Learning Algorithms

  • Cross-validation set

    • Part of the training set.

  • Used for tuning parameters of the algorithm without “polluting” (tuning to) the test data.

    • You can train on x%, and then cross-validate on the remaining (100-x)%

      • E.g., train on 90% of the training data, cross-validate (test) on the remaining 10%

      • Repeat several times with different splits

    • This allows you to choose the best settings to then use on the real test set.

      • You should only evaluate on the test set at the very end, after you’ve gotten your algorithm as good as possible on the cross-validation set.
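The repeated-splits idea above can be sketched as a k-fold generator (a plain-Python sketch; names are invented):

```python
def cross_validation_folds(training_data, k=10):
    """Split the training data into k folds; each fold serves once as
    the cross-validation set while the rest is used for training."""
    fold_size = len(training_data) // k
    for i in range(k):
        lo, hi = i * fold_size, (i + 1) * fold_size
        held_out = training_data[lo:hi]          # e.g. 10% for validation
        rest = training_data[:lo] + training_data[hi:]  # e.g. 90% for training
        yield rest, held_out
```

Tuning against these folds keeps the real test set untouched until the very end.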


Strong Baselines

  • When designing NLP algorithms, you need to evaluate them by comparing to others.

  • Baseline Algorithm:

    • An algorithm that is relatively simple but can be expected to do well

    • Should get the best score possible by doing the somewhat obvious thing.


A Tagging Baseline

  • Find the most likely tag for the most frequent words

    • Frequent words are ambiguous

    • You’re likely to see frequent words in any collection

      • Will always see “to” but might not see “armadillo”

  • How to do this?

    • First find the most likely words and their tags in the training data

    • Train a tagger that looks up these results in a table

      • Note that the tag.Lookup() tagger type is not defined in this version of nltk_lite, so we’ll write our own.



Subclassing a Python Class

  • The Lookup module isn’t in our version of nltk_lite

  • Let’s make a subclass of the tag.Unigram class that has this functionality.
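Since neither tag.Lookup nor tag.Unigram is available here, this sketch defines its own minimal base class and subclasses it; all names are invented, but the structure mirrors the subclassing idea:

```python
from collections import Counter, defaultdict

class DefaultTagger:
    """Base class: tags every word with a single default tag."""
    def __init__(self, default_tag='NN'):
        self.default_tag = default_tag

    def tag_word(self, word):
        return self.default_tag

class LookupTagger(DefaultTagger):
    """Tag the most frequent words with their most likely tag;
    fall back to the inherited default for everything else."""
    def __init__(self, tagged_words, n_most_frequent=100, default_tag='NN'):
        super().__init__(default_tag)
        word_counts = Counter(w for w, t in tagged_words)
        tag_counts = defaultdict(Counter)
        for w, t in tagged_words:
            tag_counts[w][t] += 1
        # Table: most frequent words -> their single most likely tag
        self.table = {
            w: tag_counts[w].most_common(1)[0][0]
            for w, _ in word_counts.most_common(n_most_frequent)
        }

    def tag_word(self, word):
        return self.table.get(word, self.default_tag)
```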





N-Grams

  • The N stands for how many terms are used

    • Unigram: 1 term (0th order)

    • Bigram: 2 terms (1st order)

    • Trigrams: 3 terms (2nd order)

      • Usually don’t go beyond this

  • You can use different kinds of terms, e.g.:

    • Character based n-grams

    • Word-based n-grams

    • POS-based n-grams

  • Ordering

    • Often adjacent, but not required

  • We use n-grams to help determine the context in which some linguistic phenomenon happens.

    • E.g., look at the words before and after the period to see if it is the end of a sentence or not.
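Extracting adjacent n-grams works the same way whatever the terms are (characters, words, or POS tags); a minimal sketch:

```python
def ngrams(terms, n):
    """Return all adjacent n-grams over a sequence of terms.
    Works on any sequence: a string gives character n-grams,
    a list of words gives word n-grams, a list of tags gives POS n-grams."""
    return [tuple(terms[i:i + n]) for i in range(len(terms) - n + 1)]
```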


Tagging with lexical frequencies

  • Secretariat/NNP is/VBZ expected/VBN to/TO race/VB tomorrow/NN

  • People/NNS continue/VBP to/TO inquire/VB the/DT reason/NN for/IN the/DT race/NN for/IN outer/JJ space/NN

  • Problem: assign a tag to race given its lexical frequency

  • Solution: we choose the tag that has the greater

    • P(race|VB)

    • P(race|NN)

Modified from Massimo Poesio's lecture
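The comparison can be sketched with frequency counts; the numbers below are invented for illustration, not corpus statistics:

```python
from collections import Counter

# Hypothetical counts: how often "race" appears with each tag,
# and how often each tag appears overall.
counts = Counter({('race', 'NN'): 35, ('race', 'VB'): 15})
tag_totals = {'NN': 500, 'VB': 300}

def p_word_given_tag(word, tag):
    """Estimate P(word | tag) by relative frequency."""
    return counts[(word, tag)] / tag_totals[tag]

# Choose the tag with the greater P(race | tag).
best = max(['NN', 'VB'], key=lambda t: p_word_given_tag('race', t))
```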


Unigram Tagger

  • Train on a set of sentences

  • Keep track of how many times each word is seen with each tag.

  • After training, associate with each word its most likely tag.

    • Problem: many words never seen in the training data.

    • Solution: have a default tag to “backoff” to.
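A plain-Python sketch of this training procedure (not the nltk_lite tag.Unigram class; the names are invented):

```python
from collections import Counter, defaultdict

class UnigramTagger:
    """After training, each word maps to its most frequent tag;
    words never seen in training back off to a default tag."""
    def __init__(self, default_tag='NN'):
        self.default_tag = default_tag
        self.counts = defaultdict(Counter)

    def train(self, tagged_sentences):
        # Count how many times each word is seen with each tag.
        for sentence in tagged_sentences:
            for word, tag in sentence:
                self.counts[word][tag] += 1

    def tag(self, words):
        return [(w, self.counts[w].most_common(1)[0][0]
                    if w in self.counts else self.default_tag)
                for w in words]
```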



What’s wrong with unigram?

  • Most frequent tag isn’t always right!

  • Need to take the context into account

    • Which sense of “to” is being used?

    • Which sense of “like” is being used?


N-gram tagger

  • Uses the preceding N-1 predicted tags

  • Also uses the unigram estimate for the current word


N-gram taggers in nltk_lite

  • Constructs a frequency distribution recording how often each word receives each tag in different contexts.

    • The context considered consists of the word to be tagged and the n-1 previous words' tags.

  • After training, tag words by assigning each word the tag with the maximum frequency given its context.

    • Assigns the tag “None” if it sees a word in a context for which it has no training data.

  • Tuning parameters

    • “cutoff” is the minimal number of times that the context must have been seen in training in order to be incorporated into the statistics

    • Default cutoff is 1
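A sketch of a bigram-context tagger with the cutoff parameter, in plain Python (names invented; not the nltk_lite implementation):

```python
from collections import Counter, defaultdict

class BigramTagger:
    """Context = (previous tag, current word). Returns None for
    contexts seen fewer than `cutoff` times in training."""
    def __init__(self, cutoff=1):
        self.cutoff = cutoff
        self.contexts = defaultdict(Counter)

    def train(self, tagged_sentences):
        for sentence in tagged_sentences:
            prev_tag = None
            for word, tag in sentence:
                self.contexts[(prev_tag, word)][tag] += 1
                prev_tag = tag

    def tag_word(self, prev_tag, word):
        dist = self.contexts.get((prev_tag, word))
        if dist is None or sum(dist.values()) < self.cutoff:
            return None   # no (reliable) data for this context
        return dist.most_common(1)[0][0]
```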


Bigram Tagging

  • For tagging, in addition to considering the token’s type, the context also considers the tags of the preceding n-1 tokens

    • What is the most likely tag for word n, given word n-1 and tag n-1?

  • The tagger picks the tag which is most likely for that context.

Modified from Diane Litman's version of Steve Bird's notes


Reading the Bigram table

[Figure: a bigram lookup table, annotated with the current word, the previously seen tag, and the predicted POS.]


Combining Taggers using Backoff

  • Use more accurate algorithms when we can; back off to wider coverage when needed.

    • Try tagging the token with the 1st order tagger.

    • If the 1st order tagger is unable to find a tag for the token, try finding a tag with the 0th order tagger.

    • If the 0th order tagger is also unable to find a tag, use the default tagger to find a tag.

  • Important point:

    • Bigram and trigram taggers use the previous tag context to assign new tags. If they see a tag of “None” in the previous context, they will output “None” too.

Modified from Diane Litman's version of Steve Bird's notes
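The backoff chain above can be sketched as a simple loop; the stand-in taggers below are invented for illustration (a real chain would use trained bigram and unigram taggers):

```python
def backoff_tag(word, prev_tag, taggers, default_tag='NN'):
    """Try each tagger in order, most specific first; if every
    tagger returns None, fall back to the default tag."""
    for tagger in taggers:
        tag = tagger(word, prev_tag)
        if tag is not None:
            return tag
    return default_tag

# Stand-in 1st-order and 0th-order taggers for illustration:
bigram_stub = lambda w, p: 'VB' if (p, w) == ('TO', 'race') else None
unigram_stub = lambda w, p: {'race': 'NN', 'to': 'TO'}.get(w)
```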


Demonstrating the n-gram taggers

  • Trained on brown.tagged(‘a’), tested on brown.tagged(‘b’)

  • Backs off to a default of ‘nn’



Combining Taggers

  • The bigram backoff tagger did worse than the unigram! Why?

  • Why does it get better again with trigrams?

  • How can we improve these scores?


Rule-Based Tagger

  • The Linguistic Complaint

    • Where is the linguistic knowledge of a tagger?

    • Just a massive table of numbers

    • Aren’t there any linguistic insights that could emerge from the data?

    • Could instead use handcrafted sets of rules to tag input sentences; for example, if a word follows a determiner, tag it as a noun.

Modified from Diane Litman's version of Steve Bird's notes


The Brill tagger

  • An example of Transformation-Based Learning

    • Basic idea: do a quick job first (using frequency), then revise it using contextual rules.

    • Painting metaphor from the readings

  • Very popular (freely available, works fairly well)

  • A supervised method: requires a tagged corpus

Slide modified from Massimo Poesio's


Brill Tagging: In more detail

  • Start with simple (less accurate) rules…learn better ones from tagged corpus

    • Tag each word initially with most likely POS

    • Examine a set of transformations to see which improves tagging decisions compared to the tagged corpus

    • Re-tag corpus using best transformation

    • Repeat until, e.g., performance doesn’t improve

    • Result: tagging procedure (ordered list of transformations) which can be applied to new, untagged text
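The loop above can be sketched in plain Python. This is a simplified caricature of transformation-based learning, not Brill's actual implementation; rules here are triples (from_tag, to_tag, required_previous_tag):

```python
def initial_tags(words, lexicon, default='NN'):
    """Step 1: tag each word with its most likely tag from a lexicon."""
    return [lexicon.get(w, default) for w in words]

def apply_rule(tags, rule):
    """Change from_tag to to_tag wherever the previous tag is prev."""
    from_tag, to_tag, prev = rule
    new = list(tags)
    for i in range(1, len(new)):
        if new[i] == from_tag and new[i - 1] == prev:
            new[i] = to_tag
    return new

def score(tags, gold):
    """Number of tags agreeing with the gold-standard corpus."""
    return sum(t == g for t, g in zip(tags, gold))

def learn_best_rule(tags, gold, candidates):
    """Step 2: pick the transformation that most improves the tagging."""
    return max(candidates, key=lambda r: score(apply_rule(tags, r), gold))
```

A full learner would re-tag with the winning rule and repeat until performance stops improving, producing an ordered list of transformations.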


An example

  • Examples:

    • They are expected to race tomorrow.

    • The race for outer space.

  • Tagging algorithm:

    • Tag all uses of “race” as NN (most likely tag in the Brown corpus)

      • They are expected to race/NN tomorrow

      • the race/NN for outer space

    • Use a transformation rule to replace the tag NN with VB for all uses of “race” preceded by the tag TO:

      • They are expected to race/VB tomorrow

      • the race/NN for outer space

Slide modified from Massimo Poesio's




Error Analysis

  • To improve your algorithm, examine where it fails on the cross-validation set

    • It’s often useful to characterize in detail which examples it fails on and which succeed.

  • Make fixes, and then re-train on the training set, again using cross-validation




Assignment + Next Time

  • I’ve posted an assignment, due in a week

  • Work in pairs, but only if you work together

    • In your writeup, make clear who did what, and what you did together

  • Next week: shallow parsing

