
Combining Lexical and Syntactic Features for Supervised Word Sense Disambiguation

Saif Mohammad Ted Pedersen

University of Toronto University of Minnesota

http://www.cs.toronto.edu/~smm http://www.d.umn.edu/~tpederse

Word Sense Disambiguation

Harry cast a bewitching spell

  • Humans immediately understand spell to mean a charm or incantation.
    • Reading out letter by letter, or a period of time?
      • Words with multiple senses – polysemy, ambiguity!
    • We utilize background knowledge and context.
  • Machines lack background knowledge.
    • Automatically identifying the intended sense of a word in written text, based on its context, remains a hard problem.
    • Best accuracies in recent international evaluation exercises are around 65%.
Why Do We Need WSD?
  • Information Retrieval
    • Query: cricket bat
      • Documents pertaining to the insect or the mammal are irrelevant.
  • Machine Translation
    • Consider English to Hindi translation.
      • head to sar (upper part of the body) or adhyaksh (leader)?
  • Machine-human interaction
    • Instructions to machines.
      • Interactive home system: turn on the lights
      • Domestic Android: get the door

Applications are widespread and will affect our way of life.

Terminology

Harry cast a bewitching spell

  • Target word – the word whose intended sense is to be identified.
    • spell
  • Context – the sentence housing the target word and possibly one or two sentences around it.
    • Harry cast a bewitching spell
  • Instance – target word along with its context.

WSD is a classification problem wherein the occurrence of the

target word is assigned to one of its many possible senses.

Corpus-Based Supervised Machine Learning

A computer program is said to learn from experience … if its performance at tasks … improves with experience.

- Mitchell

  • Task : Word Sense Disambiguation of given test instances.
  • Performance : Ratio of instances correctly disambiguated to the total test instances – accuracy.
  • Experience : Manually created instances such that target words are marked with intended sense – training instances.

Harry cast a bewitching spell / incantation
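The performance measure above can be made concrete with a toy scorer; the instances and sense labels below are illustrative, not from the paper:

```python
def accuracy(gold_senses, predicted_senses):
    """Ratio of correctly disambiguated instances to total test instances."""
    correct = sum(g == p for g, p in zip(gold_senses, predicted_senses))
    return correct / len(gold_senses)

# Two test instances of "spell"; the second is disambiguated wrongly.
print(accuracy(["incantation", "period"], ["incantation", "letters"]))  # 0.5
```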

Decision Trees
  • A kind of classifier.
    • Assigns a class by asking a series of questions.
    • Questions correspond to features of the instance.
    • Question asked depends on answer to previous question.
  • Inverted tree structure.
    • Interconnected nodes.
      • The topmost node is called the root.
    • Each node corresponds to a question / feature.
    • Each possible value of feature has corresponding branch.
    • Leaves terminate every path from root.
      • Each leaf is associated with a class.
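The question-and-branch procedure above can be sketched in a few lines; the tree below is a hand-built toy (its features and senses are illustrative), not a learned model:

```python
# A node is either a leaf (a sense label, a string) or a pair
# (feature, branches) mapping each feature value to a subtree.
# Features are binary: 1 = present in the instance, 0 = absent.
toy_tree = ("feature1", {
    1: ("feature2", {1: "sense3", 0: "sense1"}),
    0: ("feature4", {1: "sense1", 0: "sense3"}),
})

def classify(node, instance):
    """Ask one feature question per node until a leaf assigns a sense."""
    while not isinstance(node, str):      # internal node: (feature, branches)
        feature, branches = node
        node = branches[instance.get(feature, 0)]
    return node                           # leaf: the assigned sense

print(classify(toy_tree, {"feature1": 1, "feature2": 0}))  # sense1
```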
WSD Tree

Feature 1?
├─ 1 → Feature 2?
│        ├─ 1 → SENSE 3
│        └─ 0 → Feature 4?
│                 ├─ 1 → SENSE 4
│                 └─ 0 → SENSE 1
└─ 0 → Feature 4?
         ├─ 1 → SENSE 1
         └─ 0 → Feature 2?
                  ├─ 1 → Feature 3?
                  │        ├─ 1 → SENSE 3
                  │        └─ 0 → SENSE 2
                  └─ 0 → SENSE 3

Choice of Learning Algorithm
  • Why use decision trees for WSD?
    • They have drawbacks – training-data fragmentation.
    • What about other learning algorithms, such as neural networks?
  • Context is a rich source of discrete features.
  • The learned model is likely to be meaningful.
    • It may provide insight into the interaction of features.

Pedersen [2001]*: Choosing the right features is of greater significance than the learning algorithm itself.

* T. Pedersen, "A Decision Tree of Bigrams is an Accurate Predictor of Word Sense", in the Proceedings of the Second Meeting of the North American Chapter of the Association for Computational Linguistics (NAACL-01), June 2-7, 2001, Pittsburgh, PA.

Lexical Features
  • Surface form
    • A word as we observe it in text.
    • case (n)
      • 1. object of investigation 2. frame or covering 3. a weird person
      • Surface forms: case, cases, casing
      • An occurrence of casing suggests sense 2.
  • Unigrams and Bigrams
    • One word and two word sequences in text.

The interest rate is low

    • Unigrams: the, interest, rate, is, low
    • Bigrams: the interest, interest rate, rate is, is low
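The unigram and bigram features from the example sentence can be extracted with a small helper (a sketch, not the authors' code):

```python
def unigrams_and_bigrams(sentence):
    """Return one-word and adjacent two-word sequences from a sentence."""
    tokens = sentence.lower().split()
    bigrams = [f"{a} {b}" for a, b in zip(tokens, tokens[1:])]
    return tokens, bigrams

unigrams, bigrams = unigrams_and_bigrams("The interest rate is low")
print(unigrams)  # ['the', 'interest', 'rate', 'is', 'low']
print(bigrams)   # ['the interest', 'interest rate', 'rate is', 'is low']
```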
Part of Speech Tagging
  • Brill Tagger – most widely used tool.
    • Accuracy around 95%.
    • Source code available.
    • Easily understood rules.
  • Pre-tagging is the act of manually assigning tags to selected words in a text prior to tagging.
    • The Brill tagger does not guarantee pre-tagging.
    • A patch to the tagger is provided – BrillPatch*.

* "Guaranteed Pre-Tagging for the Brill Tagger", Mohammad, S. and Pedersen, T., in the Proceedings of the Fourth International Conference on Intelligent Text Processing and Computational Linguistics, February 2003, Mexico.

Part of Speech Features
  • A word used in different senses is likely to have different sets of part-of-speech (POS) tags around it.

Why did jack turn/VB against/IN his/PRP$ team/NN

Why did jack turn/VB left/NN at/IN the/DT crossing

  • Features used
    • Individual word POS: P-2, P-1, P0, P1, P2
      • P1 = JJ implies that the word to the right of the target word is an adjective.
    • A combination of the above.
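The individual-word POS features P-2 through P2 can be read off a tagged sentence as below; the `pos_window` helper is our illustration, not code from the paper:

```python
def pos_window(tagged, target_index, offsets=(-2, -1, 0, 1, 2)):
    """POS tags at positions P-2..P2 relative to the target word.

    `tagged` is a list of (word, tag) pairs; positions that fall
    outside the sentence get None.
    """
    feats = {}
    for off in offsets:
        i = target_index + off
        feats[f"P{off}"] = tagged[i][1] if 0 <= i < len(tagged) else None
    return feats

tagged = [("why", "WRB"), ("did", "VBD"), ("jack", "NNP"),
          ("turn", "VB"), ("against", "IN"), ("his", "PRP$"), ("team", "NN")]
print(pos_window(tagged, 3))
# {'P-2': 'VBD', 'P-1': 'NNP', 'P0': 'VB', 'P1': 'IN', 'P2': 'PRP$'}
```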
Parse Features
  • Collins Parser* used to parse the data.
    • Source code available.
    • Uses part of speech tagged data as input.
  • Head word of a phrase.
    • the hard work, the hard surface
    • Phrase itself : noun phrase, verb phrase and so on.
  • Parent : Head word of the parent phrase.
    • fasten the line, cross the line
    • Parent phrase.

* http://www.ai.mit.edu/people/mcollins

Sample Parse Tree

SENTENCE
├─ NOUN PHRASE
│   └─ Harry/NNP
└─ VERB PHRASE
    ├─ cast/VBD
    └─ NOUN PHRASE
        ├─ a/DT
        ├─ bewitching/JJ
        └─ spell/NN
Sense-Tagged Data
  • Senseval-2 data
    • 4,328 test instances and 8,611 training instances, ranging over 73 different nouns, verbs and adjectives.
  • Senseval-1 data
    • 8,512 test instances and 13,276 training instances, ranging over 35 nouns, verbs and adjectives.
  • line, hard, interest, serve data
    • 4,149, 4,337, 4,378 and 2,476 sense-tagged instances with line, hard, serve and interest as the head words, respectively.

Around 50,000 sense-tagged instances in all!

Thoughts…
  • Both lexical and syntactic features perform comparably.
  • But do they get the same instances right?
    • How redundant are the individual feature sets?
  • Are there instances correctly disambiguated by one feature set and not by the other?
    • How complementary are the individual feature sets?

Is the effort to combine lexical and syntactic features justified?

Measures
  • Baseline Ensemble: accuracy of a hypothetical ensemble which predicts the sense correctly only if both individual feature sets do so.
    • Quantifies redundancy amongst feature sets.
  • Optimal Ensemble: accuracy of a hypothetical ensemble which predicts the sense correctly if either of the individual feature sets does so.
    • Difference with individual accuracies quantifies complementarity.

We used a simple ensemble which sums the probabilities assigned to each sense by the individual feature sets to decide the intended sense.
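The two hypothetical ensembles, and a probability-summing ensemble of the kind described, can be sketched as follows; the predictions below are toy values, not results from the paper:

```python
def ensemble_measures(gold, preds_a, preds_b):
    """Baseline: both feature sets correct; Optimal: either one correct."""
    n = len(gold)
    baseline = sum(a == g and b == g for g, a, b in zip(gold, preds_a, preds_b)) / n
    optimal  = sum(a == g or  b == g for g, a, b in zip(gold, preds_a, preds_b)) / n
    return baseline, optimal

def sum_ensemble(probs_a, probs_b):
    """Sum per-sense probabilities from the two feature sets; pick the max."""
    senses = set(probs_a) | set(probs_b)
    return max(senses, key=lambda s: probs_a.get(s, 0) + probs_b.get(s, 0))

gold      = ["s1", "s2", "s1", "s3"]
lexical   = ["s1", "s2", "s2", "s3"]  # 3/4 correct
syntactic = ["s1", "s1", "s1", "s3"]  # 3/4 correct
print(ensemble_measures(gold, lexical, syntactic))  # (0.5, 1.0)
print(sum_ensemble({"s1": 0.6, "s2": 0.4}, {"s1": 0.3, "s2": 0.7}))  # s2
```

Here the baseline (0.5) is well below either individual accuracy (0.75) and the optimal ensemble reaches 1.0, which is exactly the complementarity pattern the slide asks about.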

Conclusions
  • Significant amount of complementarity across lexical and syntactic features.
  • Combination of the two justified.
  • We show that simple lexical and part-of-speech features can achieve state-of-the-art results.
  • How best to capitalize on the complementarity still an open issue.
Conclusions (continued)
  • The part of speech of the word immediately to the right of the target word was found most useful.
    • POS of words immediately to the right of the target word is best for verbs and adjectives.
    • Nouns are helped by tags on either side.
    • (P0, P1) found to be most potent in case of small training data per instance (Senseval data).
    • A larger POS context (P-2, P-1, P0, P1, P2) is shown to be beneficial when training data per instance is large (line, hard, serve and interest data).
  • The head word of the phrase is particularly useful for adjectives.
    • Nouns are helped by both head and parent.
Code, Data & Resources
  • SyntaLex: a system for WSD using lexical and syntactic features; Weka’s decision tree learning algorithm is utilized.
  • posSenseval: part-of-speech tags any data in the Senseval-2 data format; the Brill Tagger is used.
  • parseSenseval: parses data in the format output by the Brill Tagger; output is in the Senseval-2 data format with part of speech and parse information as XML tags; uses the Collins Parser.
  • Packages to convert the line, hard, serve and interest data to the Senseval-1 and Senseval-2 data formats.
  • BrillPatch: a patch to the Brill Tagger to employ guaranteed pre-tagging.

http://www.d.umn.edu/~tpederse/code.html

http://www.d.umn.edu/~tpederse/data.html


Senseval-3 (March 1 to April 15, 2004): around 8,000 training and 4,000 test instances. Results expected shortly.

Thank You