
CPSC 503 Computational Linguistics


Presentation Transcript


  1. CPSC 503 Computational Linguistics Lecture 9 Giuseppe Carenini CPSC503 Winter 2008

  2. Knowledge-Formalisms Map State Machines (and prob. versions) (Finite State Automata, Finite State Transducers, Markov Models) Morphology Logical formalisms (First-Order Logics) Syntax Rule systems (and prob. versions) (e.g., (Prob.) Context-Free Grammars) Semantics Pragmatics Discourse and Dialogue AI planners CPSC503 Winter 2008

  3. Today 6/10 • Finish: The Earley Algorithm • Partial Parsing: Chunking • Dependency Grammars / Parsing • Treebank CPSC503 Winter 2008

  4. Earley Parsing Procedure • So sweep through the table from 0 to n in order, applying one of three operators to each state: • predictor: add top-down predictions to the chart • scanner: read input and add corresponding state to chart • completer: move dot to right when new constituent found • Results (new states) added to current or next set of states in chart • No backtracking and no states removed CPSC503 Winter 2008
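
To make the bookkeeping behind these three operators concrete, a chart state can be represented as a dotted rule plus its span and backpointers. Below is a minimal Python sketch in the usual Jurafsky-and-Martin style; the class and field names are my own, not code from the lecture.

```python
from dataclasses import dataclass, field

@dataclass
class State:
    """One Earley chart entry: a dotted rule with its span and backpointers."""
    lhs: str                  # left-hand side, e.g. "VP"
    rhs: tuple                # right-hand side symbols, e.g. ("Verb", "NP")
    dot: int                  # how many rhs symbols have been recognized so far
    start: int                # input position where this rule began
    end: int                  # input position the dot has reached
    backpointers: list = field(default_factory=list)

    def next_symbol(self):
        """Symbol just to the right of the dot, or None if the rule is complete."""
        return self.rhs[self.dot] if self.dot < len(self.rhs) else None

    def is_complete(self):
        return self.dot == len(self.rhs)
```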

  5. Predictor • Intuition: new states represent top-down expectations • Applied when non-part-of-speech non-terminals are to the right of a dot S --> • VP [0,0] • Adds new states to end of current chart • One new state for each expansion of the non-terminal in the grammar VP --> • Verb [0,0] VP --> • Verb NP [0,0] CPSC503 Winter 2008

  6. Scanner (part of speech) • New states for predicted part of speech. • Applicable when a part of speech is to the right of a dot VP --> • Verb NP [0,0] (0 “Book…” 1) • Looks at current word in input • If it matches, adds state(s) to next chart Verb --> book • [0,1] CPSC503 Winter 2008

  7. Completer • Intuition: we’ve found a constituent, so tell everyone waiting for this • Applied when dot has reached right end of rule NP --> Det Nom • [1,3] • Find all states w/dot at 1 and expecting an NP VP --> Verb • NP [0,1] • Adds new (completed) state(s) to current chart VP --> Verb NP • [0,3] CPSC503 Winter 2008

  8. So far only a recognizer… • To generate all parses: • When old states waiting for the just-completed constituent are updated => add a pointer from each “updated” state to the “completed” state • Then simply read off all the backpointers from every complete S in the last column of the table. Chart[0]: … S5 S -> • VP [0,0] []; S6 VP -> • Verb [0,0] []; S7 VP -> • Verb NP [0,0] [] … Chart[1]: S8 Verb -> book • [0,1] []; S9 VP -> Verb • [0,1] [S8]; S10 S -> VP • [0,1] [S9]; S11 VP -> Verb • NP [0,1] [??] … CPSC503 Winter 2008
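
Putting the predictor, scanner, and completer together with the backpointer idea from this slide, a minimal sweep could be sketched as follows. It reuses the State class sketched earlier; the grammar and lexicon encodings (plain dicts) and all function names are my own assumptions, not the lecture's or the textbook's code.

```python
def earley(words, grammar, pos, start_symbol="S"):
    """grammar: dict mapping a non-terminal to a list of right-hand-side tuples.
    pos: dict mapping a word to the set of its possible part-of-speech tags."""
    n = len(words)
    chart = [[] for _ in range(n + 1)]

    def enqueue(state, position):
        if state not in chart[position]:
            chart[position].append(state)

    def predictor(state):                      # top-down expectations
        for rhs in grammar.get(state.next_symbol(), []):
            enqueue(State(state.next_symbol(), rhs, 0, state.end, state.end), state.end)

    def scanner(state):                        # read the input word
        j = state.end
        if j < n and state.next_symbol() in pos.get(words[j], set()):
            enqueue(State(state.next_symbol(), (words[j],), 1, j, j + 1), j + 1)

    def completer(state):                      # tell everyone waiting for this constituent
        for waiting in chart[state.start]:
            if waiting.next_symbol() == state.lhs:
                enqueue(State(waiting.lhs, waiting.rhs, waiting.dot + 1,
                              waiting.start, state.end,
                              waiting.backpointers + [state]), state.end)

    enqueue(State("gamma", (start_symbol,), 0, 0, 0), 0)   # dummy start state
    for i in range(n + 1):                     # sweep left to right, no backtracking
        for state in chart[i]:                 # chart[i] may grow while we iterate
            if state.is_complete():
                completer(state)
            elif state.next_symbol() in grammar:
                predictor(state)
            else:
                scanner(state)
    return chart                               # complete S states in chart[n] carry the parses
```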

  9. Error Handling • What happens when we look at the contents of the last table column and don't find a complete S --> … • [0,n] state? • Is it a total loss? No... • The chart contains every constituent and combination of constituents possible for the input given the grammar • Also useful for partial parsing or shallow parsing, used in information extraction CPSC503 Winter 2008

  10. Dynamic Programming Approaches • Earley • Top-down, no filtering, no restriction on grammar form • CKY • Bottom-up, no filtering, grammars restricted to Chomsky Normal Form (CNF) (i.e., ε-free and each production is either A -> BC or A -> a) CPSC503 Winter 2008
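
For contrast with Earley, a bottom-up CKY recognizer over a CNF grammar is only a few lines. The dictionary encodings of the lexical and binary rules below are my own convention, chosen to keep the sketch short.

```python
from collections import defaultdict

def cky_recognize(words, lexical, binary, start="S"):
    """lexical: word -> set of A with rule A -> word.
    binary: (B, C) -> set of A with rule A -> B C."""
    n = len(words)
    table = defaultdict(set)                      # (i, j) -> non-terminals spanning words[i:j]
    for i, w in enumerate(words):
        table[(i, i + 1)] = set(lexical.get(w, set()))
    for span in range(2, n + 1):                  # fill longer spans from shorter ones
        for i in range(n - span + 1):
            j = i + span
            for k in range(i + 1, j):             # split point
                for B in table[(i, k)]:
                    for C in table[(k, j)]:
                        table[(i, j)] |= binary.get((B, C), set())
    return start in table[(0, n)]
```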

  11. Today 6/10 • Finish: The Earley Algorithm • Partial Parsing: Chunking • Dependency Grammars / Parsing • Treebank CPSC503 Winter 2008

  12. Chunking • Classify only basic non-recursive phrases (NP, VP, AP, PP) • Find non-overlapping chunks • Assign labels to chunks • Chunk: typically includes the headword and pre-head material [NP The HD box] that [NP you] [VP ordered] [PP from] [NP Shaw] [VP never arrived] CPSC503 Winter 2008

  13. Approaches to Chunking (1): Finite-State Rule-Based • Set of hand-crafted rules (no recursion!) e.g., NP -> (Det) Noun* Noun • Implemented as FSTs (combined by union, determinized, minimized) • F-measure 85-92 • To build tree-like structures several FSTs can be combined [Abney ’96] CPSC503 Winter 2008

  14. Approaches to Chunking (1): Finite-State Rule-Based • … several FSTs can be combined CPSC503 Winter 2008
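
As a rough illustration of the hand-crafted rule from the previous slide, NP -> (Det) Noun* Noun can be approximated by a regular expression over POS tags. A plain Python regex stands in for the actual FST machinery here, and the tag names and helper function are invented for the example.

```python
import re

def np_chunks(tagged):
    """tagged: list of (word, pos) pairs; returns (start, end) token spans of NPs."""
    tags = "".join(f"<{pos}>" for _, pos in tagged)
    pattern = re.compile(r"(?:<Det>)?(?:<Noun>)*<Noun>")   # (Det) Noun* Noun, no recursion
    chunks = []
    for m in pattern.finditer(tags):                       # leftmost-longest, non-overlapping
        start = tags[:m.start()].count("<")                # token index of first matched tag
        length = m.group(0).count("<")
        chunks.append((start, start + length))
    return chunks

# np_chunks([("the", "Det"), ("HD", "Noun"), ("box", "Noun")])  ->  [(0, 3)]
```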

  15. Approaches to Chunking (2): Machine Learning • A case of sequential classification • IOB tagging: (I) internal, (O) outside, (B) beginning • Internal and Beginning tags for each chunk type => size of tagset is (2n + 1), where n is the number of chunk types • Find an annotated corpus • Select a feature set • Select and train a classifier CPSC503 Winter 2008
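
For concreteness, here is a small sketch of how bracketed chunks (as in the earlier example sentence) map onto IOB tags; the function name and span format are my own.

```python
def to_iob(tokens, chunks):
    """chunks: list of (start, end, label) spans over the token list (end is exclusive)."""
    tags = ["O"] * len(tokens)                 # outside by default
    for start, end, label in chunks:
        tags[start] = f"B-{label}"             # beginning of a chunk
        for i in range(start + 1, end):
            tags[i] = f"I-{label}"             # internal to the chunk
    return list(zip(tokens, tags))

# [NP The HD box] that [NP you] [VP ordered] ...
print(to_iob(["The", "HD", "box", "that", "you", "ordered"],
             [(0, 3, "NP"), (4, 5, "NP"), (5, 6, "VP")]))
```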

  16. Context window approach • Typical features: • Current / previous / following words • Current / previous / following POS • Previous chunks CPSC503 Winter 2008
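
A sketch of what a feature extractor for the token at position i might look like, using the context-window features listed above; the feature names and sentence-boundary markers are invented for illustration.

```python
def window_features(words, pos_tags, chunk_tags, i):
    """chunk_tags holds the IOB tags already assigned to earlier tokens."""
    n = len(words)
    return {
        "word":    words[i],
        "word-1":  words[i - 1]      if i > 0     else "<s>",
        "word+1":  words[i + 1]      if i + 1 < n else "</s>",
        "pos":     pos_tags[i],
        "pos-1":   pos_tags[i - 1]   if i > 0     else "<s>",
        "pos+1":   pos_tags[i + 1]   if i + 1 < n else "</s>",
        "chunk-1": chunk_tags[i - 1] if i > 0     else "<s>",   # previous chunk decision
    }
```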

  17. Context window approach • Specific choice of machine learning approach does not seem to matter • F-measure 92-94 range • Common causes of errors: • POS tagger inaccuracies • Inconsistencies in training corpus • Inaccuracies in identifying heads • Ambiguities involving conjunctions (e.g., “late arrivals and cancellations/departure are common in winter” ) CPSC503 Winter 2008

  18. Today 6/10 • Finish: The Earley Algorithm • Partial Parsing: Chunking • Dependency Grammars / Parsing • Treebank CPSC503 Winter 2008

  19. Dependency Grammars • Syntactic structure: binary relations between words • Links: grammatical function or very general semantic relation • Abstract away from word-order variations (simpler grammars) • Useful features in many NLP applications (for classification, summarization and NLG) CPSC503 Winter 2008

  20. Dependency Grammars (more verbose) • In CFG-style phrase-structure grammars the main focus is on constituents. • But it turns out you can get a lot done with just binary relations among the words in an utterance. • In a dependency grammar framework, a parse is a tree where • the nodes stand for the words in an utterance • The links between the words represent dependency relations between pairs of words. • Relations may be typed (labeled), or not. CPSC503 Winter 2008

  21. Dependency Relations Show grammar primer CPSC503 Winter 2008

  22. Dependency Parse (ex 1) They hid the letter on the shelf CPSC503 Winter 2008
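
One simple way to store such a parse is as a list of (head, dependent, relation) triples. The relation labels below, and the attachment of "on" to "hid", are one plausible analysis chosen for illustration, not necessarily the analysis shown in the lecture figure.

```python
# "They hid the letter on the shelf"
parse = [
    ("hid",    "They",   "subject"),
    ("hid",    "letter", "object"),
    ("letter", "the",    "determiner"),
    ("hid",    "on",     "modifier"),   # PP attachment is ambiguous; could also attach to "letter"
    ("on",     "shelf",  "object"),
    ("shelf",  "the",    "determiner"),
]
```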

  23. Dependency Parse (ex 2) CPSC503 Winter 2008

  24. Dependency Parsing (see MINIPAR / Stanford demos) • Dependency approach vs. CFG parsing. • Deals well with free word order languages where the constituent structure is quite fluid • Parsing is much faster than CFG-based parsers • Dependency structure often captures all the syntactic relations actually needed by later applications CPSC503 Winter 2008

  25. Dependency Parsing • There are two modern approaches to dependency parsing (supervised learning from Treebank data) • Optimization-based approaches that search a space of trees for the tree that best matches some criteria • Transition-based approaches that define and learn a transition system (state machine) for mapping a sentence to its dependency graph CPSC503 Winter 2008
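
To make the transition-based idea concrete, here is a minimal arc-standard-style sketch. In a real parser the action at each step is predicted by a classifier trained on treebank data; here that choice is abstracted as an oracle function supplied by the caller, and all names are my own.

```python
def transition_parse(words, oracle):
    """Returns a list of (head, dependent) arcs; oracle(stack, buffer) picks the next action."""
    stack, buffer, arcs = ["ROOT"], list(words), []
    while buffer or len(stack) > 1:
        action = oracle(stack, buffer)
        if action == "SHIFT" and buffer:
            stack.append(buffer.pop(0))            # move the next word onto the stack
        elif action == "LEFT-ARC" and len(stack) > 1:
            dependent = stack.pop(-2)              # second-from-top depends on top
            arcs.append((stack[-1], dependent))
        elif action == "RIGHT-ARC" and len(stack) > 1:
            dependent = stack.pop()                # top depends on second-from-top
            arcs.append((stack[-1], dependent))
        else:
            break                                  # no valid action: stop
    return arcs
```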

  26. Today 6/10 • Finish: The Earley Algorithm • Partial Parsing: Chunking • Dependency Grammars / Parsing • Treebank CPSC503 Winter 2008

  27. Treebanks • DEF. corpora in which each sentence has been paired with a parse tree • These are generally created by: • parsing the collection with a parser • having human annotators revise each parse • Requires detailed annotation guidelines • POS tagset • Grammar • Instructions for how to deal with particular grammatical constructions CPSC503 Winter 2008

  28. Penn Treebank • Penn TreeBank is a widely used treebank. • Most well known is the Wall Street Journal section of the Penn TreeBank. • 1 M words from the 1987-1989 Wall Street Journal. CPSC503 Winter 2008

  29. Treebank Grammars • Treebanks implicitly define a grammar. • Simply take the local rules that make up the sub-trees in all the trees in the collection • Given a decent-sized corpus, you’ll have a grammar with decent coverage. CPSC503 Winter 2008
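
A sketch of this "read off the local rules" idea. Nested tuples stand in for the treebank's native bracketing format, and the function name is my own; rule probabilities could then be estimated from the resulting counts.

```python
from collections import Counter

def extract_rules(tree, rules=None):
    """Collect one rule per local subtree of a ("Label", child, child, ...) tuple tree."""
    if rules is None:
        rules = Counter()
    label, *children = tree
    if children and isinstance(children[0], tuple):          # internal node: A -> B C ...
        rules[(label, tuple(child[0] for child in children))] += 1
        for child in children:
            extract_rules(child, rules)
    elif children:                                           # preterminal: A -> word
        rules[(label, tuple(children))] += 1
    return rules

tree = ("S", ("NP", ("Pro", "They")),
             ("VP", ("Verb", "hid"), ("NP", ("Det", "the"), ("Noun", "letter"))))
print(extract_rules(tree))
```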

  30. Treebank Grammars • Such grammars tend to be very flat because they tend to avoid recursion • To ease the annotators’ burden • For example, the Penn Treebank has 4500 different rules for VPs! Among them... CPSC503 Winter 2008

  31. Heads in Trees • Finding heads in treebank trees is a task that arises frequently in many applications. • Particularly important in statistical parsing • We can visualize this task by annotating the nodes of a parse tree with the heads of each corresponding node. CPSC503 Winter 2008

  32. Lexically Decorated Tree CPSC503 Winter 2008

  33. Head Finding • The standard way to do head finding is to use a simple set of tree traversal rules specific to each non-terminal in the grammar. CPSC503 Winter 2008
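
A toy sketch of such per-non-terminal traversal rules, loosely in the spirit of head-percolation tables used in statistical parsing. The rule table below is illustrative only, not the actual Penn Treebank head rules.

```python
HEAD_RULES = {
    "NP": ("right-to-left", ["NN", "NNS", "NNP", "NP"]),   # prefer the rightmost noun
    "VP": ("left-to-right", ["VB", "VBD", "VBZ", "VP"]),   # prefer the leftmost verb
    "PP": ("left-to-right", ["IN", "TO"]),                  # the preposition heads a PP
}

def find_head(label, child_labels):
    """Return the index of the head child among child_labels."""
    direction, priorities = HEAD_RULES.get(label, ("right-to-left", []))
    order = list(range(len(child_labels)))
    if direction == "right-to-left":
        order.reverse()
    for wanted in priorities:                # scan by priority, then by direction
        for i in order:
            if child_labels[i] == wanted:
                return i
    return order[0]                          # fallback: first child in scan order

# find_head("NP", ["DT", "JJ", "NN"])  ->  2 (the noun)
```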

  34. Noun Phrases CPSC503 Winter 2008

  35. Treebank Uses • Searching a treebank, e.g., with TGrep2 patterns: NP < PP (an NP immediately dominating a PP) or NP << PP (an NP dominating a PP at any depth) • Treebanks (and head finding) are particularly critical to the development of statistical parsers • Chapter 14 • Also valuable to corpus linguistics • Investigating the empirical details of various constructions in a given language CPSC503 Winter 2008

  36. Next time: read Chpt 14 State Machines (and prob. versions) (Finite State Automata, Finite State Transducers, Markov Models) Morphology Logical formalisms (First-Order Logics) Syntax Rule systems (and prob. versions) (e.g., (Prob.) Context-Free Grammars) Semantics Pragmatics Discourse and Dialogue AI planners CPSC503 Winter 2008
