
Statistical NLP Winter 2009



  1. Statistical NLP, Winter 2009. Lecture 11: Parsing II. Roger Levy. Thanks to Jason Eisner & Dan Klein for slides.

  2. PCFGs as language models
  • What does the goal weight (neg. log-prob) represent?
  • It is the probability of the most probable tree whose yield is the sentence.
  • Suppose we want to do language modeling: then we are interested in the probability of all trees.
  • “Put the file in the folder” vs. “Put the file and the folder”

  3. Could just add up the parse probabilities (the chart cells hold values such as 2^-22 and 2^-27)… oops, back to finding exponentially many parses.
  Grammar (weights are neg. log-probs):
  1 S → NP VP
  6 S → Vst NP
  2 S → S PP
  1 VP → V NP
  2 VP → VP PP
  1 NP → Det N
  2 NP → NP PP
  3 NP → NP NP
  0 PP → P NP

  4. Any more efficient way?
  [Chart figure: cells hold individual parse probabilities such as 2^-8, 2^-22, 2^-27; grammar as on slide 3]

  5. Add as we go … (the “inside algorithm”)
  [Chart figure: cells now hold sums, e.g. 2^-8 + 2^-13 and 2^-22 + 2^-27; grammar as on slide 3]

  6. Add as we go … (the “inside algorithm”)
  [Chart figure: higher cells accumulate further terms, e.g. 2^-22 + 2^-27 + 2^-27; grammar as on slide 3]
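To make the max-vs-sum contrast concrete, here is a minimal chart-parsing sketch in Python: the same loops compute either the Viterbi score or the inside score, depending on whether chart cells are combined with max or with +. The toy grammar, lexicon, and probabilities are invented for illustration, not taken from the slides.

```python
from collections import defaultdict

# Toy binary grammar: RULES[(Y, Z)] = list of (X, prob) for rules X -> Y Z
RULES = {("NP", "VP"): [("S", 0.9)],
         ("V", "NP"): [("VP", 0.8)]}
# Toy lexicon: word -> list of (tag, prob)
LEXICON = {"salt": [("NP", 0.4)],
           "flies": [("NP", 0.3), ("V", 0.2)],
           "scratch": [("V", 0.5), ("NP", 0.1)]}

def chart_parse(words, use_sum):
    n = len(words)
    chart = defaultdict(float)            # (i, j, X) -> probability
    for i, w in enumerate(words):         # seed with lexical entries
        for tag, p in LEXICON.get(w, []):
            chart[(i, i + 1, tag)] = p
    for span in range(2, n + 1):
        for i in range(n - span + 1):
            j = i + span
            for k in range(i + 1, j):     # split point
                for (y, z), parents in RULES.items():
                    py, pz = chart[(i, k, y)], chart[(k, j, z)]
                    if py and pz:
                        for x, pr in parents:
                            s = pr * py * pz
                            if use_sum:   # inside algorithm: add as we go
                                chart[(i, j, x)] += s
                            else:         # Viterbi: keep only the best parse
                                chart[(i, j, x)] = max(chart[(i, j, x)], s)
    return chart[(0, n, "S")]
```

On a sentence with more than one parse, the two settings diverge: max recovers the goal weight discussed on slide 2, while + gives the total probability over all trees that language modeling needs.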

  7. Charts and lattices
  • You can equivalently represent a parse chart as a lattice constructed over some initial arcs.
  • This will also set the stage for Earley parsing later.
  [Figure: lattice over the words “salt flies scratch”, with arcs labeled N, V, NP, VP, and S built from rules such as S → NP VP and VP → V NP]

  8. (Speech) Lattices
  • There was nothing magical about words spanning exactly one position.
  • When working with speech, we generally don’t know how many words there are, or where they break.
  • We can represent the possibilities as a lattice and parse these just as easily.
  [Figure: speech lattice with competing word arcs, e.g. “I” / “eyes” / “Ivan”, “saw” / “’ve” / “of”, “a” / “an” / “awe”, “van”]
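One way to make “parse these just as easily” concrete: represent the lattice as scored word arcs between integer positions, and seed the chart from arcs of any length instead of single-word spans. This encoding and all words/scores below are illustrative assumptions mirroring the slide’s example.

```python
# A word lattice is a set of scored arcs between integer positions;
# unlike a string, an arc may span any (start, end) pair.
LATTICE = [
    (0, 1, "I", 0.6), (0, 1, "eyes", 0.2), (0, 2, "Ivan", 0.2),
    (1, 2, "saw", 0.5), (1, 2, "'ve", 0.3), (1, 2, "of", 0.2),
    (2, 3, "a", 0.5), (2, 3, "an", 0.3), (2, 3, "awe", 0.2),
    (3, 4, "van", 1.0),
]

def seed_chart(lattice, lexicon, chart):
    """Seed a CKY-style chart from lattice arcs instead of string positions.
    The binary-combination phase of CKY then needs no change at all."""
    for i, j, word, acoustic in lattice:
        for tag, p in lexicon.get(word, []):
            chart[(i, j, tag)] = max(chart.get((i, j, tag), 0.0), acoustic * p)
```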

  9. Speech parsing mini-example
  • Grammar (negative log-probs):
  1 S → NP VP
  1 VP → V NP
  2 VP → V PP
  2 NP → DT NN
  2 NP → DT NNS
  3 NP → NNP
  2 NP → PRP
  0 PP → IN NP
  1 PRP → I
  9 NNP → Ivan
  6 NNP → eyes
  6 V → saw
  4 V → ’ve
  7 V → awe
  2 DT → a
  2 DT → an
  3 IN → of
  9 NN → awe
  6 NN → saw
  5 NN → van
  • [We’ll do it on the board if there’s time]

  10. Better parsing
  • We’ve now studied how to do correct parsing: this is the problem of inference given a model.
  • The other half of the question is how to estimate a good model.
  • That’s what we’ll spend the rest of the day on.

  11. Problems with PCFGs?
  • If we do no annotation, these trees differ only in one rule:
  • VP → VP PP
  • NP → NP PP
  • The parse will go one way or the other, regardless of the words.
  • We’ll look at two ways to address this:
  • Sensitivity to specific words through lexicalization
  • Sensitivity to structural configuration with unlexicalized methods

  12. Problems with PCFGs?
  • [insert performance statistics for a vanilla parser]

  13. Problems with PCFGs
  • What’s different between basic PCFG scores here?
  • What (lexical) correlations need to be scored?

  14. Problems with PCFGs
  • Another example of PCFG indifference
  • The left structure is far more common. How do we model this?
  • It is really structural: “chicken with potatoes with gravy”
  • Lexical parsers model this effect, though not by virtue of being lexical.

  15. PCFGs and Independence
  • Symbols in a PCFG define conditional independence assumptions:
  • At any node, the material inside that node is independent of the material outside that node, given the label of that node.
  • Any information that statistically connects behavior inside and outside a node must flow through that node.
  [Figure: tree with an S node expanding as S → NP VP, with NP → DT NN below]

  16. Solution(s) to problems with PCFGs
  • Two common solutions seen for PCFG badness:
  • (1) Lexicalization: put head-word information into categories, and use it to condition rewrite probabilities
  • (2) State-splitting: distinguish sub-instances of more general categories (e.g., NP into NP-under-S vs. NP-under-VP)
  • You can probably see that (1) is a special case of (2)
  • More generally, the solution involves information propagation through PCFG nodes

  17. Lexicalized Trees
  • Add “headwords” to each phrasal node
  • Syntactic vs. semantic heads
  • Headship not in (most) treebanks
  • Usually use head rules, tried in order (see the sketch below), e.g.:
  • NP: take the leftmost NP; else the rightmost N*; else the rightmost JJ; else the right child
  • VP: take the leftmost VB*; else the leftmost VP; else the left child
  • How is this information propagation?
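Head rules of this kind translate directly into a small per-category ordered search. A minimal sketch, using the NP/VP rules from the slide; the encoding and function names are illustrative assumptions.

```python
# Each entry: list of (direction, predicate) tried in order;
# fall back to the left or right child if nothing matches.
HEAD_RULES = {
    "NP": [("left",  lambda c: c == "NP"),
           ("right", lambda c: c.startswith("N")),   # rightmost N*
           ("right", lambda c: c == "JJ")],          # rightmost JJ
    "VP": [("left",  lambda c: c.startswith("VB")),  # leftmost VB*
           ("left",  lambda c: c == "VP")],
}
FALLBACK = {"NP": "right", "VP": "left"}

def find_head(parent, children):
    """Return the index of the head child, given the parent category
    and the list of child categories."""
    for direction, match in HEAD_RULES.get(parent, []):
        order = (range(len(children)) if direction == "left"
                 else reversed(range(len(children))))
        for i in order:
            if match(children[i]):
                return i
    return 0 if FALLBACK.get(parent, "left") == "left" else len(children) - 1

# e.g. find_head("NP", ["DT", "JJ", "NN"]) -> 2 (the rightmost N*)
```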

  18. Lexicalized PCFGs?
  • Problem: we now have to estimate probabilities like P(VP[saw] → VBD[saw] NP[her] NP[today] PP[on])
  • We are never going to get these atomically off of a treebank
  • Solution: break up the derivation into smaller steps

  19. Lexical Derivation Steps
  • Simple derivation of a local tree [simplified Charniak 97]
  [Figure: VP[saw] rewrites as VBD[saw] NP[her] NP[today] PP[on], generated one child at a time through intermediate states such as (VP→VBD…NP)[saw] and (VP→VBD…PP)[saw]]
  • It’s markovization again!
  • Still have to smooth with mono- and non-lexical backoffs

  20. Lexical Derivation Steps
  • Another derivation of a local tree [Collins 99]:
  • Choose a head tag and word
  • Choose a complement bag
  • Generate children (incl. adjuncts)
  • Recursively derive children
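Schematically, decompositions of this kind replace the unestimable atomic rule probability with a product of small conditionals. A hedged sketch of the shape of such a factorization; the component-model API is invented, and it simplifies the real Collins model (e.g., the complement bag and distance features are folded into the dependent model's context).

```python
import math  # noqa: F401  (log-probs are assumed precomputed by the models)

def local_tree_logprob(parent, head_tag, head_word, left_deps, right_deps, models):
    """Schematic factorization of one local tree, Collins-style.
    left_deps / right_deps: (category, head_word) pairs, inside-out.
    'models' maps names to objects with a logprob(outcome, context) method."""
    # Step 1: choose the head tag and word given the parent category.
    lp = models["head"].logprob((head_tag, head_word), context=parent)
    # Steps 2-3: generate dependents on each side, ending with STOP.
    for side, deps in (("L", left_deps), ("R", right_deps)):
        for i, dep in enumerate(deps + [("STOP", None)]):
            lp += models["dep"].logprob(
                dep, context=(parent, head_tag, head_word, side, i))
    return lp
```

Each factor is small enough to estimate from a treebank with smoothing, which is exactly the point of breaking up the derivation.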

  21. Naïve Lexicalized Parsing
  • Can, in principle, use CKY on lexicalized PCFGs
  • O(Rn^3) time and O(Sn^2) memory
  • But R = rV^2 and S = sV
  • Result is completely impractical (why?)
  • Memory: 10K rules * 50K words * (40 words)^2 * 8 bytes ≈ 6 TB
  • Can modify CKY to exploit lexical sparsity
  • Lexicalized symbols are a base grammar symbol and a pointer into the input sentence, not any arbitrary word
  • Result: O(rn^5) time, O(sn^3) memory
  • Memory: 10K rules * (40 words)^3 * 8 bytes ≈ 5 GB
  • Now, why do we get these space & time complexities?

  22. Another view of CKY
  • The fundamental operation is edge-combining: Y + Z → X (e.g., VP(-NP) + NP → VP).

  bestScore(X, i, j)
    if (j == i + 1)
      return tagScore(X, s[i])
    else
      return max over k and X → Y Z of
        score(X → Y Z) * bestScore(Y, i, k) * bestScore(Z, k, j)

  • Two string-position indices are required to characterize each edge in memory.
  • Three string-position indices are required to characterize each edge combination in time.
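The same recursion in runnable, memoized form; the grammar encoding (each symbol mapped to its binary expansions) and function names are assumptions for this sketch.

```python
import functools

def make_best_score(grammar, tag_score, sent):
    """grammar: dict mapping X -> list of (Y, Z, rule_prob) for X -> Y Z;
    tag_score(X, word) -> lexical probability; sent: list of words."""
    @functools.lru_cache(maxsize=None)
    def best_score(x, i, j):
        if j == i + 1:                         # width-1 edge: a tag over one word
            return tag_score(x, sent[i])
        best = 0.0
        for (y, z, p) in grammar.get(x, []):
            for k in range(i + 1, j):          # the third index: the split point
                best = max(best, p * best_score(y, i, k) * best_score(z, k, j))
        return best
    return best_score
```

Each stored edge is characterized by two positions (i, j); each combination touches three (i, k, j), which is where the cubic factor in the running time comes from.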

  23. Lexicalized CKY
  • Lexicalized CKY has the same fundamental operation, just more edges: Y[h] + Z[h’] → X[h] (e.g., VP(-NP)[saw] + NP[her] → VP[saw]).

  bestScore(X, i, j, h)
    if (j == i + 1)
      return tagScore(X, s[i])
    else
      return the max of
        max over k and X → Y Z of score(X[h] → Y[h] Z[h’]) * bestScore(Y, i, k, h) * bestScore(Z, k, j, h’)
        max over k and X → Y Z of score(X[h] → Y[h’] Z[h]) * bestScore(Y, i, k, h’) * bestScore(Z, k, j, h)

  • Three string positions for each edge in space; five string positions for each edge combination in time (i, k, j plus the two head positions h, h’).

  24. Dependency Parsing
  • Lexicalized parsers can be seen as producing dependency trees
  • Each local binary tree corresponds to an attachment in the dependency graph
  [Figure: dependency tree for “the lawyer questioned the witness”: “questioned” dominates “lawyer” and “witness”, each of which dominates a “the”]

  25. Dependency Parsing
  • Pure dependency parsing is only cubic [Eisner 99]
  • Some work on non-projective dependencies
  • Common in, e.g., Czech parsing
  • Can do with MST algorithms [McDonald and Pereira 05]
  • Leads to O(n^3) or even O(n^2) [McDonald et al., 2005]
  [Figure: lexicalized edge combination over (i, h, k, h’, j) vs. the cheaper dependency-only combination over spans headed at h and h’]

  26. Pruning with Beams
  • The Collins parser prunes with per-cell beams [Collins 99]
  • Essentially, run the O(n^5) CKY
  • Remember only a few hypotheses for each span <i,j>
  • If we keep K hypotheses at each span, then we do at most O(nK^2) work per span (why?)
  • Keeps things more or less cubic (see the sketch below)
  • Side note/hack: certain spans are forbidden entirely on the basis of punctuation (crucial for speed)
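A minimal sketch of per-cell beaming: cap each span's cell at K entries before it participates in later combinations. The data layout is an assumption for illustration, not the Collins parser's actual code.

```python
import heapq

K = 10  # beam width per span

def prune_cell(cell, k=K):
    """cell: dict mapping (symbol, head) -> score for one span <i, j>.
    Keep only the k highest-scoring entries."""
    if len(cell) <= k:
        return cell
    return dict(heapq.nlargest(k, cell.items(), key=lambda kv: kv[1]))

# Combining two beamed cells costs at most K * K hypothesis pairs, and there
# are up to n split points: O(nK^2) work per span, O(n^3 K^2) work in total.
```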

  27. Pruning with a PCFG
  • The Charniak parser prunes using a two-pass approach [Charniak 97+]
  • First, parse with the base grammar
  • For each X:[i,j], calculate P(X | i, j, s)
  • This isn’t trivial, and there are clever speed-ups
  • Second, do the full O(n^5) CKY
  • Skip any X:[i,j] which had a low (say, < 0.0001) posterior
  • Avoids almost all work in the second phase!
  • Currently the fastest lexicalized parser
  • Charniak et al. 06: can use more passes
  • Petrov et al. 07: can use many more passes

  28. Typical Experimental Setup
  • Corpus: Penn Treebank, WSJ
  • Evaluation by precision, recall, and F1 (their harmonic mean)
  [Figure: example gold vs. guessed bracketings over an 8-word sentence; Precision = Recall = 7/8]
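Labeled bracket scoring of this kind reduces to multiset intersection over (label, start, end) triples. A small sketch with illustrative names:

```python
from collections import Counter

def bracket_prf1(gold, guess):
    """gold, guess: lists of (label, start, end) bracket triples."""
    g, h = Counter(gold), Counter(guess)
    matched = sum((g & h).values())          # multiset intersection
    if matched == 0:
        return 0.0, 0.0, 0.0
    precision = matched / sum(h.values())
    recall = matched / sum(g.values())
    return precision, recall, 2 * precision * recall / (precision + recall)
```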

  29. Results
  • Some results:
  • Collins 99 – 88.6 F1 (generative lexical)
  • Petrov et al. 06 – 90.7 F1 (generative unlexicalized)
  • However:
  • Bilexical counts rarely make a difference (why?)
  • Gildea 01 – removing bilexical counts costs < 0.5 F1
  • Bilexical vs. monolexical vs. smart smoothing

  30. Unlexicalized methods
  • So far we have looked at the use of lexicalized methods to fix PCFG independence assumptions
  • Lexicalization creates new complications of inference (computational complexity) and estimation (sparsity)
  • There are lots of improvements to be made without resorting to lexicalization

  31. PCFGs and Independence
  • Symbols in a PCFG define independence assumptions:
  • At any node, the material inside that node is independent of the material outside that node, given the label of that node.
  • Any information that statistically connects behavior inside and outside a node must flow through that node.
  [Figure: as on slide 15, a tree with S → NP VP and NP → DT NN]

  32. Non-Independence I
  • Independence assumptions are often too strong.
  • Example: the expansion of an NP is highly dependent on the parent of the NP (i.e., subjects vs. objects).
  • Also: the subject and object expansions are correlated!
  [Figure: expansion distributions for all NPs vs. NPs under S vs. NPs under VP]

  33. Non-Independence II
  • Who cares? Naive Bayes, HMMs, and many other models all make false assumptions!
  • For generation, the consequences would be obvious.
  • For parsing, does it impact accuracy?
  • Symptoms of overly strong assumptions:
  • Rewrites get used where they don’t belong.
  • Rewrites get used too often or too rarely.
  • (In the PTB, this construction is for possessives.)

  34. Breaking Up the Symbols
  • We can relax independence assumptions by encoding dependencies into the PCFG symbols:
  • Marking possessive NPs
  • Parent annotation [Johnson 98] (see the sketch below)
  • What are the most useful “features” to encode?
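Parent annotation in the Johnson 98 sense is a one-pass relabeling of the tree. A minimal sketch over (label, children) tuples; this tree encoding is an assumption, and preterminal tags are deliberately left unannotated (tag-parent marking is a separate split, cf. TAG-PA below).

```python
def parent_annotate(tree, parent="ROOT"):
    """tree: (label, children) where children is a list of subtrees,
    or a word string at preterminals. Returns labels like 'NP^S'."""
    label, children = tree
    if isinstance(children, str):      # preterminal: leave the tag alone
        return (label, children)
    new_label = f"{label}^{parent}"
    return (new_label, [parent_annotate(c, parent=label) for c in children])

# parent_annotate(("S", [("NP", [("PRP", "I")]), ("VP", [("VBD", "saw")])]))
# -> ("S^ROOT", [("NP^S", [("PRP", "I")]), ("VP^S", [("VBD", "saw")])])
```

After parsing with the annotated grammar, the `^parent` suffixes are stripped again before evaluation, as the next slide notes.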

  35. Annotations
  • Annotations split the grammar categories into sub-categories (in the original sense).
  • Conditioning on history vs. annotating:
  • P(NP^S → PRP) is a lot like P(NP → PRP | S), or equivalently P(PRP | NP, S).
  • P(NP-POS → NNP POS) isn’t history conditioning.
  • Feature/unification grammars vs. annotation:
  • Can think of a symbol like NP^NP-POS as NP [parent:NP, +POS]
  • After parsing with an annotated grammar, the annotations are then stripped for evaluation.

  36. Lexicalization
  • Lexical heads are important for certain classes of ambiguities (e.g., PP attachment).
  • Lexicalizing the grammar creates a much larger grammar (cf. next week):
  • Sophisticated smoothing needed
  • Smarter parsing algorithms
  • More data needed
  • How necessary is lexicalization?
  • Bilexical vs. monolexical selection
  • Closed vs. open class lexicalization

  37. Unlexicalized PCFGs
  • What is meant by an “unlexicalized” PCFG?
  • The grammar is not systematically specified down to the level of lexical items:
  • NP[stocks] is not allowed
  • NP^S-CC is fine
  • Closed vs. open class words (NP^S[the]):
  • Long tradition in linguistics of using function words as features or markers for selection
  • Contrary to the bilexical idea of semantic heads
  • Open-class selection is really a proxy for semantics
  • It’s kind of a gradual transition from unlexicalized to lexicalized (but heavily smoothed) grammars.

  38. Typical Experimental Setup
  • Corpus: Penn Treebank, WSJ
  • Accuracy – F1: harmonic mean of per-node labeled precision and recall
  • Here: also size – number of symbols in the grammar
  • Passive / complete symbols: NP, NP^S
  • Active / incomplete symbols: dotted rules such as NP → NP CC •

  39. Multiple Annotations
  • Each annotation is done in succession
  • Order does matter
  • Too much annotation and we’ll have sparsity issues

  40. Horizontal Markovization
  [Figure: the same rule binarized with order ∞ (intermediate symbols remember the entire right-hand side generated so far) vs. order 1 (they remember only the most recent sibling); see the sketch below]
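Horizontal markovization is usually implemented as a binarization whose intermediate symbols keep only an h-sized window of already-generated siblings, so that states from different rules merge. A minimal sketch; the `@X->...` symbol-naming scheme is an illustrative convention, not prescribed by the slides.

```python
def markovize(parent, children, h=1):
    """Binarize parent -> children left to right; intermediate symbols
    remember only the last h siblings already generated (horizontal
    order h). h=0 forgets everything; unbounded h keeps the whole rule."""
    if len(children) <= 2:
        return [(parent, tuple(children))]
    rules = []
    lhs = parent
    for i in range(len(children) - 2):
        window = children[max(0, i + 1 - h):i + 1]
        nxt = f"@{parent}->...{'_'.join(window)}"
        rules.append((lhs, (children[i], nxt)))
        lhs = nxt
    rules.append((lhs, (children[-2], children[-1])))
    return rules

# markovize("VP", ["VBD", "NP", "NP", "PP"], h=1) produces:
#   VP          -> VBD @VP->...VBD
#   @VP->...VBD -> NP  @VP->...NP
#   @VP->...NP  -> NP  PP
# With h=1, '@VP->...NP' states arising from different rules collapse together.
```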

  41. Vertical Markovization
  • Vertical Markov order: rewrites depend on the past k ancestor nodes (cf. parent annotation).
  [Figure: trees annotated at order 1 vs. order 2]

  42. Markovization
  • This leads to a somewhat more general view of generative probabilistic models over trees
  • Main goal: estimate P(tree)
  • A bit of an interlude: Tree-Insertion Grammars deal with this problem more directly.

  43. TIG: Insertion
  [Figure: a tree-insertion operation]

  44. Data-oriented parsing (Bod 1992)
  • A case of Tree-Insertion Grammars
  • Rewrite large (possibly lexicalized) subtrees in a single step
  • Derivational ambiguity: whether subtrees were generated atomically or compositionally
  • Finding the most probable parse is NP-complete, due to the unbounded number of “rules”

  45. Markovization, cont.
  • So the question is: how do we estimate these tree probabilities?
  • What type of tree-insertion grammar do we use?
  • Equivalently, what type of independence assumptions do we impose?
  • Traditional PCFGs are only one type of answer to this question

  46. Vertical and Horizontal
  • Examples:
  • Raw treebank: v=1, h=∞
  • Johnson 98: v=2, h=∞
  • Collins 99: v=2, h=2
  • Best F1: v=3, h=2

  47. Tag Splits
  • Problem: treebank tags are too coarse.
  • Example: sentential, PP, and other prepositions are all marked IN.
  • Partial solution: subdivide the IN tag.

  48. Other Tag Splits
  • UNARY-DT: mark demonstratives as DT^U (“the X” vs. “those”)
  • UNARY-RB: mark phrasal adverbs as RB^U (“quickly” vs. “very”)
  • TAG-PA: mark tags with non-canonical parents (“not” is an RB^VP)
  • SPLIT-AUX: mark auxiliary verbs with -AUX [cf. Charniak 97]
  • SPLIT-CC: separate “but” and “&” from other conjunctions
  • SPLIT-%: “%” gets its own tag

  49. Treebank Splits
  • The treebank comes with some annotations (e.g., -LOC, -SUBJ, etc.).
  • The whole set together hurt the baseline.
  • One in particular is very useful (NP-TMP) when pushed down to the head tag (why?).
  • Can mark gapped S nodes as well.

  50. Yield Splits
  • Problem: sometimes the behavior of a category depends on something inside its future yield.
  • Examples:
  • Possessive NPs
  • Finite vs. infinite VPs
  • Lexical heads!
  • Solution: annotate future elements into nodes.
  • Lexicalized grammars do this (in very careful ways – why?).
