
Partial Prebracketing to Improve Parser Performance



Presentation Transcript


  1. Partial Prebracketing to Improve Parser Performance John Judge NCLT Seminar Series 7th December 2005

  2. Overview • Background and Motivation • Prebracketing • NE/MWE Markup • Labelled Bracketing Constituent Markup • NE/MWE + LBCM Combined • Grammars compared • Conclusions

  3. Background and Motivation • Parse-annotated corpora are crucial for developing machine-learning and statistics-based parsing resources • Large treebanks are available for major languages • For other languages there is a lack of such resources for grammar induction

  4. Background and Motivation • Treebank construction is usually semi-automatic (Penn Treebank, NEGRA) • Raw text is parsed • Annotator corrects parser output • Propose a new method • Text is pre-processed to help parser • Pre-processed text is parsed • Annotator corrects parser output • Hopefully the output will be better quality and correction will be quicker and easier for the annotator

  5. Relevance to my work • Previous work on question analysis has identified a need for a suitable training corpus • Plan to be able to use this technique for developing a question treebank • Method is general so it can be applied to other areas/languages

  6. Prebracketing • Marking up the input text with information that will help the parser parse the sentence properly • Named Entities • Multi-Word Expressions • Constituents VP, PP • Prebracketing can be done automatically (NE, MWE) or manually (constituents) • Parser (LoPar, Schmid (2000)) will respect this markup when parsing
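How a parser can "respect" such markup is illustrated by a simple span check: in a chart parser, any candidate constituent whose span crosses a required bracket can be discarded. This is a minimal sketch of that idea under assumed 0-based, end-exclusive spans, not LoPar's actual mechanism:

```python
def respects_bracket(span, bracket):
    """Return True if a candidate constituent span (i, j) does not cross
    a required bracket (b0, b1).  Allowed: disjoint spans, spans nested
    inside the bracket, and spans containing the bracket whole."""
    i, j = span
    b0, b1 = bracket
    return j <= b0 or i >= b1 or (b0 <= i and j <= b1) or (i <= b0 and b1 <= j)

# With "(NE Robert Erwin)" over tokens 0-2, a constituent starting inside
# the NE and ending outside it crosses the bracket and is ruled out.
print(respects_bracket((0, 2), (0, 2)))  # the bracket itself: True
print(respects_bracket((1, 4), (0, 2)))  # crosses the NE: False
```

A parser that prunes crossing items in this way can only produce parses consistent with the prebracketing.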

  7. Prebracketing Robert Erwin, president of Biosource, called Plant Genetic’s approach ``Interesting’’ and ``novel,’’ and ``complementary rather than competitive.’’

  8. Prebracketing: Named Entities (NE Robert Erwin), president of (NE Biosource), called (NE Plant Genetic)’s approach ``Interesting’’ and ``novel,’’ and ``complementary rather than competitive.’’

  9. Prebracketing: Multi-Word Expressions Robert Erwin, president of Biosource, called Plant Genetic’s approach ``Interesting’’ and ``novel,’’ and ``complementary (MWE rather than) competitive.’’

  10. Prebracketing: Constituents Robert Erwin, president of Biosource, (VP called Plant Genetic’s approach ``Interesting’’ and ``novel,’’ and ``complementary rather than competitive).’’

  11. Named Entities • Names of people, places, companies, products, etc. • Similar to Proper Nouns • Internal structure isn’t really important • Can be treated as a single lexical unit

  12. Parsing NEs • Parser doesn’t normally output NEs • Retrain on an NE-annotated treebank • Add NE annotation to sections 2-21 of the Penn-II Treebank to use as training data • Add NE annotation to section 23 as the test set

  13. Transforming the Corpus • Use a named entity recogniser, LingPipe, to pick out the entities (http://alias-i.com/lingpipe/) • Use a (slightly hacked) NE transformation routine in the LFG Annotation Algorithm to transform the trees
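The transformation shown on the next two slides can be sketched as follows. The nested (label, children) tree format and the function names here are assumptions for illustration, not the actual routine inside the LFG Annotation Algorithm:

```python
def leaves(node):
    """Collect the leaf words under a node; a node is either a word
    (string) or a (label, children) pair."""
    if isinstance(node, str):
        return [node]
    _, children = node
    return [word for child in children for word in leaves(child)]

def collapse_entities(node, entities, label="NE"):
    """Replace every subtree whose yield is a recognised entity with a
    single flat (NE word ...) node, as in the before/after trees."""
    if isinstance(node, str):
        return node
    node_label, children = node
    words = leaves(node)
    if tuple(words) in entities:
        return (label, words)
    return (node_label, [collapse_entities(c, entities, label) for c in children])

# Entities as LingPipe might report them for this sentence fragment.
entities = {("Robert", "Erwin"), ("Biosource",)}
before = ("NP-SBJ", [
    ("NP", [("NNP", ["Robert"]), ("NNP", ["Erwin"])]),
    (",", [","]),
    ("NP", [("NP", [("NN", ["president"])]),
            ("PP", [("IN", ["of"]), ("NP", [("NNP", ["Biosource"])])])]),
    (",", [","]),
])
after = collapse_entities(before, entities)
print(after)
```

The NP dominating "Robert Erwin" and the NP dominating "Biosource" are each replaced by a flat NE node, reproducing the example tree transformation on the next slides.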

  14. Example Tree Before Transformation (NP-SBJ (NP (NNP Robert) (NNP Erwin)) (, ,) (NP (NP (NN president)) (PP (IN of) (NP (NNP Biosource)))) (, ,))

  15. Example Tree After Transformation (NP-SBJ (NE Robert Erwin) (, ,) (NP (NP (NN president)) (PP (IN of) (NE Biosource))) (, ,))

  16. Example Prebracketed Input • Parser input is generated by systematically stripping markup from the gold standard trees, leaving in only NE markup ((S (NP-SBJ (NE Robert Erwin)…(JJ competitive))))))(. .)('' ''))) (NE Robert Erwin), president of (NE Biosource), called (NE Plant Genetic)’s approach ``Interesting’’ and ``novel,’’ and ``complementary rather than competitive.’’
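The stripping step described above can be sketched as a small filter over the bracketed gold-standard string, keeping only brackets with a chosen label. The tokenisation and function name are assumptions, not the original tooling:

```python
import re

def strip_markup(tree_str, keep=("NE",)):
    """Remove all bracketing from a Penn-style tree string except
    brackets whose label is in `keep`."""
    tokens = re.findall(r"\(|\)|[^\s()]+", tree_str)
    out, stack = [], []   # stack records whether each open bracket was kept
    i = 0
    while i < len(tokens):
        tok = tokens[i]
        if tok == "(":
            label = tokens[i + 1]
            kept = label in keep
            stack.append(kept)
            if kept:
                out.append("(" + label)
            i += 2
        elif tok == ")":
            if stack.pop():
                out.append(")")
            i += 1
        else:
            out.append(tok)
            i += 1
    return " ".join(out)

gold = "(S (NP-SBJ (NE Robert Erwin)) (VP (VBD called) (NP (NE Biosource))))"
print(strip_markup(gold))
# → (NE Robert Erwin ) called (NE Biosource )
```

Passing `keep=("MWE",)` or `keep=("VP",)` instead yields the MWE and LBCM inputs used on the later slides.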

  17. Results for NE Parsing Section 23

  18. Multi-Word Expressions • Short expressions that are interpreted as a single unit, e.g. with respect to, vice versa, all of a sudden, according to, as well as … • Can correspond to a number of syntactic categories • Treat as a single lexical unit and ignore internal structure

  19. Parsing MWEs • Parser doesn’t normally output MWEs • As with NEs, Penn-II is transformed to contain MWEs • Parser is retrained on the MWE-annotated treebank • Tested on the MWE-annotated version of Section 23

  20. Transforming the corpus • Use a list of multi-word expressions from the Stanford University Multi-Word Expression Project (http://mwe.stanford.edu/) • Transform the corpus using (an even more hacked) NE transformation routine in the LFG Annotation Algorithm
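Matching a list of MWEs against the corpus can be sketched as greedy longest-match over the token sequence; this is a simplification for illustration, not necessarily how the transformation routine actually matches:

```python
def mark_mwes(tokens, mwe_list):
    """Greedily replace the longest matching MWE at each position with a
    single bracketed (MWE ...) token."""
    mwes = {tuple(m.split()) for m in mwe_list}
    longest = max(len(m) for m in mwes)
    out, i = [], 0
    while i < len(tokens):
        # Try the longest possible match first, down to two words.
        for n in range(min(longest, len(tokens) - i), 1, -1):
            if tuple(tokens[i:i + n]) in mwes:
                out.append("(MWE " + " ".join(tokens[i:i + n]) + ")")
                i += n
                break
        else:
            out.append(tokens[i])
            i += 1
    return out

sent = "complementary rather than competitive".split()
print(mark_mwes(sent, ["rather than", "as well as"]))
# → ['complementary', '(MWE rather than)', 'competitive']
```

Greedy longest-match is the usual trade-off for MWE lists: it avoids overlapping matches but can miss a shorter MWE hidden inside a longer non-match.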

  21. Example Tree Before Transformation (PP (RB rather) (IN than) (ADJP (JJ competitive)))

  22. Example Tree After Transformation (PP (MWE rather than) (ADJP (JJ competitive)))

  23. Example Prebracketed Input • Parser input is generated by systematically stripping markup from the gold standard trees, leaving in only MWE markup ((S (NP-SBJ (NP (NNP Robert)(NNP Erwin)…(JJ competitive))))))(. .)('' ''))) Robert Erwin, president of Biosource, called Plant Genetic’s approach ``Interesting’’ and ``novel,’’ and ``complementary (MWE rather than) competitive.’’

  24. Results for MWE Parsing Section 23

  25. NE and MWE Markup Combined • Marking up NEs and MWEs individually has yielded an improvement on the baseline • Try doing both together • Treebank is transformed as before but marking up both NEs and MWEs • Prebracketed input can also contain both NEs and MWEs

  26. Results for NE/MWE Combined Parsing • MWE score is almost 1% better than when using MWEs alone • NE score is over half a percent worse than when using NEs alone • Coverage is up slightly on both runs

  27. Labelled Bracketing Constituent Markup • Instead of marking up lexicalised units, sentence constituents (VP, PP) are marked up • No transformations are necessary as original treebank trees produce constituents in output

  28. Generating LBCM input • Systematically strip markup from the gold standard trees, leaving in only selected markup ((S (NP-SBJ (NP (NNP Robert)(NNP Erwin)…(JJ competitive))))))(. .)('' ''))) Robert Erwin, president of Biosource, (VP called Plant Genetic’s approach ``Interesting’’ and ``novel,’’ and ``complementary rather than competitive).’’ • Simulating “ideal” human-generated input

  29. Results for LBCM

  30. Combining NE, MWE and LBCM Prebracketing • Taking the corpora from the NE and MWE combination and preprocessing input as for LBCM • (NE Robert Erwin), president of (NE Biosource), (VP called (NE Plant Genetic)’s approach ``Interesting’’ and ``novel,’’ and ``complementary (MWE rather than) competitive).’’

  31. Results for all 3

  32. Looking at the Grammar/Lexicon • Expect to reduce grammar/lexicon size by conflating NEs and MWEs • Compare 4 grammars and lexicons • Plain PCFG • NE PCFG • MWE PCFG • NE/MWE PCFG
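Grammar and lexicon sizes can be compared by extracting CFG rules and lexical entries from the (transformed) trees. A minimal sketch, assuming a nested (label, children) tree format with string leaves; it also previews why the lexicon grows, since the same entity can occur both as a flat NE entry and, elsewhere in the corpus, as its individual words:

```python
from collections import Counter

def extract(tree, rules, lexicon):
    """Accumulate CFG rule and lexical-entry counts from one tree;
    a node is (label, children), and leaves are word strings."""
    label, children = tree
    if all(isinstance(c, str) for c in children):
        # A preterminal (or a flat NE/MWE node): one lexical entry.
        lexicon[(" ".join(children), label)] += 1
        return
    rules[(label, tuple(c[0] for c in children))] += 1
    for c in children:
        extract(c, rules, lexicon)

rules, lexicon = Counter(), Counter()
extract(("NP", [("NNP", ["Robert"]), ("NNP", ["Erwin"])]), rules, lexicon)
extract(("NE", ["Robert", "Erwin"]), rules, lexicon)  # same entity, NE-transformed
print(len(rules), len(lexicon))  # → 1 3
```

The entity contributes three lexical entries ("Robert"/NNP, "Erwin"/NNP, "Robert Erwin"/NE) rather than two, which is exactly the doubling effect discussed on the following slides.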

  33. Comparison

  34. Why are the Grammars Growing? • Expectation was that the grammar/lexicon would shrink • Instead they’re growing in size • Caused by the lexicon • Many MWEs are appearing as a single MWE entry and also their individual words • Likewise for NEs • Introducing new categories • Correspondingly more rules in the grammar

  35. Multi-Word Expression Example • all of a sudden • all of a sudden: MWE 2 • All of a sudden: MWE 2 • all: DT 800, PDT 182, RB 36 • of: IN 22741, RP 2 • a: DT 19895, FW 6, LS 1, NNP 2, SYM 10 • sudden: JJ 32

  36. Named Entity Example • Winston Churchill • Winston Churchill: NE 1 • Churchill: NE 2 • George Bush • George Bush: NE 26 • George: NE 5 • Bush: NE 344

  37. Knock-on effects • Grammar/lexicon size is growing • Parse time is increasing • Likely why the gains in precision, recall and f-score are small • The NE/MWE analysis of a phrase is not the most likely one according to the grammar • Consequently a less likely parse is output

  38. Summary • Generating NE/MWE PCFG grammars in this way is possible • Unexpectedly, these grammars are larger than plain PCFG grammars • Results show something can be gained from prebracketing input • However, even the best result, 70.16 (prebracketing the topmost VP), is considerably lower than history-based parsers (Charniak, Collins, Bikel)

  39. Further Work • Better NE recognition • More sophisticated transformation method • Different/larger MWE lists • Experiment with history-based parsers for better results

  40. Thanks • Any questions or comments?
