A Parse-and-Trim Approach with Information Significance for Chinese Sentence Compression New York University Wei XU & Ralph Grishman @ACL 2009, Singapore
Outline • Motivation & Previous Work • Sentence Compression Approach • Linguistically-motivated Heuristics • Word Significance • Compression Generation and Selection • Experimental Results • Conclusions & Future Work
Motivation • no Chinese sentence/compression parallel corpus is available • such a parallel corpus is hard to create
An example • An example of system output
Previous Work • Sentence Compression
• Non-Corpus-Based
 • Parse Tree Trim (headline generation): Dorr 2003
 • Sentence Scoring (Japanese speech): Hori 2003, Clarke 2006+, Clarke 2008+
• Supervised Learning
 • Decision Tree: Knight 2002, Nguyen 2004
 • Noisy Channel: Knight 2002, Galley 2007
 • Large Margin Learning: McDonald 2006, Cohn 2007, Cohn 2008
 • MaxEnt (paraphrasing corpus): Riezler 2003
• Unsupervised Learning
 • Noisy Channel: Turner 2005
Previous Work (cont’) • Parse Tree Trimming (Dorr et al. 2003) • linguistically-motivated heuristics • hand-crafted rules to remove low-content components • trim iteratively until the desired length is reached • reduce the risk of deleting important information by applying rules in a fixed order: safe rules (DT, TIME) first, then more dangerous rules (CONJ), then the most dangerous rules (PP) — a sketch of this loop follows
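A minimal sketch of this ordered-rule trimming loop. The Node class, rule labels, and length limit are illustrative assumptions for exposition, not Dorr et al.'s actual implementation:

```python
from dataclasses import dataclass, field

@dataclass
class Node:
    label: str                           # constituent label, e.g. "NP", "PP"
    children: list = field(default_factory=list)
    word: str = ""                       # surface form, non-empty at leaves

def words(tree):
    """Yield the surface words of the tree, left to right."""
    if tree.word:
        yield tree.word
    for child in tree.children:
        yield from words(child)

def prune(tree, label):
    """Delete every constituent whose label matches the current rule."""
    tree.children = [c for c in tree.children if c.label != label]
    for child in tree.children:
        prune(child, label)

# Rules ordered from safest (determiners, time expressions) to most
# dangerous (prepositional phrases), as on the slide.
RULE_ORDER = ["DT", "TIME", "CONJ", "PP"]

def trim(tree, max_len):
    """Apply deletion rules in order until the sentence fits max_len."""
    for rule in RULE_ORDER:
        if len(list(words(tree))) <= max_len:
            break
        prune(tree, rule)
    return tree
```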
Previous Work (cont’) • Parse Tree Trimming (Dorr et al. 2003) • Pros: • comparatively good performance • retains grammaticality when the parse is correct • Cons: • requires considerable linguistic skill to produce proper rules in the proper order • sensitive to POS and parsing errors • not flexible enough to preserve informative components
Previous Work (cont’) • Sentence Scoring (Hori & Furui 2004) • improved by Clarke & Lapata in 2006 • given an input sentence W = w1, w2, …, wn, rank the possible compressions • language model + word significance • Score(C) = p1 · WordSignificanceScore(C) + p2 · LanguageModelScore(C) + p3 · SubjectObjectVerbScore(C) for a compressed sentence C — see the sketch below
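In code, the ranking criterion is just a weighted sum. A sketch, with the component scorers passed in as plain functions since the slide specifies only the linear form:

```python
def hori_score(compression, p1, p2, p3,
               word_significance, language_model, sov_score):
    """Score(C) = p1*I(C) + p2*L(C) + p3*SOV(C), as on the slide."""
    return (p1 * word_significance(compression)
            + p2 * language_model(compression)
            + p3 * sov_score(compression))

# Usage: best = max(candidates,
#                   key=lambda c: hori_score(c, p1, p2, p3, I, L, SOV))
```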
Previous Work (cont’) • Sentence Scoring (Hori & Furui 2004) • language model • word significance • Pros: • does not rely heavily on a training corpus • Cons: • the weighting parameters are experimentally optimized or estimated from a parallel corpus • relies only on the language model to encourage compression and ensure grammaticality
Solution • Combine • Linguistically-motivated Heuristics • ensure grammaticality • rules are easier to develop, since they only identify possible low-content components rather than selecting specific constituents for removal • Information Significance Scoring • preserves the most important information • improves tolerance of POS and parsing errors
Solution (cont’) • Combined Approach: Heuristics + Information Significance • use heuristics to determine potentially low-content constituents • perform the actual deletion according to word significance
Pipeline (combined approach) • 1. take a Chinese Treebank-style parse as input • 2. use linguistically-motivated heuristics to determine potentially removable constituents • 3. generate a series of candidate compressions by deleting removable nodes based on word significance • 4. select the best compression according to information density — a driver skeleton follows
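A skeleton of this pipeline with the stages injected as functions; the names are illustrative, and per-stage sketches appear in the following sections:

```python
def compress(parse_tree, mark_removable, generate_candidates, density):
    """Run stages 2-4 on a Chinese Treebank-style parse (stage 1's input)."""
    mark_removable(parse_tree)                     # stage 2: flag low-content nodes
    candidates = generate_candidates(parse_tree)   # stage 3: greedy deletion
    return max(candidates, key=density, default=None)  # stage 4: densest wins
```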
Linguistically-motivated Heuristics Combined Approach: Heuristics + Information Significance • Used to determine potentially low content constituents • Basic: (same) • parenthetical elements • adverbs except negative • adjectives • DNPs (phrase + “的”, modifiers of NP) • DVPs (phrase + “地”, modifiers of VP) • noun coordination phrases • Complex: (more relaxed, general) • verb coordination phrases • relative clauses • appositive clauses • prepositional phrases • all children of NP nodes except the last noun word • sentential coordination
Linguistically-motivated Heuristics Heuristics-only Approach • Used to remove specific low content constituents • Basic: (same) • parenthetical elements • adverbs except negative • adjectives • DNPs (phrase + “的”, modifiers of NP) • DVPs (phrase + “地”, modifiers of VP) • noun coordination phrases • Complex: (more strict, conservative) • all children of NP nodes except temporal nouns and proper nouns and the last noun word • all simple clauses (IP) except the first one in sentential coordination • prepositional phrases except those that may contain location or date information, according to a hand-made list of prepositions
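A sketch of the marking step for the combined approach, reusing the Node class from the earlier trimming sketch. The label set below is a simplified stand-in for the categories listed above (DNP/DVP/CP follow Chinese Treebank conventions); finer conditions such as "except negative adverbs" are omitted:

```python
# Simplified removable set: adverbs, adjectives, DNP/DVP modifiers,
# prepositional phrases, and CP clauses (relative/appositive).
REMOVABLE = {"ADVP", "ADJP", "DNP", "DVP", "PP", "CP"}

def mark_removable(tree, removable=REMOVABLE):
    """Flag potentially low-content constituents; nothing is deleted yet."""
    tree.removable = tree.label in removable
    for child in tree.children:
        mark_removable(child, removable)
```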
Linguistically-motivated Heuristics • An example of applying heuristics (parse tree shown on the slide, including a highlighted POS error) • *: nodes labeled as removable by the combined approach • #: nodes trimmed out by the heuristics-only approach
Word Significance • Event-based word significance score • verb or common noun: tf-idf • proper noun: tf-idf + w (an additional weight) • otherwise: 0 • weighted parse tree: each node is weighted by the significance of the words it spans • the score depends on the word itself regardless of its POS tag, which overcomes some POS errors — a sketch follows
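A sketch of the scoring rule, assuming Chinese Treebank POS tags (VV for verbs, NN for common nouns, NR for proper nouns) and document-frequency statistics from the corpus; the exact tf-idf variant and the bonus w are assumptions:

```python
import math

def significance(word, pos, tf, df, num_docs, w=1.0):
    """Event-based word significance: tf-idf for verbs/common nouns,
    tf-idf + w for proper nouns, 0 for everything else."""
    if pos in {"VV", "NN", "NR"}:
        score = tf[word] * math.log(num_docs / df[word])  # tf-idf
        if pos == "NR":                  # proper-noun bonus
            score += w
        return score
    return 0.0                           # other POS contribute nothing
```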
Compression Generation • Generate a series of candidate compressions by repeatedly trimming the weighted parse tree • greedy algorithm: • remove the removable node with the lowest weight, yielding a candidate compressed sentence • update the weights of all ancestors of the removed node • repeat until no node is removable — see the sketch below
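A sketch of this greedy loop, building on the earlier Node/words sketches; it assumes each node carries a precomputed .weight (the total significance of the words it spans) and the .removable flag from the marking step:

```python
def lightest_removable(tree, path=()):
    """Return the root-to-node path ending at the lowest-weight removable node."""
    path = path + (tree,)
    best = path if getattr(tree, "removable", False) else None
    for child in tree.children:
        cand = lightest_removable(child, path)
        if cand and (best is None or cand[-1].weight < best[-1].weight):
            best = cand
    return best

def generate_candidates(tree):
    """Emit one candidate compression per greedy deletion."""
    candidates = []
    while True:
        path = lightest_removable(tree)
        if path is None or len(path) < 2:    # nothing removable (or only root)
            break
        *ancestors, node = path
        ancestors[-1].children.remove(node)  # delete the lightest node
        for a in ancestors:                  # update all ancestor weights
            a.weight -= node.weight
        candidates.append(list(words(tree)))  # record this compression
    return candidates
```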
Compression Selection • Information Density • used to select the best compression
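A sketch of the selection step. The slide names the criterion but not its exact form, so here density is assumed to be the total retained word significance divided by the compression length in characters:

```python
def information_density(candidate, significance_of):
    """Information retained per character (assumed form of the criterion)."""
    total = sum(significance_of(w) for w in candidate)   # retained significance
    length = sum(len(w) for w in candidate)              # length in characters
    return total / length if length else 0.0

# Usage: best = max(candidates, key=lambda c: information_density(c, sig))
```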
Experiments • 79 documents from Chinese newswire • the first sentence of each news article • a challenging, headline-like compression task • average length: 61.5 characters • first sentences often join two or more self-complete sentences
Experiments (cont’) • Human evaluation (scores shown in the slide’s table) • * the combined approach sacrifices some grammaticality to reduce the linguistic complexity of the heuristics • ** word significance improves the heuristics on informativeness • *** with varying length constraints, depending on original sentence length
Experiments (cont’) • compressions with good grammar • performs well in most cases • performs poorly on about 20 of the 76 cases • causes: POS or parsing errors; outputs that are grammatically correct but semantically incorrect
Contributions • First attempt for Chinese • heuristics ensure grammaticality • word significance controls word deletion, balancing sentence length against information loss • Pros: • does not rely on a parallel corpus • reduces the complexity of composing heuristics • extends easily to other languages or domains • overcomes some POS and parsing errors • competitive with a finely-tuned heuristics-only approach
Future Work • applications in summarization and headline generation • keyword selection and weighting • language model • a Chinese parallel corpus • statistical and machine-learning methods
A Parse-and-Trim Approach with Information Significance for Chinese Sentence Compression • Questions? • New York University Wei XU & Ralph Grishman @ACL 2009, Singapore