
Automatic Sentence Compression in the MUSA project

Presentation Transcript


  1. Automatic Sentence Compression in the MUSA project
     Walter Daelemans & Anja Höthker
     walter.daelemans@ua.ac.be
     http://cnts.uia.ac.be
     CNTS, University of Antwerp, Belgium
     Languages & The Media 2004, Berlin

  2. MUSA
     • MUltilingual Subtitling of multimediA content
     • EU IST 5th framework project, Sep. 2002 - Feb. 2005
     • Goals:
       • Conversion of audio streams into TV subtitles (monolingual)
       • Translation of subtitles into French or Greek

  3. Partners
     • ILSP, Athens: coordination, integration
     • ESAT, KU Leuven: Automatic Speech Recognition
     • CNTS, U. Antwerp: Sentence Compression
     • Systran, Paris: Machine Translation
     • BBC, London: main user, data provider, evaluation
     • Lumiere, Athens: main user, multilingual data provider, evaluation

  4. Goals for Sentence Compression
     • Automatically and dynamically generate subtitles that satisfy length constraints (in words and characters)
     • Reduce the time expert subtitlers need to produce subtitles
     • Provide an architecture that can easily be ported to other languages

  5. Example
     • SPEECH: The task force is in place and ready to attack without mercy.
     • Constraints: delete 3 words and 14 characters
     • Compression module output: The task force is [ in place and ] ready to fight [ without mercy ] .
     • SUBTITLE: The task force is ready ...
                 ...to fight without mercy.
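The brackets mark candidate deletions proposed by the module. As a tiny post-processing sketch (ours, not MUSA code; only the bracket convention is taken from the slide), each candidate can be tallied against the word and character constraints:

```python
import re

marked = "The task force is [ in place and ] ready to fight [ without mercy ] ."

# Tally what deleting each marked candidate span would save; the +1 counts
# the space that disappears along with the phrase.
for span in re.findall(r"\[([^\]]*)\]", marked):
    phrase = span.strip()
    print(f"'{phrase}': {len(phrase.split())} words, {len(phrase) + 1} chars")
# 'in place and': 3 words, 13 chars
# 'without mercy': 2 words, 14 chars
```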

  6. Approach
     • Remove disfluencies: compress the sentence by removing repetitions introduced by hesitation (a toy version is sketched below)
       "I, I know that this war, this war will last for years"
     • Paraphrasing: replace part of the input sentence by a shorter paraphrase
       "an increasing number of" -> "more and more"
     • Rule-based approach: compress sentences with handcrafted deletion rules that combine:
       • Shallow-parsing information (identifying the constituents used by deletion rules)
       • Relevance measures (determining in which order to delete constituents)
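The toy sketch below shows only the first step, removing hesitation repetitions; it handles a span repeated immediately after itself (optionally across a comma), which is far simpler than the real module:

```python
PUNCT = {",", ";", "-", "..."}

def remove_repetitions(tokens, max_span=3):
    """Drop the first copy of a span that is repeated right away,
    e.g. 'I , I know' -> 'I know'."""
    out, i = [], 0
    while i < len(tokens):
        for n in range(max_span, 0, -1):
            j = i + n
            if j < len(tokens) and tokens[j] in PUNCT:
                j += 1  # allow a comma between the two copies
            if n <= len(tokens) - j and tokens[i:i + n] == tokens[j:j + n]:
                i = j  # skip the hesitation, keep the fluent copy
                break
        else:
            out.append(tokens[i])
            i += 1
    return out

print(" ".join(remove_repetitions(
    "I , I know that this war , this war will last for years".split())))
# I know that this war will last for years
```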

  7. Shallow Parsing: POS Tagging
     The/Det woman/NN will/MD give/VB Mary/NNP a/Det book/NN
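The same analysis is easy to reproduce with off-the-shelf tools; the sketch below uses NLTK's default tagger rather than the memory-based MBT tagger the project used (note that NLTK emits Penn Treebank tags, so DT where the slide writes Det):

```python
import nltk
# one-time setup: nltk.download("punkt"); nltk.download("averaged_perceptron_tagger")

print(nltk.pos_tag(nltk.word_tokenize("The woman will give Mary a book")))
# [('The', 'DT'), ('woman', 'NN'), ('will', 'MD'), ('give', 'VB'),
#  ('Mary', 'NNP'), ('a', 'DT'), ('book', 'NN')]
```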

  8. Shallow Parsing: Chunking
     [The/Det woman/NN]NP [will/MD give/VB]VP [Mary/NNP]NP [a/Det book/NN]NP
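Again as a stand-in for the TiMBL-based chunker, NLTK's regular-expression chunker produces the same bracketing from the tagged tokens:

```python
import nltk

grammar = r"""
  NP: {<DT>?<JJ>*<NN.*>+}   # determiner + adjectives + noun(s)
  VP: {<MD>?<VB.*>+}        # modal + verb(s)
"""
tagged = [("The", "DT"), ("woman", "NN"), ("will", "MD"), ("give", "VB"),
          ("Mary", "NNP"), ("a", "DT"), ("book", "NN")]
print(nltk.RegexpParser(grammar).parse(tagged))
# (S (NP The/DT woman/NN) (VP will/MD give/VB) (NP Mary/NNP) (NP a/DT book/NN))
```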

  9. Shallow Parsing: Sense Tagging
     [The/Det woman/NN]NP-PERSON [will/MD give/VB]VP [Mary/NNP]NP-PERSON [a/Det book/NN]NP-MATERIAL-OBJECT
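MUSA's concept tagger was memory-based; as a rough approximation, WordNet's lexicographer files ("supersenses") assign comparable labels, though the inventory differs from the slide's (book comes out as noun.communication rather than MATERIAL-OBJECT):

```python
from nltk.corpus import wordnet as wn  # one-time setup: nltk.download("wordnet")

for noun in ("woman", "Mary", "book"):
    senses = wn.synsets(noun, pos=wn.NOUN)
    print(noun, "->", senses[0].lexname() if senses else "unknown")
# woman -> noun.person
# Mary -> noun.person
# book -> noun.communication
```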

  10. Shallow Parsing: Relation Finding
      [diagram: grammatical relations linking the two PERSON entities ("the woman", "Mary") and the MATERIAL-OBJECT entity ("a book") in the example sentence]

  11. MBSP (Memory-Based Shallow Parser)
      [architecture diagram] Text in -> Tokenizer (Perl) -> POS Tagger (MBT server, backed by TiMBL servers for known and unknown words) -> Phrase Chunker (TiMBL server) -> Concept Tagger (MBT server) -> Relation Finder (TiMBL server)
      Built with TiMBL 5.0 and MBT 2.0 (http://ilk.uvt.nl/)

  12. Rule-Based Approach (syntax)
      • Deletion rules mark phrases for deletion based on shallow parser output
      • Rules for adverbs, adjectives, PNPs, subordinate sentences, interjections, ...
      • Phrases are deleted iteratively until the target compression rate is met (a loop sketched below)
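A minimal sketch of that iterative loop; the candidate representation and the word-limit interface are our invention for illustration. Deletion rules propose index spans with relevance scores, and the least relevant spans are removed until the constraint holds:

```python
def compress(tokens, candidates, max_words):
    """candidates: (start, end, relevance) spans proposed by deletion rules;
    drop the least relevant spans first until the word limit is met."""
    dropped = set()
    for start, end, _ in sorted(candidates, key=lambda c: c[2]):
        if len(tokens) - len(dropped) <= max_words:
            break
        dropped.update(range(start, end))
    return [t for i, t in enumerate(tokens) if i not in dropped]
```

Slide 15 below applies exactly this ordering to a concrete sentence.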

  13. Example Rule: ADJECTIVES
      if (POS(word) == JJ && CHUNK(word) != ADJP-END
          && word-1 not in {most, least, more, less}) {
        delete(word)
        if (word-1 == CC && word-2 == JJ) { delete(word-1) }
        elseif (word+1 == CC && word+2 == JJ) { delete(word+1) }
      }
      Adam 's [ only ] [ serious ] childhood illness had been measles
      The virus triggered an [1 extremely ]1 [2 rare [3 and ]2 fatal ]3 condition
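Transcribed into Python as a sketch (we assume tokens arrive as word/POS-tag/chunk-tag triples from the shallow parser; that format is ours), the rule reads:

```python
COMPARATIVES = {"most", "least", "more", "less"}

def adjective_deletions(tokens):
    """Return indices of tokens the ADJECTIVES rule marks for deletion;
    tokens are (word, pos, chunk) triples."""
    marked = set()
    for i, (word, pos, chunk) in enumerate(tokens):
        prev = tokens[i - 1][0].lower() if i > 0 else ""
        if pos == "JJ" and chunk != "ADJP-END" and prev not in COMPARATIVES:
            marked.add(i)
            # also drop a conjunction that coordinated two adjectives
            if i >= 2 and tokens[i - 1][1] == "CC" and tokens[i - 2][1] == "JJ":
                marked.add(i - 1)
            elif i + 2 < len(tokens) and tokens[i + 1][1] == "CC" and tokens[i + 2][1] == "JJ":
                marked.add(i + 1)
    return marked

tokens = [("Adam", "NNP", "NP"), ("'s", "POS", "NP"), ("only", "JJ", "NP"),
          ("serious", "JJ", "NP"), ("childhood", "NN", "NP"),
          ("illness", "NN", "NP"), ("had", "VBD", "VP"),
          ("been", "VBN", "VP"), ("measles", "NN", "NP")]
print(sorted(adjective_deletions(tokens)))  # [2, 3]: "only" and "serious"
```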

  14. Relevance Measures ("semantics")
      • Deletion rules suggest more deletions than necessary to reach the target compression
      • The system rates the alternatives and deletes the least important phrases first
      • Relevance measures in MUSA are based on a weighted combination of:
        • Word frequencies (in the BNC corpus)
        • Rule probabilities (as observed in a parallel BBC corpus of transcripts with associated subtitles)
        • Word durations (comparing estimated with actual durations)
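The exact weighting is not given on the slide; the sketch below only illustrates the shape of such a combined score, with invented weights and scaling (lower = safer to delete):

```python
import math

def relevance(corpus_freq, rule_prob, duration_saved,
              weights=(0.4, 0.4, 0.2)):
    """Illustrative combination of the three measures; lower = safer to delete.
    corpus_freq:    BNC frequency of the phrase's words (frequent = less informative)
    rule_prob:      how often subtitlers made this deletion in the parallel
                    BBC transcript/subtitle corpus (high = safe to delete)
    duration_saved: estimated speaking time (seconds) the deletion frees up."""
    w_freq, w_rule, w_dur = weights
    informativeness = 1.0 / math.log(corpus_freq + 2)
    return w_freq * informativeness + w_rule * (1.0 - rule_prob) - w_dur * duration_saved

print(round(relevance(corpus_freq=50_000, rule_prob=0.8, duration_saved=0.4), 3))
```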

  15. Example
      • Input: This is a basic summarizer for English used for demonstration purposes.
      • Parsed, with relevance scores attached to the deletion candidates:
        (NP This) is (NP a basic[11] summarizer) (PNP for English)[10] used (PNP for demonstration purposes)[12]
      • Deleting candidates in order of increasing relevance:
        This is a basic summarizer used for demonstration purposes
        This is a summarizer used for demonstration purposes
        This is a summarizer used
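The full sequence above can be reproduced in a few lines by deleting the candidate spans in order of increasing relevance score (10, then 11, then 12):

```python
tokens = ("This is a basic summarizer for English "
          "used for demonstration purposes").split()
candidates = [((5, 7), 10),    # "for English"
              ((3, 4), 11),    # "basic"
              ((8, 11), 12)]   # "for demonstration purposes"
dropped = set()
for (start, end), _score in sorted(candidates, key=lambda c: c[1]):
    dropped.update(range(start, end))
    print(" ".join(t for i, t in enumerate(tokens) if i not in dropped))
# This is a basic summarizer used for demonstration purposes
# This is a summarizer used for demonstration purposes
# This is a summarizer used
```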

  16. Evaluation Data (Lumiere)
      • MMR - Every Parent's Choice
        • 243 segments
        • 39.5% of the segments need compression
        • Average target compression rate: 4.58 words, 1.98 characters
      • The Tranquiliser Trap
        • 287 segments
        • 50.52% of the segments need compression
        • Average target compression rate: 3.21 words, 2.0 characters

  17. Human Evaluation
                                  Tranquiliser Trap (%)   MMR - Every Parent's Choice (%)
      Syntax                               85                        87
      Semantics                            78                        85
      Compression rate reached             85                        84
      Perfect                              69                        72

  18. Conclusions
      • We presented the sentence compression module of the MUSA system
      • An eclectic system combining statistical techniques for relevance detection with handcrafted deletion rules based on shallow parser output
      • Evaluation suggests usefulness (with transcripts as input)
      • Future work:
        • Porting to other languages
        • Machine learning of paraphrases

  19. Demos
      • Sentence Compression: http://cnts.uia.ac.be/cgi-bin/anja/musa
      • MUSA demo: http://sifnos.ilsp.gr/musa/demos
