
Balancing Expressiveness and Simplicity in an Interlingua for Task Based Dialogue

This presentation discusses the challenges in developing an interlingua for task-based dialogue that balances expressiveness and simplicity. It explores proposals for evaluating interlinguas by measuring coverage, reliability, and scalability.

Presentation Transcript


  1. Balancing Expressiveness and Simplicity in an Interlingua for Task Based Dialogue Lori Levin, Donna Gates, Dorcas Wallace, Kay Peterson, Alon Lavie, Fabio Pianesi, Emanuele Pianta, Roldano Cattoni, Nadia Mana

  2. Outline • Overview of the Interchange Format (IF) • Proposals for Evaluating Interlinguas • Measuring coverage • Measuring reliability • Measuring scalability

  3. Multilingual Translation with an Interlingua [Diagram: analyzers for Chinese, English, French, German, Italian, Japanese, Korean, Catalan, Spanish, and Arabic map input sentences into the interlingua; generators for the same languages produce output from it.]
  • Chinese (input sentence): San1 tian1 qian2, wo3 kai1 shi3 jue2 de2 tong4
  • Interlingua: give-information+onset+body-state (body-state-spec=pain, time=(interval=3d, relative=before))
  • Chinese (paraphrase): wo3 yi3 jin1 tong4 le4 san1 tian1
  • English (output sentence): The pain started three days ago.
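
A pipeline view of the diagram above, as a minimal sketch: any analyzer can be paired with any generator because the two sides share only the IF string. The function names and stub bodies are illustrative, not the actual C-STAR/NESPOLE components.

    # Illustrative interlingua pipeline: analysis and generation are
    # decoupled; they communicate only through the IF representation.
    def analyze_chinese(sentence):
        """Map a source sentence to its IF representation (stubbed)."""
        return ("give-information+onset+body-state "
                "(body-state-spec=pain, time=(interval=3d, relative=before))")

    def generate_english(if_repr):
        """Map an IF representation to a target sentence (stubbed)."""
        return "The pain started three days ago."

    if_repr = analyze_chinese("San1 tian1 qian2, wo3 kai1 shi3 jue2 de2 tong4")
    print(generate_english(if_repr))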

  4. Advantages of Interlingua • Add a new language easily: adding one grammar for analysis and one grammar for generation gives all-ways translation with all previous languages. • Monolingual development teams. • Paraphrase: generate a new source-language sentence from the interlingua so that the user can confirm the meaning.
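
A quick way to see the "add a new language easily" advantage is to count grammars. The arithmetic below is implied by the slide rather than stated in it:

    # With an interlingua, n languages need n analysis + n generation
    # grammars (2n); direct transfer needs one grammar per ordered pair.
    for n in (2, 5, 10):
        print(f"{n} languages: {2 * n} interlingua grammars "
              f"vs {n * (n - 1)} transfer grammars")

The interlingua wins as soon as there are more than three languages.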

  5. Disadvantages of Interlingua • "Meaning" is arbitrarily deep: at what level of detail do you stop? • If the interlingua is too simple, meaning is lost in translation. • If it is too complex, analysis and generation become too difficult. • It should be applicable to all languages. • Human development time.

  6. Speech Acts: Speaker intention vs. literal meaning • Can you pass the salt? • Literal meaning: the speaker asks for information about the hearer's ability. • Speaker intention: the speaker requests the hearer to perform an action.

  7. Domain Actions: Extended, Domain-Specific Speech Acts
  • give-information+existence+body-state: "It hurts."
  • give-information+onset+body-object: "The rash started three days ago."
  • request-information+personal-data: "What is your name?"
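
One convenient way to think about a domain action is as a speech act plus a list of domain concepts plus an argument list. A sketch in Python; the dataclass layout is our reading of the notation, not the official IF definition:

    from dataclasses import dataclass, field

    @dataclass
    class DomainAction:
        speech_act: str    # e.g. "give-information"
        concepts: list     # e.g. ["onset", "body-object"]
        args: dict = field(default_factory=dict)

        def name(self):
            # The slide's "give-information+onset+body-object" notation.
            return "+".join([self.speech_act] + self.concepts)

    # "The rash started three days ago." (argument names are hypothetical)
    da = DomainAction("give-information", ["onset", "body-object"],
                      {"time": {"interval": "3d", "relative": "before"}})
    print(da.name())   # give-information+onset+body-object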

  8. Domain Actions: Extended, Domain-Specific Speech Acts • In domain: • I sprained my ankle yesterday. • When did the headache start? • Out of domain: • Yesterday I slipped in the driveway on my way to the garage. • The headache started after my boss noticed that I deleted the file.

  9. Formulaic Utterances • Good night. • tisbaH cala xEr (literal gloss: "waking up on good") • Romanization of Arabic from CallHome Egypt.

  10. Same intention, different syntax (romanization of Arabic from CallHome Egypt)
  • rigly bitiwgacny: "my leg hurts"
  • candy wagac fE rigly: "I have pain in my leg"
  • rigly bitiClimny: "my leg hurts"
  • fE wagac fE rigly: "there is pain in my leg"
  • rigly bitinqaH calya: "my leg bothers on me"

  11. Outline • Overview of the Interchange Format (IF) • Proposals for Evaluating Interlinguas • Measuring coverage • Measuring reliability • Measuring scalability

  12. Comparison of two interlinguas "I would like to make a reservation for the fourth through the seventh of July."
  • IF-1 (C-STAR II, 1997-1999): c:request-action+reservation+temporal+hotel (time=(start-time=md4, end-time=(md7, july)))
  • IF-2 (NESPOLE, 2000-2002): c:give-information+disposition+reservation+accommodation (disposition=(who=I, desire), reservation-spec=(reservation, identifiability=no), accommodation-spec=hotel, object-time=(start-time=(md=4), end-time=(md=7, month=7, incl-excl=inclusive)))
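
To make the difference in depth concrete, here are the two representations hand-transcribed as nested Python dicts. The dict encoding (for instance, labeling the bare "desire" value as an attitude feature) is our reading, not an official IF serialization:

    if1 = {  # IF-1: shallow; the domain action carries most of the meaning
        "da": "request-action+reservation+temporal+hotel",
        "time": {"start-time": "md4", "end-time": ("md7", "july")},
    }
    if2 = {  # IF-2: deeper; more feature structure per utterance
        "da": "give-information+disposition+reservation+accommodation",
        "disposition": {"who": "I", "attitude": "desire"},
        "reservation-spec": {"head": "reservation", "identifiability": "no"},
        "accommodation-spec": "hotel",
        "object-time": {"start-time": {"md": 4},
                        "end-time": {"md": 7, "month": 7,
                                     "incl-excl": "inclusive"}},
    }
    print(len(if1) - 1, "argument slots vs", len(if2) - 1)   # 1 vs 4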

  13. The Interchange Format Database
  61.2.3 olang I lang I Prv IRST "telefono per prenotare delle stanze per quattro colleghi"
  61.2.3 olang I lang E Prv IRST "I'm calling to book some rooms for four colleagues"
  61.2.3 IF Prv IRST c:request-action+reservation+features+room (for-whom=(associate, quantity=4))
  61.2.3 comments: dial-oo5-spkB-roca0-02-3
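
Each record pairs an utterance ID with its original language, a translation or IF line, and a provider site. A hypothetical reader for lines of this shape, with the field layout inferred from the example above:

    import re

    line = ('61.2.3 olang I lang E Prv IRST '
            '"I\'m calling to book some rooms for four colleagues"')
    # Assumed layout: <id> olang <orig-lang> lang <this-lang> Prv <site> "<text>"
    m = re.match(r'(\S+) olang (\S+) lang (\S+) Prv (\S+) "(.*)"', line)
    if m:
        uid, olang, lang, site, text = m.groups()
        print(uid, olang, "->", lang, f"({site}):", text)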

  14. Comparison of four databases (travel domain, role playing, spontaneous speech)
  • DB-1: C-STAR II English database tagged with IF-1: 2278 sentences.
  • DB-2: C-STAR II English database tagged with IF-2: 2564 sentences (same data as DB-1, different interlingua).
  • DB-3: NESPOLE English database tagged with IF-2: 1446 sentences; only about 50% of the vocabulary overlaps with the C-STAR database.
  • DB-4: Combined database tagged with IF-2: 4010 sentences; a significantly larger domain.

  15. Outline • Overview of the Interchange Format (IF) • Proposals for Evaluating Interlinguas • Measuring coverage • Measuring reliability • Measuring scalability

  16. Measuring Coverage • No-tag rate: • Can a human expert assign an interlingua representation to each sentence? • C-STAR II no-tag rate: 7.3% • NESPOLE no-tag rate: 2.4% • 300 more sentences were covered in the C-STAR English database • End-to-end translation performance: Measures recognizer, analyzer, and generator performance in combination with interlingua coverage.
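
The no-tag rate itself is a simple proportion. A toy computation, with None standing for "the expert could not assign any IF representation":

    def no_tag_rate(tags):
        """Fraction of sentences left untagged (None) by the expert."""
        return sum(t is None for t in tags) / len(tags)

    tags = ["give-information+onset+body-state", None,
            "request-information+personal-data", "thank"]
    print(f"{no_tag_rate(tags):.1%}")   # 25.0% on this toy list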

  17. Outline • Overview of the Interchange Format (IF) • Proposals for Evaluating Interlinguas • Measuring coverage • Measuring reliability • Measuring scalability

  18. Example of failure of reliability • Input: "3:00, right?" • Interlingua: verify (time=3:00) • Output: "3:00 is right." • The problem is a poor choice of speech act name: does "verify" mean that the speaker is confirming the time or requesting verification from the user?

  19. Measuring Reliability: Cross-site evaluations • Compare the performance of analyzer → interlingua → generator pipelines where the analyzer and generator are built at the same site (or by the same person) against pipelines where they are built at different sites (or by different people who may not know each other). • C-STAR II interlingua: comparable end-to-end performance within sites and across sites, with around 60% acceptable translations from speech recognizer output. • NESPOLE interlingua: cross-site end-to-end performance is lower.

  20. Intercoder agreement: average of pairwise percent agreement
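
Average pairwise percent agreement can be computed as below. The toy tags are ours; real codings are full IF expressions rather than bare speech acts:

    from itertools import combinations

    def pairwise_agreement(codings):
        """Mean over coder pairs of the fraction of identically tagged items."""
        scores = [sum(a == b for a, b in zip(c1, c2)) / len(c1)
                  for c1, c2 in combinations(codings, 2)]
        return sum(scores) / len(scores)

    coders = [
        ["verify", "give-information", "request-action"],        # coder 1
        ["verify", "give-information", "request-information"],   # coder 2
        ["verify", "acknowledge", "request-action"],             # coder 3
    ]
    print(f"{pairwise_agreement(coders):.1%}")   # 55.6%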

  21. Workshop on Interlingua Reliability (SIG-IL) • Association for Machine Translation in the Americas • October 8, 2002 • Tiburon, California • Submissions by July 21: • 500-1500 word abstract (email to lsl@cs.cmu.edu) • Intent to participate in coding experiment

  22. Outline • Overview of the Interchange Format (IF) • Proposals for Evaluating Interlinguas • Measuring coverage • Measuring reliability • Measuring scalability

  23. Comparison of four databases (travel domain, role playing, spontaneous speech)
  • DB-1: C-STAR II English database tagged with IF-1: 2278 sentences.
  • DB-2: C-STAR II English database tagged with IF-2: 2564 sentences (same data as DB-1, different interlingua).
  • DB-3: NESPOLE English database tagged with IF-2: 1446 sentences; only about 50% of the vocabulary overlaps with the C-STAR database.
  • DB-4: Combined database tagged with IF-2: 4010 sentences; a significantly larger domain.

  24. Measuring Scalability: Coverage Rate What percent of the database is covered by the top n most frequent domain actions?
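
In code, the coverage rate is a frequency count over domain-action tags. The toy tag list is illustrative:

    from collections import Counter

    def coverage_rate(das, n):
        """Fraction of sentences whose DA is among the n most frequent."""
        top = {da for da, _ in Counter(das).most_common(n)}
        return sum(da in top for da in das) / len(das)

    das = (["give-information+price"] * 5
           + ["request-action+reservation"] * 3 + ["thank"] * 2)
    print(f"{coverage_rate(das, 1):.0%}")   # 50%
    print(f"{coverage_rate(das, 2):.0%}")   # 80%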

  25. Measuring Scalability: Number of domain actions as a function of database size • Sample size from 100 to 3000 sentences in increments of 25. • Average number of unique domain actions over ten random samples for each sample size. • Each sample includes a random selection of frequent and infrequent domain actions.
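
A sketch of that sampling procedure; the corpus variable is hypothetical, standing in for the tagged databases listed on slide 14:

    import random

    def da_growth(das, sizes, trials=10):
        """Average count of unique domain actions per sample size."""
        return {n: sum(len(set(random.sample(das, n)))
                       for _ in range(trials)) / trials
                for n in sizes}

    # Usage on a tagged corpus `all_das` (one DA tag per sentence):
    # curve = da_growth(all_das, range(100, 3001, 25))
    # A flattening curve means the DA inventory scales without explosion.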

  26. Conclusions • An interlingua based on domain actions is suitable for task-oriented dialogue: • Reliable • Good coverage • Scalable without explosion of domain actions • It is possible to evaluate an interlingua for: • Reliability • Expressivity • Scalability
