
Presentation Transcript


  1. Two sides of the same coin: assessing translation quality through adequacy and acceptability error analysis. Joke Daems, joke.daems@ugent.be, www.lt3.ugent.be/en/projects/robot. Supervised by: Lieve Macken, Sonia Vandepitte, Robert Hartsuiker

  2. What makes error analysis so complicated? “There are some errors for all types of distinctions, but the most problematic distinctions were for adequacy/fluency and seriousness.” – Stymne & Ahrenberg, 2012 • Does a problem concern adequacy, fluency, both, neither? • How do we determine the seriousness of an error?

  3. Two types of quality “Whereas adherence to source norms determines a translation's adequacy as compared to the source text, subscription to norms originating in the target culture determines its acceptability.” - Toury, 1995 → Why mix?

  4. 2-step TQA approach

  5. Subcategories

  6. Acceptability: fine-grained

  7. Adequacy: fine-grained

  8. How serious is an error? “Different thresholds exist for major, minor and critical errors. These should be flexible, depending on the content type, end-user profile and perishability of the content.” - TAUS, error typology guidelines, 2013 → Give different weights to error categories depending on text type & translation brief
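To make the flexible-weight idea concrete, here is a minimal Python sketch of brief-dependent error weighting; the category names, briefs and weight values are hypothetical illustrations, not the weights used in the study.

```python
# Hypothetical weights: the same error category can count differently
# depending on the translation brief / text type.
ERROR_WEIGHTS = {
    "technical-manual": {"meaning shift": 3, "terminology": 3, "spelling": 1},
    "marketing-copy":   {"meaning shift": 2, "terminology": 1, "style": 3},
}

def weighted_error_score(errors, brief):
    """Sum the brief-dependent weights of a list of annotated error labels.

    Unknown categories default to weight 1.
    """
    weights = ERROR_WEIGHTS[brief]
    return sum(weights.get(category, 1) for category in errors)

# The same annotations score differently under different briefs.
annotations = ["meaning shift", "spelling", "spelling"]
print(weighted_error_score(annotations, "technical-manual"))  # 3 + 1 + 1 = 5
print(weighted_error_score(annotations, "marketing-copy"))    # 2 + 1 + 1 = 4
```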

  9. Reducing subjectivity • Flexible error weights • More than one annotator • Consolidation phase

  10. TQA: Annotation (brat) 1) Acceptability 2) Adequacy
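brat stores its annotations in a standoff format (.ann files), one text-bound annotation per line as ID, tab, type plus character offsets, tab, covered text. A minimal sketch of reading error spans out of such a file; the error type name and offsets below are hypothetical, and discontinuous spans (marked with ';' in brat) are not handled:

```python
def parse_brat_ann(lines):
    """Yield (ann_id, error_type, start, end, text) for text-bound annotations."""
    for line in lines:
        if not line.startswith("T"):   # skip relations, attributes, notes
            continue
        ann_id, type_and_span, text = line.rstrip("\n").split("\t")
        error_type, start, end = type_and_span.split()
        yield ann_id, error_type, int(start), int(end), text

# One hypothetical adequacy annotation over a Dutch MT output:
sample = ["T1\tWordSense 37 61\thet vegen van de planeet\n"]
for ann in parse_brat_ann(sample):
    print(ann)   # ('T1', 'WordSense', 37, 61, 'het vegen van de planeet')
```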

  11. Application example: comparative analysis

  12. Next step: diagnostic & comparative evaluation • What makes a ST-passage problematic? • How problematic is this passage really? (i.e. how many translators make errors) • Which PE errors are caused by MT? • Which MT errors are hardest to solve? → Link all errors to corresponding ST-passage
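A minimal sketch of what such a link could look like as a data structure: errors keyed by ST passage and translation version, so that PE errors can be looked up alongside the MT error on the same passage. All IDs and categories are hypothetical, loosely mirroring the example on the next slide.

```python
from collections import defaultdict

# (st_passage_id, translation_version, error_category) -- hypothetical data
errors = [
    ("ST-07", "MT",  "word sense"),
    ("ST-07", "PE1", "meaning shift"),
    ("ST-07", "PE2", "wrong collocation"),
    ("ST-07", "PE2", "spelling"),
]

by_passage = defaultdict(lambda: defaultdict(list))
for passage, version, category in errors:
    by_passage[passage][version].append(category)

# Which PE errors co-occur with an MT error on the same ST passage?
for passage, versions in by_passage.items():
    if "MT" in versions:
        pe_errors = {v: cats for v, cats in versions.items() if v != "MT"}
        print(passage, "MT:", versions["MT"], "PE:", pe_errors)
```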

  13. Source text-related error sets
  • ST: Changes in the environment that are sweeping the planet...
  • MT: Veranderingen in de omgeving die het vegen van de planeet tot stand brengen... (wrong word sense) "Changes in the environment that bring about the brushing of the planet..."
  • PE1: Veranderingen in de omgeving die het evenwicht op de planeet verstoren... (other type of meaning shift) "Changes in the environment that disturb the balance on the planet..."
  • PE2: Veranderingen in de omgeving die over de planeet rasen... (wrong collocation + spelling mistake) "Changes in the environment that raige over the planet..."

  14. Application example: impact of MT errors on PE

  15. Summary • Improve error analysis by: • judging acceptability and adequacy separately • making error weights depend on translation brief • having more than one annotator • introducing consolidation phase • Improve diagnostic and comparative evaluation by: • linking errors to ST-passages • taking number of translators into account

  16. Open questions • How can we reduce annotation time? • Ways of automating (part of) the process? • Limit annotation to a subset of errors? • How to better implement ST-related error sets? • Ways of automatically aligning ST, MT, and various TTs at word level?
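For the last question, one partial answer on the monolingual MT-vs-PE side is Python's stdlib difflib; aligning the (cross-lingual) ST would still need a proper word aligner. A minimal sketch, reusing the example sentences from slide 13:

```python
from difflib import SequenceMatcher

mt = ("Veranderingen in de omgeving die het vegen van de planeet "
      "tot stand brengen").split()
pe = ("Veranderingen in de omgeving die het evenwicht op de planeet "
      "verstoren").split()

# Word-level alignment of two translation variants of the same sentence.
matcher = SequenceMatcher(a=mt, b=pe)
for tag, i1, i2, j1, j2 in matcher.get_opcodes():
    print(tag, mt[i1:i2], "->", pe[j1:j2])
# 'replace' opcodes point at the spans where the post-editor intervened.
```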

  17. Thank you for listening. For more information, contact joke.daems@ugent.be. Suggestions? Questions?

  18. Quantification of ST-related error sets
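Following slide 12, one way to quantify an ST-related error set is the share of translators who make an error on a given passage; a minimal sketch with hypothetical passage IDs and counts:

```python
# st_passage -> set of translators who made an error on it (hypothetical)
error_sets = {
    "ST-07": {"PE1", "PE2", "PE4"},
    "ST-12": {"PE3"},
}
n_translators = 5

# A passage where most translators err is likely a genuine ST problem.
for passage, offenders in sorted(error_sets.items()):
    share = len(offenders) / n_translators
    print(f"{passage}: {len(offenders)}/{n_translators} translators ({share:.0%})")
```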

  19. Inter-annotator agreement
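For agreement between two annotators labelling the same error spans, Cohen's kappa is a standard choice (with more annotators, Fleiss' kappa or Krippendorff's alpha are the usual options). A minimal self-contained sketch with hypothetical labels:

```python
from collections import Counter

def cohens_kappa(labels_a, labels_b):
    """Cohen's kappa for two annotators labelling the same items."""
    n = len(labels_a)
    observed = sum(a == b for a, b in zip(labels_a, labels_b)) / n
    freq_a, freq_b = Counter(labels_a), Counter(labels_b)
    expected = sum(freq_a[c] * freq_b[c] for c in freq_a) / (n * n)
    return (observed - expected) / (1 - expected)

a = ["grammar", "word sense", "grammar", "spelling"]
b = ["grammar", "meaning shift", "grammar", "spelling"]
print(cohens_kappa(a, b))  # ~0.64; 1.0 is perfect agreement, 0.0 chance level
```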
