
European Language Resources Association (ELRA)


Presentation Transcript


  1. European Language Resources Association (ELRA) HLT Evaluations Khalid CHOUKRI ELRA/ELDA 55 Rue Brillat-Savarin, F-75013 Paris, France Tel. +33 1 43 13 33 33 -- Fax. +33 1 43 13 33 30 Email: choukri@elda.org http://www.elda.org/ or http://www.elra.info/

  2. Presentation Outline • European Language Resources Association • Evaluation to drive research progress • Human Language Technologies evaluation(s): what, why, for whom, how … (some figures from TC-STAR) • Examples of evaluation campaigns • Demo … (available afterwards)

  3. European Language Resources Association: an improved infrastructure for data sharing & HLT evaluation • An association of users of Language Resources • An infrastructure for the evaluation of Human Language Technologies, providing resources, tools, methodologies and logistics

  4. The Association • Membership drive: ELRA is open to European & non-European institutions • Resources are available to members & non-members (pay per resource) • Some of the benefits of becoming a member: • Substantial discounts on LR prices (over 70%) • Substantial discounts on LREC registration fees • Legal and contractual assistance with respect to LR matters • Access to validation and production manuals (quality assessment) • Figures and facts about the market (results of ELRA surveys) • Newsletter and other publications • … New: Fidelity program … earn miles and get more benefits

  5. New ELRA status: 2005, extension of ELRA's official mission to promote LRs and evaluation for Human Language Technology (HLT): "The mission of the Association is to promote language resources (henceforth LRs) and evaluation for the Human Language Technology (HLT) sector in all their forms and all their uses." ELRA: an efficient infrastructure to serve the HLT community; strategies for the next decade …

  6. Courtesy of G. Thurmair, Malta workshop.

  7. What to evaluate … Levels of Evaluation [diagram: meeting points between evaluation and technology development, from basic research (long term / high risk) through technology development to application development (large return on investment); quantitative evaluation of research results, identification of bottlenecks and of technologies needed for or validated for applications, and usage evaluation covering usability, acceptability and evolution]

  8. What to evaluate … Levels of Evaluation • Basic Research Evaluation (validate research direction) • Technology Evaluation (assessment of a solution for a well-defined problem) • Usage Evaluation (end-users in the field) • Impact Evaluation (socio-economic consequences) • Programme Evaluation (funding agencies). Our concern here: technology evaluation.

  9. Why Evaluate? • Validate research hypotheses • Assess progress • Choose between research alternatives • Identify promising technologies (market) • Benchmarking … state of the art • Share knowledge … dedicated workshops • Feedback … funding agencies • Share costs ???

  10. Progress & Evaluation (Courtesy of Charles Wayne)

  11. Technology performance & Applications • Bad technology may be used to design useful applications • What about good technology? … (software industry)

  12. HLT Evaluations … for whom? • MT developers want to improve the "quality" of MT output • MT users (humans, or software, e.g. CLIR) want to improve productivity using the most suitable MT system (e.g. multilinguality) … • Basic Research Evaluation (validate research direction) • Technology Evaluation (assessment of a solution for a well-defined problem) • Usage Evaluation (end-users in the field) • Impact Evaluation (socio-economic consequences) • Programme Evaluation (funding agencies)

  13. For whom … essential for technology development • Sharing of information and knowledge between participants (how to get the best results, access to data, scoring tools) • Information obtained by industrialists: state of the art, technology choice, market strategy, new products • Information obtained by funding agencies: technology performance, progress/investment, priorities

  14. Some types of evaluations • Comparative evaluation: the same or similar control tasks and related data, with metrics that are agreed upon • Competitive vs. cooperative • Black-box evaluation … glass-box • Objective evaluation … subjective (human-based) • Corpus-based (test suites) • Quantitative measures … qualitative

  15. Comparative Evaluation of Technology • Used successfully in the USA by DARPA and NIST (since 1984) • Similar efforts in Europe on a smaller scale, mainly projects (EU-funded or national programmes) • Select a common "task" • Attract enough participants • Organize the campaign (protocol/metrics/data) • Follow-up workshop: interpret results and share information

  16. Requirements for an evaluation campaign • Reference Language Resources (data = the "truth") • Metric(s): automatic, human judgments … scoring software • Scale/range of performance to compare with (baseline) • Logistics management and reliability assessment: independent body • Participants: technology providers
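
To make these ingredients concrete, here is a minimal sketch of how a campaign organizer might tie them together: reference data, a pluggable metric, participant submissions, and a baseline score. All names and data formats below are illustrative assumptions, not taken from any ELRA or TC-STAR tool.

```python
# Minimal sketch of an evaluation-campaign scoring harness (illustrative only:
# real campaigns define their own data formats, metrics, and scoring software).
from typing import Callable, Dict

def run_campaign(
    references: Dict[str, str],              # segment id -> reference ("truth") text
    submissions: Dict[str, Dict[str, str]],  # participant -> {segment id -> hypothesis}
    metric: Callable[[str, str], float],     # scoring function: (reference, hypothesis) -> score
    baseline: float,                         # score to compare with, e.g. last campaign's best system
) -> Dict[str, float]:
    """Score every participant on the same reference data with the same metric."""
    results = {}
    for participant, hypotheses in submissions.items():
        scores = [metric(references[seg], hypotheses.get(seg, "")) for seg in references]
        results[participant] = sum(scores) / len(scores)
    # Rank participants and report the gap to the baseline.
    for participant, score in sorted(results.items(), key=lambda kv: kv[1], reverse=True):
        print(f"{participant}: {score:.3f} ({score - baseline:+.3f} vs baseline)")
    return results
```

The point of the sketch is the campaign logic itself: one shared reference set, one agreed metric, and an independent body running the same scoring code over every submission.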

  17. Overview of HLT evaluations • Activities by technology • Activities by geographical region • Players • Evaluation resources • Evaluation services • HLT Evaluation Portal … pointers to projects: http://www.hlt-evaluation.org/ • Let us list some well-known campaigns

  18. Examples of Evaluation Campaigns – Capitalization • Speech & Audio/sound • ASR: TC-STAR, CHIL, ESTER • TTS: TC-STAR, EVASY • Speaker identification (CHIL) • Speech 2 Speech Translation • Speech Understanding (Media) • Acoustic Person tracking • Speech activity detection, ….. • ………

  19. Examples of Evaluation Campaigns – Capitalization • Multimodal --- Video – Vision technologies • Face Detection • Visual Person Tracking • Visual Speaker Identification • Head Pose Estimation • Hand Tracking

  20. Some of the technologies being evaluated within CHIL (http://chil.server.de/; more at the CHIL/CLEAR workshops): A) Vision technologies A.1) Face Detection A.2) Visual Person Tracking A.3) Visual Speaker Identification A.4) Head Pose Estimation A.5) Hand Tracking B) Sound and Speech technologies B.1) Close-Talking Automatic Speech Recognition B.2) Far-Field Automatic Speech Recognition B.3) Acoustic Person Tracking B.4) Acoustic Speaker Identification B.5) Speech Activity Detection B.6) Acoustic Scene Analysis C) Content Processing technologies C.1) Automatic Summarisation … Question Answering

  21. Examples of Evaluation Campaigns – Capitalization • Written NLP & Content • IR, CLIR, QA (Amaryllis, EQUER, CLEF) • Text analysers (Grace, EASY) • MT (CESTA, TC-STAR) • Corpus alignment & processing (Arcade, Arcade-2, Romanseval/Senseval, …) • Term & terminology extraction • Summarisation

  22. Evaluation projects … the French scene (some projects also in NL, Italy, …) • Technolangue/Evalda: the Evalda platform consists of 8 evaluation campaigns focusing on spoken and written language technologies for the French language: • ARCADE II: evaluation of bilingual corpora alignment systems • CESART: evaluation of terminology extraction systems • CESTA: evaluation of machine translation systems (Ar, Eng => Fr) • EASY: evaluation of parsers • ESTER: evaluation of broadcast news automatic transcription systems • EQUER: evaluation of question answering systems • EVASY: evaluation of speech synthesis systems • MEDIA: evaluation of in-context and out-of-context dialogue systems

  23. Some details from relevant projects: CLEF, TC-STAR

  24. Example of Evaluation Initiatives • CLEF (Cross-Language Evaluation Forum): promoting research and development in Cross-Language Information Retrieval (CLIR) by • (i) providing an infrastructure for the testing and evaluation of information retrieval systems on European languages, in monolingual and cross-language contexts • (ii) creating test packages of reusable data which can be employed by system developers for benchmarking purposes
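
IR campaigns of this kind conventionally report rank-based measures such as (mean) average precision over a set of topics. The slide does not name the scoring method, so the following is only a generic sketch of average precision for a single topic, not CLEF's actual evaluation software.

```python
from typing import List, Set

def average_precision(ranked_docs: List[str], relevant: Set[str]) -> float:
    """Average precision for one topic: mean of precision@k at each rank k
    where a relevant document is retrieved, divided by the number of relevant docs."""
    hits, precisions = 0, []
    for k, doc_id in enumerate(ranked_docs, start=1):
        if doc_id in relevant:
            hits += 1
            precisions.append(hits / k)
    return sum(precisions) / len(relevant) if relevant else 0.0

# Example with made-up document ids: relevant docs d3 and d1 ranked 1st and 3rd,
# d5 never retrieved -> AP = (1.0 + 2/3) / 3 ≈ 0.56
print(average_precision(["d3", "d7", "d1", "d9"], {"d3", "d1", "d5"}))
```

Mean average precision for a system is then simply the mean of this value across all evaluation topics.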

  25. QA-CLEF: State of the art & Improvement

  26. Back to evaluation tasks within TC-STAR (http://www.tc-star.org/) • TC-STAR: speech-to-speech translation • Packages with speech recognition, speech translation, and speech synthesis • Development and test data, metrics & results • 2 categories of transcription and translation tasks: European Parliament Plenary Sessions (EPPS): English (En) and Spanish (Es); Broadcast News (Voice of America, VoA): Mandarin Chinese (Zh) and English (En)

  27. TC-STAR evaluations … 3 consecutive annual evaluations • SLT in the following directions: • Chinese-to-English (Broadcast News) • Spanish-to-English (European Parliament plenary speeches) • English-to-Spanish (European Parliament plenary speeches) • ASR in the following languages: • English (European Parliament plenary speeches) • Spanish (European Parliament plenary speeches) • Mandarin Chinese (Broadcast News) • TTS in Chinese, English and Spanish under the following conditions: • Complete system • Voice conversion, intralingual and crosslingual; expressive speech • Component evaluation

  28. Improvement of SLT performances (En=>Es); input = text, verbatim transcription, or speech recognition output

  29. Improvement of SLT performances (Es=>En); input = text, verbatim transcription, or speech recognition output
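
The two SLT progress slides above report automatic scores; campaigns of this kind typically use n-gram overlap metrics such as BLEU. The slides do not name the metric, so the following is only a simplified, single-reference illustration of the idea, not the actual TC-STAR scoring tool.

```python
import math
from collections import Counter

def simple_bleu(reference: str, hypothesis: str, max_n: int = 4) -> float:
    """Simplified single-reference BLEU: geometric mean of clipped n-gram
    precisions, times a brevity penalty (no smoothing; illustration only)."""
    ref, hyp = reference.split(), hypothesis.split()
    log_precisions = []
    for n in range(1, max_n + 1):
        ref_ngrams = Counter(tuple(ref[i:i + n]) for i in range(len(ref) - n + 1))
        hyp_ngrams = Counter(tuple(hyp[i:i + n]) for i in range(len(hyp) - n + 1))
        overlap = sum(min(count, ref_ngrams[g]) for g, count in hyp_ngrams.items())
        total = sum(hyp_ngrams.values())
        if overlap == 0 or total == 0:
            return 0.0  # no n-gram match at this order
        log_precisions.append(math.log(overlap / total))
    brevity_penalty = 1.0 if len(hyp) >= len(ref) else math.exp(1 - len(ref) / len(hyp))
    return brevity_penalty * math.exp(sum(log_precisions) / max_n)
```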

  30. Improvement of ASR performances (En, Es)
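
ASR progress in such campaigns is conventionally measured with word error rate (WER): the word substitutions, insertions and deletions needed to turn the system output into the reference transcript, divided by the reference length. A minimal sketch (the example sentence is made up for illustration):

```python
def word_error_rate(reference: str, hypothesis: str) -> float:
    """WER = (substitutions + insertions + deletions) / reference length,
    computed here as a word-level edit distance."""
    ref, hyp = reference.split(), hypothesis.split()
    # d[i][j] = edit distance between ref[:i] and hyp[:j]
    d = [[0] * (len(hyp) + 1) for _ in range(len(ref) + 1)]
    for i in range(len(ref) + 1):
        d[i][0] = i
    for j in range(len(hyp) + 1):
        d[0][j] = j
    for i in range(1, len(ref) + 1):
        for j in range(1, len(hyp) + 1):
            substitution = d[i - 1][j - 1] + (ref[i - 1] != hyp[j - 1])
            d[i][j] = min(substitution, d[i - 1][j] + 1, d[i][j - 1] + 1)
    return d[len(ref)][len(hyp)] / max(len(ref), 1)

# One deletion against a five-word reference -> WER = 0.2
print(word_error_rate("the session is now open", "the session now open"))
```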

  31. Human evaluation of translations … En=>Es, adequacy (1-5); commercial systems; combinations

  32. End-to-End • The end-to-end evaluation is carried out for 1 translation direction: English-to-Spanish • Evaluation of the ASR (Rover) + SLT (Rover) + TTS (UPC) system • Same segments as for the SLT human evaluation • Evaluation tasks: • Adequacy: comprehension test • Fluency: judgement test with several questions related to fluency and also usability of the system

  33. Fluency questionnaire • [Understanding] Do you think that you have understood the message? 1: Not at all … 5: Yes, absolutely • [Fluent Speech] Is the speech in good Spanish? 1: No, it is very bad … 5: Yes, it is perfect • [Effort] Rate the listening effort. 1: Very high … 5: Low, as natural speech • [Overall Quality] Rate the overall quality of this audio sample. 1: Very bad, unusable … 5: It is very useful
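
One plausible way such 1-5 judgments are turned into the scores shown on the next slide is a simple mean opinion score per question. The sketch below uses made-up ratings purely to show the aggregation; it is not actual TC-STAR data or analysis code.

```python
from statistics import mean
from typing import Dict, List

# Hypothetical judge ratings per questionnaire item, on the 1-5 scales above.
ratings: Dict[str, List[int]] = {
    "understanding":    [4, 5, 3, 4],
    "fluent_speech":    [3, 4, 3, 3],
    "listening_effort": [2, 3, 3, 2],
    "overall_quality":  [3, 4, 3, 3],
}

# Mean opinion score per question, averaged over judges and audio samples.
for question, scores in ratings.items():
    print(f"{question}: {mean(scores):.2f}")
```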

  34. End-to-end results (subjective test: 1…5)

  35. TC-STAR tasks • More results from the 2007 campaign: http://www.tc-star.org/ • Evaluation packages available

  36. Some concluding remarks on technology evaluation • It saves developers time and money • It helps assess progress accurately • It produces reusable evaluation packages • It helps to identify areas where more R&D is needed
