
Transcription System using Automatic Speech Recognition (ASR) for the Japanese Parliament (Diet)

Tatsuya Kawahara (Kyoto University, Japan)


Presentation Transcript


  1. Transcription System using Automatic Speech Recognition (ASR) for the Japanese Parliament (Diet) Tatsuya Kawahara (Kyoto University, Japan)

  2. Brief Biography
  • 1995 Ph.D. (Information Science), Kyoto Univ.
  • 1995 Associate Professor, Kyoto Univ.
  • 1995-96 Visiting Researcher, Bell Labs., USA
  • 2003- Professor, Kyoto Univ.
  • 2003-06 IEEE SPS Speech TC member
  • 2006- Technical Consultant, The House of Representatives, Japan
  • Published ~150 papers on automatic speech recognition (ASR) and its applications
  • Web: http://www.ar.media.kyoto-u.ac.jp/~kawahara/

  3. Contents
  • Review of ASR technology
  • ASR system for the Japanese Diet
  • Next-generation transcription system of the Japanese Diet

  4. Trend of ASR
  [Chart: speech styles plotted by formality and number of speakers, ranging from informal phone conversations, business meetings, and classroom lectures (spontaneous speech) to parliament-style speech, formal presentations, broadcast news, and reading/re-speaking (formal); number of speakers ranges from one to multiple.]

  5. Review of ASR technology (1/2)
  • Broadcast news [world-wide]
    • Professional anchors, mostly reading manuscripts
    • Accuracy over 90%
  • Public speaking, oral presentations [Japan]
    • Ordinary people making fluent speech
    • Accuracy ~80% (close-talking mic.)
  • Classroom lectures [world-wide]
    • More informal speaking
    • Accuracy ~60% (pin mic.)

  6. Review of ASR technology (2/2)
  • Telephone conversations [US]
    • Ordinary people, speaking casually
    • Accuracy 60~85%
  • Business meetings [Europe/US]
    • Ordinary people, speaking less formally
    • Accuracy 70% (close mic.), 60% (distant mic.)
  • Parliamentary meetings [Europe/Japan]
    • Politicians speaking formally
    • EU plenary sessions: 90%
    • Japan committee meetings: 85%

  7. Deployment of ASR in Parliaments & Courts
  • Some countries
    • Steno-mask & voice writing
    • Re-speaking → commercial dictation software
  • Some local governments in Japan
    • Direct recognition of politicians' speech
  • Japanese courts
    • ASR for efficient retrieval from recorded sessions
  • Japanese Parliament (= Diet)
    • To introduce ASR: direct recognition of politicians' speech
    • Mostly in committee meetings … interactive, spontaneous, sometimes excited

  8. Language-specific Issues in Japanese
  • Need to convert kana (phonetic symbols) to kanji
    • Conversion is ambiguous → many homonyms
      (ex.) KAWAHARA (カワハラ) → 河原 (not 川原)
  • Very hard to type in real time
    • Only a limited number of stenographers using special keyboards can
  • Difference between verbatim style and transcript style
      (ex.) おききしたいのですが → ききたい(のです)
    • Re-speaking is not so simple: need to rephrase in many cases

  9. ASR Architecture
  [Block diagram]
  • Signal processing (depends on the input condition) → acoustic features X
  • Acoustic model P(X|W), built from phone models P(X|P) over /a, i, u, e, o, …/
  • Dictionary P(P|W), e.g. 京都 = ky o: t o
  • Language model P(W), e.g. 京都+の+天気 (depends on the application)
  • Recognition engine (decoder): output W = argmax P(W|X), where P(W|X) ∝ P(W)・P(X|W) (written out below)
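
  For reference, the decoding rule in the diagram can be written out in full. This is the standard Bayes-rule formulation implied by the slide; the max over phone sequences (the Viterbi approximation) is a common assumption, not something stated here.

```latex
% W: word sequence, X: acoustic features, P: phone sequence
% Decoder output (Bayes rule; P(X) does not affect the argmax):
\hat{W} = \arg\max_W P(W \mid X) = \arg\max_W P(W)\, P(X \mid W)
% Acoustic likelihood via the dictionary and the phone models
% (a sum over phone sequences, commonly approximated by its maximum):
P(X \mid W) = \sum_P P(X \mid P)\, P(P \mid W) \approx \max_P P(X \mid P)\, P(P \mid W)
```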

  10. Current Status of ASR
  • Unsolved problems
    • Spontaneous/conversational speech
    • Noisy environments, including distant microphones
  • Ad-hoc solutions
    • Collect large-scale "matched" data (corpus)
      • Same acoustic environment and speakers (10 hours~)
      • Cover the same topics and vocabulary (~M words)
    • Prepare dedicated acoustic & language models
    • Huge cost in development & maintenance

  11. Contents
  • Review of ASR technology
  • ASR system for the Japanese Diet
  • Next-generation transcription system of the Japanese Diet

  12. ASR Research at Kyoto Univ.
  • One of the pioneers, since the 1960s
  • Development of the free software Julius
  • Research in spontaneous speech recognition
    • 1999- Oral presentations
    • 2001- TV discussions
    • 2004- Classroom lectures
    • 2003- Parliamentary meetings

  13. Free ASR Software: Julius
  • Developed since 1997 at Kyoto Univ. & other sites
  • Open source → multi-platform (Linux, Mac, Windows, iPhone)
  • Open architecture: independent of specific acoustic & language models
    → ported to many languages
    → ported to many applications (telephony, robots, …)
  • Standard model for Japanese
  • Widely used research platform
  • http://julius.sourceforge.jp

  14. Corpus of Parliamentary Meetings
  • Example excerpts of faithful transcripts, with fillers marked in braces (see the parsing sketch below):
  {えー}それでは少し、今{そのー}最初に大臣からも、{そのー}貯蓄から投資へという流れの中に{ま}資するんじゃないだろうかとかいうような話もありましたけれども、{だけど/だけれども}、{まあ}あなたが言うと本当にうそらしくなる{んで/ので}{ですね、えー}もう少し{ですね、あのー}これは{あー}財務大臣に{えー}お尋ねをしたいんです{が}。
  {ま}その{あの}見通しはどうかということでありますけれども、これについては、{あのー}委員御承知の{その}「改革と展望」の中で{ですね}、我々の今{あのー}予測可能な範囲で{えー}見通せるものについてはかなりはっきりと書かせていただいて(い)るつもりでございます。
  • Covers all major committees and plenary sessions
  • 200 hours, 2.4M words
  • Faithful transcripts of utterances, including fillers, aligned with the official minutes
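
  A minimal sketch of how such markup might be processed, assuming (my reading of the excerpts above, not stated on the slide) that braces mark fillers, a slash inside braces separates the verbatim form from the transcript-style form, and parentheses mark sounds dropped in speech but restored in the minutes:

```python
import re

# Turn the faithful-transcript markup shown above into (a) the verbatim utterance
# and (b) a cleaned, minutes-style string. The markup conventions are assumed,
# based on the examples, not taken from the slide.

def verbatim(text: str) -> str:
    text = re.sub(r"\{([^{}]*)/[^{}]*\}", r"\1", text)  # keep the spoken variant
    text = re.sub(r"\{([^{}]*)\}", r"\1", text)          # keep fillers as spoken
    text = re.sub(r"\(([^()]*)\)", "", text)             # drop sounds not actually spoken
    return text

def cleaned(text: str) -> str:
    text = re.sub(r"\{[^{}]*/([^{}]*)\}", r"\1", text)   # keep the transcript-style variant
    text = re.sub(r"\{[^{}]*\}", "", text)               # remove fillers
    text = re.sub(r"\(([^()]*)\)", r"\1", text)          # restore dropped sounds
    return text

print(cleaned("{まあ}あなたが言うと本当にうそらしくなる{んで/ので}"))
# -> あなたが言うと本当にうそらしくなるので
```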

  15. ASR Modules Oriented for Spontaneous Speech
  [Block diagram: the same architecture as on slide 9, with every model trained from the corpus using innovative techniques]
  • Acoustic model P(X|W): covers poor articulation
  • Dictionary: covers pronunciation variations (illustrative entries below)
  • Language model P(W): covers disfluencies & colloquial expressions
  • Recognition engine (decoder): P(W|X) ∝ P(W)・P(X|W)
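
  As a concrete illustration of what "covering pronunciation variations" and "covering disfluencies" can mean at the model level, a sketch with made-up entries (these are not from the Diet system's models):

```python
# Hypothetical model entries, for illustration only.

# Dictionary: one word mapped to several pronunciation variants,
# so reduced or poorly articulated forms still match.
pron_dict = {
    "です": ["d e s u", "d e s"],        # full form vs. devoiced final vowel
    "ている": ["t e i r u", "t e r u"],  # full form vs. colloquial contraction
}

# Language-model vocabulary explicitly includes fillers, so the decoder can
# hypothesize them instead of forcing them onto content words.
filler_vocab = ["えー", "あのー", "そのー", "まあ", "ですね"]
```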

  16. ASR Performance
  • Accuracy
    • Word accuracy 85% (character accuracy 87%); the metric is sketched below
    • Plenary sessions: 90%
    • Committee meetings: 80~87%
  • 90% seems almost perfect
    • No commercial software can achieve this!!
  • Real-time factor 1-3
    • Latency within 10 min.
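
  The accuracy quoted above is standard word accuracy, 1 - (substitutions + deletions + insertions) / reference length. A minimal sketch of computing it via edit distance; this is a generic implementation of the standard metric, not the scoring tool actually used for this system:

```python
def word_accuracy(ref_words, hyp_words):
    """Word accuracy = 1 - (S + D + I) / N, via Levenshtein alignment."""
    n, m = len(ref_words), len(hyp_words)
    # d[i][j] = minimum number of edits to turn ref[:i] into hyp[:j]
    d = [[0] * (m + 1) for _ in range(n + 1)]
    for i in range(n + 1):
        d[i][0] = i                                   # deletions
    for j in range(m + 1):
        d[0][j] = j                                   # insertions
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            sub = 0 if ref_words[i - 1] == hyp_words[j - 1] else 1
            d[i][j] = min(d[i - 1][j] + 1,            # deletion
                          d[i][j - 1] + 1,            # insertion
                          d[i - 1][j - 1] + sub)      # match / substitution
    return 1.0 - d[n][m] / n

# e.g. word_accuracy("お尋ね を し たい".split(), "お尋ね し たい".split()) -> 0.75
```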

  17. Related Techniques
  • Noise suppression & dereverberation
    • Not serious once matched training data is available
  • Speaker change detection
    • Preferred, but the current technology level seems insufficient
  • Auto-edit (sketched below)
    • Filler removal → easy
    • Colloquial expression replacement → non-trivial
    • Period insertion → still at the research stage
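
  To make the easy end of the auto-edit step concrete, a minimal rule-based sketch over an ASR word sequence. The filler set and replacement table are illustrative, drawn from the kinds of items seen in the corpus excerpts on slide 14, and are not the Diet system's actual rules; period insertion is left out because it is still at the research stage:

```python
# Illustrative rule-based auto-edit over an ASR word sequence.
FILLERS = {"えー", "あのー", "そのー", "まあ", "ですね"}
COLLOQUIAL_TO_FORMAL = {"んで": "ので", "だけど": "だけれども"}

def auto_edit(words):
    edited = []
    for w in words:
        if w in FILLERS:
            continue                                   # filler removal: easy
        edited.append(COLLOQUIAL_TO_FORMAL.get(w, w))  # word-level replacement covers only the easy cases
    return edited

print("".join(auto_edit(["まあ", "うそ", "らしく", "なる", "んで"])))
# -> うそらしくなるので
```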

  18. Contents
  • Review of ASR technology
  • ASR system for the Japanese Diet
  • Next-generation transcription system of the Japanese Diet

  19. The House of Representatives in Japan
  • 2005: stopped recruiting stenographers
  • 2006: investigated ASR technology for the new transcription system
  • 2007: developed a prototype system and made preliminary evaluations
  • 2008: system design
  • 2009: system implementation
  • 2010: trial and deployment

  20. ASR System: Kyoto Univ. Models Integrated into the NTT Engine
  [Block diagram: the same architecture as on slide 9 (signal processing, acoustic model P(X|P) over /a, i, u, e, o, …/, dictionary P(P|W) with e.g. 京都 = ky o: t o, language model P(W) with e.g. 京都+の+天気, decoder with output W = argmax P(W|X), P(W|X) ∝ P(W)・P(X|W)), with components attributed to NTT Corp. (recognition engine), Kyoto Univ. (models), and the House.]

  21. Issues in the Post-Editor
  • For efficient correction of ASR errors and cleaning of the transcript into document style
  • Easy reference to the original speech (+video)
    • By time, by utterance, by character (cursor position); see the sketch below
    • Speech replay can be sped up and slowed down
  • Word-processor interface (screen editor), not a line editor
    • Lets editors concentrate on making correct sentences
  • Serious misunderstanding between system developers and stenographers!!
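
  One plausible way to realize "reference by character (cursor)" is to keep the word-level time stamps the recognizer already produces and map the cursor offset back onto them. The sketch below assumes a hypothetical (word, start, end) layout; it is not the actual post-editor's data model:

```python
from bisect import bisect_right

# Hypothetical ASR output: (word, start_sec, end_sec) in utterance order.
asr_words = [("京都", 0.00, 0.45), ("の", 0.45, 0.55), ("天気", 0.55, 1.10)]

# Character offset at which each word starts in the displayed transcript.
starts, offset = [], 0
for word, _, _ in asr_words:
    starts.append(offset)
    offset += len(word)

def time_at_cursor(char_pos: int) -> float:
    """Return the audio time (sec) of the word under the given character offset."""
    i = max(0, bisect_right(starts, char_pos) - 1)
    return asr_words[i][1]

print(time_at_cursor(3))   # cursor on "天気" -> 0.55
```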

  22. System Evaluation (@Kyoto)
  [Graph: edit time (min, 3-10) vs. ASR accuracy (50-95%), comparing "type from scratch" with "post-edit ASR output".]
  • Subjects: 18 students
  • Post-editing ASR output is more efficient than typing from scratch, regardless of the accuracy
    → What is hard for ASR is also hard for humans

  23. System Evaluation (@Kyoto)
  [Graph: usability score of ASR (1-7) vs. ASR accuracy (50-95%).]
  • Subjective evaluation correlates with ASR accuracy
  • Threshold of around 75% accuracy for ASR to be preferred

  24. System Evaluation (@House)
  • Subjects: 8 stenographers
  • System: prototype
  • The ASR-based system reduced the edit time compared with the current shorthand system
    • 78 min. → 68 min. (for a 5-min. segment)
  • Threshold at around 80% ASR accuracy
    • At 75% accuracy, edit time degrades and half of the subjects are negative about using ASR

  25. Side Effects of the ASR-based System
  • Everything (text/speech/video) is digitized and hyper-linked → efficient search & retrieval
  • Less burden? → may work on longer segments??
  • Significantly less special training needed than for the current shorthand system

  26. Conclusions
  • ASR of parliamentary meetings is feasible, given a large collection of data
    • ~100 hours of speech
    • ~1G words of text (minutes)
    • Accuracy 85-90%
  • Effective post-processing is still under investigation
  • Automatic translation research is also ongoing
