
DIVINES SRIV Workshop

This study examines the impact of word detection variability on the performance of automatic audio indexing of course lectures. The goal is to provide searchable access to recorded lectures, but human annotation is expensive. Automatic speech recognition tools can facilitate search, but variability in dialect, speaking style, recording conditions, and task domain poses challenges. Various techniques and evaluation metrics are explored to improve information retrieval performance.



Presentation Transcript


  1. DIVINES SRIV Workshop The Influence of Word Detection Variability on IR Performance in Automatic Audio Indexing of Course Lectures Saturday, May 20, 2006 Richard Rose1, Renato Rispoli1, and Jon Arrowood2

  2. Indexing Audio Lectures • Existing multimedia resources have the potential to make recorded university lectures and seminars accessible online to a wider audience • It is important that the audio lectures be searchable … • … but human annotation of large corpora is expensive • Automatic Speech Recognition (ASR)-based tools can be used to facilitate search of the untranscribed audio material

  3. An Audio Search Tool for Course Lectures • [Screenshot of the user interface, developed by Nexidia: a text query term is entered, retrieved segments from the lecture audio files are listed alongside the synchronized presentation slides, and each retrieved segment can be clicked to listen to the audio]

  4. Audio Indexing of Lectures - Motivation • Goal – Provide Disabled and Non-Disabled Students and Scholars Access to a Large Collection (thousands of hours) of Audio Lectures and Seminars • Multimedia – Permit Synchronization and Interpretation of audio with Lecture Slides and Video Content • Challenges – Large variability in dialect, speaking style, recording conditions, and task domain

  5. Issues in Audio Indexing • Acoustic – Extraction of query terms from audio • Must be extremely fast during search (>>1,000 X real-time) • Information Retrieval (IR) – Definition of relevance measure • Score query against hypothesized audio segment • Task Domain - Definition of the notion of relevance • When does relevant segment begin and end? • Evaluation Metrics • Acoustic: ASR word error rate, Keyword detection performance • IR: Precision / Recall of relevant segments • Task Domain: Increase in Productivity for the target user community

  6. Audio Indexing Task Domains • Several techniques have been applied to indexing of spoken audio in several task domains: • [Rose, 1991]: • Task: Topic Spotting from Conversational Speech • Method: Keyword spotting • [Foote et al, 1997]: • Task: Retrieval of multimedia mail messages (Video mail browser) • Method: Phone lattice based open vocabulary indexing • [Garofolo, 2000]: • Task: Spoken Document Retrieval (SDR) from Broadcast News • Method: Large vocabulary continuous speech recognition (LVCSR) • Course Lectures: • How to define a topic of interest? • How to segment a continuous lecture by topic? • How to define query terms and extract them from audio?

  7. A Preliminary Study of Audio Indexing • Phone Lattice-Based Search Engine • Off-line Lattice Generation (50 x real-time): • Obtain a phonetic lattice from each utterance • Search (100,000 x real-time): • Submit text-based keyword queries, • Obtain the phonetic expansion, • Find the best match in the phone lattice
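To make the query side of this pipeline concrete, here is a minimal Python sketch: a text query term is expanded into a phone string and scored against a pre-computed lattice. The names (PhoneArc, LEXICON, expand_query, best_match) and the flat arc-list lattice representation are illustrative assumptions, not the actual data structures of the Nexidia engine.

```python
from dataclasses import dataclass

@dataclass
class PhoneArc:
    phone: str
    start: float      # start time in seconds
    end: float        # end time in seconds
    posterior: float  # acoustic posterior probability of the phone

# Toy pronunciation lexicon; in practice a dictionary or grapheme-to-phoneme
# model would supply the phonetic expansion of the text query term.
LEXICON = {"waveguide": ["w", "ey", "v", "g", "ay", "d"]}

def expand_query(term: str) -> list[str]:
    """Map a text query term to a phone string (hypothetical lexicon lookup)."""
    return LEXICON.get(term.lower(), list(term.lower()))

def best_match(lattice: list[PhoneArc], phones: list[str]) -> float:
    """Best product-of-posteriors score for the phone string over a run of
    consecutive arcs in the (already generated, linearized) lattice."""
    best = 0.0
    for i in range(len(lattice) - len(phones) + 1):
        window = lattice[i:i + len(phones)]
        if [a.phone for a in window] == phones:
            score = 1.0
            for arc in window:
                score *= arc.posterior
            best = max(best, score)
    return best
```

Because the lattices are generated once off-line, only expand_query and best_match run at query time, which is what allows search to proceed many thousands of times faster than real time.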

  8. Evaluating Information Retrieval Performance • Database – Twelve hours of lectures from the McGill ECE Photonics course (Prof. Andrew Kirk) • Domain Experts – Course TAs • Target Domain – Example questions taken from course material … • Sample question: “Explain the modal properties of a conducting waveguide from the point of view of destructive and constructive interference” • Relevance Labeling • Domain experts identify lecture segments that are relevant to a question • A lecture segment is the audio that overlaps a given lecture slide
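As a small illustration of the segmentation convention used for relevance labeling, the sketch below derives (start, end) segment boundaries from slide-change timestamps. The function name and the timestamp input are hypothetical; the slides do not describe how the slide/audio alignment was obtained.

```python
def slide_segments(slide_times: list[float], lecture_end: float) -> list[tuple[float, float]]:
    """Each lecture segment is the audio overlapping one slide: it runs from
    the time the slide appears until the next slide (or the end of the lecture).
    slide_times are the slide-change times in seconds, in ascending order."""
    bounds = slide_times + [lecture_end]
    return [(bounds[i], bounds[i + 1]) for i in range(len(slide_times))]

# Example: three slides shown at 0 s, 95 s, and 240 s in a 420 s lecture
# -> segments (0, 95), (95, 240), (240, 420)
print(slide_segments([0.0, 95.0, 240.0], 420.0))
```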

  9. Relevance Measure • Given an audio segment k of length T_k seconds • For a query containing N query terms • Obtain the hypothesized occurrences of each query term i together with their acoustic posterior scores • Combine the weighted posterior scores to obtain a measure of relevance for segment k w.r.t. the query • [Figure: hypothesized occurrences of term i with their acoustic scores within audio segment k]
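A minimal sketch of the combination step described on this slide, assuming each term's hypothesized occurrences contribute their acoustic posterior scores additively; the exact weighting used in the paper is not spelled out here, so the function below is illustrative only.

```python
def raw_relevance(hits_per_term: dict[str, list[float]]) -> float:
    """hits_per_term maps each query term to the acoustic posterior scores of
    its hypothesized occurrences within audio segment k."""
    return sum(sum(scores) for scores in hits_per_term.values())
```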

  10. Relevance Measure - Normalization • The relevance measure (formula shown on the slide) has two normalization components: • Acoustic Confidence Normalization: • A function of the average Figure of Merit observed for each query term • FOM: average of the detection probability over a range of false alarm rates • Document Length Normalization: • An estimate of the number of words in audio segment k • Relies on an estimate of the speaking rate (words/sec.)
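Building on the raw combination above, the sketch below applies the two normalization components named on this slide. The per-term FOM values and the default speaking rate of 2.5 words/sec are placeholder assumptions; the actual quantities are estimated from data.

```python
def normalized_relevance(hits_per_term: dict[str, list[float]],
                         fom_per_term: dict[str, float],
                         segment_seconds: float,
                         words_per_second: float = 2.5) -> float:
    # Acoustic confidence normalization: weight each query term's combined
    # posterior score by its average Figure of Merit (FOM).
    acoustic = sum(fom_per_term.get(term, 1.0) * sum(scores)
                   for term, scores in hits_per_term.items())
    # Document length normalization: divide by the estimated number of words
    # in segment k, obtained from the segment duration and the speaking rate.
    estimated_words = max(segment_seconds * words_per_second, 1.0)
    return acoustic / estimated_words
```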

  11. Acoustic Variability • Impact of the length of the phonetic baseform on word detection performance • [Figure: Figure of Merit vs. baseform length (word duration in phones), showing the effect of word length on detection performance; y-axis: probability of detection] • Figure of Merit (FOM): average over the range from 0 to 10 false alarms per keyword per hour
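The sketch below computes the FOM as defined on this slide: the detection probability averaged over operating points between 0 and 10 false alarms per keyword per hour. The (FA/KW/HR, P_det) input pairs, the 0.5-FA sampling grid, and the linear interpolation between points are assumptions made for illustration.

```python
def figure_of_merit(roc: list[tuple[float, float]], max_fa: float = 10.0) -> float:
    """roc: (false alarms per keyword per hour, detection probability) pairs."""
    roc = sorted(roc)

    def p_det(fa: float) -> float:
        # Linearly interpolate the detection probability at fa FA/KW/HR.
        x0, y0 = 0.0, 0.0
        for x, y in roc:
            if x >= fa:
                return y if x == x0 else y0 + (y - y0) * (fa - x0) / (x - x0)
            x0, y0 = x, y
        return roc[-1][1]

    # Average P_det on a 0.5 FA/KW/HR grid from 0 to max_fa.
    grid = [0.5 * i for i in range(int(max_fa / 0.5) + 1)]
    return sum(p_det(fa) for fa in grid) / len(grid)

# Example: figure_of_merit([(0.0, 0.2), (2.0, 0.6), (10.0, 0.9)])
```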

  12. Acoustic Variability • Impact of the accuracy of phonetic baseforms on word spotting performance • Word pronunciation: comparison of two phonetic expansions of the word “dielectric”: /d ay l eh k t r ih k/ vs. /d iy l eh k t r ih k/ • [Figure: probability of detection vs. false alarms per keyword per hour (FA/KW/HR) for the two expansions]

  13. IR Performance • Define a relevance metric based on the normalized frequency of occurrence of keywords chosen by domain experts • Rank segments based on the relevance metric • Plot Results …
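A small sketch of the evaluation loop implied by this slide: segments are ranked by their relevance score, and precision/recall against the expert-labeled relevant segments is computed at each rank cutoff, giving the points for the plot. The segment identifiers and scores shown are hypothetical inputs.

```python
def precision_recall(scores: dict[str, float],
                     relevant: set[str]) -> list[tuple[float, float]]:
    """Return (precision, recall) at every rank cutoff, with segments ranked
    by decreasing relevance score."""
    ranked = sorted(scores, key=scores.get, reverse=True)
    curve, hits = [], 0
    for rank, segment in enumerate(ranked, start=1):
        hits += segment in relevant
        curve.append((hits / rank, hits / max(len(relevant), 1)))
    return curve

# Example: precision_recall({"seg1": 0.9, "seg2": 0.4, "seg3": 0.7}, {"seg1", "seg3"})
```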
