
Evaluating the Effect of Predicting Oral Reading Miscues



Presentation Transcript


  1. Evaluating the Effect of Predicting Oral Reading Miscues Satanjeev Banerjee, Joseph Beck, Jack Mostow Project LISTEN (www.cs.cmu.edu/~listen) Carnegie Mellon University Funding: NSF IERI

  2. Why Predict Miscues? • Reading Tutor helps children learn to read. • Speech recognizer listens for miscues (reading errors) • E.g.: listen for “hat” if the sentence to be read contains the word “hate” • Accurate miscue prediction helps miscue detection.

  3. Real Word Substitutions • Miscues = substitutions, omissions, insertions • Real word substitution = misreading the target word as another real word • E.g., reading “hat” instead of “hate” • Most miscues are real word substitutions • ICSLP-02: predicted real word substitutions • Here: evaluate the effect on substitution detection

  4. How to Evaluate Substitution Detection? • Substitution detection rate = (# substitutions detected) / (# substitutions the child made) • False alarm rate = (# false alarms) / (# words correctly read)
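
The two rates above can be computed directly from an alignment of the recognizer's output against a transcript of what the child actually read. A minimal Python sketch, assuming each word has been labeled with whether the child substituted it and whether the recognizer flagged it (this event format is illustrative, not from the slides):

```python
def detection_and_false_alarm_rates(events):
    """Each event is a dict with keys:
       'substitution' -- the child misread this word, and
       'flagged'      -- the recognizer reported a miscue on this word."""
    substitutions = [e for e in events if e["substitution"]]
    correct_reads = [e for e in events if not e["substitution"]]

    detected = sum(1 for e in substitutions if e["flagged"])
    false_alarms = sum(1 for e in correct_reads if e["flagged"])

    detection_rate = detected / len(substitutions) if substitutions else 0.0
    false_alarm_rate = false_alarms / len(correct_reads) if correct_reads else 0.0
    return detection_rate, false_alarm_rate


if __name__ == "__main__":
    # Toy example: 2 substitutions (1 detected) and 2 correct words (1 false alarm).
    events = [
        {"substitution": True,  "flagged": True},   # detected substitution
        {"substitution": True,  "flagged": False},  # missed substitution
        {"substitution": False, "flagged": True},   # false alarm
        {"substitution": False, "flagged": False},  # correctly accepted word
    ]
    print(detection_and_false_alarm_rates(events))  # (0.5, 0.5)
```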

  5. Evaluation Data • Sentences read by 25 children aged 6 to 10

  6. Rote Method • Uses the University of Colorado miscue database. • For each target word: • Sort its observed substitutions by the number of children who made them. • Predict that the top n substitutions will recur for this word.
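
A minimal sketch of the rote method as described above, assuming the University of Colorado miscue database has been loaded as (target word, substitution, child id) rows; the data layout and helper names are assumptions for illustration:

```python
from collections import Counter, defaultdict

def build_rote_table(miscue_rows):
    """For each target word, rank each observed substitution by how many
    distinct children made it."""
    children_per_sub = defaultdict(lambda: defaultdict(set))
    for target, substitution, child_id in miscue_rows:
        children_per_sub[target][substitution].add(child_id)
    table = {}
    for target, subs in children_per_sub.items():
        counts = Counter({sub: len(kids) for sub, kids in subs.items()})
        table[target] = [sub for sub, _ in counts.most_common()]
    return table

def rote_predict(table, target_word, n):
    """Predict that the top-n previously observed substitutions will recur."""
    return table.get(target_word, [])[:n]

# Usage with made-up rows: children 1 and 2 both read "hat" for "hate".
rows = [("hate", "hat", 1), ("hate", "hat", 2), ("hate", "had", 3)]
table = build_rote_table(rows)
print(rote_predict(table, "hate", 2))  # ['hat', 'had']
```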

  7. Extrapolative Method • Predict the probability that a word is a likely substitution for another word • Pr(substitution “hat” | target “hate”) • Use machine learning to induce a classifier • Train it on the University of Colorado miscue database.

  8. Extrapolative Method cont’d • Given a target word, predict a substitution if Pr(substitution candidate | target word) > threshold
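
A hedged sketch of slides 7-8, using scikit-learn logistic regression as a stand-in classifier; the string-overlap features and the training rows below are illustrative assumptions, not the features or data the authors actually used:

```python
from sklearn.linear_model import LogisticRegression

def features(target, candidate):
    """Toy features describing how (target, candidate) pairs relate;
    these are assumptions for illustration only."""
    shared = len(set(target) & set(candidate))
    return [
        shared,                                      # letters in common
        abs(len(target) - len(candidate)),           # length difference
        1.0 if target[0] == candidate[0] else 0.0,   # same first letter
    ]

# Training pairs: (target, candidate, 1 if the candidate was a real substitution).
train = [
    ("hate", "hat", 1), ("hate", "hit", 1), ("hate", "dog", 0),
    ("candy", "can", 1), ("candy", "tree", 0),
]
X = [features(t, c) for t, c, _ in train]
y = [label for _, _, label in train]
clf = LogisticRegression().fit(X, y)

def extrapolative_predict(target, candidate, threshold=0.5):
    """Predict a substitution if Pr(candidate | target) exceeds the threshold."""
    prob = clf.predict_proba([features(target, candidate)])[0, 1]
    return prob > threshold, prob

print(extrapolative_predict("hate", "hat"))
```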

  9. Combining Rote and Extrapolative • Aim: get n substitutions for a given word. • Step 1: Use the top n substitutions from rote. • Step 2: If rote predicts only k substitutions, k < n, then add the top n – k substitutions from extrapolative. • Intuition: rote is more accurate, so use it when available; if not available, fall back on extrapolative.
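
A sketch of the backoff scheme on slide 9, assuming the rote and extrapolative predictions are already available as ranked lists per target word (the table names are hypothetical):

```python
def combined_predict(target_word, n, rote_table, extrapolative_ranked):
    """rote_table: target word -> substitutions ranked by # children (rote method).
       extrapolative_ranked: target word -> substitutions ranked by classifier probability."""
    predictions = rote_table.get(target_word, [])[:n]     # Step 1: top-n rote predictions
    if len(predictions) < n:                              # Step 2: rote gave only k < n
        for candidate in extrapolative_ranked.get(target_word, []):
            if candidate not in predictions:
                predictions.append(candidate)             # fill with top n - k extrapolative
            if len(predictions) == n:
                break
    return predictions

# Usage: rote knows one substitution for "hate"; extrapolative fills the rest.
rote = {"hate": ["hat"]}
extrapolative = {"hate": ["hit", "hat", "ate"]}
print(combined_predict("hate", 3, rote, extrapolative))  # ['hat', 'hit', 'ate']
```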

  10. Results from Combining Algorithms • Truncation = the first 2 to n-2 phonemes of a word (n = number of phonemes); models false starts. [/K AE/ and /K AE N/ for /K AE N D IY/ (“candy”); none for “hate”] • Theoretical max = use only those miscues the child actually made.
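
The truncation rule above (keep phoneme prefixes of length 2 through n-2, where n is the word's length in phonemes) can be illustrated in a few lines of Python; the function name is made up:

```python
def truncations(phonemes):
    """phonemes: list like ['K', 'AE', 'N', 'D', 'IY'] for "candy".
    Returns prefixes of length 2 .. n-2, which model false starts."""
    n = len(phonemes)
    return [phonemes[:length] for length in range(2, n - 1)]

print(truncations(["K", "AE", "N", "D", "IY"]))  # [['K', 'AE'], ['K', 'AE', 'N']]
print(truncations(["HH", "EY", "T"]))            # [] -- none for "hate"
```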

  11. Conclusion • Evaluated the effect on substitution detection of • Two previously published algorithms • A combination of the two algorithms • Combined approach improved on the current configuration (truncations) by • Reducing false alarms by 0.52% abs (12% rel) • Increasing miscue detection by 1.04% abs (4.2% rel) • Take-home sound byte: Listening for specific reading mistakes can help detect them!
