
Using automated speech recognition to measure scaffolding and learning effects of word identification interventions. Joseph E. Beck, June Sison, and Jack Mostow. Project LISTEN (www.cs.cmu.edu/~listen), Carnegie Mellon University, Pittsburgh, PA, U.S.A. Funded by NSF.


Presentation Transcript


  1. Using automated speech recognition to measure scaffolding and learning effects of word identification interventions
Joseph E. Beck, June Sison, and Jack Mostow
Project LISTEN, www.cs.cmu.edu/~listen
Carnegie Mellon University, Pittsburgh, PA, U.S.A. Funded by NSF.

Research questions
• How much instruction should be provided to teach students how to identify a word?
• Does the benefit of more in-depth treatments outweigh their additional time cost compared to simple treatments?

Methodology questions
• Can we use automated speech recognition, rather than human transcription, as an experimental outcome measure?
• Do ecological experimental outcomes work?

Experimental design
• Before the student reads a story, (sometimes) select 4 words the student hasn't seen before
• Randomly assign the words to treatments (shown on the poster as a randomization diagram using the example words "room", "shoe", and "jobs")
• Treatments and time costs: HEAR 3.6 sec, ECHO 10.1 sec, READ 18.1 sec, CONTROL 0 sec
• Outcomes collected the next day or later: 0 or more outcomes per treated word

Numbers
• 451 students (ages 6 to 11)
• Experiment fired 2,627 times
• 19,062 outcomes
• Speech recognizer scored outcomes as read correctly 86.8% of the time
• Mean duration from treatment to outcome was 52.2 days

Estimated marginal means
• HEAR 88.1%
• ECHO 85.8%
• READ 86.8%
• CONTROL 86.3%
• (holding constant the student's overall ASR acceptance rate, the word's ASR acceptance rate, and days since treatment)

Analysis
• Used logistic regression to determine which treatment beat control (a hedged sketch of this kind of analysis follows the transcript)
• Factors: user ID, treatment
• Accounts for the variable number of outcomes per student
• Covariates: word's ASR acceptance rate, days since treatment
• Nagelkerke R² = 0.133
• Treatment type significant at p = 0.024
• HEAR is the only treatment that differed significantly (p = 0.019) from CONTROL

Conclusions
• Expected READ > ECHO > HEAR (intuitive, and fits a prior analysis using human transcription of the post-test)
• Why didn't the results fit the hypothesis? No appealing explanation:
  • Saying the word hurts students in learning it (unlikely)
  • Something is wrong with our outcome measure (what?)
  • Statistical fluke (with 19,062 outcomes?)
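
To make the analysis step concrete, here is a minimal sketch of the kind of logistic regression the poster describes: treatment entered as a factor with CONTROL as the reference level, the covariates held out as continuous predictors, and a marginal-means-style prediction per treatment with covariates fixed at their means. This is not Project LISTEN's actual code; the data are simulated, all column names are hypothetical, and the student's overall ASR acceptance rate stands in as a covariate instead of the per-student (user ID) factor to keep the example small.

```python
# Hedged sketch only: simulated data, hypothetical column names, not the
# Project LISTEN analysis code. Requires numpy, pandas, statsmodels.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)
n = 4000  # one row per outcome (a treated word read again the next day or later)

treatments = ["HEAR", "ECHO", "READ", "CONTROL"]
data = pd.DataFrame({
    "treatment": rng.choice(treatments, size=n),
    "student_asr_rate": rng.uniform(0.70, 0.95, size=n),   # student's overall ASR acceptance rate
    "word_asr_rate": rng.uniform(0.60, 0.95, size=n),      # word's overall ASR acceptance rate
    "days_since_treatment": rng.integers(1, 120, size=n),
})

# Simulated outcome: did the ASR accept the word as read correctly?
# The effect sizes below are invented purely so the model has something to find.
effect = data["treatment"].map({"HEAR": 0.20, "ECHO": 0.0, "READ": 0.05, "CONTROL": 0.0})
logit_p = (-1.0 + 1.5 * data["student_asr_rate"] + 1.5 * data["word_asr_rate"]
           - 0.002 * data["days_since_treatment"] + effect)
data["read_correctly"] = rng.binomial(1, 1.0 / (1.0 + np.exp(-logit_p)))

# Logistic regression with CONTROL as the reference level, so each treatment's
# coefficient is a contrast against CONTROL ("which treatment beat control").
model = smf.logit(
    "read_correctly ~ C(treatment, Treatment(reference='CONTROL'))"
    " + student_asr_rate + word_asr_rate + days_since_treatment",
    data=data,
)
result = model.fit(disp=False)
print(result.summary())

# Marginal-means-style summary: predicted acceptance rate for each treatment
# with every covariate held at its overall mean.
grid = pd.DataFrame({
    "treatment": treatments,
    "student_asr_rate": data["student_asr_rate"].mean(),
    "word_asr_rate": data["word_asr_rate"].mean(),
    "days_since_treatment": data["days_since_treatment"].mean(),
})
print(grid.assign(predicted_acceptance=result.predict(grid)))
```

With real data, per-student variation could be handled as in the poster by entering user ID as a factor (or with a mixed-effects model), and a Nagelkerke R² would have to be computed separately, since statsmodels reports McFadden's pseudo-R² by default.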
