“Decodability” Can Be Hard to Define
• Depends on units of analysis!
• Graphemes: “OO” -> /u/
• cook, look, book, shook become exceptions, while spook is regular
• Rhymes: “OOK” -> /ʊk/, “OOL” -> /ul/
• cook, book, cool, fool are regular, spook is an exception
• Oncleus? Wholey’s Fish Market
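The unit-of-analysis point can be made concrete with a toy regularity check. Everything below is a hand-coded illustration, not part of the actual model: the mini-lexicon, pronunciations, and the two correspondence rules are assumptions chosen to match the slide's examples.

```python
# Toy illustration: whether a word counts as "regular" depends on the
# unit used to define spelling-sound correspondences.
# Lexicon and pronunciations are hand-coded assumptions.
lexicon = {
    "cook": "kʊk", "look": "lʊk", "book": "bʊk",
    "shook": "ʃʊk", "spook": "spuk",
    "cool": "kul", "fool": "ful",
}

def regular_by_grapheme(word, pron):
    # Grapheme-level rule: "oo" -> /u/
    return "oo" in word and "u" in pron and "ʊ" not in pron

def regular_by_rime(word, pron):
    # Rime-level rules: "ook" -> /ʊk/, "ool" -> /ul/
    if word.endswith("ook"):
        return pron.endswith("ʊk")
    if word.endswith("ool"):
        return pron.endswith("ul")
    return False

for w, p in lexicon.items():
    print(w, regular_by_grapheme(w, p), regular_by_rime(w, p))
```

Under the grapheme rule, spook is the only regular word; under the rime rules, spook is the only exception, which is exactly the reversal the slide describes.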
A Hypothesis...
• Children learning to read attend to regularities at multiple levels of granularity
• Learning to read involves sorting out these different regularities
Predicts that the COMPOSITION of the early texts STRONGLY influences WHAT is learned!
Why Use A Computational Model?
• An independent way to assess decodability
• Can experimentally manipulate the composition of learning materials, independent of other factors
The Harm & Seidenberg (1999) Model of Reading
• Begin by modeling the pre-literate phonological knowledge that children have
• Can vary the strength and consistency of this knowledge … and simulate the different degrees of phonological ability children bring to bear on learning to read
[Figure: “Phonological Knowledge”]
Then, Teach It to Read
• The model must map print onto this structured phonological representation to read aloud
[Figure: text mapped onto “Phonological Knowledge”]
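The print-to-sound mapping can be illustrated with a drastically simplified sketch. The real Harm & Seidenberg model is a connectionist network over distributed phonological feature representations; the toy below uses one weight per (grapheme, phoneme) pair and the delta rule, purely to show error-driven learning of correspondences. The grapheme set, phoneme set, and targets are all assumptions for illustration.

```python
import random

# Drastically simplified sketch of learning print -> sound mappings
# via the delta rule. NOT the Harm & Seidenberg architecture; a toy
# with one weight per (grapheme, phoneme) pair.
graphemes = ["b", "c", "k", "oo", "l"]
phonemes = ["/b/", "/k/", "/ʊ/", "/u/", "/l/"]

random.seed(0)
W = {(g, p): random.uniform(-0.1, 0.1) for g in graphemes for p in phonemes}

# Hypothetical grapheme-phoneme targets (assumed for illustration).
targets = {"b": "/b/", "c": "/k/", "k": "/k/", "oo": "/ʊ/", "l": "/l/"}

def train(epochs=50, lr=0.2):
    for _ in range(epochs):
        for g, target in targets.items():
            for p in phonemes:
                t = 1.0 if p == target else 0.0
                W[(g, p)] += lr * (t - W[(g, p)])  # delta rule update

train()
# After training, the strongest phoneme for "oo" is the trained target.
best = max(phonemes, key=lambda p: W[("oo", p)])
```

With repeated error-driven updates, each grapheme's weight vector converges toward its target phoneme, which is the basic mechanism by which such models pick up spelling-sound regularities.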
Key Results
• Poor phonological representations influenced reading acquisition:
  • Slower learning of words
  • Worse performance on nonword generalization
• See Harm & Seidenberg (1999), Psychological Review 106(3) for details
The Next Step
• In the HS99 simulation, words were presented randomly, according to the frequency distribution of adult text
• Next: test the model using actual text from children’s basals
• Comparison:
  • Basal 1: less tuned to spelling-sound patterns; emphasizes variety in text
  • Basal 2: emphasizes overlapping spelling-sound patterns (“Sam I am, I like ham”)
Method
• Conduct 10 simulations for each of the two basals
• Present words to the simulations in the exact order they appear in the basal
• Assess the performance of each set of simulations
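The training regime just described can be sketched as a harness. The basal word lists and the learner inside `run_simulation` are hypothetical stand-ins (the actual basal texts and connectionist learner are not given here); the sketch only shows the structure: ten independently initialized runs per basal, words presented in text order.

```python
import random

# Sketch of the training regime: 10 simulations per basal, words
# presented in the exact order they appear in the basal's text.
# Word lists and the "learning step" are hypothetical stand-ins.
basal_1 = ["the", "cat", "ran", "dog", "sat"]   # variety-oriented (assumed)
basal_2 = ["sam", "am", "ham", "like", "am"]    # overlap-oriented (assumed)

def run_simulation(basal_words, seed):
    random.seed(seed)            # each simulation gets its own initialization
    exposure = {}                # stand-in for the model's learned state
    for word in basal_words:     # exact order of the basal text
        exposure[word] = exposure.get(word, 0) + 1  # one learning step
    return exposure

def run_batch(basal_words, n_sims=10):
    return [run_simulation(basal_words, seed) for seed in range(n_sims)]

results_1 = run_batch(basal_1)
results_2 = run_batch(basal_2)
```

Each batch of ten runs is then assessed with the measures on the following slides.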
Measure 1: Acquisition of Words
• Test each simulation at the completion of training on pronunciations of 6,000 monosyllables
Basal 2 (tuned to overlap in spelling and sound) outperformed Basal 1
Measure 2: Generalization to Nonwords
• Test each simulation at the completion of training on pronunciations of 86 nonwords (from Glushko ’79)
Basal 2 (tuned to overlap in spelling and sound) again outperformed Basal 1
Measure 3: Decoding Novel Items in Text
• How often can a word be decoded by the simulation the first time it is encountered?
Basal 2 (tuned to overlap in spelling and sound) again outperformed Basal 1
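The first-encounter measure can be sketched as follows. The decodability criterion here is a simplifying assumption: a word counts as decodable on first sight if every one of its letters has appeared in earlier text (the real measure depends on the model's learned spelling-sound knowledge, not raw letter exposure).

```python
# Sketch of the first-encounter decoding measure (assumed criterion:
# a novel word is decodable if all its letters appeared earlier in
# the text; a stand-in for the model's learned correspondences).
def first_encounter_decoding_rate(text_words):
    seen_letters = set()
    encountered = set()
    decodable = 0
    firsts = 0
    for word in text_words:
        if word not in encountered:       # first time this word appears
            firsts += 1
            if all(ch in seen_letters for ch in word):
                decodable += 1
            encountered.add(word)
        seen_letters.update(word)         # exposure accumulates either way
    return decodable / firsts

# Hypothetical overlap-heavy text: later novel words reuse earlier letters.
rate = first_encounter_decoding_rate(["sam", "am", "ham", "mat", "sat"])
```

Under this criterion, texts built from overlapping spelling patterns (like Basal 2) give later novel words a much better chance of being decodable at first encounter.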
Future Directions
• How does phonological impairment interact with basal composition?
• How do differing remediation strategies interact with basal composition?
• What exactly is it about Basal 2 that makes it easier for the model to learn from?
• Can we design better basals from insights gained by examining the model’s performance?