Assessment, Politics & Early Literacy A Vygotskian Analysis of the DIBELS Literacy Assessment
Research Team • Sue Novinger & Amy Barnhill State University of New York at Brockport • Nancy Knipping & Carol Gilles University of Missouri • Carol Lauritzen & Ruth Davenport Eastern Oregon University
Aims • To examine the contexts in which the Dynamic Indicators of Basic Early Literacy Skills (DIBELS) assessment is situated • To explore the ways DIBELS positions readers “at risk” • To examine the ways DIBELS influences teachers’ and children’s views of the reading process and of proficient reading
Theoretical Framework • Discourse • Reading First • Based on the report of the National Reading Panel, which officially defines reading and reading instruction • Phonemic awareness, alphabetic principle, fluency with text, vocabulary, comprehension • Alignment of district policies and practices to federal guidelines
Dynamic Indicators of Basic Early Literacy Skills (DIBELS) • Automaticity model • Interactive model • Fluency
Tests as mediational means that enact relations of power • Tools encapsulate discursive truths • Normalize particular truths while marginalizing others • Surveillance, classification, distribution, governance • Power as a relationship to struggle
Methods • Two researchers at each site in the US: Northeast, Midwest, & Northwest • Participants • 32 third-grade students • 10 each from Northeast & Northwest sites • 12 from Midwest, all enrolled in Title I (program for low-income students)
Data Collection: DIBELS • One-minute DIBELS Oral Reading Fluency (DORF) & Retell Fluency (RTF) for three mid-year third-grade benchmark stories • Oral reading fluency scores: correct words per minute • Retelling scores: number of words in a one-minute retelling
Data Collection: QRI-4 (Qualitative Reading Inventory-4) • Oral reading of graded word lists, graded narrative passages, retelling, & comprehension questions • Total accuracy • Total acceptability • Correct words per minute • Miscues • Retelling: number of idea units • Percentage of comprehension questions answered correctly
Ratings • DIBELS • High Risk, Some Risk, Low Risk • QRI • Independent, Instructional, Frustration • Teachers • High, Average, Low
Data Collection: Interviews • Students • Views of reading process & themselves as readers • Responses to each test, possible improvements • Comparison of tests • Midwest Title I teacher who worked with students • Ways she uses DIBELS • Evaluation of DIBELS
Findings • DIBELS tends to position readers away from the middle while QRI tends to place them toward the middle. • Category labels contribute to positioning: • DIBELS assigns all readers to a risk category; • QRI labels according to independence with text.
DORF/RTF may misidentify: • Students who sound fluent but do not comprehend well • Students who do not sound fluent but comprehend proficiently • Neither group can be identified using speed & accuracy alone
Assessment tools influence students’ views of the reading process, of proficient reading, and of themselves as readers. • DIBELS: rate and accuracy, not comprehension “How fast I can do the words” • QRI: overall ability as readers, sometimes comprehension “How good I can read”; “If you know what the paragraph means”; “I can answer questions about the story.”
60% of students thought the QRI told more about them as readers, primarily because they were able to finish the stories. “You didn’t stop me in the middle of the story, and you can see how I read the rest of the story.”
An overwhelming majority of children said they liked both tests • Wide variation in suggestions for improving the tests (add pictures, make it funnier, add sports or mysteries, add/take out hard words) • One child suggested not taking the tests at all: “Have people read any book they want to the teacher.”
• Students reproduced discursive knowledge as embodied by DIBELS, focusing on reading as speed and accuracy. • Students also resisted the dominant discourse, noting that they wanted to finish reading or not be required to take reading tests.
DIBELS has the potential to mediate teachers’ views of students and of the reading process. • Interview with Annette, the Title I teacher from the Midwest who worked with the students who were tested
Annette accepted the dominant discourse • Accepted concepts of grade level & risk level • Accepted fluency as a measure of overall reading proficiency • Used progress-monitoring graphs of students’ rate to motivate them to spend time reading
Annette resisted the dominant discourse • Included prosody in her concept of fluency, along with rate & accuracy • Worked with students on what good reading sounds like • Used the one-minute retelling to gauge comprehension and taught comprehension strategies: include main points in retelling, slow down to pick up important ideas, “share what they’ve read” in book talks
Annette’s messages for students: • Readers improve by spending time reading. • Reading is making meaning, and the more one reads, the faster one will be able to read.
• DIBELS has mediated Annette’s thinking about readers & reading on some points. • On others, Annette draws on alternative discourses about reading and assessment. • Evidence of a teacher trying to make meaning as she is immersed in multiple discourses.
Conclusions • DIBELS acts as a mediational means that: • Normalizes fluency (narrowed to rate & accuracy) as what counts as proficient reading • Influences children’s & teachers’ internalization of dominant discursive truths of reading, readers, assessment & instruction • Categorizes all children at some level of risk
Children and teachers draw on alternative discourses to critique and contest dominant discursive truths that would position them in limited and limiting ways.