
Presentation Transcript


  1. Partnership for Accessible Reading Assessment (PARA): Calibration & Motivation Studies. Presentation to the Technical Advisory Committee, October 11, 2007. Deborah Dillon & David O’Brien, University of Minnesota

  2. The Calibration Study

  3. Calibration Study. The purpose of the study is to scale or calibrate the measurement tools that will be used in a large-scale accessible reading assessment for students with disabilities. This process allows investigators to empirically determine the comparability of passages and items used in the reading assessment study by placing all passages and questions on a common, equal-interval measurement scale based on item response theory (IRT).
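
  For readers unfamiliar with IRT scaling, the formula below shows the one-parameter logistic (Rasch) form of the model; the slides do not specify which IRT model the project uses, so this particular formulation is an illustrative assumption. Under it, the probability that student p answers item i correctly depends only on the difference between the student's ability and the item's difficulty, which places all passages and items on a single logit (equal-interval) scale:

  $$ P(X_{pi} = 1 \mid \theta_p, b_i) = \frac{\exp(\theta_p - b_i)}{1 + \exp(\theta_p - b_i)} $$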

  4. Research Questions 1. What is the difficulty of each reading passage (based on a passage total score, which, in turn, is based on performance on all passage comprehension items/questions) and each comprehension item/question? 2. How well can the reading passages be placed on a common interval measurement scale to allow scores from different passages (of equal or unequal difficulty) to be compared and equated? 3. Based on IRT item fit statistics, which multiple-choice items should be retained and which should be eliminated? 4. Which reading passages do students prefer to read?

  5. Participants A representative total sample of 600 students: • 300 students from grades 3-5 (100 3rd graders, 100 4th graders, 100 5th graders) in 12-16 intact classrooms • 300 students from grades 7-9 (100 7th graders, 100 8th graders, 100 9th graders) in 12-16 intact classrooms. Students representing the full range of reading ability, including students with disabilities, are included in the study.

  6. Design: Steps in the Calibration Process • Selected 40 passages, comprising 10 literary-fiction and 10 informational-exposition texts for each grade level (4th and 8th); the passages were rated as easy, medium, or hard in difficulty. • Commissioned the writing of 10 items for each passage, using the 2009 NAEP Reading Framework cognitive targets.

  7. Visual Item Analysis & Test Booklet Layout • Criteria were developed to make decisions about the inclusion or exclusion of visual items within passages; multiple raters judged all visual items. • Criteria were also developed from studies of readability, document analyses, and universal design as applied to large-scale assessments to inform test layout.

  8. Design • Testing procedures were employed to ensure representation of passage text types while removing order effects. • Within classes, students will be assigned to one of 16 possible test forms (a form is a set of passages with counterbalanced passage order). • The test includes 10 anchor passages (included in all forms) and 10 non-anchor passages, from which five are selected and included in each form.
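
  To make the form structure concrete, the sketch below builds 16 forms, each containing all 10 anchor passages plus 5 of the 10 non-anchor passages, as described on this slide. The passage labels, the subset-selection rule, and the rotation used for counterbalancing are assumptions for illustration; the slides do not specify the actual scheme.

  ```python
  # Illustrative sketch only: passage labels, the subset-selection rule, and the
  # rotation-based counterbalancing are assumptions, not the study's actual scheme.
  import itertools
  import random

  ANCHORS = [f"A{i}" for i in range(1, 11)]      # 10 anchor passages (in every form)
  NON_ANCHORS = [f"N{i}" for i in range(1, 11)]  # 10 non-anchor passages

  def build_forms(n_forms: int = 16, seed: int = 2007) -> list[list[str]]:
      rng = random.Random(seed)
      # Assumed rule: draw 5-passage subsets of the non-anchors so each
      # non-anchor passage appears in several forms.
      subsets = list(itertools.combinations(NON_ANCHORS, 5))
      rng.shuffle(subsets)
      forms = []
      for f in range(n_forms):
          passages = ANCHORS + list(subsets[f % len(subsets)])
          # Assumed counterbalancing: rotate passage order by the form index
          # (a simple Latin-square-style shift) to spread order effects.
          shift = f % len(passages)
          forms.append(passages[shift:] + passages[:shift])
      return forms

  if __name__ == "__main__":
      for i, form in enumerate(build_forms(), start=1):
          print(f"Form {i:2d}: {', '.join(form)}")
  ```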

  9. Passage combinations. Table 1: Distribution of 10 anchor passages [table not reproduced in the transcript].

  10. Design-cont. • After reading each passage, students will answer 10 multiple-choice questions based on that passage, representing a range of cognitive targets (NAEP 2009). • As students work through the assessment, they will be asked questions related to their motivation to read (Situated Motivation Questions). • The reading test will require 4 sessions of approximately 60 minutes each. During each session, students will read 3-4 passages and respond to the accompanying items/questions.

  11. Experimental Design and Analysis This preliminary item/passage psychometric calibration study will allow for: • the placement of all passages/questions on a common equal-interval measurement scale, • the development of passage scoring tables by which to assign subjects reading “ability” scores, and • provision of a mechanism for equating scores across different passages. An item fit analysis will determine which items will be retained and which will be eliminated.
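
  As an illustration of how the calibration and item fit analysis could proceed, the sketch below fits a Rasch model (the same assumed one-parameter form noted earlier) by joint maximum likelihood and computes an outfit mean-square statistic per item. The simulated data, the estimation settings, and the 0.7-1.3 retention cutoffs are assumptions for demonstration, not the project's actual procedure or software.

  ```python
  # Minimal Rasch calibration and item-fit sketch; illustrative only.
  import numpy as np

  def sigmoid(x):
      return 1.0 / (1.0 + np.exp(-x))

  def rasch_jml(X, n_iter=50):
      """Joint maximum-likelihood calibration of a Rasch model.
      X: persons x items matrix of 0/1 responses. Returns (theta, b) in logits."""
      n_persons, n_items = X.shape
      theta = np.zeros(n_persons)   # person abilities
      b = np.zeros(n_items)         # item difficulties
      for _ in range(n_iter):
          p = sigmoid(theta[:, None] - b[None, :])
          w = p * (1 - p)
          theta += (X - p).sum(axis=1) / w.sum(axis=1)   # Newton step, persons
          np.clip(theta, -6, 6, out=theta)               # keep extreme scores finite
          p = sigmoid(theta[:, None] - b[None, :])
          w = p * (1 - p)
          b -= (X - p).sum(axis=0) / w.sum(axis=0)       # Newton step, items
          b -= b.mean()                                  # anchor the scale origin at 0
      return theta, b

  def outfit_msq(X, theta, b):
      """Unweighted (outfit) mean-square per item; values near 1.0 indicate good fit."""
      p = sigmoid(theta[:, None] - b[None, :])
      return ((X - p) ** 2 / (p * (1 - p))).mean(axis=0)

  if __name__ == "__main__":
      rng = np.random.default_rng(0)
      true_theta = rng.normal(size=600)   # 600 simulated students (matching the sample size)
      true_b = rng.normal(size=10)        # 10 simulated items for one passage
      X = (rng.random((600, 10)) < sigmoid(true_theta[:, None] - true_b[None, :])).astype(int)
      theta, b = rasch_jml(X)
      fit = outfit_msq(X, theta, b)
      retain = (fit > 0.7) & (fit < 1.3)  # illustrative retention rule, not the study's
      print("estimated difficulties:", np.round(b, 2))
      print("outfit MSQ:", np.round(fit, 2))
      print("retain item:", retain)
  ```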

  12. The Motivation Study

  13. Motivation Study Purpose: To examine whether improving the motivational characteristics of a large-scale reading assessment increases its accessibility for students with disabilities and, in so doing, provides a more valid assessment of these students’ reading proficiency due to their increased engagement.

  14. Research Questions • Is there an interaction effect between choice, type of text, and type of student? • Is there a correlation between students’ general motivation to read (e.g., as measured by the Motivation to Read Questionnaire [MRQ]) and their performance on a large-scale reading assessment? • Are participants who are more motivated to read (as measured by the MRQ) more likely to benefit from the choice option on a large-scale reading assessment?

  15. Research Questions—cont. • Does the option of exercising choice in the selection of reading comprehension passages, which is hypothesized to improve student motivation and engagement on a large-scale assessment, produce significantly higher measured reading comprehension for all students? • Is there a significant difference in the reading scores of students with disabilities versus general education students on large-scale reading assessments? • Is there a significant difference in student performance across text types (literary-fiction versus informational-exposition passages) on large-scale reading assessments?

  16. Participants 280 students who are fluent in English: • 140 students from 4th grade • 140 students from 8th grade • Targeted samples of students representing a range of disability groups are included. • Students will be placed in a treatment condition based on stratified random assignment (i.e., students representing particular disabilities will be randomly assigned to the experimental and control conditions).
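
  A minimal sketch of the stratified random assignment described on this slide: within each disability stratum, students are shuffled and split between the choice and no-choice conditions. The field names, the even split, and the example roster are illustrative assumptions.

  ```python
  # Minimal sketch; field names, the even split, and the roster are assumptions.
  import random
  from collections import defaultdict

  def stratified_assign(students, stratum_key, seed=42):
      """Shuffle students within each stratum and split them between the
      choice (experimental) and no-choice (control) conditions."""
      rng = random.Random(seed)
      strata = defaultdict(list)
      for s in students:
          strata[s[stratum_key]].append(s)
      for group in strata.values():
          rng.shuffle(group)
          half = len(group) // 2
          for i, s in enumerate(group):
              s["condition"] = "choice" if i < half else "no_choice"
      return students

  if __name__ == "__main__":
      roster = (
          [{"id": i, "grade": 4, "disability": "LD"} for i in range(20)]
          + [{"id": 100 + i, "grade": 4, "disability": "none"} for i in range(50)]
      )
      stratified_assign(roster, "disability")
      print(sum(s["condition"] == "choice" for s in roster), "of", len(roster),
            "students assigned to the choice condition")
  ```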

  17. Design: Components of the Test • The motivation assessment includes 2 literary-fiction and 2 informational-expository passages for both grade 4 & grade 8; passage order will be randomly assigned. • Each passage will be followed by 5-6 multiple choice items. • The assessment is untimed and will be completed on a computer-based platform.

  18. Attending to Issues of Motivation • General motivation will be measured prior to the test to obtain information on students’ feelings about “self as reader” (e.g., Motivation for Reading Questionnaire [MRQ]). • Situated motivation will be measured using questions woven into the test booklets for the choice and no-choice conditions (placed after the comprehension items); specific questions will tap • students’ perceptions of the texts they read (e.g., difficulty, interest), and • students’ sense of self-efficacy in reading and completing the items following the passage (the task).

  19. Design A counterbalanced stratified random assignment design will be used, with experimental choice (C) groups that select reading passages for the assessment (“design your own assessment”) and control no-choice (NC) groups that do not select passages.

  20. Design: Procedures Students in the experimental group are given choice (C) in selecting the passages they read, in comparison to students in a control group who are not given choice in selecting passages (NC). • Students in the (C) and (NC) conditions read short descriptions of 6 informational-exposition and 6 literary-fiction passages; • they rate the passages according to interest; • students in the (C) condition select 2 passages from each genre to create their “own personal assessment.”

  21. Design: Procedures—cont. • Post-assessment interviews will be conducted with subsets of students from the control and experimental groups at both grade levels. • Students from the various disability groups as well as regular education students will be selected for interviews (16 students from 4th grade and 16 from 8th grade).

  22. Analysis • The dependent measure is comprehension performance (Y); factors include choice condition (choice/no choice), disability status (youth with disabilities/youth without disabilities), & text type (literary-fiction/informational-exposition). • A split-plot design will be used with two between-subjects factors (A = passage choice & B = disability status), one within-subjects factor (C = text type), one blocking variable (S = subject), & one covariate (X = motivation as assessed on the MRQ) at the between-subject level; A, B, C, and X are fixed effects, and S is a random effect.
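
  One way to express this split-plot model in code is as a linear mixed model: fixed effects for choice (A), disability status (B), text type (C), and the MRQ covariate (X), with a random intercept for subject capturing the blocking variable (S). The sketch below uses statsmodels with simulated data; the column names, effect sizes, and this particular mixed-model formulation are assumptions, not the study's analysis code.

  ```python
  # Illustrative mixed-model sketch for the split-plot design; not the study's code.
  import numpy as np
  import pandas as pd
  import statsmodels.formula.api as smf

  rng = np.random.default_rng(1)
  n_subjects = 280                      # e.g., 140 per grade as in the design
  subjects = pd.DataFrame({
      "subject": np.arange(n_subjects),
      "choice": rng.integers(0, 2, n_subjects),       # A: choice vs. no choice
      "disability": rng.integers(0, 2, n_subjects),   # B: with vs. without disability
      "mrq": rng.normal(0, 1, n_subjects),            # X: MRQ covariate (between-subject)
  })
  # Each subject contributes a score for both text types (within-subject factor C).
  data = subjects.loc[subjects.index.repeat(2)].reset_index(drop=True)
  data["text_type"] = np.tile(["literary", "informational"], n_subjects)
  data["Y"] = (                                        # simulated reading score
      50 + 2 * data["choice"] - 3 * data["disability"] + 1.5 * data["mrq"]
      + np.where(data["text_type"] == "literary", 1.0, 0.0)
      + rng.normal(0, 5, len(data))
  )

  # Random intercept for subject serves as the blocking variable S.
  model = smf.mixedlm("Y ~ choice * disability * text_type + mrq",
                      data, groups=data["subject"])
  print(model.fit().summary())
  ```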

  23. Data structure [table not reproduced in the transcript]. Note: Y = reading score.

  24. Analysis—cont. • Analysis of variance will be used to evaluate the various effects; correlations of students’ performance on the comprehension test with responses on the MRQ and the situated motivation questions will be calculated. • Various analytic deduction approaches will also be used to analyze the post-assessment interview data, and a mixed-design approach will be used to integrate the overall quantitative and qualitative findings.
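
  The correlational part of the analysis could look like the sketch below, which correlates simulated comprehension scores with MRQ and situated-motivation totals; the variable names and values are assumptions for illustration only.

  ```python
  # Illustrative only: simulated scores; variable names are assumptions.
  import numpy as np
  from scipy.stats import pearsonr

  rng = np.random.default_rng(7)
  mrq = rng.normal(0, 1, 280)                            # MRQ total (standardized)
  situated = 0.5 * mrq + rng.normal(0, 1, 280)           # situated-motivation total
  comprehension = 50 + 3 * mrq + rng.normal(0, 5, 280)   # comprehension test score

  for name, x in [("MRQ", mrq), ("situated motivation", situated)]:
      r, p = pearsonr(x, comprehension)
      print(f"{name} vs. comprehension: r = {r:.2f}, p = {p:.3f}")
  ```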

  25. Questions to Discuss • The motivation study was designed to draw on several powerful constructs of motivation, including choice of topic/selecting the passage one wants to read. What more could we add to the study design to increase the motivational aspects of the assessment and yet still be able to determine what impacts performance on a comprehension assessment? • We are considering several options for passage order in the Choice condition (vs. random assignment). After students select passages, should we (a) order them from easiest to most difficult, or (b) let students read their selected passages in an order that appeals to them? What are your thoughts about these options? Are there other suggestions that might increase the motivational aspects of the design?

  26. Questions—cont. • We are still concerned about the placement of the embedded situational motivation questions in the assessment. Should they be placed immediately after each passage? after 2 passages? at the end of the test (after the 4th and final passage)? • It has been suggested that the Post-assessment interview protocol could be improved with Likert scales that lead to quantification of responses. Is this a suggestion we should follow?
