
The Examination of Equivalence and Equating First-Grade DIBELS ORF



Presentation Transcript


  1. The Examination of Equivalence and Equating First-Grade DIBELS ORF Chung-Hau Fan, The University of Iowa

  2. Research Purposes • Examine the equivalence of first-grade DIBELS ORF probes • Establish an equivalent scaling for raw scores to facilitate comparison across non-equivalent passages.

  3. Sample & Procedure • N = 49 first graders from two Midwestern schools. • All first graders within each school were invited; no selection criteria other than consent. • 20 progress-monitoring passages were administered in random order across 4 days at the end of the school year.

  4. Data Analysis (1) • Confirmatory factor analysis (CFA; Bollen, 1989) was used to examine probe equivalence. • The general congeneric model and the parallel measurement model were tested.

  5. Data Analysis (2) • Linear equating methods (Kolen & Brennan, 2004) were used to equate WCPM scores across passages. • The CFA was re-run using the equated scores to evaluate the extent to which the transformation provided equivalent measurement. • Data for two students were graphed for visual analysis.
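The linear equating step can be illustrated with a minimal sketch. This is not the study's actual data; the passage scores below are hypothetical, and the target mean (83.7) and SD (31.4) are taken from the results reported later in the deck. Linear equating of the mean/sigma form described in Kolen & Brennan (2004) maps each raw score x to target_mean + target_sd * (x - mean) / sd, so every rescaled passage shares the same mean and SD:

```python
from statistics import mean, pstdev

def linear_equate(scores, target_mean, target_sd):
    """Linearly transform scores so the set has the target mean and SD
    (mean/sigma linear equating; see Kolen & Brennan, 2004)."""
    m, s = mean(scores), pstdev(scores)
    return [target_mean + target_sd * (x - m) / s for x in scores]

# Hypothetical WCPM scores from two non-equivalent passages.
passage_a = [55, 62, 70, 74, 89]
passage_b = [70, 81, 90, 96, 113]

# Rescale both onto a common scale.
eq_a = linear_equate(passage_a, 83.7, 31.4)
eq_b = linear_equate(passage_b, 83.7, 31.4)
print(round(mean(eq_a), 1), round(pstdev(eq_a), 1))  # 83.7 31.4
print(round(mean(eq_b), 1), round(pstdev(eq_b), 1))  # 83.7 31.4
```

After the transformation, differences in passage difficulty no longer appear as mean shifts, which is why the equated probes can be compared directly.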

  6. Results (1) • The largest average difference between probes was found between probe #16 (70 WCPM) and probe #18 (90 WCPM), a difference of 20 WCPM. • The average standard error of measurement (SEM) of the 20 probes was 5.1, ranging from 4.5 to 5.8.
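Under classical test theory, an SEM of this size follows from the score SD and a reliability estimate via SEM = SD * sqrt(1 - reliability). A minimal sketch; the reliability value here is back-solved for illustration (it is not reported in the slides):

```python
import math

def sem(sd, reliability):
    """Classical test theory standard error of measurement:
    SEM = SD * sqrt(1 - reliability)."""
    return sd * math.sqrt(1 - reliability)

# Illustrative: with an SD near 31.4 WCPM (the post-equating SD reported
# later), a reliability of about .97 yields an SEM near the reported 5.1.
print(round(sem(31.4, 0.974), 1))  # 5.1
```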

  7. Results (2) Model fit indices for the measurement models on the raw and rescaled data. Fit indices reported: the Comparative Fit Index (CFI) and the Root Mean Square Error of Approximation (RMSEA).

  8. Results (3) • The raw-scores column suggested the common-factor (congeneric) model fit the data better (CFI = .95; RMSEA = .13) than the parallel model (CFI = .91; RMSEA = .15). • After equating, all the probes shared the same mean (83.7) and SD (31.4). • Re-testing parallelism on the equated scores indicated better fit (CFI increased to .94). Criteria for an excellent model fit were CFI ≥ .95 and RMSEA ≤ .06; an acceptable fit was defined as CFI ≥ .90 and RMSEA ≤ .08 (Hu & Bentler, 1999).

  9. Results (4) • The χ² difference test was non-significant at the α = .05 level: Δχ²(38) = 52.81, p = .06, suggesting no significant difference between the two models fit to the rescaled data.
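The reported p-value can be reproduced from the Δχ² statistic and its degrees of freedom. A sketch using SciPy's chi-square survival function, assuming only the Δχ² = 52.81 and Δdf = 38 given above:

```python
from scipy.stats import chi2

# Chi-square difference test between nested models (values from the slides).
delta_chi2, delta_df = 52.81, 38
p = chi2.sf(delta_chi2, delta_df)  # upper-tail probability
print(round(p, 2))  # ≈ .06, non-significant at α = .05
```

Because the parallel model is nested in the congeneric model, the difference in their χ² values is itself χ²-distributed under the null, with df equal to the difference in the number of estimated parameters.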

  10. Visual Analysis (A Poor Reader)

  11. Visual Analysis (An Average Reader)

  12. Conclusions • The equivalence assumption was somewhat supported, depending on how strictly the cutoff criteria were set (excellent vs. acceptable). • The significant reduction in fit from the congeneric to the parallel model on raw data found by Betts et al. (2009) with first-grade CBM-R materials was not replicated here. • The linear equating procedure appeared to contribute to making the set of passages equivalent, even though it was not originally designed to reduce variability in scores.
