
Calculations of Reliability



  1. Calculations of Reliability • We are interested in calculating the ICC • First step: • Conduct a single-factor, within-subjects (repeated measures) ANOVA • This is an inferential test for systematic error • All subsequent equations are derived from the ANOVA table

  2. Repeated Measures ANOVA • Steps for calculation: • Arrange the raw data (X) into tabular form, placing the data for subjects in rows (R), and repeated measures in columns (C).

  3. Repeated Measures ANOVA • Steps for calculation: • Square each value = (Trial A1)² • Calculate the row totals (ΣR) using the original scores • Calculate the column totals (ΣC) using the original scores • Calculate the grand total (ΣXT) or (ΣΣR) – same thing.

  4. Repeated Measures ANOVA • Steps for calculation: • Sum the row totals = ΣΣR • This is also the “total sum” of all original scores. • Also = ΣXT • Square each row total = (ΣR)² • Sum the squares of the row totals = Σ(ΣR)²

  5. Repeated Measures ANOVA • Steps for calculation: • Compute the mean values for each column.

  6. Repeated Measures ANOVA • Steps for calculation: • Sum the squared scores within each column = Σ(Trial A1)² • Sum those column totals across columns = Σ(Σ(Trial A1)²) • This is also referred to as ΣX². • Σ(ΣR)² was calculated in the previous steps. • N = the number of subjects. • k = the number of trials.
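Taken together, the quantities from the last few steps can be computed with a short script. The sketch below uses a small made-up data set and illustrative variable names; it is not the actual Trial A1/A2 data.

```python
# Illustrative sketch (hypothetical scores, not the actual Trial A1/A2 data):
# rows are subjects, columns are repeated trials.
data = [
    [10, 12],   # subject 1: trial 1, trial 2
    [14, 15],   # subject 2
    [11, 13],   # subject 3
    [9, 10],    # subject 4
]
N = len(data)       # number of subjects (rows)
k = len(data[0])    # number of trials (columns)

row_totals = [sum(row) for row in data]                   # each ΣR
col_totals = [sum(col) for col in zip(*data)]             # each ΣC
grand_total = sum(row_totals)                             # ΣXT (= ΣΣR)
sum_sq_row_totals = sum(r ** 2 for r in row_totals)       # Σ(ΣR)²
sum_sq_scores = sum(x ** 2 for row in data for x in row)  # ΣX²
col_means = [c / N for c in col_totals]                   # column means
```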

  7. Repeated Measures ANOVA • Steps for calculation: • Compute the sum of squares between columns (SSC), which is the variability due to the repeated-measures treatment effect. • In this case, SSC is “systematic variability.”
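In the notation above, the between-columns sum of squares is typically computed as SSC = Σ(ΣC)²/N − (ΣXT)²/(N·k), where Σ(ΣC)² is the sum of the squared column totals.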

  8. Repeated Measures ANOVA • Steps for calculation: • Compute the sum of squares between rows (SSR), which is the variability due to differences among subjects.
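The corresponding computational form for the between-rows (subjects) sum of squares is SSR = Σ(ΣR)²/k − (ΣXT)²/(N·k).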

  9. Repeated Measures ANOVA • Steps for calculation: • Calculate the total sum of squares (SST), which is the variability due to subjects (rows), treatment (columns), and unexplained residual variability (error).
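Using the sums from the earlier steps, the usual computational form is SST = ΣX² − (ΣXT)²/(N·k).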

  10. Repeated Measures ANOVA • Steps for calculation: • Calculate the total sum of squares due to error (SSE), which is the unexplained variability due to error. This will be used in the denominator for the F ratio.
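Because the total variability partitions into rows, columns, and error, the error term can be obtained by subtraction: SSE = SST − SSC − SSR.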

  11. Repeated Measures ANOVA • Steps for calculation: • Calculate the degrees of freedom for each source of variance (dfC, dfR, dfE, and dfT).
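With N subjects and k trials, the degrees of freedom are typically dfC = k − 1, dfR = N − 1, dfE = (N − 1)(k − 1), and dfT = Nk − 1.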

  12. Repeated Measures ANOVA • Steps for calculation: • Construct an ANOVA table (Between Subjects = rows; Trials = columns; Within Subjects = Trials + Error):

      Source of Variance          SS      df
      Between subjects (rows)     SSR     dfR
      Trials (columns)            SSC     dfC
      Error                       SSE     dfE
      Total                       SST     dfT

  13. Repeated Measures ANOVA • Steps for calculation: • Calculate the mean square for each source of variance (MSC, MSR, and MSE).
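Each mean square is its sum of squares divided by its degrees of freedom: MSC = SSC/dfC, MSR = SSR/dfR, and MSE = SSE/dfE.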

  14. Repeated Measures ANOVA • Steps for calculation: • Calculate the F ratio for the treatment effect (columns, FC).
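For the repeated-measures design described here, the treatment F ratio is FC = MSC / MSE.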

  15. Repeated Measures ANOVA • Determining the Significance of F: • Use the F Distribution Critical Values table. • dfC (columns) runs across the top of the table • dfE (error) runs down the side of the table • If your calculated F ratio is greater than the critical F ratio, then reject the null hypothesis. • There is a significant difference from Trial A1 to Trial A2 • There is significant systematic error • If your calculated F ratio is less than the critical F ratio, then fail to reject the null hypothesis. • There is no difference from Trial A1 to Trial A2 • There is no systematic error
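If a printed critical-values table is not at hand, the same lookup can be sketched in Python with SciPy; the alpha level and degrees of freedom below are illustrative, not taken from the example data.

```python
# Sketch of a critical F lookup (illustrative alpha and df values).
from scipy.stats import f

alpha = 0.05
df_C = 1        # trials (columns): k - 1
df_E = 9        # error: (N - 1)(k - 1)

F_crit = f.ppf(1 - alpha, df_C, df_E)   # critical F at the chosen alpha
print(F_crit)   # reject H0 if the calculated F ratio exceeds this value
```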

  16. Using ANOVA Table for ICC • 2 sources of variability for ICC model 3,1 • Subjects (MSS) • Between-subjects variability (for calculating the ICC) • Error (MSE) • Random error (for calculating the ICC) • The equation reported by Weir (2005) is shown on the next slide, modified for our terminology (MSS = MSR).

  17. Weir's (2005) ICC3,1 equation, in our terminology (MSR or MSS = between-subjects mean square; MSE = error mean square): ICC3,1 = (MSR − MSE) / [MSR + (k − 1) × MSE]

  18. Using ANOVA Table for ICC • Calculating the ICC3,1:
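As a sketch, the same calculation in Python, using the Weir (2005) equation above (function and example values are illustrative):

```python
def icc_3_1(ms_r, ms_e, k):
    """ICC(3,1) from the repeated measures ANOVA table (Weir, 2005):
    ms_r is the between-subjects mean square (MSR or MSS),
    ms_e is the error mean square (MSE), k is the number of trials."""
    return (ms_r - ms_e) / (ms_r + (k - 1) * ms_e)

# Illustrative values only, not the Trial A1/A2 results:
print(icc_3_1(ms_r=8.83, ms_e=0.17, k=2))
```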

  19. Interpreting the ICC • If ICC = 0.95 • 95% of the observed score variance is due to true score variance • 5% of the observed score variance is due to error • 2 factors for examining the magnitude of the ICC • Which version of the ICC was used? • Magnitude of the ICC depends on the between-subjects variability in the data • Because of the relationship between the ICC magnitude and between-subjects variability, standard error of measurement values (SEM) should be included with the ICC

  20. Implications of a Low ICC • Low reliability • Real differences • Argument to include SEM values • Type I vs. Type II error • Type I error is rejecting H0 when there is no effect (i.e., the true effect = 0) • Type II error is failing to reject H0 when there is an effect (i.e., the true effect ≠ 0) • A low ICC means that more subjects will be necessary to overcome the increased percentage of the observed score variance due to error.

  21. Standard Error of Measurement • ICC → relative measure of reliability • No units • SEM → absolute index of reliability • Same units as the measurement of interest • The SEM is the standard error in estimating observed scores from true scores.

  22. Calculating the SEM • Calculating the SEM3,1:
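Two common forms are SEM = SD × √(1 − ICC), where SD is the standard deviation of all scores, and, taken directly from the ANOVA table, SEM = √MSE (Weir, 2005).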

  23. SEM • We can report SEM values in addition to the ICC values and the results of the ANOVA • We can calculate the minimum difference (MD) that can be considered “real” between scores

  24. Minimum Difference • The SEM can be used to determine the minimum difference (MD) to be considered “real” and can be calculated as follows:
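Following Weir (2005), for a 95% confidence level the minimum difference is MD = SEM × 1.96 × √2.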

  25. Example Problem • Now use your skills (by hand) to calculate a repeated measures ANOVA, ICC3,1, SEM3,1, and MD3,1 for Trials B1 and B2. • Report your results. • Compare your results to Trials A1 and A2. • What is the primary difference?

  26. Using the Reliability Worksheet Online • Go to the course website and download the Reliability.xls worksheet. • Calculate the ANOVA, ICC, SEM, and MD values for both Trials A1–A2 and Trials B1–B2, and compare your results.
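For anyone who prefers a script to the spreadsheet, the whole chain of calculations can be sketched in Python. The function and data below are illustrative assumptions, not the course worksheet or the actual trial data; results should be checked against Reliability.xls.

```python
import math

def reliability_summary(data, z=1.96):
    """Repeated measures ANOVA, ICC(3,1), SEM, and MD for a
    subjects-by-trials table (rows = subjects, columns = trials)."""
    N, k = len(data), len(data[0])
    correction = sum(sum(row) for row in data) ** 2 / (N * k)   # (ΣXT)²/(Nk)
    ss_t = sum(x ** 2 for row in data for x in row) - correction
    ss_c = sum(sum(col) ** 2 for col in zip(*data)) / N - correction
    ss_r = sum(sum(row) ** 2 for row in data) / k - correction
    ss_e = ss_t - ss_c - ss_r
    ms_c = ss_c / (k - 1)
    ms_r = ss_r / (N - 1)
    ms_e = ss_e / ((N - 1) * (k - 1))
    icc = (ms_r - ms_e) / (ms_r + (k - 1) * ms_e)   # ICC(3,1), Weir (2005)
    sem = math.sqrt(ms_e)                           # SEM = √MSE
    md = sem * z * math.sqrt(2)                     # minimum difference
    return {"F": ms_c / ms_e, "ICC": icc, "SEM": sem, "MD": md}

# Made-up scores for illustration (not Trials A1/A2 or B1/B2):
print(reliability_summary([[10, 12], [14, 15], [11, 13], [9, 10]]))
```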
