Statistical Analysis of the Nonequivalent Groups Design


Presentation Transcript


  1. Statistical Analysis of the Nonequivalent Groups Design

  2. Analysis Requirements
N O X O
N O   O
• Pre-post
• Two-group
• Treatment-control (dummy-coded)

  3. Analysis of Covariance yi = 0 + 1Xi + 2Zi + ei yi = outcome score for the ith unit 0 = coefficient for the intercept 1 = pretest coefficient 2 = mean difference for treatment Xi = covariate Zi = dummy variable for treatment(0 = control, 1= treatment) ei = residual for the ith unit where:

  4. The Bivariate Distribution
[Scatterplot of posttest vs. pretest]
The program group scores 15 points higher on the posttest.
The program group has a 5-point pretest advantage.

  5. Regression Results
yi = 18.7 + .626Xi + 11.3Zi

Predictor   Coef      StErr     t       p
Constant    18.714    1.969      9.50   0.000
pretest      0.62600  0.03864   16.20   0.000
Group       11.2818   0.5682    19.85   0.000

• The result is biased!
• CI.95(β2 = 10): β2 ± 2SE(β2) = 11.2818 ± 2(.5682) = 11.2818 ± 1.1364
• CI = 10.1454 to 12.4182, which does not include the true treatment effect of 10.
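The interval arithmetic on this slide can be checked directly; a small sketch, with the coefficient and standard error copied from the Group row above and ±2·SE used as the slide's approximation to the 95% interval:

# Reproduce the slide's interval: estimate ± 2*SE, using the Group row above.
b2, se = 11.2818, 0.5682
lo, hi = b2 - 2 * se, b2 + 2 * se
print(f"95% CI: {lo:.4f} to {hi:.4f}")   # 10.1454 to 12.4182 -- excludes the true effect of 10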

  6. The Bivariate Distribution
[Scatterplot of posttest vs. pretest with fitted regression lines for each group]
The regression line slopes are biased. Why?

  7. Regression and Error
• No measurement error

  8. Regression and Error
• No measurement error
• Measurement error on the posttest only

  9. Regression and Error
• No measurement error
• Measurement error on the posttest only
• Measurement error on the pretest only

  10. How Regression Fits Lines

  11. How Regression Fits Lines
Method of least squares

  12. How Regression Fits Lines
Method of least squares: minimize the sum of the squares of the residuals from the regression line.

  13. How Regression Fits Lines
Method of least squares: minimize the sum of the squares of the residuals from the regression line.
Least squares minimizes on y, not x.
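A small illustrative sketch of this point on simulated data (not the slides' dataset): least squares scores a candidate line by its vertical (y) residuals, and the fitted slope gives the smallest such sum of squares.

# OLS minimizes the sum of squared vertical (y) residuals.
import numpy as np

rng = np.random.default_rng(1)
x = rng.normal(50, 7, 100)
y = 10 + 0.6 * x + rng.normal(0, 5, 100)

slope, intercept = np.polyfit(x, y, 1)              # ordinary least squares fit

def ssr(s):
    """Sum of squared vertical residuals for slope s (with its best intercept)."""
    b0 = y.mean() - s * x.mean()
    return np.sum((y - (b0 + s * x)) ** 2)

for s in (slope - 0.1, slope, slope + 0.1):         # the OLS slope gives the minimum
    print(f"slope={s:.3f}  sum of squared residuals={ssr(s):.1f}")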

  14. How Error Affects Slope
• No measurement error: no effect on the slope.

  15. How Error Affects Slope
• No measurement error: no effect on the slope.
• Measurement error on the posttest only: adds variability around the regression line, but doesn’t affect the slope.

  16. How Error Affects Slope
• No measurement error: no effect on the slope.
• Measurement error on the posttest only: adds variability around the regression line, but doesn’t affect the slope.
• Measurement error on the pretest only: affects the slope; it flattens the regression lines.

  17. How Error Affects Slope
Measurement error on the pretest only: affects the slope; it flattens the regression lines.

  18. How Error Affects Slope
Notice that the true result in all three cases should be a null (no-effect) one.

  19. How Error Affects Slope
Notice that the true result in all three cases should be a null (no-effect) one.
[Plot: the null case]

  20. How Error Affects Slope
But with measurement error on the pretest, we get a pseudo-effect.
[Plot: the pseudo-effect]
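A minimal simulation sketch of this pseudo-effect, with made-up numbers (a 5-point group difference on the true pretest, no true treatment effect, and measurement error added only to the observed pretest):

# Pretest measurement error attenuates the slope and yields a pseudo-effect.
import numpy as np

rng = np.random.default_rng(2)
n = 5000
Z = np.repeat([0, 1], n // 2)
true_pre = rng.normal(50 + 5 * Z, 7)                 # program group 5 points higher
post = true_pre + rng.normal(0, 3, n)                # null case: no treatment effect
pre_obs = true_pre + rng.normal(0, 5, n)             # pretest measured with error

def ancova_effect(pre):
    D = np.column_stack([np.ones(n), pre, Z])
    return np.linalg.lstsq(D, post, rcond=None)[0][2]   # coefficient on Z

print("effect with error-free pretest:", round(ancova_effect(true_pre), 2))  # ~0
print("effect with noisy pretest:     ", round(ancova_effect(pre_obs), 2))   # pseudo-effect > 0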

  21. Where Does This Leave Us?
• Traditional ANCOVA looks like it should work on the NEGD, but it’s biased.
• The bias results from the effect of pretest measurement error under the least squares criterion.
• Slopes are flattened or “attenuated”.

  22. What’s the Answer?
• If it’s a pretest problem, let’s fix the pretest.
• If we could remove the error from the pretest, it would fix the problem.
• Can we adjust pretest scores for error?
• What do we know about error?

  23. What’s the Answer?
• We know that if we had no error, reliability = 1; if it were all error, reliability = 0.
• Reliability estimates the proportion of true score.
• Unreliability = 1 − Reliability.
• This is the proportion of error!
• Use this to adjust the pretest.

  24. What Would a Pretest Adjustment Look Like?
[Plot: original pretest distribution]

  25. What Would a Pretest Adjustment Look Like?
[Plot: original pretest distribution and adjusted pretest distribution]

  26. How Would It Affect Regression?
[Plot: the regression line and the pretest distribution]

  27. How Would It Affect Regression?
[Plot: the regression line and the pretest distribution]

  28. How Far Do We Squeeze the Pretest?
• Squeeze inward an amount proportionate to the error.
• If reliability = .8, we want to squeeze in about 20% (i.e., 1 − .8).
• Or, we want the pretest to retain 80% of its original width.

  29. Adjusting the Pretest for Unreliability
Xadj = X̄ + r(X − X̄)

  30. Adjusting the Pretest for Unreliability
Xadj = X̄ + r(X − X̄)
where:

  31. Adjusting the Pretest for Unreliability
Xadj = X̄ + r(X − X̄)
where:
Xadj = adjusted pretest value

  32. Adjusting the Pretest for Unreliability
Xadj = X̄ + r(X − X̄)
where:
Xadj = adjusted pretest value
X = original pretest value
X̄ = pretest mean

  33. Adjusting the Pretest for Unreliability
Xadj = X̄ + r(X − X̄)
where:
Xadj = adjusted pretest value
X = original pretest value
X̄ = pretest mean
r = reliability
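A one-line implementation of this adjustment as a sketch. The function name and example scores are mine; whether to shrink toward the overall mean or each group's own mean is not spelled out on the slide, so this simply shrinks toward the mean of whatever scores are passed in.

# Sketch of the slide's formula: Xadj = X-bar + r(X - X-bar).
import numpy as np

def adjust_pretest(x, reliability):
    """Shrink each pretest score toward the mean of x by the reliability."""
    return x.mean() + reliability * (x - x.mean())

x = np.array([42.0, 48.0, 55.0, 61.0])               # example pretest scores
print(adjust_pretest(x, reliability=0.8))             # each score moves 20% of the way to the mean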

  34. Reliability-Corrected Analysis of Covariance
yi = β0 + β1Xadj + β2Zi + ei
where:
yi = outcome score for the ith unit
β0 = coefficient for the intercept
β1 = pretest coefficient
β2 = mean difference for treatment
Xadj = covariate adjusted for unreliability
Zi = dummy variable for treatment (0 = control, 1 = treatment)
ei = residual for the ith unit
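Continuing the simulated null example from above, a sketch of the corrected analysis: adjust the observed pretest within each group and refit the same ANCOVA model. The reliability is treated as known here (true-score variance over total variance) purely for illustration; in practice it would have to be estimated.

# Reliability-corrected ANCOVA on the simulated null example.
import numpy as np

rng = np.random.default_rng(2)
n = 5000
Z = np.repeat([0, 1], n // 2)
true_pre = rng.normal(50 + 5 * Z, 7)
post = true_pre + rng.normal(0, 3, n)
pre_obs = true_pre + rng.normal(0, 5, n)              # reliability = 49 / (49 + 25)

r = 49 / (49 + 25)
pre_adj = pre_obs.copy()
for g in (0, 1):                                      # shrink toward each group's own mean
    m = pre_obs[Z == g].mean()
    pre_adj[Z == g] = m + r * (pre_obs[Z == g] - m)

D = np.column_stack([np.ones(n), pre_adj, Z])
b2 = np.linalg.lstsq(D, post, rcond=None)[0][2]
print(f"corrected treatment effect: {b2:.2f}")        # near the true null effect of 0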

  35. Regression Results
yi = -3.14 + 1.06Xadj + 9.30Zi

Predictor   Coef       StErr     t       p
Constant    -3.141     3.300     -0.95   0.342
adjpre       1.06316   0.06557   16.21   0.000
Group        9.3048    0.6166    15.09   0.000

• The result is unbiased!
• CI.95(β2 = 10): β2 ± 2SE(β2) = 9.3048 ± 2(.6166) = 9.3048 ± 1.2332
• CI = 8.0716 to 10.5380, which includes the true treatment effect of 10.

  36. Graph of Means

            MEAN                  STD DEV
            pretest   posttest    pretest   posttest
Comp        49.991    50.008      6.985     7.549
Prog        54.513    64.121      7.037     7.381
ALL         52.252    57.064      7.360     10.272

  37. Adjusted Pretest

            MEAN                            STD DEV
            pretest   adjpre    posttest    pretest   adjpre    posttest
Comp        49.991    49.991    50.008      6.985     3.904     7.549
Prog        54.513    54.513    64.121      7.037     4.344     7.381
ALL         52.252    52.252    57.064      7.360     4.706     10.272

• Note that the adjusted means are the same as the unadjusted means.
• The only thing that changes is the standard deviation (variability).
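A sketch of why this happens, on illustrative numbers: the adjustment X̄ + r(X − X̄) leaves the mean unchanged and scales the standard deviation by the reliability.

# The adjustment preserves the mean and shrinks the SD by r.
import numpy as np

rng = np.random.default_rng(3)
x = rng.normal(50, 7, 1000)
r = 0.8
x_adj = x.mean() + r * (x - x.mean())
print(f"mean:    {x.mean():.3f} -> {x_adj.mean():.3f}")   # identical
print(f"std dev: {x.std():.3f}  -> {x_adj.std():.3f}")    # scaled by r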

  38. Original Regression Results
[Scatterplot of posttest vs. pretest with the original regression lines]
Original analysis: pseudo-effect = 11.28

  39. Corrected Regression Results
[Scatterplot of posttest vs. pretest with the original and corrected regression lines]
Original analysis: pseudo-effect = 11.28
Corrected analysis: effect = 9.31
