# C2 Training: May 9 – 10, 2011

##### Presentation Transcript

1. C2 Training: May 9 – 10, 2011 Data Analysis and Interpretation: Computing effect sizes

2. A brief introduction to effect sizes Meta-analysis expresses the results of each study using a quantitative index of effect size (ES). ESs are measures of the strength or magnitude of a relationship of interest. ESs have the advantage of being comparable (i.e., they estimate the same thing) across all of the studies and therefore can be summarized across studies in the meta-analysis. Also, they are relatively independent of sample size.

3. Effect Size Basics • Effect sizes can be expressed in many different metrics • d, r, odds ratio, risk ratio, etc. • So be sure to be specific about the metric! • Effect sizes can be unstandardized or standardized • Unstandardized = expressed in measurement units • Standardized = expressed in standardized measurement units

4. Unstandardized Effect Sizes • Examples • 5 point gain in IQ scores • 22% reduction in repeat offending • €600 savings per person • Unstandardized effect sizes are helpful in communicating intervention impacts • But in many systematic reviews they are not usable, since not all studies operationalize the dependent variable in the same way

5. Standardized Effect Sizes • Some standardized effect sizes are relatively easy to interpret • Correlation coefficient • Risk ratio • Others are not • Standardized mean difference (d) • Odds ratio, logged odds ratio

6. Types of effect size Most reviews use effect sizes from one of three families of effect sizes: • the d family, including the standardized mean difference, • the r family, including the correlation coefficient, and • the odds ratio (OR) family, including proportions and other measures for categorical data.

7. Effect size computation • Compute a measure of the “effect” of each study as our outcome • Range of effect sizes: • Differences between two groups on a continuous measure • Relationship between two continuous measures • Differences between two groups on frequency or incidence

8. Types of effect sizes • Standardized mean difference • Correlation Coefficient • Odds Ratios

9. Standardized mean difference • Used when we are interested in two-group comparisons using means • Groups could be two experimental groups, or in an observational study, two groups of interest such as boys versus girls.

10. Notation for study-level statistics n is sample size

11. Notation for study-level statistics

12. Standardized mean difference Pooled sample standard deviation
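The formula on this slide was an image and did not survive the transcript; the standard definition of the standardized mean difference, dividing the difference in group means by the pooled sample standard deviation, is:

```latex
ES_{sm} = \frac{\bar{X}_1 - \bar{X}_2}{s_{pooled}}
```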

13. Pooled sample standard deviation
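This slide's formula was also lost in transcription; the usual pooled sample standard deviation, weighting each group's variance by its degrees of freedom, is:

```latex
s_{pooled} = \sqrt{\frac{(n_1 - 1)s_1^2 + (n_2 - 1)s_2^2}{n_1 + n_2 - 2}}
```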

14. Correction to ESsm
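The correction formula did not survive the transcript; the small-sample bias correction commonly applied to the standardized mean difference (Hedges' correction, with N = n₁ + n₂) is:

```latex
ES'_{sm} = \left[1 - \frac{3}{4(n_1 + n_2) - 9}\right] ES_{sm}
```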

15. Standard error of standardized mean difference
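The standard-error formula was an image in the original slide; the standard expression for the standard error of a standardized mean difference is:

```latex
SE_{sm} = \sqrt{\frac{n_1 + n_2}{n_1 n_2} + \frac{ES_{sm}^2}{2(n_1 + n_2)}}
```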

16. Example • Table 1 from: Henggeler, S. W., Melton, G. B. & Smith, L. A. (1992). Family preservation using multisystemic therapy: An effective alternative to incarcerating serious juvenile offenders. Journal of Consulting and Clinical Psychology, 60(6), 953-961.

17. Note: Text of paper (p. 954) indicates that MST n = 43, usual services n = 41.

18. Computing pooled sd

19. Computing ESsm

20. Computing unbiased ESsm

21. Computing SEsm

22. 95% Confidence interval for ES’sm The 95% confidence interval for the standardized mean difference in weeks of incarceration ranges from -1.06 sds to -0.18 sds. Given that the sd of weeks is 16.6, the juveniles in MST were incarcerated on average between -1.06 × 16.6 = -17.6 and -0.18 × 16.6 = -3.0 weeks relative to juveniles in the standard treatment, i.e., from about 17.6 to about 3.0 fewer weeks. In weeks, the confidence interval is [-17.6, -3.0].


24. Practice computations • Compute effect size for number of arrests • Compute effect size with bias correction • Compute 95% confidence interval for effect size • Interpret the effect size
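The practice steps above can be sketched end to end in Python. The group means and standard deviations for arrests do not appear in this transcript, so the values below are placeholders; only the sample sizes (MST n = 43, usual services n = 41) come from the paper.

```python
import math

def pooled_sd(n1, s1, n2, s2):
    """Pooled sample standard deviation of two groups."""
    return math.sqrt(((n1 - 1) * s1**2 + (n2 - 1) * s2**2) / (n1 + n2 - 2))

def smd(m1, m2, n1, s1, n2, s2):
    """Standardized mean difference (treatment minus control)."""
    return (m1 - m2) / pooled_sd(n1, s1, n2, s2)

def hedges_correction(es, n1, n2):
    """Small-sample bias correction for the standardized mean difference."""
    return (1 - 3 / (4 * (n1 + n2) - 9)) * es

def se_smd(es, n1, n2):
    """Standard error of the standardized mean difference."""
    return math.sqrt((n1 + n2) / (n1 * n2) + es**2 / (2 * (n1 + n2)))

def ci95(es, se):
    """95% confidence interval (lower, upper)."""
    return (es - 1.96 * se, es + 1.96 * se)

# Sample sizes from the paper; means and SDs below are hypothetical.
n1, n2 = 43, 41
es = smd(0.9, 1.5, n1, 1.2, n2, 1.6)          # placeholder group means/SDs
es_unbiased = hedges_correction(es, n1, n2)    # bias-corrected effect size
lo, hi = ci95(es_unbiased, se_smd(es_unbiased, n1, n2))
```

A negative effect size here would indicate fewer arrests in the MST group than in the usual-services group, matching the interpretation step in the practice exercise.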

25. Pooled sd for arrests

26. ESsm for arrests

27. Computing unbiased ESsm

28. Computing SEsm

29. 95% Confidence interval for ES’sm The 95% confidence interval for the standardized mean difference in number of arrests is from -0.87 sds to -0.01 sds. Given that the sd of arrests is 1.44, the juveniles in MST had on average between -0.87 × 1.44 = -1.25 and -0.01 × 1.44 = -0.01 arrests relative to juveniles in the standard treatment, i.e., up to about 1.25 fewer arrests. In arrests, the confidence interval is [-1.25, -0.01].

30. Computing standardized mean differences The first steps in computing d effect sizes involve assessing what data are available and what’s missing. You will look for: • Sample size and unit information • Means and SDs or SEs for treatment and control groups • ANOVA tables • F or t tests in text, or • Tables of counts

31. Sample sizes Regardless of exactly what you compute, you will need the sample sizes (to correct for bias and compute variances). Sample sizes can vary within studies, so check initial reports of n against • n for each test or outcome, or • df associated with each test

32. Standardized Mean Differences • Means, standard deviations and sample sizes are the most direct method • Without individual group sample sizes (n1 and n2), assume equal group n’s • Can compute standardized mean differences from t-statistic and from one-way F-statistic

33. ESsm from t-tests

34. Standardized mean difference from t-test
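The conversion formula on these two slides was lost in transcription; the standard conversion from an independent-samples t-statistic to the standardized mean difference is:

```latex
ES_{sm} = t \sqrt{\frac{n_1 + n_2}{n_1 n_2}} = t \sqrt{\frac{1}{n_1} + \frac{1}{n_2}}
```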

35. Standardized mean difference from means and sds

36. ESsm from F-tests (one-way) Note that you have to decide the direction of the effect given the results.

37. Standardized mean difference from F-test Note that we choose a negative effect size since the number of arrests is less for the MST group than for the control group
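The F-test conversion formula itself did not survive the transcript; for a one-way F with two groups, the standard conversion gives the magnitude of the effect size, with the sign assigned by the direction of the group difference (negative here, as the slide notes):

```latex
\left| ES_{sm} \right| = \sqrt{\frac{F \, (n_1 + n_2)}{n_1 n_2}}
```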

38. From means and sds from before

39. Correlational data

40. Correlation data

41. Standard error of z-transform
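The formulas on the correlation slides were images; the standard Fisher z-transform of a correlation r, and the standard error of z for a sample of size n, are:

```latex
z = 0.5 \ln\!\left(\frac{1 + r}{1 - r}\right), \qquad SE_z = \frac{1}{\sqrt{n - 3}}
```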

42. Example

43. Standard error of z-transform

44. 95% confidence interval for z

45. To translate back to r-metric
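The back-transformation formula was lost in transcription; the standard inverse of the Fisher z-transform, returning z to the r-metric, is:

```latex
r = \frac{e^{2z} - 1}{e^{2z} + 1}
```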

46. Confidence interval in r-metric
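The full procedure on slides 40–46 can be sketched in Python: transform r to z, build the 95% confidence interval in the z-metric (where the standard error is simple), then back-transform the endpoints to the r-metric. The r and n values below are placeholders, not from the presentation.

```python
import math

def fisher_z(r):
    """Fisher z-transform of a correlation r."""
    return 0.5 * math.log((1 + r) / (1 - r))

def z_to_r(z):
    """Inverse Fisher transform, back to the r-metric."""
    return (math.exp(2 * z) - 1) / (math.exp(2 * z) + 1)

def ci_r(r, n):
    """95% CI for a correlation: build it in z, back-transform the endpoints."""
    z = fisher_z(r)
    se = 1 / math.sqrt(n - 3)
    return z_to_r(z - 1.96 * se), z_to_r(z + 1.96 * se)

# Hypothetical example: r = 0.30 from a sample of n = 84.
lo, hi = ci_r(0.30, 84)
```

Note that the back-transformed interval is not symmetric around r, because the z-transform stretches the scale near ±1.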