
COMPUTING EFFECT SIZES






Presentation Transcript


  1. COMPUTING EFFECT SIZES LECTURE 4 EPSY 652 FALL 2009

  2. Computing Effect Sizes- Mean Difference Effects • Glass: e = (MeanExperimental – MeanControl)/SD • SD = Square Root (average of two variances) for randomized designs • SD = Control standard deviation when treatment might affect variation (causes statistical problems in estimation) • Hedges: Correct for sampling bias: g = e[1 – 3/(4N – 9)] • where N = total # in experimental and control groups • Sg = [(Ne + Nc)/(NeNc) + g²/(2(Ne + Nc))]½
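The three formulas on this slide can be sketched in Python (a minimal illustration; the function and variable names are ours, not from the course materials):

```python
import math

def glass_e(mean_exp, mean_ctrl, sd):
    """Glass's effect size: standardized mean difference."""
    return (mean_exp - mean_ctrl) / sd

def hedges_g(e, n_total):
    """Hedges' small-sample bias correction: g = e[1 - 3/(4N - 9)]."""
    return e * (1 - 3 / (4 * n_total - 9))

def se_g(g, n_e, n_c):
    """Standard error of g: Sg = [(Ne+Nc)/(NeNc) + g^2/(2(Ne+Nc))]^0.5."""
    return math.sqrt((n_e + n_c) / (n_e * n_c) + g**2 / (2 * (n_e + n_c)))
```

The same three quantities are what the course's Excel workbook computes from the entered means, SDs, and sample sizes.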

  3. Computing Effect Sizes- Mean Difference Effects Example from Spencer ADHD Adult study • Glass: e = (MeanExperimental – MeanControl)/SD = |82 – 101|/21.55 = .8817 • Hedges: Correct for sampling bias: g = e[1 – 3/(4N – 9)] = .8817 (1 – 3/(4*110 – 9)) = .8756 • Note: SD was computed from the t-statistic of 4.2 given in the article, using e = t*(1/NE + 1/NC)½

  4. Computing Mean Difference Effect Sizes from Summary Statistics • t-statistic: e = t*(1/NE + 1/NC)½ • F(1,dferror): e = F½*(1/NE + 1/NC)½ • Point-biserial correlation: e = r*(dfe/(1 – r²))½*(1/NE + 1/NC)½ • Chi Square (Pearson association): Φ² = χ²/(χ² + N), e = Φ*(N/(1 – Φ²))½*(1/NE + 1/NC)½ • ANOVA results: Compute R² = SSTreatment/SStotal, then treat R as a point-biserial correlation
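The four conversions above can be written as small Python functions (a sketch; names are ours). Note that the one-df F conversion is just the t conversion with F = t², so the two must agree:

```python
import math

def e_from_t(t, n_e, n_c):
    """Mean-difference effect from a t-statistic."""
    return t * math.sqrt(1 / n_e + 1 / n_c)

def e_from_F(F, n_e, n_c):
    """From a one-df F: since F = t^2, e = sqrt(F) * sqrt(1/Ne + 1/Nc)."""
    return math.sqrt(F) * math.sqrt(1 / n_e + 1 / n_c)

def e_from_r(r, df_error, n_e, n_c):
    """From a point-biserial correlation."""
    return r * math.sqrt(df_error / (1 - r**2)) * math.sqrt(1 / n_e + 1 / n_c)

def e_from_chi2(chi2, n, n_e, n_c):
    """From a Pearson chi-square: phi^2 = chi2/(chi2 + N)."""
    phi2 = chi2 / (chi2 + n)
    return math.sqrt(phi2) * math.sqrt(n / (1 - phi2)) * math.sqrt(1 / n_e + 1 / n_c)
```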

  5. Excel workbook for Mean difference computation

  6. WORKING AN EXAMPLE Story Book Reading References • 1 Wasik & Bond: Beyond the Pages of a Book: Interactive Book Reading and Language Development in Preschool Classrooms. J. Ed. Psych., 2001 • 2 Justice & Ezell: Use of Storybook Reading to Increase Print Awareness in At-Risk Children. Am. J. Speech-Language Path., 2002 • 3 Coyne, Simmons, Kame’enui, & Stoolmiller: Teaching Vocabulary During Shared Storybook Readings: An Examination of Differential Effects. Exceptionality, 2004 • 4 Fielding-Barnsley & Purdie: Early Intervention in the Home for Children at Risk of Reading Failure. Support for Learning, 2003

  7. Coding the Outcome • 1 Open the Wasik & Bond pdf • 2 Open the excel file “computing mean effects example” • 3 In Wasik, find Ne and Nc • 4 Decide on the effect(s) to be used- three outcomes are reported (PPVT, receptive, and expressive vocabulary) at the classroom and student level: what is the unit to be focused on? This is a multilevel issue of students within classrooms, but there are too few classrooms for reasonable MLM estimation and the classroom level has too little power- use the student data

  8. Coding the Outcome • 5 Determine which reported data are usable: here the AM and PM data are not usable because we don’t have the breakdowns by teacher-classroom- only summary tests can be used • 6 Data for PPVT were analyzed as a pre-post treatment design, approximating a covariance analysis; thus the interaction is the only usable summary statistic, since it is the differential effect of treatment vs. control adjusting for pretest differences with a regression weight of 1 (ANCOVA with a restricted covariance weight): Interaction_ij = Grand Mean – Treatment effect – Pretest effect = Y... – a_i.. – b_.j. Graphically, this is the difference between the gain in Treatment (post – pre) and the gain in Control (post – pre) • F for the interaction was F(1,120) = 13.69, p < .001 • Convert this to an effect size using the excel file Outcomes Computation • What do you get? (.6527)

  9. Coding the Outcome [Figure: pre-post plot of Treatment vs. Control, showing the Treatment gain not “predicted” from the Control post-test gains]

  10. Coding the Outcome • 7 For Expressive and Receptive Vocabulary, only the F-tests for the Treatment-Control posttest results are given: Receptive: F(1, 120) = 76.61, p < .001; Expressive: F(1, 120) = 128.43, p < .001 • What are the effect sizes? Use Outcomes Computation (Receptive: 1.544; Expressive: 1.999)
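As a check, the F-to-effect conversion can be scripted. Here Ne = Nc = 61 is an assumption of ours (the error df of 120 implies N = 122, but the article's exact group split is not shown on the slide), so the results will differ slightly from the slide's values, which used the actual group sizes:

```python
import math

def e_from_F(F, n_e, n_c):
    # One-df F converts via e = sqrt(F) * sqrt(1/Ne + 1/Nc)
    return math.sqrt(F) * math.sqrt(1 / n_e + 1 / n_c)

# Assumed (hypothetical) equal group sizes: df_error = 120 -> N = 122
n_e = n_c = 61
e_receptive = e_from_F(76.61, n_e, n_c)    # reported F for receptive vocabulary
e_expressive = e_from_F(128.43, n_e, n_c)  # reported F for expressive vocabulary
```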

  11. Getting a Study Effect • Should we average the outcomes to get a single study effect or • Keep the effects separate as different constructs to evaluate later (Expressive, Receptive) or • Average the PPVT and receptive outcome as a total receptive vocabulary effect? Comment- since each effect is based on the same sample size, the effects here can simply be averaged. If missing data had been involved, then we would need to use the weighted effect size equation, weighting the effects by their respective sample size within the study

  12. Getting a Study Effect • For this example, let’s average the three effects to put into the Computing mean effects example excel file- note that since we do not have means and SDs, we can put MeanC = 0 and MeanE as the effect size we calculated, put in the SDs as 1, and put in the correct sample sizes to get the Hedges g, etc. • (.6527 + 1.544 + 1.999)/3 = 1.3986
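The simple average used here, and the sample-size-weighted average the previous slide calls for when outcomes have different Ns, can both be sketched briefly (function names are ours):

```python
def simple_average(effects):
    """Unweighted mean: appropriate when every effect has the same N."""
    return sum(effects) / len(effects)

def weighted_average(effects, ns):
    """Weight each effect by its sample size within the study."""
    return sum(e * n for e, n in zip(effects, ns)) / sum(ns)
```

With equal sample sizes the weighted average reduces to the simple average, which is why simple averaging is defensible for this study.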

  13. • 2 Justice & Ezell: Receptive: 0.403; Expressive: 0.8606; Average = 0.6303 • 3 Coyne et al: Taught Vocab: 0.9385; Untaught Vocab: 0.3262; Average = 0.6323 • 4 Fielding: PPVT: -0.0764

  14. Computing mean effect size • Use excel file e:\\Computing mean effects1.xls

  15. Computing Correlation Effect Sizes • Reported Pearson correlation: use that • Regression b-weight: use the t-statistic reported, e = t*(1/NE + 1/NC)½ • t-statistics: r = [t²/(t² + dferror)]½ • Sums of Squares from ANOVA or ANCOVA: r = (R²partial)½, where R²partial = SSTreatment/SStotal • Note: Partial ANOVA or ANCOVA results should be noted as such and compared with unadjusted effects
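The two correlation recovery formulas above can be sketched in Python (a minimal illustration; names are ours):

```python
import math

def r_from_t(t, df_error):
    """Correlation from a t-statistic: r = sqrt(t^2 / (t^2 + df_error))."""
    return math.sqrt(t**2 / (t**2 + df_error))

def r_from_ss(ss_treatment, ss_total):
    """Correlation from ANOVA sums of squares: r = sqrt(SS_treat / SS_total)."""
    return math.sqrt(ss_treatment / ss_total)
```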

  16. Computing Correlation Effect Sizes • To compute correlation-based effects, you can use the excel program “Outcomes Computation correlations” • The next slide gives an example. • Emphasis is on disaggregating effects of unreliability and sample-based attenuation, and correcting sample-specific bias in correlation estimation • For more information, see Hunter and Schmidt (2004): Methods of Meta-Analysis. Sage. • Correlational meta-analyses have focused more on validity issues for particular tests vs. treatment or status effects using means

  17. Computing Correlation Effects Example

  18. EFFECT SIZE DISTRIBUTION • Hypothesis: All effects come from the same distribution • What does this look like for studies with different sample sizes? • Funnel plot- originally used to detect bias, can show what the confidence interval around a given mean effect size looks like • Note: it is NOT smooth, since CI depends on both sample sizes AND the effect size magnitude

  19. EFFECT SIZE DISTRIBUTION • Each mean effect SE can be computed from SE = 1/√(Σw) • For our 4 effects: 1: 0.200525; 2: 0.373633; 3: 0.256502; 4: 0.286355 • These are used to construct a 95% confidence interval around each effect

  20. EFFECT SIZE DISTRIBUTION- SE of Overall Mean • The overall mean effect SE can be computed from SE = 1/√(Σw) • For our effect mean of 0.8054, SE = 0.1297 • Thus a 95% CI is approximately (.54, 1.07) • The funnel plot can be constructed by computing a SE for each sample size pair around the overall mean- this is how the funnel plot figure was constructed in SPSS, along with each article effect mean and its CI
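Using the four study-level standard errors from slide 19, the overall SE and approximate 95% CI can be reproduced (a sketch; the weights are the usual inverse-variance weights w = 1/SE², consistent with SE = 1/√(Σw)):

```python
import math

ses = [0.200525, 0.373633, 0.256502, 0.286355]  # study-level SEs from slide 19
weights = [1 / se**2 for se in ses]             # inverse-variance weights

se_overall = 1 / math.sqrt(sum(weights))        # SE = 1/sqrt(sum of w)

mean_effect = 0.8054                            # overall mean from the slide
ci = (mean_effect - 1.96 * se_overall, mean_effect + 1.96 * se_overall)
```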

  21. EFFECT SIZE DISTRIBUTION- Statistical test • Hypothesis: All effects come from the same distribution: Q-test • Q is a chi-square statistic based on the variation of the effects around the mean effect: Q = Σ wi (gi – gmean)², with Q ~ χ²(k – 1), where k = number of effects
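A minimal sketch of the Q computation (the function name is ours; gmean is the inverse-variance weighted mean, and Q is compared against the χ² critical value for k – 1 df, e.g. 7.815 for df = 3 at α = .05):

```python
def q_statistic(effects, weights):
    """Homogeneity Q: weighted squared deviations about the weighted mean."""
    g_mean = sum(w * g for g, w in zip(effects, weights)) / sum(weights)
    return sum(w * (g - g_mean)**2 for g, w in zip(effects, weights))
```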

  22. Example Computing Q Excel file

  23. Computational Excel file • Open excel file: Computing Q • Enter the effects for the 4 studies and the w for each study, taken from the Computing mean effect excel file (you can delete the extra lines or add new ones by inserting as needed) • What Q do you get? Q = 39.57, df = 3, p < .001

  24. Interpreting Q • Nonsignificant Q means all effects could have come from the same distribution with a common mean • Significant Q means one or more effects or a linear combination of effects came from two different (or more) distributions • Effect component Q-statistic gives evidence for variation from the mean hypothesized effect

  25. Interpreting Q- nonsignificant • Some theorists state that you should stop here- this is incorrect • Homogeneity of the overall distribution does not imply homogeneity with respect to hypotheses regarding mediators or moderators • Example: homogeneous means might still correlate perfectly with year of publication (i.e., r = 1.0, p < .001)

  26. Interpreting Q- significant • Significance means there may be relationships with hypothesized mediators or moderators • Funnel plot and effect Q-statistics can give evidence for nonconforming effects that may or may not have characteristics you selected and coded for
