
Effect size calculation in educational and behavioral research



Presentation Transcript


  1. Effect size calculation in educational and behavioral research Wim Van den Noortgate ‘Power training’ Faculty of Psychology and Educational Sciences, K.U.Leuven Leuven, October 10, 2003 Questions and comments: Wim.VandenNoortgate@ped.kuleuven.ac.be

  2. Applications • A measure for each situation • Some specific topics

  3. Applications • Expressing size of association • Comparing size of association • Determining power

  4. M F Application 1: Expressing size of association Example: M = 8 ; F = 8.5 ; M = F = 1.5 => δ = 0.33

  5. Application 1: Expressing size of association Example: M = 8 ; F = 8.5 ; M = F = 1.5 => δ = 0.33

  6. Application 1: Expressing size of association Example: M = 8 ; F = 8.5 ; M = F = 1.5 => δ = 0.33
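The arithmetic of the example above can be checked in a few lines; this sketch simply plugs in the means and common standard deviation from the slide:

```python
# Standardized mean difference (population effect size delta)
# from the slide's example: mu_M = 8, mu_F = 8.5, common sigma = 1.5.
mu_M, mu_F = 8.0, 8.5
sigma = 1.5  # common within-group standard deviation

delta = (mu_F - mu_M) / sigma
print(round(delta, 2))  # 0.33
```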

  7. [Figure: population effect size δ and the sampling distribution of its estimate g]

  8. [Figure: sampling distribution of g centered near δ = 0.33, with 0 and 0.33 marked on the axis]

  9. Suppose the simulated data are data from 10 studies that are replications of each other:

  10. Comparing individual study results and combined study results • Individual studies: observed effect sizes may be negative, small, moderate, or large • confidence intervals relatively large • 0 often included in the confidence intervals • Combined: the combined effect size is close to the population effect size • confidence interval relatively small • 0 not included in the confidence interval
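The combined estimate on this slide comes from pooling the study results; a minimal fixed-effect sketch with inverse-variance weighting (the g values and sampling variances below are made up for illustration, not the simulated data from the slides):

```python
from math import sqrt

# Fixed-effect combination of study effect sizes by inverse-variance
# weighting. The g values and variances are hypothetical examples.
g = [0.10, 0.45, -0.05, 0.30, 0.25]
v = [0.04, 0.05, 0.06, 0.04, 0.05]   # sampling variance of each g

w = [1.0 / vi for vi in v]           # weight = 1 / variance
g_comb = sum(wi * gi for wi, gi in zip(w, g)) / sum(w)
se = sqrt(1.0 / sum(w))              # SE of the combined estimate
ci = (g_comb - 1.96 * se, g_comb + 1.96 * se)
print(round(g_comb, 3), [round(x, 3) for x in ci])
```

Note how the combined confidence interval is narrower than any single study's and, here, excludes 0 even though several individual intervals would include it.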

  11. Meta-analysis: Gene Glass (Educational Researcher, 1976, p.3): “Meta-analysis refers to the analysis of analyses”

  12. Application 2: Comparing the size of association Example: Raudenbush & Bryk (2002)

  13. Results of the meta-analysis: • The variation between the observed effect sizes is larger than can be expected on the basis of sampling variance alone: the population effect size is probably not the same for all studies. • The effect depends on the amount of previous contact.

  14. Application 3: Power calculations Power = the probability of rejecting H0 (when it is false). Power depends on: - δ - α - N

  15. ‘Powerful’ questions: • Suppose the population effect size is small (δ = 0.20); how large should my sample size (N) be to have a high probability (say, .80) of concluding that there is an effect (power), when testing at an α-level of .05? • I did not find an effect, but perhaps the chance of finding an effect (the power) was small anyway with such a small sample? (Take N and α from the study, and assume for instance that δ = g.)
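The first question can be answered with a normal-approximation sample-size formula; a sketch, assuming a two-sided two-sample z-test (the slide does not specify the test, so this is one reasonable choice):

```python
from math import ceil
from statistics import NormalDist

# Per-group sample size for a two-sided two-sample z-test
# (normal approximation): n = 2 * ((z_{1-alpha/2} + z_{power}) / delta)^2
def n_per_group(delta, power=0.80, alpha=0.05):
    z = NormalDist().inv_cdf
    n = 2 * ((z(1 - alpha / 2) + z(power)) / delta) ** 2
    return ceil(n)

print(n_per_group(0.20))  # 393 per group
```

With δ = 0.20, power .80, and α = .05 this gives roughly 393 participants per group, which illustrates why small effects demand large samples.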

  16. A measure for each situation

  17. A measure for each situation

  18. Dichotomous independent-dichotomous dependent variable

  19. Dichotomous independent-dichotomous dependent variable

              success   failure   total   risk
  group 1        130        20     150    .87
  group 2         30        20      50    .60
  total          160        40     200

  • Risk difference: .87 - .60 = .27 • Relative risk: .87/.60 = 1.45 • Phi: (130 x 20 - 20 x 30)/sqrt(150 x 50 x 160 x 40) = 0.29 • Odds ratio: (130 x 20)/(20 x 30) = 4.33
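The four measures on this slide can be computed directly from the cell counts; a sketch using the slide's 2x2 table:

```python
from math import sqrt, log

# Effect sizes for a 2x2 table; cell counts from the slide:
# a, b = group 1 successes/failures; c, d = group 2 successes/failures.
a, b, c, d = 130, 20, 30, 20

risk_diff = a / (a + b) - c / (c + d)          # .87 - .60 = .27
relative_risk = (a / (a + b)) / (c / (c + d))  # 1.44 (slide rounds .87/.60 to 1.45)
phi = (a * d - b * c) / sqrt((a + b) * (c + d) * (a + c) * (b + d))
odds_ratio = (a * d) / (b * c)                 # 4.33
log_odds_ratio = log(odds_ratio)               # convenient sampling distribution

print(round(risk_diff, 2), round(relative_risk, 2),
      round(phi, 2), round(odds_ratio, 2))
```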

  20. A measure for each situation

  21. Dichotomous independent-continuous dependent variable • Independent groups, homogeneous variance • Independent groups, heterogeneous variance • Repeated measures (one group) • Repeated measures (independent groups) • Nonparametric measures • r_pb (point-biserial correlation)
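The slide's formulas were lost in extraction; the homogeneous-variance case is the familiar standardized mean difference with a pooled standard deviation, sketched here on made-up data (the two samples are purely illustrative):

```python
from math import sqrt
from statistics import mean, stdev

# Standardized mean difference for two independent groups,
# standardizing with the pooled within-group SD.
def smd_pooled(y1, y2):
    n1, n2 = len(y1), len(y2)
    s_pooled = sqrt(((n1 - 1) * stdev(y1) ** 2 + (n2 - 1) * stdev(y2) ** 2)
                    / (n1 + n2 - 2))
    return (mean(y1) - mean(y2)) / s_pooled

exp = [8.1, 9.0, 8.4, 9.5, 8.8]  # hypothetical experimental-group scores
ctl = [7.6, 8.2, 7.9, 8.5, 7.3]  # hypothetical control-group scores
print(round(smd_pooled(exp, ctl), 2))
```

Under heterogeneous variances one common alternative (mentioned on slide 36) is Glass's Δ, which divides by the control-group SD alone instead of the pooled SD.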

  22. A measure for each situation

  23. Nominal independent-nominal dependent variable • Contingency measures, e.g.: • Pearson’s contingency coefficient • Cramér’s V • Phi coefficient • Goodman-Kruskal tau • Uncertainty coefficient • Cohen’s Kappa

  24. A measure for each situation

  25. Nominal independent-continuous dependent variable • ANOVA: multiple g’s • η² • ICC
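For the ANOVA case, η² is the between-groups sum of squares as a proportion of the total sum of squares; a minimal sketch on made-up data for three groups:

```python
from statistics import mean

# Eta squared for a one-way layout: SS_between / SS_total.
# The three group samples are hypothetical examples.
groups = [[3, 4, 5], [5, 6, 7], [8, 9, 10]]
all_y = [y for g in groups for y in g]
grand_mean = mean(all_y)

ss_total = sum((y - grand_mean) ** 2 for y in all_y)
ss_between = sum(len(g) * (mean(g) - grand_mean) ** 2 for g in groups)
eta_sq = ss_between / ss_total
print(round(eta_sq, 3))
```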

  26. A measure for each situation

  27. Continuous independent-continuous dependent variable • r • Non-normal data: Spearman’s ρ • Ordinal data: Kendall’s τ, Somers’ D, Gamma coefficient • Weighted Kappa

  28. More complex situations • Two or more independent variables • Regression models

  29. Y continuous: Yi = a + bXi + ei • X continuous: b estimated by r (SY/SX) • X dichotomous (1 = experimental, 0 = control): b estimated by the difference between the group means • Y dichotomous: Logit(P(Yi = 1)) = a + bXi ; if X is dichotomous, b is estimated by the log odds ratio

  30. More complex situations • Two or more independent variables • Regression models • Stratification • Contrast analyses in factorial designs (Rosenthal, Rosnow & Rubin,2000)

  31. Note: N=120 (12 x 10)

  32. More complex situations • Two or more independent variables • Regression models • Stratification • Contrast analyses in factorial designs • Multilevel models • Two or more dependent variables • Single-case studies

  33. Yi = b0 + b1 phasei + ei • Yi = b0 + b1 timei + b2 phasei +b3 (timei x phasei) + ei
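In the first of these single-case models, the phase coefficient has a simple interpretation that can be checked directly; a sketch with made-up baseline and treatment scores:

```python
from statistics import mean

# In Yi = b0 + b1*phase_i + ei (phase = 0 baseline, 1 treatment),
# OLS gives b0 = baseline mean and b1 = difference between phase means.
# The scores below are hypothetical examples.
baseline = [3, 4, 3, 5]    # phase = 0
treatment = [6, 7, 6, 8]   # phase = 1

b0 = mean(baseline)
b1 = mean(treatment) - mean(baseline)
print(b0, b1)  # 3.75 3.0
```

The second model adds a time trend and a time-by-phase interaction, so b2 then captures the level shift at the phase change rather than the overall mean difference.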

  34. Specific topics

  35. Comparability of effect sizes Example: g_IG (independent groups) vs. g_gain (gain scores):

  36. Comparability of effect sizes • Estimating different population parameters, e.g., • Estimating with different precision, e.g., g vs. Glass’s Δ

  37. Choosing a measure • Design and measurement level • Assumptions • Popularity • Simplicity of the sampling distribution: Fisher’s Z = 0.5 ln[(1+r)/(1-r)] ; log odds ratio ; ln(RR) • Directional effect size
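The appeal of Fisher's Z is that its sampling distribution is approximately normal with a variance depending only on n; a sketch building a confidence interval for r via the Z scale (the r and n values are made up for illustration):

```python
from math import log, sqrt, tanh

# Approximate 95% CI for a correlation via Fisher's Z:
# Z = 0.5 * ln((1+r)/(1-r)), SE(Z) ~ 1/sqrt(n-3), then back-transform.
def fisher_ci(r, n):
    z = 0.5 * log((1 + r) / (1 - r))   # Fisher's Z transform
    se = 1 / sqrt(n - 3)               # approximate standard error of Z
    lo, hi = z - 1.96 * se, z + 1.96 * se
    return tanh(lo), tanh(hi)          # tanh inverts the Z transform

lo, hi = fisher_ci(0.40, 50)           # hypothetical r = .40, n = 50
print(round(lo, 3), round(hi, 3))
```

Note the back-transformed interval is asymmetric around r, reflecting the bounded (-1, 1) scale of the correlation.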

  38. Threats to effect sizes • ‘Bad data’ • Measurement error • Artificial dichotomization • Imperfect construct validity • Range restriction

  39. Threats to effect sizes • ‘Bad data’ • Measurement error • Artificial dichotomization • Imperfect construct validity • Range restriction • Bias
