Significance and Meaningfulness: Effect Size & Statistical Power
1. Effect Size: How “meaningful” is the significant difference?
Significance vs. meaningfulness • As sample size increases, the likelihood of a significant difference increases • Because the sample size sits in the denominator of the standard error, which is itself the denominator of the test statistic, as n → ∞, p → 0. So if your sample is big enough, it will generate significant results, as the sketch below illustrates.
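A minimal sketch (not part of the original slides, and using hypothetical summary statistics): the same trivially small mean difference becomes “significant” once the sample is large enough.

```python
# Minimal sketch: a fixed, tiny mean difference (0.1 SD) drifts toward p = 0
# as the per-group sample size grows.
from scipy.stats import ttest_ind_from_stats

mean1, mean2, sd = 100.0, 100.1, 1.0   # hypothetical group summaries

for n in (10, 100, 1_000, 10_000, 100_000):
    t, p = ttest_ind_from_stats(mean1, sd, n, mean2, sd, n)
    print(f"n per group = {n:>7,}  t = {t:7.3f}  p = {p:.4f}")
# p shrinks toward 0 as n grows, even though the difference stays tiny.
```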
Significance vs. meaningfulness • As sample size increases, likelihood of significant difference increases • So a statistically significant difference does not always mean an important difference • What to do about this? • Calculate a measure of the difference that is standardized in terms of the variability in the two samples, but independent of sample size • = EFFECT SIZE
Effect Size • EFFECT SIZE – FORMULA: ES (Cohen’s d) = (M₁ − M₂) / s_pooled, where s_pooled = √[((n₁ − 1)s₁² + (n₂ − 1)s₂²) / (n₁ + n₂ − 2)] is the pooled standard deviation of the two samples
Effect Size • EFFECT SIZE – from SPSS • Using appendix B data set 2, and submitting the DV salary to a test of difference across gender, produces the SPSS independent-samples T-Test output (group statistics and the t-test table)
Effect Size • EFFECT SIZE – from SPSS • From the T-Test output, take the mean difference to use and the two group SDs to pool
Effect Size • EFFECT SIZE – from SPSS • Pool the two group SDs, substitute the mean difference and the pooled SD into the formula, and calculate: the result here is an effect size of .25
Effect Size • From Cohen, 1988: • d = .20 is small • d = .50 is moderate • d = .80 is large • So our effect size of .25 is small, and agrees on this occasion with the non-significant result • The finding is both non-significant and small • (a pathetic, measly, piddling little difference of no consequence whatsoever – trivial and beneath us)
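A minimal sketch (an assumed helper, not the textbook’s code, with hypothetical numbers standing in for the salary-by-gender output): Cohen’s d from the group summary statistics SPSS reports, plus Cohen’s (1988) verbal labels.

```python
# Cohen's d from summary statistics, using the pooled SD.
import math

def cohens_d(mean1, sd1, n1, mean2, sd2, n2):
    """Standardized mean difference: (M1 - M2) / pooled SD."""
    pooled_sd = math.sqrt(((n1 - 1) * sd1**2 + (n2 - 1) * sd2**2) / (n1 + n2 - 2))
    return (mean1 - mean2) / pooled_sd

def label(d):
    """Cohen's (1988) benchmarks for |d|: .20 small, .50 moderate, .80 large."""
    d = abs(d)
    if d < 0.20:
        return "trivial"
    if d < 0.50:
        return "small"
    if d < 0.80:
        return "moderate"
    return "large"

# Hypothetical group summaries (not the actual appendix B values):
d = cohens_d(mean1=52_000, sd1=8_000, n1=20, mean2=50_000, sd2=8_000, n2=20)
print(round(d, 2), label(d))   # 0.25 small
```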
2. Statistical Power: Maximizing the likelihood of significance
Statistical Power • The likelihood of getting a significant relationship when you should (i.e. when there is a relationship in reality) • Recall from the truth table, power = 1 − β
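A minimal simulation sketch (my own illustration, assuming normal data, a true difference of 0.5 SD, n = 30 per group, and α = .05): power can be estimated as the fraction of repeated experiments that reach significance when the effect is real.

```python
# Monte Carlo estimate of power = P(reject H0 | a real effect exists).
import numpy as np
from scipy.stats import ttest_ind

rng = np.random.default_rng(0)
n, true_d, alpha, reps = 30, 0.5, 0.05, 5_000

hits = 0
for _ in range(reps):
    a = rng.normal(0.0, 1.0, n)       # control group
    b = rng.normal(true_d, 1.0, n)    # group shifted by the true effect
    if ttest_ind(a, b).pvalue < alpha:
        hits += 1

print(f"Estimated power = {hits / reps:.2f}")   # roughly .47 for these settings
```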
Factors Affecting Statistical Power The big ones: • Effect size (a bit obvious) • Select samples such that the difference between them is maximized • Combines the effects of sample SD (need to decrease) and mean difference (need to increase) • Sample size • Most commonly discussed: as n increases, SEM decreases, and the test statistic then increases
Factors Affecting Statistical Power The others: • Level of significance • Smaller α, less power • Larger α, more power • 1-tailed vs. 2-tailed tests • With good a priori info (i.e. research literature), selecting a 1-tailed test increases power • Dependent samples • Correlation between samples reduces the standard error, and thus increases the test statistic (see the sketch below)
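A minimal sketch using statsmodels (an assumed tool choice; the slides themselves use tables and applets): how α and the number of tails change power, holding effect size (d = .5) and sample size (n = 30 per group) fixed.

```python
# Power of an independent-samples t-test at several alpha levels,
# two-tailed vs. one-tailed.
from statsmodels.stats.power import TTestIndPower

analysis = TTestIndPower()
for alpha in (0.01, 0.05, 0.10):
    two = analysis.power(effect_size=0.5, nobs1=30, alpha=alpha,
                         alternative="two-sided")
    one = analysis.power(effect_size=0.5, nobs1=30, alpha=alpha,
                         alternative="larger")
    print(f"alpha = {alpha:.2f}  two-tailed power = {two:.2f}  "
          f"one-tailed power = {one:.2f}")
# Larger alpha and a (justified) one-tailed test both increase power.
```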
Calculating sample size a priori • Specify the effect size • Set the desired level of power • Enter the values for effect size and power in the appropriate table, and read off the required sample size • Applet for calculating sample size based on the above: http://www.stat.uiowa.edu/~rlenth/Power/ • Applets for seeing power acting (and interacting) with sample size, effect size, etc.: http://statman.stat.sc.edu/~west/applets/power.html http://acad.cgu.edu/wise/power/powerapplet1.html http://www.stat.sc.edu/%7Eogden/javahtml/power/power.html
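A minimal sketch (statsmodels is an assumed stand-in for the printed tables and the linked applets): solve for the required n per group given an expected effect size and a desired level of power.

```python
# A priori sample size for an independent-samples t-test.
from statsmodels.stats.power import TTestIndPower

n_per_group = TTestIndPower().solve_power(effect_size=0.5,   # expected d
                                          power=0.80,        # desired power
                                          alpha=0.05,
                                          alternative="two-sided")
print(round(n_per_group))   # about 64 participants per group
```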