Ch 6: Making Sense of Statistical Significance: Decision Errors, Effect Size, and Power





Presentation Transcript


  1. Ch 6: Making Sense of Statistical Significance: Decision Errors, Effect Size, and Power Pt 2: Sept. 26, 2013

  2. Statistical Power
  • Probability that the study will produce a statistically significant result when the research hypothesis is in fact true
  • That is, what is the power to correctly reject the null?
  • Upper right quadrant in the decision table
  • We want to maximize the chance that our study has the power to find a true/real result
  • Power can be calculated before the study using predicted means, or after the study using the actual means

  3. Statistical Power
  • Steps for figuring power:
  1. Gather the needed information (N = 16):
  * Mean & SD of the comparison distribution (the distribution of means from Ch 5 – now known as Pop 2)
  * Predicted mean of the experimental group (now known as Pop 1)
  * "Crashed" example: Pop 1 ("crashed" group) mean = 5.9; Pop 2 ("neutral" group / comparison pop) μ = 5.5, σ = .8
  σ_M = √(σ² / N) = √(.8² / 16) = .2
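A quick check of the Step 1 arithmetic in Python (a minimal sketch; the variable names are mine, the values come from the slide):

```python
import math

sigma = 0.8   # population standard deviation (Pop 2)
N = 16        # sample size

# Standard deviation of the comparison distribution (distribution of means)
sigma_m = math.sqrt(sigma**2 / N)
print(round(sigma_m, 2))   # 0.2
```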

  4. Statistical Power
  2. Figure the raw-score cutoff point on the comparison distribution needed to reject the null hypothesis (using Pop 2 info)
  • For alpha = .05, 1-tailed test (remember we predicted the "crashed" group would have higher fault ratings), the z-score cutoff = 1.64
  • Convert z to a raw score: x = z(σ_M) + μ; x = 1.64(.2) + 5.5 = 5.83
  • Draw the distribution with the cutoff point at 5.83 and shade the area to the right of the cutoff → the "critical/rejection region"
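The same cutoff, computed directly (sketch; scipy's norm.ppf(0.95) returns the unrounded 1.6449, so the result matches the slide's 5.83 after rounding):

```python
from scipy.stats import norm

mu_2 = 5.5      # comparison population mean (Pop 2)
sigma_m = 0.2   # standard error from Step 1

# One-tailed cutoff at alpha = .05 (the slide rounds 1.6449 to 1.64)
z_cutoff = norm.ppf(0.95)
x_cutoff = z_cutoff * sigma_m + mu_2
print(round(x_cutoff, 2))   # 5.83
```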

  5. Statistical Power
  3. Figure the Z score for this same point, but on the distribution of means for Population 1 (see example on board)
  • That is, convert the raw score of 5.83 to a z score using info from Pop 1:
  Z = (x from Step 2 − μ of the experimental group from Step 1) / σ_M (from Step 1)
  Z = (5.83 − 5.9) / .2 = −.35
  • Draw another distribution & shade in everything to the right of −.35
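Step 3 in the same sketch style (names are mine):

```python
mu_1 = 5.9        # predicted mean of the experimental group (Pop 1)
sigma_m = 0.2     # standard error from Step 1
x_cutoff = 5.83   # raw-score cutoff from Step 2

# Locate the cutoff on Pop 1's distribution of means
z_pop1 = (x_cutoff - mu_1) / sigma_m
print(round(z_pop1, 2))   # -0.35
```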

  6. Statistical Power
  4. Use the normal curve table to figure the probability of getting a score higher than the Z score from Step 3
  • Find the % between the mean and a z of −.35 (look up .35) = 13.68%
  • Add another 50% because we're interested in the area to the right of the mean too
  • 13.68 + 50 = 63.68% … that's the power of the experiment
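In code, the table lookup collapses to one call to the normal survival function (sketch):

```python
from scipy.stats import norm

# Power = area of Pop 1's distribution of means above the cutoff,
# i.e. P(Z > -0.35); norm.sf is 1 - cdf, replacing the 13.68% + 50% lookup
power = norm.sf(-0.35)
print(round(power, 4))   # 0.6368, i.e. about 63.68%
```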

  7. Power Interpretation
  • Our study (with N = 16) has around 64% power to find a difference between the "crashed" and "neutral" groups if one truly exists
  • This is based on our estimate of the "crashed" mean (5.9), so if that estimate is incorrect, power will change
  • In the decision-error table, 1 − power = beta (i.e., the probability of a Type II error). So here:
  • Alpha? • Power? • Beta?
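Answering the slide's three questions in the sketch, assuming the predicted mean of 5.9 is correct:

```python
alpha = 0.05        # chance of a Type I error (set before the study; 1-tailed)
power = 0.6368      # from Step 4
beta = 1 - power    # chance of a Type II error

print(alpha, power, round(beta, 4))   # 0.05 0.6368 0.3632
```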

  8. Influences on Power
  • Main influences: effect size & N
  • 1) Effect size – bigger d, more power
  • Remember the formula: d = (μ₁ − μ₂) / σ
  • The bigger the difference between the 2 group means, the more power to find the difference (that difference is the numerator of d)
  • Also, the smaller the population standard deviation, the bigger the effect size (σ is the denominator)
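Plugging the example's numbers into d (a sketch using the values from the "crashed" example):

```python
mu_1 = 5.9    # predicted "crashed" group mean
mu_2 = 5.5    # "neutral" comparison population mean
sigma = 0.8   # population standard deviation

d = (mu_1 - mu_2) / sigma
print(round(d, 2))   # 0.5 -- a "medium" effect by Cohen's guidelines
```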

  9. (cont.)
  • Figuring power from predicted effect sizes
  • Sometimes we don't know μ₁ for the formula; we can estimate the effect size instead (use Cohen's guidelines: .2, .5, .8 or −.2, −.5, −.8)
  • Example: see the sketch below
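A sketch of how a predicted d and N alone give power for this kind of one-tailed z test; the helper name power_from_d is mine, not the book's:

```python
import math
from scipy.stats import norm

def power_from_d(d, N, alpha=0.05):
    """Approximate power of a one-tailed z test on a sample mean,
    given a predicted effect size d and sample size N."""
    z_crit = norm.ppf(1 - alpha)
    # On Pop 1's distribution of means, the cutoff sits at z_crit - d*sqrt(N)
    return norm.sf(z_crit - d * math.sqrt(N))

# A "medium" predicted effect (d = .5) with N = 16 reproduces the
# worked example: about 64% power
print(round(power_from_d(0.5, 16), 2))   # 0.64
```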

  10. Practical Ways of Increasing the Power of a Planned Study
  • Rule of thumb: try for at least 80% power
  • Interpretation of 80% power: we have a .8 probability of finding an effect if one actually exists
  • See table
  • 1) Try to increase the effect size before the experiment (increase the difference between the 2 groups)
  • Training/no-training group – how could you do this?

  11. 2) Try to decrease the population SD – use standardization so subjects in a group receive the same instructions
  • 3) Increase N
  • 4) Use a less stringent significance level (alpha) – but the trade-off is a higher risk of Type I error, so we usually choose .05 or .01
  • 5) Use a 1-tailed test when possible
  (each of these levers is demonstrated numerically in the sketch below)
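To see the levers numerically: a hedged sketch with a hypothetical power_z helper; the two-tailed version is an approximation that splits alpha and ignores the negligible opposite-tail area:

```python
import math
from scipy.stats import norm

def power_z(d, N, alpha=0.05, tails=1):
    # Approximate power of a z test on a sample mean
    z_crit = norm.ppf(1 - alpha / tails)
    return norm.sf(z_crit - d * math.sqrt(N))

print(round(power_z(0.5, 16), 2))              # baseline example: ~0.64
print(round(power_z(0.8, 16), 2))              # 1-2) bigger d -> ~0.94
print(round(power_z(0.5, 30), 2))              # 3) bigger N -> ~0.86
print(round(power_z(0.5, 16, alpha=0.01), 2))  # 4) stricter alpha -> ~0.37
print(round(power_z(0.5, 16, tails=2), 2))     # 5) two-tailed -> ~0.52
```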
