
On-line resources




Presentation Transcript


  1. On-line resources • http://wise.cgu.edu/powermod/index.asp • http://wise.cgu.edu/regression_applet.asp • http://wise.cgu.edu/hypomod/appinstruct.asp • http://psych.hanover.edu/JavaTest/NeuroAnim/stats/StatDec.html • http://psych.hanover.edu/JavaTest/NeuroAnim/stats/t.html • http://psych.hanover.edu/JavaTest/NeuroAnim/stats/CLT.html • Note demo page

  2. Effect sizes For a small effect size (.01), the change in success rate is from 46% to 54%. For a medium effect size (.06), the change is from 38% to 62%. For a large effect size (.16), the change is from 30% to 70%.

  3. But what does .10 really mean?

  4. Is psychotherapy effective? (after Shapiro & Shapiro, 1983)

  5. Calculating Cohen’s d Effect size = the difference between the predicted mean and the mean of the known population, divided by the population standard deviation (assumes that the population parameters are known; imagine one population receives the treatment and the other does not). d = (m1 − m2) / s • m1 = mean of population 1 (the hypothesized mean for the population subjected to the experimental manipulation) • m2 = mean of population 2 (which is also the mean of the comparison distribution) • s = standard deviation of population 2 (assumed to be the standard deviation of both populations)
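The formula above is simple enough to sketch directly. A minimal illustration (the function name and the example numbers are mine, not from the slides):

```python
def cohens_d(m1, m2, sd):
    """Cohen's d: standardized difference between two population means.

    m1 -- mean of the treated population (hypothesized)
    m2 -- mean of the comparison population
    sd -- standard deviation of the comparison population
          (assumed equal for both populations)
    """
    return (m1 - m2) / sd

# Example: IQ-style scale, treated mean 108 vs. comparison mean 100, sd 15
print(cohens_d(108, 100, 15))  # 0.533... -- a "medium" effect
```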

  6. One other way to think about d • d = .20, overlap 85%: 15- vs. 16-year-old girls’ distribution of heights • d = .50, overlap 67%: 14- vs. 18-year-old girls’ distribution of heights • d = .80, overlap 53%: 13- vs. 18-year-old girls’ distribution of heights
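The overlap percentages above match Cohen's U1 statistic (the percent of the combined area *not* shared by the two distributions; overlap = 1 − U1, with U1 = (2Φ(d/2) − 1)/Φ(d/2) for normal distributions). A standard-library-only sketch that reproduces the slide's numbers:

```python
from math import erf, sqrt

def phi(x):
    """Standard normal cumulative distribution function."""
    return 0.5 * (1.0 + erf(x / sqrt(2.0)))

def overlap(d):
    """Distribution overlap as 1 - Cohen's U1 (percent non-overlap)."""
    p = phi(abs(d) / 2.0)
    u1 = (2.0 * p - 1.0) / p
    return 1.0 - u1

for d in (0.2, 0.5, 0.8):
    print(f"d = {d}: overlap ~ {overlap(d):.0%}")
# d = 0.2: overlap ~ 85%
# d = 0.5: overlap ~ 67%
# d = 0.8: overlap ~ 53%
```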

  7. Effect sizes are interchangeable

  8. http://www.amstat.org/publications/jse/v10n3/aberson/power_applet.html

  9. Statistical significance vs. effect size • p < .05 • r = .10 • For n = 100,000, p < .05 • For n = 10, p > .05 • A large sample is closer to the population, so there is less chance of sampling error
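The sample-size point can be checked directly: for a Pearson correlation, t = r√(n − 2)/√(1 − r²), so the same r = .10 is significant or not depending entirely on n. A sketch assuming SciPy is available (the function name is mine):

```python
from math import sqrt
from scipy.stats import t as t_dist

def p_for_r(r, n):
    """Two-tailed p-value for a Pearson correlation r at sample size n."""
    t = r * sqrt(n - 2) / sqrt(1 - r ** 2)
    return 2 * t_dist.sf(abs(t), df=n - 2)

print(p_for_r(0.10, 10))       # p > .05 -- not significant
print(p_for_r(0.10, 100_000))  # p < .05 -- highly significant
```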

  10. Brief digression • Research hypotheses and statistical hypotheses • Is psychoanalysis effective? • Null? • Alternate? • Handout • Why test the null?

  11. Statistical significance and decision levels. (Z scores, t values and F values)  Sampling distributions for the null hypothesis: http://statsdirect.com/help/distributions/pf.htm

  12. One way to think about it…

  13. Two ways to guess wrong • Type 1 error: concluding that something is there when there is nothing • Type 2 error: concluding that nothing is there when there is something

  14. An example

  15. An example (from Dave Schultz) Imagine the following research looking at the effects of the drug AZT, if any, on HIV-positive patients. In other words, does a group of AIDS patients given AZT live longer than another group given a placebo? If we conduct the experiment correctly, with everything held constant (or randomly distributed) except for the independent variable, and we do find a difference between the two groups, there are only two reasonable explanations available to us.

  16. Statistical power is how “sensitive” a study is at detecting associations (a magnification metaphor). If you think that the effect is small (.01), medium (.06), or large (.15), and you want to find a statistically significant difference defined as p < .05, this table shows you how many participants you need for different levels of “sensitivity,” or power.

  17. If you think that the effect is small (.01), medium (.06), or large (.15), and you want to find a statistically significant difference defined as p < .01, this table shows you how many participants you need for different levels of “sensitivity,” or power.
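Tables like the ones these slides describe can be regenerated. The slides' effect sizes (.01, .06, .15) read as proportions of variance explained (r²); converting to Cohen's d via d = 2r/√(1 − r²), the medium effect (.06) gives d ≈ 0.5. A sketch assuming statsmodels is available; the conversion helper is mine:

```python
from math import sqrt
from statsmodels.stats.power import TTestIndPower

def r2_to_d(r2):
    """Convert variance explained (r^2) to Cohen's d."""
    r = sqrt(r2)
    return 2 * r / sqrt(1 - r ** 2)

analysis = TTestIndPower()
for label, r2 in [("small", 0.01), ("medium", 0.06), ("large", 0.15)]:
    d = r2_to_d(r2)
    # Solve for participants per group at alpha = .05, power = .80
    n = analysis.solve_power(effect_size=d, alpha=0.05, power=0.80,
                             alternative='two-sided')
    print(f"{label} (r2 = {r2}): d = {d:.2f}, n = {n:.0f} per group")
```

Rerunning with `alpha=0.01` reproduces the stricter table described in this slide.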

  18. What determines power? • Number of subjects • Effect size • Alpha level Power = the probability that your experiment will detect an effect that is really there (i.e., correctly reject a false null hypothesis)

  19. How to increase power? • Increase the region of rejection to p < .10 • Increase sample size • Increase treatment effects • Decrease within-group variability
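The levers listed above can be seen in a quick Monte Carlo simulation: power is just the long-run fraction of experiments that reach p < alpha. A sketch assuming NumPy and SciPy; the specific numbers are illustrative, not from the slides:

```python
import numpy as np
from scipy.stats import ttest_ind

def simulated_power(d, n, alpha=0.05, sims=2000, seed=0):
    """Fraction of simulated two-group experiments reaching p < alpha.

    d     -- true effect size (difference in means, sd = 1 in both groups)
    n     -- participants per group
    alpha -- significance criterion
    """
    rng = np.random.default_rng(seed)
    hits = 0
    for _ in range(sims):
        a = rng.normal(0.0, 1.0, n)  # control group
        b = rng.normal(d, 1.0, n)    # treatment group, shifted by d
        if ttest_ind(a, b).pvalue < alpha:
            hits += 1
    return hits / sims

# More subjects, bigger effects, or a looser alpha all raise power:
print(simulated_power(d=0.5, n=20))              # low power
print(simulated_power(d=0.5, n=64))              # roughly .80
print(simulated_power(d=0.5, n=20, alpha=0.10))  # looser alpha helps
```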

  20. What is adequate power? • .50 (typical of much current research) • .80 (recommended) How do you know how much power you have? Largely guesswork. Two ways to use power: 1. Post hoc, to establish what you could have found 2. In advance, to determine how many participants are needed

  21. Statistical power (for p < .05) • Table of power values for r = .10, .30, .50, one-tailed and two-tailed tests (table not reproduced in this transcript) • Power = 1 − Type 2 error rate • Power = 1 − β
