Statistics (cont.)
Psych 231: Research Methods in Psychology

Presentation Transcript


  1. Statistics (cont.) Psych 231: Research Methods in Psychology

  2. Announcements • Journal Summary 2 assignment: due today • Group projects: plan to have your analyses done before Thanksgiving break; GAs will be available during lab times to help • Poster sessions are in the last lab sections of the semester (last week of classes), so start thinking about your posters. I will lecture about poster presentations on Monday.

  3. Testing Hypotheses • Step 1: State your hypotheses • Step 2: Set your decision criteria • Step 3: Collect your data from your sample(s) • Step 4: Compute your test statistic(s) • Step 5: Make a decision about your null hypothesis: “Reject H0” or “Fail to reject H0”

  4. Summary to this point • The experimenter’s conclusions are about populations. Crossing those conclusions with the real world (“truth”): if H0 is correct but you reject H0, that is a Type I error; if H0 is wrong but you fail to reject H0, that is a Type II error; the other two cells are correct decisions. • Example experiment: Group A gets a treatment to improve memory; Group B gets no treatment (control); after the treatment period, test both groups for memory • H0: μA = μB (there is no difference between Grp A and Grp B) • HA: there is a difference between Grp A and Grp B • Results: Group A’s average memory score is 80%; Group B’s is 76% • Is the 4% difference a “real” difference (statistically significant), or is it just sampling error? [Figure: two sample distributions with means X̄A = 80% and X̄B = 76%]
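To make the question on this slide concrete, here is a minimal sketch in Python. The scores are hypothetical values generated around 80% and 76% (not the class data), and the group sizes and standard deviation are assumptions; the point is simply to ask whether a roughly 4-point difference is larger than sampling error alone would produce.

```python
# Hedged sketch: hypothetical memory scores for a treatment and a control group.
import numpy as np
from scipy import stats

rng = np.random.default_rng(231)
group_a = rng.normal(loc=80, scale=8, size=25)  # treatment group, mean near 80%
group_b = rng.normal(loc=76, scale=8, size=25)  # control group, mean near 76%

t, p = stats.ttest_ind(group_a, group_b)
print(f"mean A = {group_a.mean():.1f}, mean B = {group_b.mean():.1f}")
print(f"t = {t:.2f}, p = {p:.3f}")  # p < .05 -> reject H0; otherwise fail to reject
```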

  5. “Generic” statistical test • Tests the question: are there differences between groups due to a treatment? • There are two possibilities in the “real world” (and the same Type I / Type II error table applies). • Possibility 1: H0 is true (no treatment effect). There is one population, and the two sample distributions (X̄A = 80%, X̄B = 76%) differ only because of sampling error. [Figure: one population distribution with two sample distributions drawn from it]

  6. “Generic” statistical test • Possibility 2: H0 is false (there is a treatment effect). People who get the treatment change; they form a new population (the “treatment population”). • Now there are two populations, and the two sample distributions (X̄A = 80%, X̄B = 76%) come from different populations. [Figure: two population distributions, each with its own sample distribution]

  7. “Generic” statistical test • Why might the samples (X̄A and X̄B) be different? (What is the source of the variability between groups?) • TR: The effect of a treatment • ID: Individual differences (if a between-subjects factor) • ER: Random sampling error

  8. “Generic” statistical test • The generic test statistic is a ratio of sources of variability: Computed test statistic = Observed difference / Difference from chance = (TR + ID + ER) / (ID + ER) • TR: The effect of a treatment • ID: Individual differences (if a between-subjects factor) • ER: Random sampling error
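A short simulation can make this ratio concrete. The sketch below is my own illustration, not part of the lecture; the group means, standard deviation, and sample size are assumed values, and "difference from chance" is taken here as the standard error of the difference between the two means.

```python
# Hedged sketch: with no treatment effect the observed difference contains only
# ID + ER, so the ratio stays near what chance alone produces; a treatment
# effect (TR) pushes the ratio well above that.
import numpy as np

rng = np.random.default_rng(0)

def generic_ratio(treatment_effect, n=25, sd=10):
    """Observed difference between group means divided by the difference expected by chance."""
    control = rng.normal(loc=50, scale=sd, size=n)                     # ID + ER only
    treated = rng.normal(loc=50 + treatment_effect, scale=sd, size=n)  # TR + ID + ER
    observed_diff = abs(treated.mean() - control.mean())
    # "difference from chance": standard error of the difference between two means
    chance_diff = np.sqrt(control.var(ddof=1) / n + treated.var(ddof=1) / n)
    return observed_diff / chance_diff

print(generic_ratio(treatment_effect=0))   # no TR: ratio around 1 or smaller
print(generic_ratio(treatment_effect=10))  # real TR: ratio much larger
```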

  9. Difference from chance • [Figure: a population distribution with its population mean, a single sampled score (n = 1), and the sampling error (population mean - sample mean)]

  10. Difference from chance • [Figure: the same population distribution, now with a sample of n = 2 scores, their sample mean, and the sampling error (population mean - sample mean)]

  11. Difference from chance • [Figure: the same population distribution with a sample of n = 10 scores, their sample mean, and the sampling error (population mean - sample mean)] • Generally, as the sample size increases, the sampling error decreases

  12. Difference from chance • [Figure: two population distributions, one with small population variability and one with large population variability] • Typically, the narrower the population distribution, the narrower the range of possible samples, and the smaller the “chance” (sampling error)

  13. Difference from chance • These two factors (sample size and population variability) combine to impact the distribution of sample means. • The distribution of sample means is the distribution of all possible sample means of a particular sample size (n) that can be drawn from the population. • [Figure: a population distribution, several samples of size n with means X̄A, X̄B, X̄C, X̄D, and the resulting distribution of sample means; the average sampling error is the “chance”]
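The following sketch is my own illustration with assumed population values: it draws many samples of a given size and measures how much the sample means spread out, which is the “chance” these slides describe. It shows both factors at work, sample size and population variability.

```python
# Hedged sketch of the distribution of sample means: larger n and a narrower
# population both shrink the spread of possible sample means.
import numpy as np

rng = np.random.default_rng(0)

def spread_of_sample_means(n, pop_sd, n_samples=10_000, pop_mean=100):
    """Standard deviation of sample means across many samples of size n."""
    samples = rng.normal(loc=pop_mean, scale=pop_sd, size=(n_samples, n))
    return samples.mean(axis=1).std()

print(spread_of_sample_means(n=1, pop_sd=15))   # widest: each "mean" is one score
print(spread_of_sample_means(n=10, pop_sd=15))  # larger n -> smaller sampling error
print(spread_of_sample_means(n=10, pop_sd=5))   # narrower population -> smaller still
```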

  14. “Generic” statistical test • The generic test statistic distribution: the distribution of (TR + ID + ER) / (ID + ER), built from the distribution of sample means • To reject H0, you want a computed test statistic that is large, reflecting a large treatment effect (TR) • What’s large enough? The alpha level gives us the decision criterion: the α-level determines where the boundaries of the rejection region go on the distribution of the test statistic

  15. “Generic” statistical test • The generic test statistic distribution • To reject H0, you want a computed test statistic that is large, reflecting a large treatment effect (TR) • What’s large enough? The alpha level gives us the decision criterion • [Figure: distribution of the test statistic split into “Reject H0” and “Fail to reject H0” regions]

  16. “Generic” statistical test • The generic test statistic distribution • To reject H0, you want a computed test statistic that is large, reflecting a large treatment effect (TR) • What’s large enough? The alpha level gives us the decision criterion • “One-tailed test”: sometimes you know to expect a particular direction of difference (e.g., “improve memory performance”), so the entire rejection region sits in one tail • [Figure: distribution of the test statistic with a single “Reject H0” region in one tail and a “Fail to reject H0” region elsewhere]
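As a concrete illustration of how the alpha level places those boundaries, here is a brief sketch using scipy. The t distribution with 24 degrees of freedom is an assumed example, not something specified on the slides.

```python
# Hedged sketch: critical values for a t distribution at alpha = 0.05.
from scipy import stats

alpha, df = 0.05, 24

# Two-tailed test: split alpha between the two tails.
two_tailed_cut = stats.t.ppf(1 - alpha / 2, df)
print(f"two-tailed: reject H0 if |t| > {two_tailed_cut:.2f}")

# One-tailed test: all of alpha goes in the expected tail.
one_tailed_cut = stats.t.ppf(1 - alpha, df)
print(f"one-tailed: reject H0 if t > {one_tailed_cut:.2f}")
```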

  17. “Generic” statistical test • Things that affect the computed test statistic: • Size of the treatment effect: the bigger the effect, the bigger the computed test statistic • Difference expected by chance (sampling error), which depends on sample size and variability in the population

  18. Significance • “A statistically significant difference” means: the researcher is concluding that there is a difference above and beyond chance, with the probability of making a Type I error at 5% (assuming an alpha level of 0.05) • Note: “statistical significance” is not the same thing as theoretical significance • It only means that there is a statistical difference • It doesn’t mean that it is an important difference

  19. Non-Significance • Failing to reject the null hypothesis • Generally, we are not interested in “accepting the null hypothesis” (remember, we can’t prove things, only disprove them) • Usually check whether you made a Type II error (failed to detect a difference that is really there): check the statistical power of your test (1 - β) • Your sample size may be too small • The effects you’re looking for may be really small • Check your controls; maybe there is too much variability
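If you want to check power numerically, statsmodels has a power calculator for the independent-samples t-test. The sketch below is only an illustration; the effect size, group size, and target power are assumed values, not course requirements.

```python
# Hedged sketch: power check for an independent-samples t-test.
from statsmodels.stats.power import TTestIndPower

analysis = TTestIndPower()

# Power (1 - beta) of detecting a medium effect (d = 0.5) with 20 people per group
power = analysis.solve_power(effect_size=0.5, nobs1=20, alpha=0.05)
print(f"power = {power:.2f}")  # low power -> a null result may be a Type II error

# Sample size per group needed to reach 80% power for the same effect
n_needed = analysis.solve_power(effect_size=0.5, power=0.80, alpha=0.05)
print(f"n per group for 80% power = {n_needed:.0f}")
```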

  20. Some inferential statistical tests • 1 factor with two groups: t-tests • Between groups: 2 independent samples • Within groups: repeated measures samples (matched, related) • 1 factor with more than two groups: Analysis of Variance (ANOVA), either between groups or repeated measures • Multi-factorial: factorial ANOVA

  21. T-test • Design: 2 separate experimental conditions • Degrees of freedom: based on the size of the sample and the kind of t-test • Formula (computation differs for between and within t-tests): T = observed difference / difference by chance = (X̄1 - X̄2) / (difference expected by chance, based on sampling error)

  22. T-test • Reporting your results: • The observed difference between conditions • Kind of t-test • Computed t-statistic • Degrees of freedom for the test • The “p-value” of the test • “The mean of the treatment group was 12 points higher than the control group. An independent samples t-test yielded a significant difference, t(24) = 5.67, p < 0.05.” • “The mean score of the post-test was 12 points higher than the pre-test. A repeated measures t-test demonstrated that this difference was significant, t(12) = 5.67, p < 0.05.”
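A brief sketch of both kinds of t-test in Python (hypothetical pre/post scores, not the data in the example write-ups) shows where the t value, degrees of freedom, and p-value in those sentences come from.

```python
# Hedged sketch: repeated-measures vs. independent-samples t-tests.
# Degrees of freedom: n - 1 for repeated measures, n1 + n2 - 2 for independent samples.
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
pre = rng.normal(loc=60, scale=10, size=13)
post = pre + rng.normal(loc=12, scale=6, size=13)  # each person improves ~12 points

t_rel, p_rel = stats.ttest_rel(post, pre)          # within groups (repeated measures)
print(f"repeated measures: t({len(pre) - 1}) = {t_rel:.2f}, p = {p_rel:.4f}")

t_ind, p_ind = stats.ttest_ind(post, pre)          # between groups (treats samples as independent)
print(f"independent samples: t({2 * len(pre) - 2}) = {t_ind:.2f}, p = {p_ind:.4f}")
```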

  23. Analysis of Variance • Designs: more than two groups (means X̄A, X̄B, X̄C) • 1-factor ANOVA, factorial ANOVA • Both within- and between-groups factors • Test statistic is an F-ratio • Degrees of freedom: several to keep track of; the number of them depends on the design

  24. Analysis of Variance • More than two groups: now we can’t just compute a simple difference score, since there is more than one difference • So we use variance instead of simply the difference; variance is essentially an average difference • F-ratio = observed variance / variance from chance

  25. 1 factor ANOVA • 1 factor, with more than two levels (X̄A, X̄B, X̄C) • Now we can’t just compute a simple difference score, since there is more than one difference: A - B, B - C, & A - C

  26. 1 factor ANOVA • Null hypothesis H0: all the groups are equal (X̄A = X̄B = X̄C); this is the hypothesis the ANOVA tests! • Alternative hypothesis HA: not all the groups are equal. Do further tests to pick between the possible patterns: X̄A ≠ X̄B ≠ X̄C, X̄A ≠ X̄B = X̄C, X̄A = X̄B ≠ X̄C, X̄A = X̄C ≠ X̄B
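Here is a minimal sketch of the ANOVA itself, using hypothetical scores for three groups with assumed means of roughly 12, 25, and 27 (echoing the later write-up example, but not real class data).

```python
# Hedged sketch: 1-factor ANOVA testing H0 that all group means are equal.
import numpy as np
from scipy import stats

rng = np.random.default_rng(2)
group_a = rng.normal(loc=12, scale=5, size=10)
group_b = rng.normal(loc=25, scale=5, size=10)
group_c = rng.normal(loc=27, scale=5, size=10)

f, p = stats.f_oneway(group_a, group_b, group_c)
print(f"F(2, {3 * 10 - 3}) = {f:.2f}, p = {p:.4f}")  # significant -> not all means are equal
```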

  27. 1 factor ANOVA • Planned contrasts and post-hoc tests: further tests used to rule out the different alternative hypotheses (X̄A ≠ X̄B ≠ X̄C, X̄A ≠ X̄B = X̄C, X̄A = X̄B ≠ X̄C, X̄A = X̄C ≠ X̄B) • Test 1: A ≠ B • Test 2: A ≠ C • Test 3: B = C
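One common way to run such follow-up tests in Python is Tukey's HSD from statsmodels. This sketch reuses the same hypothetical three-group data; Tukey's HSD is my choice of procedure for illustration, not necessarily the one used in this course.

```python
# Hedged sketch: post-hoc pairwise comparisons after a significant ANOVA.
import numpy as np
from statsmodels.stats.multicomp import pairwise_tukeyhsd

rng = np.random.default_rng(2)
scores = np.concatenate([
    rng.normal(12, 5, 10),   # group A
    rng.normal(25, 5, 10),   # group B
    rng.normal(27, 5, 10),   # group C
])
groups = ["A"] * 10 + ["B"] * 10 + ["C"] * 10

print(pairwise_tukeyhsd(scores, groups, alpha=0.05))  # which pairs of means differ
```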

  28. 1 factor ANOVA • Reporting your results: • The observed differences • Kind of test • Computed F-ratio • Degrees of freedom for the test • The “p-value” of the test • Any post-hoc or planned comparison results • “The mean score of Group A was 12, Group B was 25, and Group C was 27. A 1-way ANOVA was conducted and the results yielded a significant difference, F(2,25) = 5.67, p < 0.05. Post hoc tests revealed that the differences between groups A and B and between groups A and C were statistically reliable (respectively t(1) = 5.67, p < 0.05 and t(1) = 6.02, p < 0.05). Groups B and C did not differ significantly from one another.”

  29. Factorial ANOVAs • We covered much of this in our experimental design lecture • More than one factor • Factors may be within or between • The overall design may be entirely within, entirely between, or mixed • Many F-ratios may be computed: • An F-ratio is computed to test the main effect of each factor • An F-ratio is computed to test each of the potential interactions between the factors
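As a sketch of how the multiple F-ratios come out in practice, here is a hypothetical 2 x 2 completely between-subjects example using statsmodels; the factor names, cell means, and effect sizes are all assumed for illustration.

```python
# Hedged sketch: 2 x 2 between-subjects factorial ANOVA with one F per main
# effect and one for the interaction.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf
from statsmodels.stats.anova import anova_lm

rng = np.random.default_rng(3)
data = pd.DataFrame({
    "factor_a": np.repeat(["a1", "a2"], 40),
    "factor_b": np.tile(np.repeat(["b1", "b2"], 20), 2),
})
# Build scores with a main effect of factor_a and a small interaction (assumed values)
data["score"] = (
    rng.normal(50, 10, len(data))
    + 8 * (data["factor_a"] == "a2")
    + 5 * ((data["factor_a"] == "a2") & (data["factor_b"] == "b2"))
)

model = smf.ols("score ~ C(factor_a) * C(factor_b)", data=data).fit()
print(anova_lm(model, typ=2))  # one F-ratio (with its own dfs) per main effect and interaction
```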

  30. Factorial ANOVAs • Reporting your results: • The observed differences: because there may be a lot of these, you may present them in a table instead of directly in the text • Kind of design, e.g. “2 x 2 completely between factorial design” • Computed F-ratios: you may see separate paragraphs for each factor and for interactions • Degrees of freedom for the test: each F-ratio will have its own set of dfs • The “p-value” of the test: you may want to just say “all tests were tested with an alpha level of 0.05” • Any post-hoc or planned comparison results: typically only the theoretically interesting comparisons are presented
