
Intro to Statistics for the Behavioral Sciences PSYC 1900


Presentation Transcript


  1. Intro to Statistics for the Behavioral Sciences PSYC 1900. Lecture 13: ANOVA and Multiple Comparisons

  2. Analysis of Variance (ANOVA) • Use is similar to t-tests, but allows comparisons of more than 2 means. • Logic? • With more than 2 means, a single difference cannot capture the predictions. • The variance of the means is similar in concept to the difference between 2 means. • If manipulations produce mean differences, then the variance of the means should increase. • The difference between two means is very similar to a variance measure. • t-tests are a simplified case of ANOVA.

  3. Logic of One-way ANOVA • We calculate 2 estimates of the population variance. • MSwithin (or MSerror) is calculated as the mean of the group variances. • It does not depend on differences in the means. • MSgroup is calculated from the variability of the group means (n times the variance of the group means). • It will equal the population variance only if the groups were drawn from the same population. • If the null is not true, MSgroup will be significantly greater than MSerror. • "Significance" is determined using the F test.
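A minimal Python sketch (not part of the original slides) of the two variance estimates, using made-up data for three groups of five scores; the values and variable names are illustrative assumptions only.

```python
import numpy as np

# Hypothetical data: three groups of n = 5 scores each (illustrative only)
groups = [np.array(g) for g in ([3, 4, 5, 4, 4], [5, 6, 5, 7, 6], [8, 7, 9, 8, 8])]
n = len(groups[0])  # equal group sizes

# MS_error: the mean of the within-group variances; ignores differences in means
ms_error = np.mean([g.var(ddof=1) for g in groups])

# MS_group: n times the variance of the group means; estimates the population
# variance only if all groups come from a population with the same mean
group_means = np.array([g.mean() for g in groups])
ms_group = n * group_means.var(ddof=1)

print(ms_error, ms_group)  # ms_group far exceeds ms_error because the means differ
```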

  4. Calculation Formulae for ANOVA • Notation: • Individual score for person i in group j: X_ij • Mean of group j: X̄_j (the mean of the n_j scores in group j) • Grand mean (mean of all N scores): X̄..

  5. Calculation Formulae for ANOVA • SStotal is the sum of squared deviations of all observations from the grand mean: SS_total = ΣΣ(X_ij − X̄..)² • SSgroup is the sum of squared deviations of the group means from the grand mean, weighted by group size: SS_group = Σ n_j(X̄_j − X̄..)² • SSerror is the sum of squared deviations within each group: SS_error = ΣΣ(X_ij − X̄_j)² • Note that SS_total = SS_group + SS_error.
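A short sketch of the sum-of-squares pieces and the partition SS_total = SS_group + SS_error, again on the same made-up data (names are assumptions, not the lecture's own code).

```python
import numpy as np

# Same hypothetical three-group data as the sketch above
groups = [np.array(g) for g in ([3, 4, 5, 4, 4], [5, 6, 5, 7, 6], [8, 7, 9, 8, 8])]
scores = np.concatenate(groups)
grand_mean = scores.mean()

ss_total = ((scores - grand_mean) ** 2).sum()
ss_group = sum(len(g) * (g.mean() - grand_mean) ** 2 for g in groups)
ss_error = sum(((g - g.mean()) ** 2).sum() for g in groups)

# The group and error pieces partition the total sum of squares
assert np.isclose(ss_total, ss_group + ss_error)
print(ss_total, ss_group, ss_error)
```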

  6. Calculation Formulae for ANOVA • Now we convert SS into Mean Squares (MS) by dividing each SS by its associated df: MS_group = SS_group / df_group and MS_error = SS_error / df_error. • df_total = N − 1 • df_group = k − 1, where k is the number of groups • df_error = k(n − 1) for equal group sizes n • F is the ratio of the two estimates of the population variance: F = MS_group / MS_error • Critical values of F are found using F(df_group, df_error).
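A sketch of the final step on the same hypothetical data, forming F and comparing it to a critical value; the scipy.stats.f_oneway call is only a cross-check, assuming SciPy is available.

```python
import numpy as np
from scipy import stats

# Same hypothetical data: k = 3 groups of n = 5 scores
groups = [np.array(g) for g in ([3, 4, 5, 4, 4], [5, 6, 5, 7, 6], [8, 7, 9, 8, 8])]
k, n = len(groups), len(groups[0])

scores = np.concatenate(groups)
grand_mean = scores.mean()
ss_group = sum(n * (g.mean() - grand_mean) ** 2 for g in groups)
ss_error = sum(((g - g.mean()) ** 2).sum() for g in groups)

df_group, df_error = k - 1, k * (n - 1)
ms_group, ms_error = ss_group / df_group, ss_error / df_error
F = ms_group / ms_error

F_crit = stats.f.ppf(0.95, df_group, df_error)  # critical value for alpha = .05
print(F, F_crit)                                # reject the null if F > F_crit
print(stats.f_oneway(*groups))                  # same F, with its p-value
```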

  7. Example Using Moral Judgment • Is moral judgment objective, or is it sensitive to who the actor is? • Participants act in an unfair way, or observe another person act in an unfair way, and then judge the fairness of the actions. • 4 conditions: self, other, ingroup other, outgroup other • The prediction is that leniency will emerge for the self and ingroup-other conditions.

  8. Example Using Moral Judgment • The null hypothesis is that the means of all four populations are equal: H0: μ_self = μ_other = μ_ingroup = μ_outgroup. • Population here refers to the scores that emerge in a given condition. • The null can be false in a number of ways; any single inequality among the means is enough to reject it.

  9. What does this tell us? A significant F indicates only that at least one of the means differs from at least one of the others. But this does not allow us to test our exact hypothesis.

  10. Multiple Comparisons • Multiple comparisons are a family of techniques for making comparisons between means subsequent to an ANOVA. • Before choosing a type of comparison, we must decide what level of Type I error we are willing to accept.

  11. Multiple Comparisons & Familywise Error • Each time we make an individual comparison with an alpha of .05, the chance of making a Type I error on that comparison is .05. • When we make several comparisons, the chance of at least one Type I error accumulates. • For example, the probability of making at least one Type I error in a set of 5 independent comparisons is 1 − (.95)^5 ≈ .23.
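A one-line check of the familywise error calculation, assuming the comparisons are independent.

```python
# Familywise error rate for m independent comparisons, each tested at alpha = .05
alpha, m = 0.05, 5
familywise = 1 - (1 - alpha) ** m
print(round(familywise, 3))  # 0.226 for 5 comparisons (0.143 for 3)
```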

  12. Fisher's Least Significant Difference (LSD) Test • One of the more liberal and common techniques is the LSD. • The LSD (or protected t) is conceptually similar to conducting t-tests on pairs of means. • It is called protected because the ANOVA tells us that at least one difference exists while controlling alpha at .05. • It does not completely keep the familywise error rate at .05, but it is an acceptable compromise for a small number of comparisons.

  13. Fisher's Least Significant Difference (LSD) Test • We use the basic t-test formula, but we replace the pooled variance estimate with MSerror: t = (X̄_1 − X̄_2) / √(MS_error(1/n_1 + 1/n_2)) • This makes sense because MSerror, being based on all of the groups, should be a more accurate estimate of the population variance. • Test using the t distribution with df_error degrees of freedom.
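A sketch of the protected t (LSD) comparison for two of the hypothetical groups used above; it would be run only after a significant overall F, and the data are illustrative assumptions.

```python
import numpy as np
from scipy import stats

# Same hypothetical data; compare group 1 with group 2 using MS_error
groups = [np.array(g) for g in ([3, 4, 5, 4, 4], [5, 6, 5, 7, 6], [8, 7, 9, 8, 8])]
k, n = len(groups), len(groups[0])
ms_error = np.mean([g.var(ddof=1) for g in groups])  # equal n, so the simple mean works
df_error = k * (n - 1)

mean1, mean2 = groups[0].mean(), groups[1].mean()
t = (mean1 - mean2) / np.sqrt(ms_error * (1 / n + 1 / n))
p = 2 * stats.t.sf(abs(t), df_error)  # two-tailed p on df_error degrees of freedom
print(t, p)
```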

  14. Bonferroni Procedure • A multiple-comparison procedure in which the familywise alpha is divided by the number of comparisons to give the per-comparison alpha. • Keeps the familywise error rate at or below .05; very conservative. • You can then compute t-tests or LSDs on the comparisons, but only interpret a difference as significant if its p-value is below the Bonferroni-corrected critical value.
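A minimal sketch of the Bonferroni rule; the p-values below are placeholders, not results from the lecture's data.

```python
# Bonferroni: each comparison is tested against alpha divided by the number of comparisons
alpha = 0.05
pvalues = [0.003, 0.020, 0.041]  # hypothetical pairwise p-values
m = len(pvalues)
significant = [p < alpha / m for p in pvalues]
print(alpha / m, significant)    # only p-values below .05/3 ≈ .017 count as significant
```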

  15. Tukey Test • Another common procedure is the Tukey (HSD) test. • It compares every mean to every other mean while maintaining the familywise error rate at alpha. • Because of its conservative nature, it is most useful for post hoc or exploratory analyses.
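One way to run a Tukey HSD test in Python is via statsmodels, assuming that package is installed; the scores and group labels below are the same made-up data as the earlier sketches, not the lecture's.

```python
import numpy as np
from statsmodels.stats.multicomp import pairwise_tukeyhsd

# Hypothetical data in long form: one column of scores, one column of group labels
scores = np.array([3, 4, 5, 4, 4, 5, 6, 5, 7, 6, 8, 7, 9, 8, 8])
labels = np.repeat(["group1", "group2", "group3"], 5)

# Tukey HSD compares every pair of means while holding the familywise error rate at .05
print(pairwise_tukeyhsd(scores, labels, alpha=0.05).summary())
```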

  16. Effect Size • Instead of the difference between two means, we look at the proportion of explained variance. • Eta squared is simply this ratio: η² = SS_group / SS_total • In the moral judgment example, 18% of the variability in judgments is explained by the identity of the actor.

  17. Effect Size • Eta squared is conceptually simple, but biased: it tends to overestimate effect size. • Omega squared is much less biased while measuring the same conceptual quantity: ω² = (SS_group − (k − 1)MS_error) / (SS_total + MS_error) • By this measure, 15% of the variability in judgments is explained by the identity of the actor.
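A sketch computing both effect-size measures from the sum-of-squares terms of the earlier hypothetical data (the 18% and 15% figures above refer to the lecture's own moral-judgment data, which are not reproduced here).

```python
import numpy as np

# Same hypothetical three-group data as the earlier sketches
groups = [np.array(g) for g in ([3, 4, 5, 4, 4], [5, 6, 5, 7, 6], [8, 7, 9, 8, 8])]
k, n = len(groups), len(groups[0])
scores = np.concatenate(groups)
grand_mean = scores.mean()

ss_total = ((scores - grand_mean) ** 2).sum()
ss_group = sum(n * (g.mean() - grand_mean) ** 2 for g in groups)
ss_error = ss_total - ss_group
ms_error = ss_error / (k * (n - 1))

eta_sq = ss_group / ss_total                                        # biased upward
omega_sq = (ss_group - (k - 1) * ms_error) / (ss_total + ms_error)  # less biased
print(eta_sq, omega_sq)
```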
