
ANOVA, Continued






  1. ANOVA, Continued PSY440 July 1, 2008

  2. Quick Review of 1-way ANOVA • When do you use one-way ANOVA? • What are the components of the F Ratio? • How do you calculate the degrees of freedom for a one-way ANOVA? • What are the null and alternative hypotheses in a typical one-way ANOVA? • What are the assumptions of ANOVA? • Why can’t you test a “one-tailed” hypothesis with ANOVA? • Questions before we move on?
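As a refresher on the components of the F ratio and the degrees of freedom, here is a minimal one-way ANOVA computed by hand. The three groups of scores are made-up illustration data, not from the lecture.

```python
# A minimal sketch of the one-way ANOVA F ratio computed by hand;
# the three groups of scores are made-up illustration data.
groups = [[4, 5, 6, 5], [7, 8, 9, 8], [4, 4, 5, 5]]

k = len(groups)                       # number of groups
N = sum(len(g) for g in groups)       # total number of scores
grand_mean = sum(x for g in groups for x in g) / N

# Between-groups and within-groups sums of squares
ss_between = sum(len(g) * (sum(g) / len(g) - grand_mean) ** 2 for g in groups)
ss_within = sum((x - sum(g) / len(g)) ** 2 for g in groups for x in g)

df_between = k - 1                    # numerator df (k - 1)
df_within = N - k                     # denominator df (N - k)

ms_between = ss_between / df_between  # between-groups variance estimate
ms_within = ss_within / df_within     # within-groups variance estimate
f_ratio = ms_between / ms_within
print(round(f_ratio, 1))  # 25.8
```

The F ratio is the between-groups variance estimate divided by the within-groups variance estimate; a large value suggests the group means differ by more than chance alone would predict.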

  3. The structural model and ANOVA • The structural model is all about deviations • Score (X), Group mean (M), Grand mean (GM) • Score’s deviation from grand mean (X - GM) • Group mean’s deviation from grand mean (M - GM) • Score’s deviation from group mean (X - M)
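The structural model says each score's deviation from the grand mean splits into a between-groups part (M - GM) and a within-groups part (X - M), so the sums of squares add up. A short sketch with made-up scores:

```python
# Sketch of the structural model: (X - GM) = (M - GM) + (X - M),
# so SS_total = SS_between + SS_within. Illustration data only.
groups = {"A": [3, 5, 4], "B": [8, 9, 7]}

all_scores = [x for xs in groups.values() for x in xs]
gm = sum(all_scores) / len(all_scores)          # grand mean

ss_total = ss_between = ss_within = 0.0
for xs in groups.values():
    m = sum(xs) / len(xs)                       # group mean
    for x in xs:
        # the deviation identity holds score by score
        assert abs((x - gm) - ((m - gm) + (x - m))) < 1e-9
        ss_total += (x - gm) ** 2
        ss_between += (m - gm) ** 2
        ss_within += (x - m) ** 2

print(ss_total, ss_between + ss_within)  # the two sums match
```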

  4. 1 factor ANOVA • Null hypothesis: H0: all the groups are equal (XA = XB = XC) • Alternative hypothesis: HA: not all the groups are equal (the ANOVA tests this one!!) • Any of these patterns satisfies HA: XA ≠ XB ≠ XC, XA ≠ XB = XC, XA = XB ≠ XC, XA = XC ≠ XB

  5. Planned Comparisons • Simple comparisons • Complex comparisons • Bonferroni procedure • Use more stringent significance level for each comparison

  6. Which follow-up test? • Planned comparisons • A set of specific comparisons that you “planned” to do in advance of conducting the overall ANOVA – don’t actually need to calculate the “omnibus” ANOVA F statistic in order to test planned comparisons. • General rule of thumb, don’t exceed the number of conditions that you have (or even stick with one fewer), & make sure comparisons are orthogonal (more on this in a minute) • Post-hoc tests • A set of comparisons that you decided to examine only after you find a significant (reject H0) ANOVA

  7. Planned Comparisons • Different types • Simple comparisons - testing two groups • Complex comparisons - testing combined groups • Bonferroni procedure • Use more stringent significance level for each comparison • Basic procedure: • Within-groups population variance estimate (denominator) • Between-groups population variance estimate of the two groups of interest (numerator) • Figure F in usual way
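The Bonferroni procedure mentioned above can be sketched in a few lines: divide the overall alpha by the number of planned comparisons and test each comparison at that more stringent level. The p-values here are hypothetical illustration values.

```python
# Bonferroni procedure sketch: each planned comparison is tested at
# alpha / (number of comparisons). The p-values are hypothetical.
alpha = 0.05
n_comparisons = 3
per_test_alpha = alpha / n_comparisons   # about .0167 per comparison

p_values = [0.001, 0.030, 0.012]         # one p-value per comparison
significant = [p < per_test_alpha for p in p_values]
print(significant)  # [True, False, True]
```

Note that .030 would pass an uncorrected .05 test but fails the corrected threshold; that is the point of the procedure.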

  8. Post-hoc tests • Generally, you are testing all of the possible comparisons (rather than just a specific few) • Different types • Tukey’s HSD test (use if testing only pairwise comparisons) • Scheffe test (use if testing more complex comparisons) • Others (Fisher’s LSD, Newman-Keuls test, Duncan test) • Generally they differ with respect to how conservative they are.

  9. Planned Comparisons & Post-Hoc Tests as Contrasts A contrast is basically a way of assigning numeric values to your grouping variable in a manner that allows you to test a specific difference between two means (or between one mean and a weighted average of two or more other means).

  10. Contrasts for follow-up tests Think of the formula for independent samples t-tests: t = (MA - MB) / (estimated standard error of the difference). Typically, we are testing the null hypothesis that µA - µB = 0, so the numerator reduces to the difference between the two sample means. This is a simple contrast comparing two groups. The contrast is an array of multipliers defining a linear combination of the means. In this case, the array is (1,-1). The mean of sample A is multiplied by 1, and the mean of sample B is multiplied by -1, and the two products are added together, forming the numerator of the t statistic.
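The (1, -1) contrast can be shown concretely: applying it to two sample means reproduces the numerator of the independent-samples t statistic. The samples are made-up illustration data.

```python
# Applying the contrast (1, -1) to two sample means yields the same
# value as the simple mean difference. Illustration data only.
sample_a = [10, 12, 11, 13]
sample_b = [8, 9, 7, 8]

mean_a = sum(sample_a) / len(sample_a)   # 11.5
mean_b = sum(sample_b) / len(sample_b)   # 8.0

weights = (1, -1)
contrast_value = weights[0] * mean_a + weights[1] * mean_b
print(contrast_value)  # 3.5, same as mean_a - mean_b
```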

  11. Contrasts for follow-up tests In the case of one-way ANOVA, contrasts can be used in a similar way to test specific statements about the equality or inequality of particular means in the analysis. In a one-way ANOVA with three groups, contrasts such as: (0,1,-1) , (1,0,-1), or (1,-1,0) define linear combinations of means that test whether a specified pair of means is equal, while ignoring the third mean. The null hypothesis is that the linear combination of means is equal to zero (similar to independent-samples t-test).

  12. Contrasts for follow-up tests Contrasts can also define linear combinations of means to test more complex hypotheses (such as mean one is equal to the average of mean two and mean three). For example: (1, -.5, -.5) The weights must sum to 0, because the null hypothesis is always that the linear combination of means defined by the contrast is equal to 0.

  13. Contrasts for follow-up tests Follow-up tests are not always independent of each other. The hypothesis that A>B=C is not independent of the hypothesis that A>B>C, because both include the inequality A>B. For planned comparisons, contrasts tested should be independent. Independence of contrasts can be tested by summing the cross-products of the elements (see next slide)

  14. Contrasts for follow-up tests Consider two contrasts testing first the pairwise comparison between means a and c, and second whether mean b is equal to the average of means a and c: (1,0,-1) (.5,-1,.5) Sum of cross-products = (1*.5) + (0*-1) + (-1*.5) = .5+0+(-.5)=0, so these are independent. Consider contrasts testing one pairwise comparison (b vs. c) and one comparison between average of means a and b vs. mean c: (.5,.5,-1) (0,1,-1) Sum of cross-products = (.5*0) + (.5*1) + (-1*-1) =0+.5+1=1.5, so these contrasts are not independent (both are comparing b and c).
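The cross-product check above is mechanical enough to script. This sketch reproduces both examples from the slide:

```python
# Independence (orthogonality) of two contrasts: sum the element-wise
# cross-products; zero means the contrasts are independent.
def cross_product_sum(c1, c2):
    return sum(a * b for a, b in zip(c1, c2))

# Pairwise a-vs-c contrast vs. "b vs. the average of a and c":
print(cross_product_sum((1, 0, -1), (.5, -1, .5)))   # 0.0 -> independent

# "Average of a and b vs. c" vs. pairwise b-vs-c:
print(cross_product_sum((.5, .5, -1), (0, 1, -1)))   # 1.5 -> not independent
```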

  15. Contrasts for follow-up tests SPSS will let you specify contrasts for planned comparisons that are not independent, but this is not recommended practice. If you want to test several dependent contrasts, you should use a post-hoc correction such as Scheffe. If you want to test all pairwise comparisons, you can use Tukey’s HSD correction.

  16. Fixed vs. Random Factors in ANOVA • One-way ANOVAs can use grouping variables that are fixed or random. • Fixed: All levels of the variable of interest are represented by the variable (e.g., treatment and control, male and female). • Random: The grouping variable represents a random selection of levels of that variable, sampled from a population of levels (e.g., observers). • For one-way ANOVA, the math is the same either way, but the logic of the test is a little different. (Testing either that means are equal or that the between group variance is 0)

  17. ANOVA in SPSS • Let’s see how to do a between groups 1-factor ANOVA in SPSS (and the other tests too) • Analyze=>Compare Means=>One-Way ANOVA • Your grouping variable is the “factor” and your continuous (outcome) variable goes in the “dependent list” box. • Specify contrasts for planned comparisons • Specify any post-hoc tests you want to run • Under “options,” you can request descriptive statistics (e.g., to see group means)

  18. Within groups (repeated measures) ANOVA • Basics of within groups ANOVA • Repeated measures • Matched samples • Computations • Within groups ANOVA in SPSS

  19. Example • Suppose that you want to compare three brand name pain relievers. • Give each person a drug, wait 15 minutes, then ask them to keep their hand in a bucket of cold water as long as they can. The next day, repeat (with a different drug) • Dependent variable: time in ice water • Independent variable: 4 levels, within groups • Placebo • Drug A • Drug B • Drug C

  20. The 1 factor within groups ANOVA: Statistical analysis follows design • Repeated measures • One group • More than 2 scores per subject

  21. The 1 factor within groups ANOVA: Statistical analysis follows design • Repeated measures • One group • More than 2 scores per subject • - OR - • Matched samples • More than 2 groups • Matched groups

  22. Within-subjects ANOVA • n = 5 participants • Each participates in every condition (4 of these: XP, XA, XB, XC)

  23. Within-subjects ANOVA • Hypothesis testing: a five step program • Step 1: State your hypotheses • Step 2: Set your decision criteria • Step 3: Collect your data • Step 4: Compute your test statistics • Compute your estimated variances (2 steps of partitioning used) • Compute your F-ratio • Compute your degrees of freedom (there are even more now) • Step 5: Make a decision about your null hypothesis

  24. Step 4: Computing the F-ratio • Analyzing the sources of variance • Describe the total variance in the dependent measure (conditions XP, XA, XB, XC) • Why are these scores different? • Sources of variability • Between groups • Within groups • Individual differences • Left over variance (error) • Because we use the same people in each condition, we can figure out how much of the variability comes from the individuals and remove it from the analysis

  25. Partitioning the variance • Stage 1: Total variance = Between groups variance + Within groups variance

  26. Partitioning the variance • Stage 1: Total variance = Between groups variance + Within groups variance • Stage 2: Within groups variance = Between subjects variance + Error variance

  27. Partitioning the variance • Stage 1: Total variance = Between groups variance + Within groups variance • Between groups variance: Treatment effect + Error or chance (without individual differences). Because we use the same people in each condition, none of this variability comes from having different people in different conditions • Within groups variance: Individual differences + Other error • Stage 2: Within groups variance = Between subjects variance + Error variance • Between subjects variance: Individual differences • Error variance: Other error (without individual differences)
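The two-stage partition can be computed directly from a small score matrix. This sketch uses made-up data with three subjects and three conditions (the lecture example has five subjects and four conditions; the arithmetic is the same).

```python
# Two-stage variance partition for a one-factor within-subjects ANOVA.
# Stage 1 splits total SS into between- and within-groups parts;
# stage 2 removes the between-subjects SS from the within part.
# Rows are subjects, columns are conditions; made-up illustration data.
data = [
    [2, 4, 7],
    [3, 5, 6],
    [4, 6, 8],
]
n_subj, k = len(data), len(data[0])
gm = sum(sum(row) for row in data) / (n_subj * k)   # grand mean

cond_means = [sum(row[j] for row in data) / n_subj for j in range(k)]
subj_means = [sum(row) / k for row in data]

ss_total = sum((x - gm) ** 2 for row in data for x in row)
ss_between = n_subj * sum((cm - gm) ** 2 for cm in cond_means)  # stage 1
ss_within = ss_total - ss_between
ss_subjects = k * sum((sm - gm) ** 2 for sm in subj_means)      # stage 2
ss_error = ss_within - ss_subjects

print([round(v, 2) for v in
       (ss_total, ss_between, ss_within, ss_subjects, ss_error)])
# [30.0, 24.0, 6.0, 4.67, 1.33]
```

The error term left after stage 2 (here 1.33) is what goes in the denominator of the within-subjects F ratio, which is why repeated measures designs are typically more powerful than between-groups designs.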

  28. Step 4: Computing the F-ratio • The F ratio • Ratio of the between-groups variance estimate to the population error variance estimate • F-ratio = Observed variance / Variance from chance

  29. Partitioning the variance • Stage 1: Total variance = Between groups variance + Within groups variance • Between groups variance: Treatment effect + Error or chance (without individual differences) • Within groups variance: Individual differences + Other error • Stage 2: Within groups variance = Between subjects variance + Error variance • Between subjects variance: Individual differences • Error variance: Other error (without individual differences)

  30. Partitioning the variance • Stage 1: Total variance = Between groups variance + Within groups variance

  31. Partitioning the variance

  32. Partitioning the variance • Stage 1: Total variance = Between groups variance + Within groups variance • Stage 2: Within groups variance = Between subjects variance + Error variance

  33. Partitioning the variance • What is the between subjects variance? It is based on the average score for each person

  34. Partitioning the variance • What is the between subjects variance? It is computed from the average score for each person

  35. Partitioning the variance • Stage 1: Total variance = Between groups variance + Within groups variance • Stage 2: Within groups variance = Between subjects variance + Error variance

  36. Partitioning the variance Error variance

  37. Partitioning the variance • Stage 1: Total variance = Between groups variance + Within groups variance • Stage 2: Within groups variance = Between subjects variance + Error variance

  38. Partitioning the variance • Recall: now we return to variance, but we call it Mean Squares (MS) • Mean Squares (variance) are computed for the between groups variance and for the error variance

  39. Partitioning the variance • Stage 1: Total variance = Between groups variance + Within groups variance • Stage 2: Within groups variance = Between subjects variance + Error variance

  40. Within-subjects ANOVA • The F table • Need two df’s • dfbetween (numerator) • dferror (denominator) • Values in the table correspond to critical F’s • Reject the H0 if your computed value is greater than or equal to the critical F • Separate tables for 0.05 & 0.01 • Do we reject or fail to reject the H0? • From the table (assuming 0.05) with 3 and 12 degrees of freedom the critical F = 3.49 • So we reject H0 and conclude that not all groups are the same
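Rather than reading the printed F table, the critical value can be looked up with SciPy's F distribution; here for alpha = .05 with 3 numerator and 12 denominator degrees of freedom, as in the example.

```python
# Critical F lookup via the F distribution's inverse CDF (ppf),
# replacing the printed table for alpha = .05, df = (3, 12).
from scipy import stats

alpha = 0.05
critical_f = stats.f.ppf(1 - alpha, dfn=3, dfd=12)
print(round(critical_f, 2))  # 3.49
```

Any computed F at or above this value leads to rejecting H0 at the .05 level.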

  41. Within-subjects ANOVA in SPSS • Setting up the file • Running the analysis • Looking at the output

  42. Within-subjects ANOVA in SPSS • Setting up the file: • Each person has one line of data, with each of the “conditions” represented as different variables. • Our chocolate chip cookie data set is a good example. Each type of cookie (jewel, oatmeal, and chips ahoy) is a different “condition” that everyone in the sample experienced. • We created three “sets” of similar variables, with data from each person entered into all three sets of variable fields.

  43. Within-subjects ANOVA in SPSS • Running the analysis: • Analyze=>General Linear Model=>Repeated Measures • Define your within-subjects factor (give it a name and specify number of levels - this is the number of variables it will be based on, then click on define and select variables for each level of the factor). • Can request descriptives and contrasts (though the contrasts are defined in a different manner)

  44. Within-subjects ANOVA in SPSS • Interpreting the output • Output is complex, and generally set up to accommodate much more complex designs than one-way repeated measures ANOVA, so for current purposes much can be ignored. • Scroll down to where it says: Tests of Within-Subjects Effects & find between group and error sums of squares, df, F, and Sig. for “sphericity assumed.”

  45. Factorial ANOVA • Basics of factorial ANOVA • Interpretations • Main effects • Interactions • Computations • Assumptions, effect sizes, and power • Other Factorial Designs • More than two factors • Within factorial ANOVAs

  46. The factorial (between groups) ANOVA: Statistical analysis follows design • More than two groups • Independent groups • More than one independent variable

  47. Factorial experiments • Two or more factors • Factors - independent variables • Levels - the levels of your independent variables • 2 x 3 design means two independent variables, one with 2 levels and one with 3 levels • The number of “conditions” or “groups” is calculated by multiplying the levels, so a 2 x 3 design has 6 different conditions

  48. Factorial experiments • Two or more factors (cont.) • Main effects - the effects of your independent variables ignoring (collapsed across) the other independent variables • Interaction effects - how your independent variables affect each other • Example: 2x2 design, factors A and B • Interaction: • At A1, B1 is bigger than B2 • At A2, B1 and B2 don’t differ

  49. Results • So there are lots of different potential outcomes: • A = main effect of factor A • B = main effect of factor B • AB = interaction of A and B • With 2 factors there are 8 basic possible patterns of results: 1) No effects at all 2) A only 3) B only 4) AB only 5) A & B 6) A & AB 7) B & AB 8) A & B & AB

  50. 2 x 2 factorial design

          A1                    A2
  B1      Condition mean A1B1   Condition mean A2B1   B1 mean
  B2      Condition mean A1B2   Condition mean A2B2   B2 mean
          A1 mean               A2 mean               (Marginal means)

  • Main effect of A: compare the marginal means A1 mean vs. A2 mean • Main effect of B: compare the marginal means B1 mean vs. B2 mean • Interaction of AB: What’s the effect of A at B1? What’s the effect of A at B2?
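The marginal means, main effects, and interaction in a 2 x 2 design can be worked out directly from the four cell means. The cell means below are hypothetical illustration values, chosen so that the effect of A differs across levels of B (an interaction).

```python
# Main effects and interaction from the four cell means of a 2 x 2
# design; the cell means are hypothetical illustration values.
a1b1, a2b1 = 10.0, 14.0
a1b2, a2b2 = 12.0, 12.0

a1_mean = (a1b1 + a1b2) / 2        # marginal mean of A1: 11.0
a2_mean = (a2b1 + a2b2) / 2        # marginal mean of A2: 13.0
b1_mean = (a1b1 + a2b1) / 2        # marginal mean of B1: 12.0
b2_mean = (a1b2 + a2b2) / 2        # marginal mean of B2: 12.0

main_effect_a = a2_mean - a1_mean  # 2.0 -> a main effect of A
main_effect_b = b2_mean - b1_mean  # 0.0 -> no main effect of B

effect_of_a_at_b1 = a2b1 - a1b1    # 4.0
effect_of_a_at_b2 = a2b2 - a1b2    # 0.0
interaction = effect_of_a_at_b1 - effect_of_a_at_b2  # 4.0 -> interaction
```

Because the effect of A is 4 at B1 but 0 at B2, A and B interact even though B shows no main effect, which is exactly the kind of pattern the two questions on the slide are probing.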
