Overview of ANOVA (analysis of variance). The independent variable (IV) is referred to as a factor. The conditions of the factor are also called levels or treatments, with k representing the number of levels; an effect of the IV is thus referred to as a treatment effect. This section covers one-way ANOVA and why ANOVA is needed.
Imagine you are interested in comparing the mean SAT scores of incoming Freshman at KSU, Georgia Southern, & UGA. How could you test for differences between the groups?
Individual t tests:
KSU vs. Georgia Southern
KSU vs. UGA
Georgia Southern vs. UGA
Why shouldn’t you use multiple t tests to test for all group differences in SAT scores?
Because the experiment-wise error rate will increase
What do we mean by experiment-wise error?
The Type I error rate for a set or family of tests. The probability that at least one of the tests that you conduct will contain a Type I error.
If we conduct 3 t tests, each with an alpha of .05, then the family-wise error rate is 1 − (1 − .05)³ ≈ .143
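The family-wise calculation above can be sketched in a few lines of Python. The function name is chosen here for illustration; the formula assumes the tests are independent, as the text does.

```python
# Family-wise (experiment-wise) Type I error rate for m independent
# tests, each conducted at significance level alpha:
#   P(at least one Type I error) = 1 - (1 - alpha)^m
def familywise_error(alpha, m):
    return 1 - (1 - alpha) ** m

# Three pairwise t tests at alpha = .05, as in the SAT example:
print(round(familywise_error(0.05, 3), 3))  # 0.143
```

Note how quickly the rate grows: with six pairwise tests (four groups), the family-wise rate already exceeds .26.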
What is the null hypothesis for our study comparing SAT scores at the 3 Universities? It is that all three population means are equal: H0: μKSU = μGSU = μUGA.
With ANOVA we test all of the mean differences at one time. If we set alpha to .05 for this one test, then we know the probability of making a Type I error stays at .05 regardless of how many groups we compare.
F = Between-groups variance / Within-groups variance
Between-groups variance is an estimate of the amount of variance between the groups. It is variance that is presumably the result of manipulating the independent variable, and it is known as treatment or explained variance because it is variance that is explained by the independent variable.
Within-groups variance is an estimate of the variance within each sample. It is caused by anything unsystematic in your experiment (individual differences, measurement error, etc.).
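A minimal sketch of the F ratio built from these two definitions, using made-up SAT-style scores for three hypothetical groups (the data and function name are illustrative, not from the text):

```python
# Compute the one-way ANOVA F ratio from raw scores.
def one_way_f(groups):
    k = len(groups)                        # number of levels of the factor
    n_total = sum(len(g) for g in groups)
    grand_mean = sum(sum(g) for g in groups) / n_total

    # Between-groups (treatment) sum of squares, then mean square
    ss_between = sum(len(g) * (sum(g) / len(g) - grand_mean) ** 2
                     for g in groups)
    ms_between = ss_between / (k - 1)      # df_between = k - 1

    # Within-groups (error) sum of squares, then mean square
    ss_within = sum(sum((x - sum(g) / len(g)) ** 2 for x in g)
                    for g in groups)
    ms_within = ss_within / (n_total - k)  # df_within = N - k

    return ms_between / ms_within

# Illustrative data: three scores per school
groups = [[1000, 1050, 980], [1100, 1150, 1120], [1210, 1180, 1250]]
print(round(one_way_f(groups), 2))  # 29.51
```

When the group means are far apart relative to the spread inside each group, as here, the numerator dwarfs the denominator and F is large.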
1. Generate the null hypothesis.
2. Tentatively assume the null hypothesis is correct.
3. Choose the appropriate sampling distribution; in this case, the F distribution.
4. Collect data and perform the calculations for an F value (compute a source table and then a summary table).
5. Compare the observed F with the critical value of F found in the table with the selected alpha level and appropriate degrees of freedom.
6. If the observed F is larger than the critical value (i.e., the result is unlikely under the null hypothesis), reject the null hypothesis. If the observed F is smaller than the critical value (i.e., the result is likely under the null hypothesis), retain the null hypothesis.
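The six steps can be walked through end to end for the SAT example. The scores below are made up for illustration, and the critical value is taken from a standard F table for df = (2, 6) at α = .05; both are assumptions, not values from the text.

```python
# Steps 1-2: H0: mu_KSU = mu_GSU = mu_UGA; tentatively assume it is true.
# Step 3: under H0 the statistic follows an F distribution with
#         df_between = k - 1 and df_within = N - k.

def f_statistic(groups):
    k = len(groups)
    n_total = sum(len(g) for g in groups)
    grand_mean = sum(sum(g) for g in groups) / n_total
    ms_between = sum(len(g) * (sum(g) / len(g) - grand_mean) ** 2
                     for g in groups) / (k - 1)
    ms_within = sum(sum((x - sum(g) / len(g)) ** 2 for x in g)
                    for g in groups) / (n_total - k)
    return ms_between / ms_within

# Step 4: collect data (illustrative, three scores per school) and compute F.
groups = [[1000, 1050, 980], [1100, 1150, 1120], [1210, 1180, 1250]]
f_obs = f_statistic(groups)

# Step 5: critical value for alpha = .05 with df = (2, 6),
# read from a standard F table.
f_crit = 5.14

# Step 6: reject H0 if the observed F exceeds the critical value.
decision = "reject H0" if f_obs > f_crit else "retain H0"
print(round(f_obs, 2), decision)
```

With an observed F near 29.5 against a critical value of 5.14, the null hypothesis of equal mean SAT scores would be rejected for this illustrative data.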