**Chapter 13: Introduction to Analysis of Variance** PowerPoint Lecture Slides, Essentials of Statistics for the Behavioral Sciences, Seventh Edition, by Frederick J. Gravetter and Larry B. Wallnau

**Chapter 13 Learning Outcomes**

**Concepts to review** • Variability (Chapter 4) • Sum of squares • Sample variance • Degrees of freedom • Introduction to hypothesis testing (Chapter 8) • The logic of hypothesis testing • Independent measures t statistic (Chapter 10)

**13.1 Introduction to Analysis of Variance** • Analysis of variance (ANOVA) • Used to evaluate mean differences between two or more treatments • Uses sample data as the basis for drawing general conclusions about populations • Advantage over the t test: ANOVA can be used to compare more than two treatments at a time

**Figure 13.1 Typical situation in which ANOVA would be used**

**Terminology** • Factor • The independent (or quasi-independent) variable that designates the groups being compared • Levels • Individual conditions or values that make up a factor • Factorial design • A study that combines two or more factors

**Figure 13.2 Research design with two factors**

**Statistical hypotheses for ANOVA** • Null hypothesis: the level or value on the factor does not affect the dependent variable. • In the population, the means of the groups do not differ from each other.

**Alternate hypothesis for ANOVA** • H1: There is at least one mean difference among the populations • Several different forms are possible • All means are different • Some means are not different, but others are

**Test statistic for ANOVA** • Not possible to compute a sample mean difference between more than two samples • F ratio based on variance instead of sample mean difference • Variance is used to define and measure the size of differences among the sample means (numerator) • Variance in the denominator measures the mean differences that would be expected if there is no treatment effect

**13.2 Logic of Analysis of Variance** • Between-treatments variance • Variability results from general differences between the treatment conditions • Variance between treatments measures differences among sample means • Within-treatments variance • Variability within each sample • Individual scores are not the same within each sample

**Sources of variability between treatments** • Systematic differences caused by treatments • Random, unsystematic differences • Individual differences • Experimental (measurement) error

**Sources of variability within-treatments** • No systematic differences related to treatment groups occur within each group • Random, unsystematic differences • Individual differences • Experimental (measurement) error

**Figure 13.3 Partition of total variability into two components**

**F-ratio** • If H0 is true: • Size of treatment effect is near zero • F is near 1.00 • If H1 is true: • Size of treatment effect is greater than zero • F is noticeably larger than 1.00 • Denominator of the F-ratio is called the error term
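The structure of the F-ratio described above can be written out explicitly, following the between/within partition from the previous slides:

```latex
F = \frac{\text{variance between treatments}}{\text{variance within treatments}}
  = \frac{\text{treatment effect} + \text{random, unsystematic differences}}{\text{random, unsystematic differences}}
```

When H0 is true the treatment effect is near zero, so the numerator and denominator both measure only random, unsystematic differences and the ratio is near 1.00.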

**Learning Check** • Decide if each of the following statements is True or False.

**Answer**

**13.3 ANOVA notation and formulas** • Number of treatment conditions: k • Number of scores in each treatment: n1, n2, etc. • Total number of scores: N • When all samples are the same size, N = kn • Sum of scores (ΣX) for each treatment: T • Grand total of all scores in the study: G = ΣT • There is no universally accepted notation for ANOVA. Other sources may use other symbols.
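The notation above can be illustrated with a small sketch. The scores here are made-up numbers, chosen only to show how k, n, N, T, and G relate:

```python
# Hypothetical scores for three treatment conditions (made-up data)
treatments = [[3, 1, 2], [4, 5, 3], [7, 6, 8]]

k = len(treatments)                  # number of treatment conditions
ns = [len(t) for t in treatments]    # n for each treatment
N = sum(ns)                          # total number of scores
Ts = [sum(t) for t in treatments]    # T = sum of scores for each treatment
G = sum(Ts)                          # grand total of all scores, G = sum of the Ts

print(k, N, Ts, G)  # 3 9 [6, 12, 21] 39
```

Because all three samples have n = 3 here, N = kn = 3 × 3 = 9, matching the equal-sample-size rule on the slide.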

**Figure 13.4 Structure and sequence of calculations for ANOVA**

**Figure 13.5 Partitioning the SS for independent-measures ANOVA**

**ANOVA equations**
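The equations for this slide (likely embedded as images in the original) are the standard one-way sum-of-squares formulas, written in the notation introduced above (T, G, n, N):

```latex
SS_{total} = \Sigma X^2 - \frac{G^2}{N}
\qquad
SS_{within} = \Sigma\, SS_{\text{inside each treatment}}
\qquad
SS_{between} = \Sigma \frac{T^2}{n} - \frac{G^2}{N} = SS_{total} - SS_{within}
```

The last identity reflects the partition in Figure 13.5: total variability splits exactly into between-treatments and within-treatments components.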

**Analysis of degrees of freedom** • Total degrees of freedom: dftotal = N – 1 • Within-treatments degrees of freedom: dfwithin = N – k • Between-treatments degrees of freedom: dfbetween = k – 1

**Figure 13.6 Partitioning the degrees of freedom**

**Mean squares and F-ratio**
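The formulas for this slide (likely embedded as images in the original) follow directly from the SS and df definitions above:

```latex
MS_{between} = \frac{SS_{between}}{df_{between}},
\qquad
MS_{within} = \frac{SS_{within}}{df_{within}},
\qquad
F = \frac{MS_{between}}{MS_{within}}
```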

**Learning Check** • An analysis of variance produces SStotal = 80 and SSwithin = 30. For this analysis, what is SSbetween?

**Learning Check - Answer** • An analysis of variance produces SStotal = 80 and SSwithin = 30. For this analysis, SSbetween = SStotal – SSwithin = 80 – 30 = 50.

**13.4 Distribution of F-ratios** • If the null hypothesis is true, the value of F will be around 1.00 • Because F-ratios are computed from two variances, they are always positive numbers. • Table of F values is organized by two df • df numerator (between) • df denominator (within)

**Figure 13.7 Distribution of F-ratios**

**13.5 Examples of Hypothesis Testing and Effect Size** • Hypothesis tests use the same four steps that have been used in earlier hypothesis tests • Computation of the test statistic F is done in stages • Compute SStotal, SSbetween, and SSwithin • Compute MSbetween and MSwithin • Compute F
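The stages above can be sketched end to end with a small hypothetical data set (three made-up treatment groups of n = 5), using the SS, df, and MS formulas from this chapter:

```python
# Hypothetical data: three treatment conditions, n = 5 scores each
treatments = {
    "A": [1, 2, 3, 4, 5],
    "B": [2, 3, 4, 5, 6],
    "C": [6, 7, 8, 9, 10],
}

scores = [x for group in treatments.values() for x in group]
k = len(treatments)          # number of treatment conditions
N = len(scores)              # total number of scores
G = sum(scores)              # grand total

# Stage 1: sums of squares
ss_total = sum(x**2 for x in scores) - G**2 / N
ss_within = sum(
    sum(x**2 for g_x in [g] for x in g_x) - sum(g)**2 / len(g)
    for g in treatments.values()
)
ss_between = ss_total - ss_within

# Stage 2: mean squares (SS divided by df)
df_between, df_within = k - 1, N - k
ms_between = ss_between / df_between
ms_within = ss_within / df_within

# Stage 3: the F-ratio
F = ms_between / ms_within
print(F)  # 14.0 for these data
```

With df = (2, 12) and α = .05, the critical value from an F table is 3.88, so an F of 14.0 would lead to rejecting H0 for this made-up example.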

**Figure 13.8 Distribution of F-ratios with critical region for α = .05**

**Effect size for ANOVA** • Compute percentage of variance accounted for by the treatment conditions • In published reports of ANOVA, usually called η2 • Same concept as r2
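In the notation of this chapter, the percentage of variance accounted for by the treatments is:

```latex
\eta^2 = \frac{SS_{between}}{SS_{total}}
```

For the SS values in the earlier learning check (SSbetween = 50, SStotal = 80), this gives η² = 50/80 = 0.625, or 62.5% of the variance accounted for.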

**Figure 13.9 Visual representation of between and within variability**

**MSwithin and pooled variance** • In the t-statistic and in the F-ratio, the variances from the separate samples are pooled together to create one average value for the sample variance • Numerator of F-ratio measures how much difference exists between treatment means. • Denominator measures the variance of the scores inside each treatment

**13.6 Post hoc tests** • ANOVA compares all individual mean differences simultaneously, in one test • A significant F-ratio indicates that at least one of the mean differences is statistically significant • Does not indicate which means differ significantly from each other • Post hoc tests are additional tests done to determine exactly which mean differences are significant, and which are not

**Experimentwise Alpha** • Post hoc tests compare two individual means at a time (pairwise comparison) • Each comparison includes risk of a Type I error • Risk of Type I error accumulates and is called the experimentwise alpha level • Increasing the number of hypothesis tests increases the total probability of a Type I error • Post hoc tests use special methods to try to control Type I errors
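The accumulation of Type I error risk can be sketched numerically. Assuming c independent comparisons each tested at the same per-comparison alpha (real pairwise comparisons are not fully independent, so this is only an approximation of the experimentwise rate):

```python
# Approximate experimentwise alpha for c comparisons at per-test alpha = .05:
# the chance of at least one Type I error is 1 minus the chance of none.
alpha = 0.05
for c in (1, 3, 6, 10):
    experimentwise = 1 - (1 - alpha) ** c
    print(c, round(experimentwise, 4))
```

With 10 pairwise comparisons the approximate experimentwise risk climbs to about .40, which is why post hoc procedures such as Tukey's HSD and the Scheffé test control the error rate specially.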

**Tukey’s Honestly Significant Difference** • A single value that determines the minimum difference between treatment means that is necessary for significance • Honestly Significant Difference (HSD)
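With equal sample sizes, the HSD in this chapter's notation is computed as follows, where q is the Studentized range statistic (from a table, using dfwithin and k) and n is the number of scores in each treatment:

```latex
HSD = q \sqrt{\frac{MS_{within}}{n}}
```

Any two treatment means that differ by more than the HSD are considered significantly different.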

**The Scheffé Test** • The Scheffé test is one of the safest of all possible post hoc tests • Uses an F-ratio to evaluate significance of the difference between two treatment conditions

**Learning Check** • Which combination of factors is most likely to produce a large value for the F-ratio?

**Learning Check - Answer** • Which combination of factors is most likely to produce a large value for the F-ratio? • Large mean differences between treatments (a large MSbetween) combined with small variability within treatments (a small MSwithin)

**Learning Check** • Decide if each of the following statements is True or False.

**Answer**

**13.7 Relationship between ANOVA and t tests** • For two independent samples, either t or F can be used • Always result in the same decision • F = t2
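The F = t² relationship can be verified directly with a small made-up two-sample data set, computing the independent-measures t (with pooled variance, as in Chapter 10) and the one-way ANOVA F for the same scores:

```python
from math import sqrt

g1, g2 = [1, 2, 3], [3, 4, 5]   # hypothetical two-sample data
n1, n2 = len(g1), len(g2)
m1, m2 = sum(g1) / n1, sum(g2) / n2

# Independent-measures t statistic with pooled variance
ss1 = sum((x - m1) ** 2 for x in g1)
ss2 = sum((x - m2) ** 2 for x in g2)
sp2 = (ss1 + ss2) / (n1 + n2 - 2)            # pooled variance
t = (m1 - m2) / sqrt(sp2 / n1 + sp2 / n2)

# One-way ANOVA F-ratio for the same two samples (k = 2)
scores = g1 + g2
G, N = sum(scores), len(scores)
ss_total = sum(x ** 2 for x in scores) - G ** 2 / N
ss_within = ss1 + ss2
ss_between = ss_total - ss_within
F = (ss_between / 1) / (ss_within / (N - 2))  # df = (1, N - 2)

print(round(F, 10), round(t ** 2, 10))  # both 6.0 for these data
```

Note the df values agree as well: the t test has df = N − 2, which matches the denominator df of the F-ratio when k = 2.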

**Figure 13.10 Distribution of t and F statistics**

**Assumptions for the Independent Measures ANOVA** • The observations within each sample must be independent • The populations from which the samples are selected must be normal • The populations from which the samples are selected must have equal variances (homogeneity of variance) • The assumption of homogeneity of variance is an important one