
Chapter 10 - Part 1



  1. Chapter 10 - Part 1 Factorial Experiments

  2. Two-way Factorial Experiments In Chapter 10, we are studying experiments with two independent variables, each of which will have multiple levels. We call each independent variable a factor. The first IV is called Factor 1 or F1. The second IV is called Factor 2 or F2. So, in this chapter we will study two factor, unrelated groups experiments.

  3. Conceptual Overview • In Chapter 9 you learned to do the F test comparing two estimates of sigma2, MSB and MSW. • F = MSB/MSW • That is one version of a more generic formula. The generic formula tells us what to do in the two factor case. • Here is the generic formula:

  4. Generic formula for the unrelated groups F test • F = (SAMPLING FLUCTUATION + (?) ONE SOURCE OF VARIANCE) / (SAMPLING FLUCTUATION) • Which can also be written as • F = (ID + MP + (?) ONE SOURCE OF VARIANCE) / (ID + MP) • Let me explain why.

  5. The denominator of the F test • The denominator in the F test reflects only random sampling fluctuation. It can not reflect the effect of any independent variable or combination of independent variables. • In the unrelated groups F and t test, our best index of sampling fluctuation is MSW, our consistent, least squares, unbiased estimate of sigma2.

  6. The denominator of the F test • Sigma2 underlies sampling fluctuation. If we know sigma2, we can easily tell how far any sample mean should fall from the overall mean (M) and how far multiple means should fall from each other, on the average. • Remember, sigma2 itself is a function of how different each individual score is from the average score. That is, sigma2 reflects that each score is not identical to the others in the same population (or, in this case, in the same group), because of random individual differences and random measurement problems (ID + MP).

  7. Denominator = MSW • In the F tests we are doing, MSW serves as our best estimate of sigma2. • Everyone in each specific group is treated the same. • So the only reasons that scores differ from their own group means are that people differ from each other (ID) and that there are always measurement problems (MP). • MSW = ID + MP
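Here is a small illustrative simulation of that idea (the numbers and names below are made up for illustration, not taken from the slides): because MSW only uses deviations of scores from their own group means, it stays close to the true sigma2 even when the treatments push the group means far apart.

```python
# Illustrative sketch: MSW estimates sigma^2 whether or not the group
# means differ, because it only uses within-group deviations (ID + MP).
import numpy as np

rng = np.random.default_rng(0)
true_sigma2 = 4.0                  # population variance we hope to recover
group_means = [10, 14, 19]         # the "treatments" clearly differ here
n_per_group = 1000

ss_within = 0.0
df_within = 0
for mu in group_means:
    scores = rng.normal(mu, np.sqrt(true_sigma2), n_per_group)
    ss_within += np.sum((scores - scores.mean()) ** 2)
    df_within += n_per_group - 1

msw = ss_within / df_within
print(f"MSW = {msw:.2f}, true sigma^2 = {true_sigma2}")  # MSW comes out close to 4
```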

  8. Numerator of the F ratio: Generic formula • Numerator of the F ratio is an estimate of sigma2 that reflects sampling fluctuation + the possible effects of one difference between the groups. • In Ch. 9, there was only one difference among the ways the groups were treated, the different levels of the independent variable (IV) • MSB reflected the effects of random individual differences (there are different people in each group), random measurement problems, and the effects of the independent variable. • We can write that as MSB = ID + MP + (?)IV

  9. So, in the single factor analysis of variance, F = (ID + MP + ?IV) / (ID + MP) • Both the numerator and denominator reflect the same elements underlying sampling fluctuation. • The numerator includes one, and only one, systematic source of variation not found in the denominator.

  10. This allows us to conclude that: • IF THE NUMERATOR IS SIGNIFICANTLY LARGER THAN THE DENOMINATOR, THE SIZE DIFFERENCE MUST BE CAUSED BY THE ONE ADDITIONAL THING PUSHING THE MEANS APART, the IV. • But notice there has to be only one thing different in the numerator and the denominator to make that conclusion inevitable.

  11. Why we can’t use MSB as the numerator in the multifactor analysis of variance • In the two factor analysis of variance, the means can be pushed apart by: • The effects of the first independent variable (F1). • The effects of the second independent variable (F2) • The effects of combining F1 and F2 that are above and beyond the effects of either variable considered alone (INT) • Sampling fluctuation (ID + MP)

  12. So if we compared MSW to MSB in a two factor experiment, here is what we would have. • F = (ID + MP + ?F1 + ?F2 + ?INT) / (ID + MP) That's not an F test. In an F test the numerator must have one and only one source of variation beyond sampling fluctuation. HERE THERE ARE THREE OF THEM! Each of these three things besides sampling fluctuation could be pushing the means apart. So, in the multifactor F test, a ratio between MSB and MSW is meaningless. We must create 3 numerators to compare to MSW, each comprising ID + MP + one other factor.

  13. What can we do? We must take apart (analyze) the sums of squares and degrees of freedom between groups (SSB and dfB) into component parts. Each part must contain only one factor along with ID and MP. Then each component will yield an estimate of sigma2 that can be compared to MSW in an F test.

  14. We start with SSB and dfB and subdivide them – slide 1 • First, we create a way to study the effects of the first factor, ignoring the presence of Factor 2. • We combine groups so that the resulting, larger aggregates of participants differ only because they received different levels of F1. • Each such combined group will include an equal number of people who received the different levels of F2. • So the groups are the same in that regard. • They differ only in terms of how they were treated on the first independent variable, F1.

  15. Each combined group has different people than the other combined group(s). • So the groups differ because of random individual differences (ID). • Different random measurement problems will appear in each group (MP). • Each combined group received a specific level of F1 and each group has a mean. • Thus the groups will differ from each other because of ID + MP + the possible effects of Factor 1.

  16. Computing MSF1, one of the three numerators in a two factor F test • So, if we find the differences between each person’s mean for his/her combined group and the overall mean, then square and sum the differences, we will have a sum of squares for the first independent variable (SSF1). • Call the number of levels of an independent variable L. The df for a factor equals its number of levels minus one (dfF1 = LF1 – 1). • An estimate of sigma2 that includes only ID + MP + (?) F1 can be computed by dividing this sum of squares by its degrees of freedom, as usual. • MSF1 = SSF1/dfF1 = (ID + MP + ?F1)
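A minimal sketch of that computation (the function name, variable names, and scores below are hypothetical, not from the slides): every person in a combined level is represented by that level's mean, so with equal-sized levels the per-person sum of squared deviations reduces to the level size times (level mean minus grand mean) squared, summed over levels.

```python
# Sketch of a main-effect sum of squares (names and data are hypothetical).
import numpy as np

def main_effect_ss(levels):
    """levels: one 1-D array of scores per level of the factor,
    after collapsing over the other factor."""
    grand_mean = np.concatenate(levels).mean()
    # Each person in a level contributes (level mean - grand mean)^2,
    # so n scores per level contribute n * (level mean - grand mean)^2.
    ss = sum(len(lv) * (lv.mean() - grand_mean) ** 2 for lv in levels)
    df = len(levels) - 1                      # dfF1 = LF1 - 1
    return ss, df

ss_f1, df_f1 = main_effect_ss([np.array([6, 5, 7, 6]),   # level 1 of F1
                               np.array([4, 3, 5, 4]),   # level 2 of F1
                               np.array([5, 5, 6, 4])])  # level 3 of F1
ms_f1 = ss_f1 / df_f1                         # MSF1 = SSF1 / dfF1
```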

  17. Once you have MSF1, you have one of the three F tests you do in a two factor ANOVA • F = MSF1/MSW

  18. Then you do the same thing to find MSF2 • You combine groups so that you have groups that differ only on F2. • You compare each person’s mean for this combined group to the overall mean, squaring and summing the differences for each person to get SSF2. • Degrees of freedom = the number of levels of Factor 2 minus 1 (dfF2 = LF2 – 1). • Then MSF2 = SSF2/dfF2 • FF2 = MSF2/MSW

  19. The wholes are equal to the sum of their parts. • Remember, SSB and dfB are the total between group sum of squares and degrees of freedom. Each is composed of three parts, SSF1, SSF2, SSINT and dfF1, dfF2, and dfINT. We are subdividing SSB and dfB into their 3 component parts. • We have already computed SSF1, SSF2, dfF1, and dfF2. • If we subtract the parts of SSB that we have already accounted for (SSF1 & SSF2), the remainder will be THE SUM OF SQUARES FOR THE INTERACTION (SSINT). • If we subtract the parts of dfB that we have already accounted for (dfF1 & dfF2), the remainder will be the degrees of freedom for the interaction (dfINT).

  20. Here are the formulae • SSB = SSF1 + SSF2 + SSINT • So • SSINT = SSB – (SSF1 + SSF2) • dfB = dfF1 + dfF2 + dfINT • So • dfINT = dfB – (dfF1 + dfF2)

  21. MSINT = SSINT/dfINT • Once you find the sum of squares and degrees for the interaction, you can compute a mean square, as usual, by dividing SS by df. • The mean square for the interaction reflects ID + MP plus the possible effects of combining two variables that are not accounted for by the effects of either one alone or by simply adding the effects of the two.

  22. For example • Alcohol and tranquilizers both can make you intoxicated. • Combine them and you don’t just get more intoxicated. • You can easily wind up dead. • Their effects multiply each other; they don’t just add together. • So the effect of the two variables together is different from the effect of either considered alone. That’s an interaction of two variables.

  23. What’s next? • We have two things to do: • 1. Learn to compute the ANOVA and test three null hypotheses. • 2. Learn more theory

  24. As in Ch. 9, we’ll learn to compute the ANOVA first. • Many students find that it begins to make sense when they see how the numbers come together. • After the computation, we’ll return to theory.

  25. A two factor experiment • Introductory Psychology students are asked to perform an easy or difficult task after they have been exposed to a severely embarrassing, mildly embarrassing, or non-embarrassing situation. • The experimenter believes that people use whatever they can to feel good about themselves. • Therefore, those who have been severely embarrassed will welcome the chance to work on a difficult task. • The experimenter hypothesizes that in a non-embarrassing situation, participants will enjoy the easy task more than the difficult task.

  26. Example: Experiment Outline • Population: Introductory Psychology students • Subjects: 24 participants divided equally among 6 treatment groups. • Independent Variables: • Factor 1: Embarrassment levels: severe, mild, none. • Factor 2: Task difficulty levels: hard, easy • Groups: 1=severe, hard task; 2=severe, easy task; 3=mild, hard task; 4=mild, easy task; 5=none, hard; 6=none, easy. • Dependent variable: Subject rating of task enjoyment, where 1 = hating the task and 9 = loving it.

  27. Effects • We are interested in the main effects of embarrassment and task difficulty. Do participants like easy tasks better than hard ones? Do people like tasks differently when embarrassed or unembarrassed? • We are also interested in assessing how combining different levels of both factors affects the response in ways beyond those that can be predicted by considering the effects of each IV separately. This is called the interaction of the independent variables.

  28. Testing the null hypotheses. • Do people undergoing different levels of embarrassment have differential responses to any task? • H0-F1: Levels of embarrassment will not cause differences in liking for the task above and beyond those accounted for by sampling fluctuation. • Do people like easy tasks better than hard ones (or the reverse) irrespective of how embarrassed they are? • H0-F2: Differences in task difficulty will not cause differences in liking for the task above and beyond those accounted for by sampling fluctuation.

  29. Testing the null hypotheses. • Do embarrassment and task difficulty interact such that unembarrassed participants prefer easy tasks while embarrassed ones prefer hard tasks? • H0-INT: There will be no differences in liking for the task caused by combining task difficulty and embarrassment that can not be attributed to the effects of each independent variable considered alone or to sampling fluctuation.

  30. Computational steps • Outline the experiment. (Done) • Define the null and experimental hypotheses. (Done) • Compute the Mean Squares within groups. • Compute the Sum of Squares between groups. • Compute the main effects. • Compute the interaction. • Set up the ANOVA table. • Check the F table for significance. • Interpret the results.

  31. A 3X2 STUDY • The design crosses Embarrassment (Severe, Mild, None) with Task Difficulty (Hard, Easy), giving six cells.

  32. MSW • Compare each score to the mean for its group. (The slide shows the 3x2 data table: Embarrassment levels Severe, Mild, None; Task Difficulty levels Hard, Easy.)

  33. Mean Squares Within Groups

  Group (Emb., Task)   Scores     Group mean   Score - group mean   Squared
  1 (Severe, Hard)     6 4 8 6    6             0 -2  2  0          0 4 4 0
  2 (Severe, Easy)     4 3 4 5    4             0 -1  0  1          0 1 0 1
  3 (Mild, Hard)       3 5 7 5    5            -2  0  2  0          4 0 4 0
  4 (Mild, Easy)       5 4 4 7    5             0 -1 -1  2          0 1 1 4
  5 (None, Hard)       3 3 6 4    4            -1 -1  2  0          1 1 4 0
  6 (None, Easy)       5 6 7 6    6            -1  0  1  0          1 0 1 0
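Here is a short sketch of the within-groups computation using the scores in the table above (group labels follow slide 26; the variable names are mine).

```python
import numpy as np

groups = {
    "severe/hard": np.array([6, 4, 8, 6]),
    "severe/easy": np.array([4, 3, 4, 5]),
    "mild/hard":   np.array([3, 5, 7, 5]),
    "mild/easy":   np.array([5, 4, 4, 7]),
    "none/hard":   np.array([3, 3, 6, 4]),
    "none/easy":   np.array([5, 6, 7, 6]),
}

# Sum of squared deviations of each score from its own group mean.
ss_w = sum(np.sum((g - g.mean()) ** 2) for g in groups.values())
df_w = sum(len(g) - 1 for g in groups.values())     # 24 people - 6 groups = 18
ms_w = ss_w / df_w
print(ss_w, df_w, round(ms_w, 2))                   # 32.0 18 1.78
```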

  34. Then we compute a sum of squares and df between groups • This is the same as in Chapter 9. • The difference is that we are going to subdivide SSB and dfB into component parts. • Thus, we don’t use SSB and dfB in our ANOVA summary table; rather, we use them in an intermediate calculation.

  35. Sum of Squares Between Groups (SSB) • Compare each group mean to the overall mean. (Same 3x2 layout: Embarrassment Severe, Mild, None; Task Difficulty Hard, Easy.)

  36. Sum of Squares Between Groups (SSB)

  Group (Emb., Task)   Group mean   Overall mean   Group mean - overall   Squared (each of 4 people)
  1 (Severe, Hard)     6            5               1                     1 1 1 1
  2 (Severe, Easy)     4            5              -1                     1 1 1 1
  3 (Mild, Hard)       5            5               0                     0 0 0 0
  4 (Mild, Easy)       5            5               0                     0 0 0 0
  5 (None, Hard)       4            5              -1                     1 1 1 1
  6 (None, Easy)       6            5               1                     1 1 1 1
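The same scores give SSB by replacing every score with its group mean and comparing it with the overall mean of 5, as in the table above (a sketch; the names are mine).

```python
import numpy as np

group_scores = [np.array([6, 4, 8, 6]), np.array([4, 3, 4, 5]),
                np.array([3, 5, 7, 5]), np.array([5, 4, 4, 7]),
                np.array([3, 3, 6, 4]), np.array([5, 6, 7, 6])]

grand_mean = np.concatenate(group_scores).mean()     # 5.0
# Each of the 4 people in a group contributes (group mean - overall mean)^2.
ss_b = sum(len(g) * (g.mean() - grand_mean) ** 2 for g in group_scores)
df_b = len(group_scores) - 1                         # 6 groups - 1 = 5
print(ss_b, df_b)                                    # 16.0 5
```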

  37. Next, we answer the questions about each factor having an overall effect. • To get proper between groups mean squares we have to divide the sums of squares and df between groups into components for factor 1, factor 2, and the interaction. • Let’s just look at factor 1. Our question about factor 1 was “Do people undergoing different levels of embarrassment have differential responses to any task?” • We can group participants into all those who were severely embarrassed, all those who were mildly embarrassed, and all those who were not embarrassed.

  38. SSF1: Main Effect of Embarrassment • Compare each score’s embarrassment mean to the overall mean. (Same 3x2 layout: Embarrassment Severe, Mild, None; Task Difficulty Hard, Easy.)

  39. Calculate Embarrassment Means

  Severe (groups 1 & 2):  6 4 8 6   4 3 4 5   mean = 5
  Mild   (groups 3 & 4):  3 5 7 5   5 4 4 7   mean = 5
  None   (groups 5 & 6):  3 3 6 4   5 6 7 6   mean = 5

  40. Sum of squares and Mean Square for Embarrassment (F1)

  Severe Emb. (groups 1 & 2):  embarrassment mean 5, overall mean 5, deviations all 0, squared deviations all 0
  Mild Emb.   (groups 3 & 4):  embarrassment mean 5, overall mean 5, deviations all 0, squared deviations all 0
  No Emb.     (groups 5 & 6):  embarrassment mean 5, overall mean 5, deviations all 0, squared deviations all 0

  In this example every deviation is zero, so SSF1 = 0.

  41. Factor 2 • Then proceed as you just did for Factor 1 and obtain SSF2 and MSF2, where dfF2 = LF2 - 1.

  42. SSF2: Main Effect of Task Difficulty • Compare each score’s difficulty mean to the overall mean. (Same 3x2 layout.)

  43. Calculate Difficulty Means

  Hard task (groups 1, 3, 5):  6 4 8 6   3 5 7 5   3 3 6 4   mean = 5
  Easy task (groups 2, 4, 6):  4 3 4 5   5 4 4 7   5 6 7 6   mean = 5

  44. Sum of squares and Mean Square – Task Difficulty

  Hard task (groups 1, 3, 5):  difficulty mean 5, overall mean 5, deviations all 0, squared deviations all 0
  Easy task (groups 2, 4, 6):  difficulty mean 5, overall mean 5, deviations all 0, squared deviations all 0

  Again every deviation is zero, so SSF2 = 0 in this example.
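A sketch of both main-effect computations from the combined groups (pairings follow slide 26; the names are mine). In this data set every combined-level mean equals the overall mean of 5, so both main-effect sums of squares come out to zero, matching the two slides above.

```python
import numpy as np

g = {1: [6, 4, 8, 6], 2: [4, 3, 4, 5], 3: [3, 5, 7, 5],
     4: [5, 4, 4, 7], 5: [3, 3, 6, 4], 6: [5, 6, 7, 6]}

emb_levels  = [g[1] + g[2], g[3] + g[4], g[5] + g[6]]    # severe, mild, none
task_levels = [g[1] + g[3] + g[5], g[2] + g[4] + g[6]]   # hard, easy

def main_effect(levels):
    scores = np.concatenate([np.asarray(lv, float) for lv in levels])
    grand = scores.mean()
    ss = sum(len(lv) * (np.mean(lv) - grand) ** 2 for lv in levels)
    return ss, len(levels) - 1

ss_f1, df_f1 = main_effect(emb_levels)    # 0.0, 2  (all embarrassment means = 5)
ss_f2, df_f2 = main_effect(task_levels)   # 0.0, 1  (both difficulty means = 5)
```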

  45. Computing the sum of squares and df for the interaction. • SSB contains all the possible effects of the independent variables in addition to the random factors, ID and MP. Here is that statement in equation form: • SSB = SSF1 + SSF2 + SSINT • Rearranging the terms: SSINT = SSB - (SSF1 + SSF2), or SSINT = SSB - SSF1 - SSF2. • SSINT is what’s left from the sum of squares between groups (SSB) when the main effects of the two IVs are accounted for. So, subtract SSF1 and SSF2 from the overall SSB to obtain the sum of squares for the interaction (SSINT). Then, subtract dfF1 and dfF2 from dfB to obtain dfINT.

  46. Mean Squares - Interaction • Rearrange SSB = SSF1 + SSF2 + SSINT to get SSINT = SSB - (SSF1 + SSF2), and dfINT = dfB - (dfF1 + dfF2). • Then MSINT = SSINT/dfINT, as usual.
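Pulling the pieces together by subtraction, as the formulas above describe. The numbers below are carried over from the sketches after slides 33, 36, and 44 for this example (SSW = 32, dfW = 18; SSB = 16, dfB = 5; SSF1 = SSF2 = 0); each resulting F ratio is then checked against the F table, as listed in the computational steps.

```python
ss_b,  df_b  = 16.0, 5      # between groups
ss_f1, df_f1 = 0.0, 2       # embarrassment main effect
ss_f2, df_f2 = 0.0, 1       # task difficulty main effect
ss_w,  df_w  = 32.0, 18     # within groups

ss_int = ss_b - (ss_f1 + ss_f2)          # 16.0
df_int = df_b - (df_f1 + df_f2)          # 2

ms_w   = ss_w / df_w                     # about 1.78
ms_f1  = ss_f1 / df_f1                   # 0.0
ms_f2  = ss_f2 / df_f2                   # 0.0
ms_int = ss_int / df_int                 # 8.0

f_f1  = ms_f1 / ms_w                     # 0.0
f_f2  = ms_f2 / ms_w                     # 0.0
f_int = ms_int / ms_w                    # 4.5
print(round(f_f1, 2), round(f_f2, 2), round(f_int, 2))
```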

  47. Testing 3 null hypotheses in the two-way factorial ANOVA

  48. Hypotheses for Embarrassment • Null Hypothesis - H0: There is no effect of embarrassment. Except for sampling fluctuation, the means for liking the task will be the same for the severe, mild, and no embarrassment treatment levels. • Experimental Hypothesis - H1: Embarrassment considered alone will affect liking for the task.

  49. Hypotheses for Task Difficulty • Null Hypothesis - H0: There is no effect of task difficulty. The means for liking the task will be the same for the easy and difficult task treatment levels except for sampling fluctuation. • Experimental Hypothesis - H1: Task difficulty considered alone will affect liking for the task.

  50. Hypotheses for the Interaction of Embarrassment and Task Difficulty • Null Hypothesis - H0: There is no interaction effect. Once you take into account the main effects of embarrassment and task difficulty, there will be no differences among the groups that can not be accounted for by sampling fluctuation. • Experimental Hypothesis - H1: There are effects of combining task difficulty and embarrassment that can not be predicted from either IV considered alone. Such effects might be that: • Those who have been severely embarrassed will enjoy the difficult task more than the easy task. • Those who have not been embarrassed will enjoy the easy task more than the difficult task.
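As an optional cross-check, the same partition can be reproduced by software. The sketch below assumes pandas and statsmodels are available; the column names and level labels are mine. With this balanced design the printed ANOVA table shows the same split as the hand computation: zero sum of squares for each main effect, 16 for the interaction, and 32 for the residual (within groups).

```python
import pandas as pd
import statsmodels.api as sm
from statsmodels.formula.api import ols

scores = {("severe", "hard"): [6, 4, 8, 6], ("severe", "easy"): [4, 3, 4, 5],
          ("mild",   "hard"): [3, 5, 7, 5], ("mild",   "easy"): [5, 4, 4, 7],
          ("none",   "hard"): [3, 3, 6, 4], ("none",   "easy"): [5, 6, 7, 6]}

rows = [{"emb": e, "task": t, "enjoy": y}
        for (e, t), ys in scores.items() for y in ys]
df = pd.DataFrame(rows)

# Two-way factorial model with interaction: enjoyment ~ embarrassment * difficulty
model = ols("enjoy ~ C(emb) * C(task)", data=df).fit()
print(sm.stats.anova_lm(model, typ=2))
```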
