
Hypothesis test flow chart



Presentation Transcript


  1. Hypothesis test flow chart. START HERE → Measurement scale:
  - Frequency data → number of variables:
    1: basic χ² test (19.5), Table I
    2: χ² test for independence (19.9), Table I
  - Correlation (r) → number of correlations:
    1: Test H0: ρ = 0 (17.2), Table G
    2: Test H0: ρ1 = ρ2 (17.4), Tables H and A
  - Means → number of means:
    1: Do you know σ? Yes: z-test (13.1), Table A. No: t-test (13.14), Table D
    2: Independent samples? Yes: Test H0: μ1 = μ2 (15.6), Table D. No: Test H0: D = 0 (16.4), Table D
    More than 2 → number of factors: 1: 1-way ANOVA (Ch 20), Table E. 2: 2-way ANOVA (Ch 21), Table E
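As a rough illustration (not from the slides), the "Means" branch of this chart can be written as a few Python conditionals; the function name and arguments are invented for this sketch:

    def choose_means_test(n_means, n_factors=1, independent_samples=True, sigma_known=False):
        """Sketch of the 'Means' branch of the flow chart (section numbers from the slides)."""
        if n_means == 1:
            return "z-test (13.1, Table A)" if sigma_known else "t-test (13.14, Table D)"
        if n_means == 2:
            if independent_samples:
                return "Test H0: mu1 = mu2 (15.6, Table D)"
            return "Test H0: D = 0 (16.4, Table D)"
        # More than 2 means: the number of factors decides between 1-way and 2-way ANOVA
        return "1-way ANOVA (Ch 20, Table E)" if n_factors == 1 else "2-way ANOVA (Ch 21, Table E)"

    print(choose_means_test(n_means=3, n_factors=1))   # "1-way ANOVA (Ch 20, Table E)"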

  2. Chapter 18: Testing for differences among three or more groups: One-way Analysis of Variance (ANOVA). Note: we'll be skipping sections 18.10, 18.12, 18.13, 18.14, 18.15, 18.16, 18.17, 18.18, and 18.19 from the book. Suppose you wanted to compare the results of three tests (A, B, and C) to see if there were any differences in difficulty. To test this, you randomly sample these ten scores from each of the three populations of test scores:

  A   B   C
  84  85  62
  78  62  79
  93  62  83
  76  74  79
  92  71  80
  68  81  74
  79  69  84
  76  87  67
  81  68  67
  87  61  75
  Means: 81  72  75

  How would you test to see if there is any difference across the mean scores for these three tests? The first thing is obvious: calculate the mean for each of the three samples of 10 scores. But then what? You could run a two-sample t-test on each of the pairs (A vs. B, A vs. C, and B vs. C).

  3. You could run a two-sample t-test on each of the pairs (A vs. B, A vs. C, and B vs. C). There are two problems with this: (1) the three tests wouldn't be truly independent of each other, since they contain common values, and (2) we run into the problem of making multiple comparisons: if we use an α value of .05, the probability of obtaining at least one significant comparison by chance is 1 - (1 - .05)³, or about .14. So how do we test the null hypothesis H0: μA = μB = μC? [Scores table repeated from slide 2.]
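A quick check of the familywise error rate quoted above, in Python:

    alpha = 0.05
    n_comparisons = 3                        # A vs. B, A vs. C, B vs. C
    p_at_least_one = 1 - (1 - alpha) ** n_comparisons
    print(round(p_at_least_one, 3))          # 0.143, i.e. about .14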

  4. So how do we test the null hypothesis H0: μA = μB = μC? In the 1920s Sir Ronald Fisher developed a method called 'Analysis of Variance', or ANOVA, to test hypotheses like this. The trick is to look at the amount of variability between the means. So far in this class we've usually talked about variability in terms of standard deviations. ANOVAs focus on variances instead, which (of course) are the squares of the standard deviations. The intuition is the same. The variance of these three mean scores (81, 72, and 75) is 22.5. Intuitively, you can see that if the variance of the mean scores is 'large', then we should reject H0. But what do we compare this number 22.5 to? [Scores table repeated from slide 2.]

  5. So how do we test the null hypothesis H0: μA = μB = μC? The variance of these three mean scores (81, 72, and 75) is 22.5. How 'large' is 22.5? Suppose we knew the standard deviation of the population of scores (σ). If the null hypothesis is true, then all scores across all three columns are drawn from a population with standard deviation σ. It follows that the mean of n scores should be drawn from a population with standard deviation σ/√n. With a little algebra, the variance of the means is σ²/n, so multiplying the variance of the means by n gives us an estimate of the variance of the population, σ². [Scores table repeated from slide 2.]
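A small simulation (with an arbitrary population mean of 70, σ = 8, and n = 10, none of which come from the slides) shows that sample means have standard deviation σ/√n, so n times the variance of the means recovers σ²:

    import numpy as np

    rng = np.random.default_rng(0)
    mu, sigma, n = 70.0, 8.0, 10
    # 100,000 samples of size n; take the mean of each sample
    sample_means = rng.normal(mu, sigma, size=(100_000, n)).mean(axis=1)
    print(sample_means.std())        # close to sigma / sqrt(n) = 2.53
    print(n * sample_means.var())    # close to sigma**2 = 64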

  6. The variance of these three mean scores (81, 72, and 75) is 22.5. Multiplying the variance of the means by n gives us an estimate of the variance of the population. For our example, n × (variance of means) = 10 × 22.5 = 225. We typically don't know what σ² is, but as we do for t-tests, we can use the variance within our samples to estimate it. The variances of the 10 numbers in each column (61, 94, and 55) should each provide an estimate of σ². We can combine these three estimates of σ² by taking their average, which is 70. [Scores table repeated from slide 2, now with rows for the variances (61, 94, 55), their mean (70), and n × variance of the means (225).]
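The two estimates can be reproduced in Python from the scores as transcribed above; the printed values land near the slide's 225 and 70 (small discrepancies are presumably rounding or transcription noise):

    import numpy as np

    # Rows are students, columns are tests A, B, C (as transcribed from the slide)
    scores = np.array([
        [84, 85, 62], [78, 62, 79], [93, 62, 83], [76, 74, 79], [92, 71, 80],
        [68, 81, 74], [79, 69, 84], [76, 87, 67], [81, 68, 67], [87, 61, 75],
    ])
    n = scores.shape[0]                          # 10 scores per test
    col_means = scores.mean(axis=0)              # roughly 81, 72, 75
    col_vars = scores.var(axis=0, ddof=1)        # roughly 61, 94, 55
    print(n * col_means.var(ddof=1))             # n x variance of the means (near 225)
    print(col_vars.mean())                       # mean of the variances (near 70)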

  7. If H0: μA = μB = μC is true, we now have two separate estimates of the variance of the population (σ²). One is n times the variance of the means of each column. The other is the mean of the variances of each column. If H0 is true, then these two numbers should be, on average, the same, since they're both estimates of the same thing (σ²). For our example, these two numbers (225 and 70) seem quite different. Remember our intuition that a large variance of the means should be evidence against H0. Now we have something to compare it to: 225 seems large compared to 70. [Scores table and summary rows repeated from slide 6.]

  8. When conducting an ANOVA, we compute the ratio of these two estimates of σ². This ratio is called the 'F statistic'. For our example, 225/70 = 3.23. If H0 is true, then the value of F should be around 1. If H0 is not true, then F should be significantly greater than 1. We determine how large F should be for rejecting H0 by looking up Fcrit in Table E. F distributions depend on two separate degrees of freedom: one for the numerator and one for the denominator. The df for the numerator is k - 1, where k is the number of columns or 'treatments'. For our example, df = 3 - 1 = 2. The df for the denominator is N - k, where N is the total number of scores. In our case, df = 30 - 3 = 27. Fcrit for α = .05 and dfs of 2 and 27 is 3.35. Since Fobs = 3.23 is less than Fcrit, we fail to reject H0. We cannot conclude that the exam scores come from populations with different means. [Scores table and summary rows repeated, now also showing the ratio F = 3.23.]
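The Table E lookup for Fcrit can be replaced by scipy; this sketch just reuses the slide's two variance estimates:

    from scipy import stats

    f_obs = 225 / 70                               # about 3.2 (the slide's unrounded value is 3.23)
    f_crit = stats.f.ppf(0.95, dfn=2, dfd=27)      # Fcrit for alpha = .05 with dfs of 2 and 27
    print(round(f_obs, 2), round(f_crit, 2))       # f_crit is about 3.35
    print("reject H0" if f_obs > f_crit else "fail to reject H0")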

  9. Fcrit for α = .05 and dfs of 2 and 27 is 3.35. Since Fobs = 3.23 is less than Fcrit, we fail to reject H0. We cannot conclude that the exam scores come from populations with different means. Instead of finding Fcrit in Table E, we could have calculated the p-value using our F-calculator. Reporting p-values is standard. Our p-value for F = 3.23 with 2 and 27 degrees of freedom is p = .0552. Since our p-value is greater than .05, we fail to reject H0.
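The same p-value can be obtained from scipy's F survival function:

    from scipy import stats

    p_value = stats.f.sf(3.23, dfn=2, dfd=27)   # P(F > 3.23) with 2 and 27 degrees of freedom
    print(round(p_value, 4))                    # about .055, in line with the slide's p = .0552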

  10. Example: Consider the following n = 12 samples drawn from each of k = 5 groups. Use an ANOVA to test whether the means of the populations these 5 groups were drawn from differ. Answer: The 5 means and variances are calculated below, along with n × (variance of means) and the mean of the variances. Our resulting F statistic is 15.32. Our two dfs are k - 1 = 4 (numerator) and 60 - 5 = 55 (denominator). Table E shows that Fcrit for 4 and 55 is 2.54. Fobs > Fcrit, so we reject H0.

  A   B   C    D   E
  68  84  97   79  82
  61  67  97   72  90
  84  67  76   69  78
  78  75  107  76  65
  93  85  111  74  65
  76  62  104  66  79
  92  62  87   78  72
  68  74  104  83  81
  79  71  108  91  86
  76  81  104  75  64
  81  69  105  70  51
  87  87  99   78  91
  Means: 78, 74, 100, 76, 75
  Variances: 96, 78, 97, 46, 149
  n × variance of means: 1429
  Mean of variances: 93
  Ratio (F): 15.32
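The whole test can be run in one call with scipy's f_oneway, using the table as transcribed above (the transcription reproduces the slide's F of 15.32 only approximately):

    import numpy as np
    from scipy import stats

    # 12 scores per group; columns are groups A-E (as transcribed from the slide)
    data = np.array([
        [68, 84,  97, 79, 82], [61, 67,  97, 72, 90], [84, 67,  76, 69, 78],
        [78, 75, 107, 76, 65], [93, 85, 111, 74, 65], [76, 62, 104, 66, 79],
        [92, 62,  87, 78, 72], [68, 74, 104, 83, 81], [79, 71, 108, 91, 86],
        [76, 81, 104, 75, 64], [81, 69, 105, 70, 51], [87, 87,  99, 78, 91],
    ])
    f_obs, p_value = stats.f_oneway(*data.T)            # one-way ANOVA across the five columns
    f_crit = stats.f.ppf(0.95, dfn=4, dfd=55)
    print(round(f_obs, 2), round(f_crit, 2), p_value)   # F near 15.3, Fcrit near 2.54
    # f_obs > f_crit (and p is far below .05), so reject H0, as on the slide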

  11. What does the probability distribution F(dfb, dfw) look like? [Figure: density curves of F(dfb, dfw) for dfb = 2, 10, and 50 (rows) and dfw = 5, 10, 50, and 100 (columns), each plotted over the range 0 to 5.]
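The panels above can be regenerated with scipy and matplotlib; this is just a reconstruction of the figure, not the original plotting code:

    import numpy as np
    import matplotlib.pyplot as plt
    from scipy import stats

    x = np.linspace(0.01, 5, 400)
    fig, axes = plt.subplots(3, 4, figsize=(10, 6), sharex=True)
    for row, df_b in zip(axes, (2, 10, 50)):           # rows: numerator df
        for ax, df_w in zip(row, (5, 10, 50, 100)):    # columns: denominator df
            ax.plot(x, stats.f.pdf(x, dfn=df_b, dfd=df_w))
            ax.set_title(f"F({df_b},{df_w})")
    plt.tight_layout()
    plt.show()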

  12. For a typical ANOVA, the number of samples in each group may be different, but the intuition is the same: compute F, which is the ratio of the variance of the means over the mean of the variances. Formally, the variance is divided up the following way. Given a table of k groups, each containing ni scores (i = 1, 2, …, k), we can represent the deviation of a given score X from the mean of all scores (called the grand mean) as:

  (X - grand mean) = (X - group mean) + (group mean - grand mean)

  That is, the deviation of X from the grand mean is the deviation of X from the mean of its group plus the deviation of the mean of that group from the grand mean.

  13. The total sum of squares can be partitioned into two numbers: SStotal = SSbetween + SSwithin. SSbetween (or SSbet) is a measure of the variability between groups. It is used as the numerator in our F-tests. The variance between groups is calculated by dividing SSbet by its degrees of freedom, dfbet = k - 1: s²bet = SSbet/dfbet. This is another estimate of σ² if H0 is true; it is essentially n times the variance of the means. If H0 is not true, then s²bet is an estimate of σ² plus any 'treatment effect' that would add to a difference between the means.

  14. The total sum of squares can be partitioned into two numbers: SStotal = SSbetween + SSwithin. SSwithin (or SSw) is a measure of the variability within each group. It is used as the denominator in all F-tests. The variance within groups is calculated by dividing SSw by its degrees of freedom, dfw = ntotal - k: s²w = SSw/dfw. This is an estimate of σ², and it is essentially the mean of the variances within each group. (It is exactly the mean of the variances if our sample sizes are all the same.)

  15. The F ratio is calculated by dividing up the sums of squares and df into 'between' and 'within':

  SStotal = SSwithin + SSbetween
  dftotal = dfwithin + dfbetween, where dftotal = ntotal - 1, dfwithin = ntotal - k, and dfbetween = k - 1

  Variances are then calculated by dividing SS by df:

  s²between = SSbetween/dfbetween
  s²within = SSwithin/dfwithin

  F is the ratio of the variances between and within: F = s²between/s²within.
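A short sketch of this partition on made-up groups (with unequal sizes, to show the general ni case) that checks SStotal = SSwithin + SSbetween and forms F:

    import numpy as np

    # Made-up groups of unequal size, purely for illustration
    groups = [np.array([84., 78, 93, 76]),
              np.array([85., 62, 62]),
              np.array([62., 79, 83, 79, 80])]
    k = len(groups)
    n_total = sum(len(g) for g in groups)
    grand_mean = np.concatenate(groups).mean()

    ss_within = sum(((g - g.mean()) ** 2).sum() for g in groups)
    ss_between = sum(len(g) * (g.mean() - grand_mean) ** 2 for g in groups)
    ss_total = ((np.concatenate(groups) - grand_mean) ** 2).sum()
    assert np.isclose(ss_total, ss_within + ss_between)     # SStotal = SSwithin + SSbetween

    s2_between = ss_between / (k - 1)              # dfbetween = k - 1
    s2_within = ss_within / (n_total - k)          # dfwithin = ntotal - k
    print(s2_between / s2_within)                  # the F ratio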

  16. Finally, the F ratio is the ratio of s²bet and s²w. We can write all these calculated values in a summary table like this:

  Source    SS        df           s² (SS/df)     F
  Between   SSbet     k - 1        SSbet/dfbet    s²bet/s²w
  Within    SSw       ntotal - k   SSw/dfw
  Total     SStotal   ntotal - 1

  17. Calculating SStotal and the grand mean. [Table of groups A-E and its summary rows repeated from slide 10.]

  18. Calculating SSbet and s²bet. [Table of groups A-E and its summary rows repeated from slide 10.]

  19. Calculating SSw and s²w. [Table of groups A-E and its summary rows repeated from slide 10.]

  20. Calculating F. Fcrit with dfs of 4 and 55 and α = .05 is 2.54. Our decision is to reject H0, since 15.32 > 2.54. [Table of groups A-E and its summary rows repeated from slide 10.]

  21. Example: Female students in this class were asked how much they exercised, given the choices "just a little," "a fair amount," or "very much." Is there a significant difference in the heights of students across these three categories? (Use α = .05.) In other words, H0: μA = μB = μC. Summary statistics are: [summary statistics table shown on the slide]

  22. Means and standard errors for ‘1-way’ ANOVAs can be plotted as bar graphs with error bars indicating ±1 standard error of the mean.
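A minimal matplotlib sketch of that kind of bar graph; the means and standard errors below are placeholders, since the slide's actual values were not transcribed:

    import matplotlib.pyplot as plt

    labels = ["Just a little", "A fair amount", "Very much"]
    means = [64.0, 65.5, 66.0]          # placeholder group means
    sems = [0.8, 0.6, 0.9]              # placeholder standard errors of the mean
    plt.bar(labels, means, yerr=sems, capsize=4)
    plt.ylabel("Mean height")
    plt.show()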

  23. Filling in the table for the ANOVA: SSw and s²w. SSw = 139.3 + 416.11 + 58.89 = 614.3

  24. Filling in the table for the ANOVA: there are two ways of calculating SSbet. The first is to compute it directly from the group means: SSbet = Σ ni (group mean i - grand mean)².

  25. Filling in the table for the ANOVA: there are two ways of calculating SSbet. Or, use the fact that SStotal = SSwithin + SSbetween, so SSbetween = SStotal - SSwithin = 668.43 - 614.3 = 54.13

  26. Filling in the table for the ANOVA: F. Fcrit for dfs of 2 and 66 and α = .05 is 3.14. Since Fcrit is greater than our observed value of F, we fail to reject H0 and conclude that the female students' average height does not significantly differ across the amount of exercise they get.
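Using only the SS values and dfs quoted on these slides, the final F and Fcrit can be checked with scipy (the observed F below is derived from those numbers, not read off the slide):

    from scipy import stats

    ss_between, ss_within = 54.13, 614.3            # from slides 23-25
    df_between, df_within = 2, 66                   # k - 1 and ntotal - k
    f_obs = (ss_between / df_between) / (ss_within / df_within)
    f_crit = stats.f.ppf(0.95, dfn=df_between, dfd=df_within)
    print(round(f_obs, 2), round(f_crit, 2))        # about 2.91 versus 3.14
    # f_obs < f_crit, so fail to reject H0, matching the slide's conclusion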
