
  1. Ch 14 Practice (2)

  2. Randomized Blocks (Two-way) Analysis of Variance • The purpose of designing a randomized block experiment is to reduce the within-treatments variation, thereby increasing the relative amount of between-treatments variation. • This makes differences between the treatment means easier to detect.

  3. Randomized Blocks • Block together all the observations that share some commonality across treatments.

  4. Partitioning the total variability • The total sum of squares is partitioned into three sources of variation: • Treatments • Blocks • Within samples (error) • Recall that for the independent-samples design, SS(Total) = SST + SSE. • For the randomized block design, SS(Total) = SST + SSB + SSE, where SST is the sum of squares for treatments, SSB is the sum of squares for blocks, and SSE is the sum of squares for error.
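For reference, the partition can be written out together with its degrees of freedom (a standard identity for the randomized block design, consistent with the rejection regions used on the later slides):

```latex
SS(\text{Total}) = SST + SSB + SSE, \qquad (n-1) = (k-1) + (b-1) + (n-k-b+1)
```

where k is the number of treatments, b is the number of blocks, and n = kb.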

  5. Calculating the sums of squares • Formulas for the calculation of the sums of squares, with k treatments (one per column), b blocks (one per row), treatment means $\bar{x}_{T_j}$, block means $\bar{x}_{B_i}$, and grand mean $\bar{\bar{x}}$: $SST = b\sum_{j=1}^{k} (\bar{x}_{T_j} - \bar{\bar{x}})^2$ and $SSB = k\sum_{i=1}^{b} (\bar{x}_{B_i} - \bar{\bar{x}})^2$

  6. Calculating the sums of squares • The remaining sums of squares: $SS(\text{Total}) = \sum_{i=1}^{b}\sum_{j=1}^{k} (x_{ij} - \bar{\bar{x}})^2$ and $SSE = \sum_{i=1}^{b}\sum_{j=1}^{k} (x_{ij} - \bar{x}_{T_j} - \bar{x}_{B_i} + \bar{\bar{x}})^2$
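A minimal sketch of how these sums of squares could be computed, assuming NumPy is available; the data matrix below is purely illustrative and is not the slides' dataset:

```python
import numpy as np

# Hypothetical b x k data matrix (rows = blocks, columns = treatments); illustrative values only
x = np.array([[10., 12., 11.],
              [ 9., 13., 10.],
              [11., 14., 12.],
              [ 8., 12., 10.]])
b, k = x.shape
grand = x.mean()
treat_means = x.mean(axis=0)   # one mean per treatment (column)
block_means = x.mean(axis=1)   # one mean per block (row)

ss_total = ((x - grand) ** 2).sum()
sst = b * ((treat_means - grand) ** 2).sum()
ssb = k * ((block_means - grand) ** 2).sum()
sse = ss_total - sst - ssb     # equals the double sum of (x_ij - treatment mean - block mean + grand mean)^2

print(sst, ssb, sse, ss_total) # SST + SSB + SSE reproduces SS(Total)
```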

  7. Mean Squares and Test Statistics • To perform hypothesis tests for treatments and blocks we need • Mean square for treatments: MST = SST/(k-1) • Mean square for blocks: MSB = SSB/(b-1) • Mean square for error: MSE = SSE/(n-k-b+1) • Test statistic for treatments: F = MST/MSE • Test statistic for blocks: F = MSB/MSE

  8. The F Test: Rejection Regions • Testing the mean responses for treatments: F > F_{α, k-1, n-k-b+1} • Testing the mean responses for blocks: F > F_{α, b-1, n-k-b+1}

  9. [Supplement] Randomized Blocks ANOVA

  10. ANOVA Table
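The table on this slide did not survive the transcript; for reference, the standard randomized-block ANOVA table (consistent with the formulas above) has this layout:

```latex
\begin{array}{lccc}
\text{Source} & df & MS & F \\ \hline
\text{Treatments} & k-1 & MST = SST/(k-1) & MST/MSE \\
\text{Blocks} & b-1 & MSB = SSB/(b-1) & MSB/MSE \\
\text{Error} & n-k-b+1 & MSE = SSE/(n-k-b+1) & \\
\text{Total} & n-1 & &
\end{array}
```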

  11. Example 1 • A randomized block experiment produced the following statistics: k = 5, b = 12, SST = 1500, SSB = 1000, SS(Total) = 3500 • a. Test to determine whether the treatment means differ. (Use α = 0.01) • b. Test to determine whether the block means differ. (Use α = 0.01)

  12. Solution 1 • ANOVA table: n = kb = 60, SSE = SS(Total) - SST - SSB = 1000, MST = 1500/4 = 375, MSB = 1000/11 ≈ 90.9, MSE = 1000/44 ≈ 22.7. a. Rejection region: F > F_{α, k-1, n-k-b+1} = F_{0.01, 4, 44} ≈ 3.77. Since F = 16.50 > 3.77, there is enough evidence to conclude that the treatment means differ. b. Rejection region: F > F_{α, b-1, n-k-b+1} = F_{0.01, 11, 44} ≈ 2.67. Since F = 4.00 > 2.67, there is enough evidence to conclude that the block means differ.
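A quick check of these numbers, assuming SciPy is available:

```python
from scipy import stats

# Statistics given in Example 1
k, b = 5, 12
sst, ssb, ss_total = 1500.0, 1000.0, 3500.0
n = k * b                                   # 60 observations
sse = ss_total - sst - ssb                  # 1000

mst = sst / (k - 1)                         # 375
msb = ssb / (b - 1)                         # ~90.9
mse = sse / (n - k - b + 1)                 # ~22.7

f_treat = mst / mse                         # 16.5
f_block = msb / mse                         # 4.0
crit_treat = stats.f.ppf(0.99, k - 1, n - k - b + 1)   # F_{0.01, 4, 44}
crit_block = stats.f.ppf(0.99, b - 1, n - k - b + 1)   # F_{0.01, 11, 44}
print(f_treat > crit_treat, f_block > crit_block)      # True, True
```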

  13. Example 2 • As an experiment to understand measurement error, a statistics professor asks four students to measure the height of the professor, a male student, and a female student. The differences (in centimeters) between the correct heights and the students' measurements are listed here. Can we infer at the 5% significance level that there are differences in the errors between the subjects being measured?

  14. Solution 2 • H0: μ1 = μ2 = μ3; H1: At least two means differ. Rejection region: F > F_{α, k-1, n-k-b+1} = F_{0.05, 2, 6} = 5.14. With k = 3, b = 4, and grand mean = 2.38, the test statistic is F = 7.3. Since 7.3 > 5.14, there is enough evidence to conclude that the errors differ between the subjects being measured.
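The critical value quoted above is easy to verify, assuming SciPy is available:

```python
from scipy import stats

# Critical value for (k-1, n-k-b+1) = (2, 6) degrees of freedom at alpha = 0.05
print(round(stats.f.ppf(0.95, 2, 6), 2))   # 5.14
```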

  15. Solution 2

  16. Two-Factor Analysis of Variance • Example 2 • Suppose in Example 1, two factors are to be examined: • The effects of the marketing strategy on sales. • Emphasis on convenience • Emphasis on quality • Emphasis on price • The effects of the selected media on sales. • Advertise on TV • Advertise in newspapers

  17. Two-way ANOVA (two factors) • Factor A: Marketing strategy (Convenience, Quality, Price) • Factor B: Advertising media (TV, Newspapers)
              Convenience      Quality          Price
  TV          City 1 sales     City 3 sales     City 5 sales
  Newspapers  City 2 sales     City 4 sales     City 6 sales

  18. Interaction • [Four mean-response plots, each showing mean response against the levels of factor A (1, 2, 3) with one line per level of factor B, illustrating: (1) differences between the levels of factor A and between the levels of factor B, with no interaction; (2) differences between the levels of factor A but not of factor B; (3) no differences between the levels of factor A, differences between the levels of factor B; (4) interaction between factors A and B.]

  19. Interaction • [Plots of the B+ and B- response lines: parallel lines without interaction, non-parallel (crossing) lines with interaction.]

  20. Terminology • A complete factorial experiment is an experiment in which data are gathered for all possible combinations of the levels of the factors. This is also known as a two-way classification. • The two factors are usually labeled A and B, with the number of levels of each factor denoted by a and b, respectively. • The number of observations for each combination is called a replicate, and is denoted by r. For our purposes, the number of replicates is the same for each treatment; that is, the design is balanced.

  21. Hypothesis • H0: Factor A and Factor B do not interact to affect the mean responses • H1: Factor A and Factor B do interact to affect the mean responses • H0: The means of the a levels of factor A are equal • H1: At least two means differ • H0: The means of the b levels of factor B are equal • H1: At least two means differ

  22. Sums of squares

  23. Sums of squares
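The formulas on these two slides did not survive the transcript. For reference, the standard sums of squares for a balanced two-factor design with a levels of factor A, b levels of factor B, and r replicates per cell are:

```latex
\begin{aligned}
SS(A)  &= rb \sum_{i=1}^{a} (\bar{x}_{A_i} - \bar{\bar{x}})^2, &
SS(B)  &= ra \sum_{j=1}^{b} (\bar{x}_{B_j} - \bar{\bar{x}})^2, \\
SS(AB) &= r \sum_{i=1}^{a}\sum_{j=1}^{b} (\bar{x}_{A_i B_j} - \bar{x}_{A_i} - \bar{x}_{B_j} + \bar{\bar{x}})^2, &
SSE    &= \sum_{i=1}^{a}\sum_{j=1}^{b}\sum_{\ell=1}^{r} (x_{ij\ell} - \bar{x}_{A_i B_j})^2
\end{aligned}
```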

  24. Supplement

  25. F tests for the Two-way ANOVA • Test for differences between the levels of the main factors A and B: F = MS(A)/MSE with MS(A) = SS(A)/(a-1), rejection region F > F_{α, a-1, n-ab}; F = MS(B)/MSE with MS(B) = SS(B)/(b-1), rejection region F > F_{α, b-1, n-ab} • Test for interaction between factors A and B: F = MS(AB)/MSE with MS(AB) = SS(AB)/[(a-1)(b-1)], rejection region F > F_{α, (a-1)(b-1), n-ab} • In all three tests, MSE = SSE/(n-ab).

  26. ANOVA Table (n = abr)
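The table itself did not survive the transcript; the standard two-factor ANOVA table, consistent with the F tests on the previous slide, has this layout:

```latex
\begin{array}{lccc}
\text{Source} & df & MS & F \\ \hline
\text{Factor A} & a-1 & SS(A)/(a-1) & MS(A)/MSE \\
\text{Factor B} & b-1 & SS(B)/(b-1) & MS(B)/MSE \\
\text{Interaction (AB)} & (a-1)(b-1) & SS(AB)/[(a-1)(b-1)] & MS(AB)/MSE \\
\text{Error} & n-ab & SSE/(n-ab) & \\
\text{Total} & n-1 & &
\end{array}
```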

  27. Example 3 • The following data were generated from a 2 X 2 factorial experiment with 3 replicates.

  28. Example 3 - continued • a. Test at the 5% significance level to determine whether factors A and B interact. • b. Test at the 5% significance level to determine whether differences exist between the levels of factor A. • c. Test at the 5% significance level to determine whether differences exist between the levels of factor B.
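One way to run these three tests in Python is with statsmodels, as sketched below; the data frame values are placeholders for illustration only (the slide's data table is not part of this transcript), so the output will not match the F statistics reported in Solution 3.

```python
import pandas as pd
import statsmodels.api as sm
from statsmodels.formula.api import ols

# Hypothetical 2 x 2 factorial with 3 replicates per cell (illustrative values only)
df = pd.DataFrame({
    "A": ["a1"] * 6 + ["a2"] * 6,
    "B": (["b1"] * 3 + ["b2"] * 3) * 2,
    "y": [6, 5, 7, 8, 9, 7, 5, 6, 6, 9, 8, 10],
})

# Two-factor ANOVA with interaction: y ~ A + B + A:B
model = ols("y ~ C(A) * C(B)", data=df).fit()
print(sm.stats.anova_lm(model, typ=2))   # F and p-values for A, B, and the A:B interaction
```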

  29. Solution 3

  30. Solution 3 - continued • a. F = 0.31, p-value = 0.5943. There is not enough evidence to conclude that factors A and B interact. • b. F = 1.23, p-value = 0.2995. There is not enough evidence to conclude that differences exist between the levels of factor A. • c. F = 13.00, p-value = 0.0069. There is enough evidence to conclude that differences exist between the levels of factor B.

  31. Solution 3 - continued

  32. Example 4 • The required conditions for a two-factor ANOVA are that the distribution of the response is __________________ distributed; the variance for each treatment is ________; and the samples are _______ . (a) normally; equal; independent (b) normally; the same; independent (c) normally; identical; independent

  33. Multiple Comparisons • Two means are considered different if the difference between the corresponding sample means is larger than a critical number; the larger sample mean is then believed to be associated with a larger population mean. • Conditions common to all the methods here: • The ANOVA model is the one-way analysis of variance. • The conditions required to perform the ANOVA are satisfied.

  34. Inference about μ1 - μ2: Equal variances • Recall the pooled-variance t-statistic: $t = \dfrac{(\bar{x}_1 - \bar{x}_2) - (\mu_1 - \mu_2)}{\sqrt{s_p^2\left(\frac{1}{n_1} + \frac{1}{n_2}\right)}}$ • Build a confidence interval: $(\bar{x}_1 - \bar{x}_2) \pm t_{\alpha/2}\sqrt{s_p^2\left(\frac{1}{n_1} + \frac{1}{n_2}\right)}$

  35. Fisher Least Significant Difference (LSD) Method • This method builds on the equal-variances t-test of the difference between two means. • The test statistic is improved by using MSE rather than $s_p^2$. • We can conclude that μi and μj differ (at the α significance level) if $|\bar{x}_i - \bar{x}_j| > LSD$, where $LSD = t_{\alpha/2,\, n-k}\sqrt{MSE\left(\frac{1}{n_i} + \frac{1}{n_j}\right)}$
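A minimal sketch of the LSD computation, assuming SciPy is available; the MSE, degrees of freedom, and sample sizes below are placeholders for illustration only:

```python
import math
from scipy import stats

def fisher_lsd(mse, df_error, n_i, n_j, alpha=0.05):
    """Least significant difference for comparing sample means i and j."""
    t_crit = stats.t.ppf(1 - alpha / 2, df_error)
    return t_crit * math.sqrt(mse * (1 / n_i + 1 / n_j))

# Placeholder inputs; conclude mu_i and mu_j differ if |mean_i - mean_j| > lsd
lsd = fisher_lsd(mse=25.0, df_error=57, n_i=20, n_j=20)
print(lsd)
```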

  36. Experimentwise Type I error rate (αE) (the effective Type I error) • Fisher's method may result in an increased probability of committing a Type I error. • The experimentwise Type I error rate is the probability of committing at least one Type I error when each comparison is run at significance level α. It is calculated as αE = 1 - (1 - α)^C, where C is the number of pairwise comparisons (i.e., C = k(k-1)/2). • The Bonferroni adjustment determines the required Type I error probability per pairwise comparison (α) to secure a pre-determined overall αE.
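For instance, with k = 3 treatments (so C = 3 pairwise comparisons) and α = 0.05 per comparison:

```latex
\alpha_E = 1 - (1 - 0.05)^3 = 1 - 0.857375 \approx 0.143
```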

  37. Bonferroni Adjustment • The procedure: • Compute the number of pairwise comparisons (C) [C = k(k-1)/2], where k is the number of populations. • Set α = αE/C, where αE is the true probability of making at least one Type I error (the experimentwise Type I error). • We can conclude that μi and μj differ (at the αE/C significance level) if $|\bar{x}_i - \bar{x}_j| > t_{\alpha_E/(2C),\, n-k}\sqrt{MSE\left(\frac{1}{n_i} + \frac{1}{n_j}\right)}$

  38. Fisher and Bonferroni Methods • Example 1 - continued • Rank the effectiveness of the marketing strategies (based on mean weekly sales). • Use Fisher's method and the Bonferroni adjustment method. • Solution (Fisher's method) • The sample mean sales were 577.55, 653.0, and 608.65. • Then,

  39. Fisher and Bonferroni Methods • Solution (the Bonferroni adjustment) • We calculate C = k(k-1)/2 = 3(2)/2 = 3. • We set α = 0.05/3 = 0.0167, thus t_{0.0167/2, 60-3} = 2.467 (from Excel). Again, the significant difference is between μ1 and μ2.

  40. Tukey Multiple Comparisons • The test procedure: • Find a critical number ω as follows: $\omega = q_{\alpha}(k, \nu)\sqrt{\dfrac{MSE}{n_g}}$, where k = the number of treatments, ν = degrees of freedom = n - k, n_g = number of observations per sample (recall that all the sample sizes are the same), α = significance level, and q_α(k, ν) = a critical value obtained from the studentized range table.

  41. Tukey Multiple Comparisons • Select a pair of means and calculate the difference between the larger and the smaller mean. • If that difference exceeds ω, there is sufficient evidence to conclude that μmax > μmin. • Repeat this procedure for each pair of samples, and rank the means if possible. • If the sample sizes are not extremely different, we can use the above procedure with n_g calculated as the harmonic mean of the sample sizes.
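A minimal sketch of the ω computation; SciPy ≥ 1.7 is assumed (it provides the studentized range distribution), and the MSE and sample size below are placeholders for illustration only:

```python
import math
from scipy import stats

def tukey_omega(k, nu, mse, n_g, alpha=0.05):
    """Tukey critical difference: omega = q_alpha(k, nu) * sqrt(MSE / n_g)."""
    q_crit = stats.studentized_range.ppf(1 - alpha, k, nu)   # requires SciPy >= 1.7
    return q_crit * math.sqrt(mse / n_g)

# Placeholder inputs; declare mu_i and mu_j different if |mean_i - mean_j| > omega
omega = tukey_omega(k=3, nu=57, mse=25.0, n_g=20)
print(omega)
```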

  42. Which Multiple Comparison Method to Use • If you have identified two or three pairwise comparisons, use the Bonferroni method. • If you plan to compare all possible combinations, use Tukey's method. • If the purpose of the analysis is to point to areas that should be investigated further, Fisher's LSD method is indicated.

  43. Example 5 • a. Use Fisher's LSD procedure with α = 0.05 to determine which population means differ, given the following statistics. • b. Repeat part a using the Bonferroni adjustment. • c. Repeat part a using Tukey's multiple comparison method.

  44. Solution 5

  45. Solution 5 - continued

  46. Solution 5 - continued

  47. Example 6 • Which of the following statements about multiple comparison methods is false? a. They are to be used once the F-test in ANOVA has been rejected. b. They are used to determine which particular population means differ. c. There are many different multiple comparison methods, but all yield the same conclusions. d. All of these choices are true.
