
Chapter 3 Experiments with a Single Factor: The Analysis of Variance






  1. Chapter 3 Experiments with a Single Factor: The Analysis of Variance

  2. 3.1 An Example • Chapter 2: a single-factor experiment with two levels of the factor • Consider single-factor experiments with a levels of the factor, a ≥ 2 • Example: • The tensile strength of a new synthetic fiber • The weight percent of cotton • Five levels: 15%, 20%, 25%, 30%, 35% • a = 5 and n = 5

  3. Does changing the cotton weight percent change the mean tensile strength? • Is there an optimum level for cotton content?

  4. An Example (See pg. 61) • An engineer is interested in investigating the relationship between the RF power setting and the etch rate for this tool. The objective of an experiment like this is to model the relationship between etch rate and RF power, and to specify the power setting that will give a desired target etch rate. • The response variable is etch rate.

  5. She is interested in a particular gas (C2F6) and gap (0.80 cm), and wants to test four levels of RF power: 160W, 180W, 200W, and 220W. • The experimenter tests five wafers at each level of RF power, so the experiment is replicated 5 times (n = 5) • The 20 runs are made in random order

  6. Does changing the power change the mean etch rate? • Is there an optimum level for power? • We would like to have an objective way to answer these questions • The t-test really doesn’t apply here – more than two factor levels

  7. 3.2 The Analysis of Variance • a levels (treatments) of a factor and n replicates for each level. • yij: the jth observation taken under factor level or treatment i.

  8. Models for the Data • Means model: yij = μi + εij, i = 1, 2, …, a, j = 1, 2, …, n • yij is the ijth observation • μi is the mean of the ith factor level • εij is a random error with mean zero • Let μi = μ + τi, where μ is the overall mean and τi is the ith treatment effect • Effects model: yij = μ + τi + εij, i = 1, 2, …, a, j = 1, 2, …, n

  9. Linear statistical model • One-way or single-factor analysis of variance model • Completely randomized design: the experiments are performed in random order so that the environment in which the treatments are applied is as uniform as possible. • For hypothesis testing, the model errors are assumed to be normally and independently distributed random variables with mean zero and variance σ², i.e. yij ~ N(μ + τi, σ²) • Fixed effects model: the a levels have been specifically chosen by the experimenter.

  10. 3.3 Analysis of the Fixed Effects Model • Interested in testing the equality of the a treatment means, where E(yij) = μi = μ + τi, i = 1, 2, …, a • H0: μ1 = … = μa v.s. H1: μi ≠ μj for at least one pair (i, j) • Constraint: Σi τi = 0 • Equivalently, H0: τ1 = … = τa = 0 v.s. H1: τi ≠ 0 for at least one i

  11. 3.3.1 Decomposition of the Total Sum of Squares • Notation: yi. = Σj yij, ȳi. = yi./n, y.. = Σi Σj yij, ȳ.. = y../N, with N = an • Decompose the total variability into its component parts • The total sum of squares (a measure of overall variability in the data): SST = Σi Σj (yij − ȳ..)² • Degrees of freedom: an − 1 = N − 1

  12. SSTreatments: the sum of squares of the differences between the treatment averages and the grand average (sum of squares due to treatments), SSTreatments = n Σi (ȳi. − ȳ..)², with a − 1 degrees of freedom • SSE: the sum of squares of the differences of observations within treatments from the treatment average (sum of squares due to error), SSE = Σi Σj (yij − ȳi.)², with a(n − 1) = N − a degrees of freedom • Fundamental identity: SST = SSTreatments + SSE (a numerical check appears in the sketch below)
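A minimal numerical check of this decomposition, written as a sketch in Python; the array y below is hypothetical balanced data (a = 3 treatments, n = 4 replicates), not the textbook example.

```python
import numpy as np

# Hypothetical balanced data: rows = treatments, columns = replicates.
y = np.array([
    [9.0, 12.0, 10.0, 8.0],
    [20.0, 21.0, 23.0, 17.0],
    [6.0, 5.0, 8.0, 16.0],
])
a, n = y.shape
N = a * n

grand_mean = y.mean()                                  # y-bar..
trt_means = y.mean(axis=1)                             # y-bar_i.

ss_total = ((y - grand_mean) ** 2).sum()               # SS_T,          df = N - 1
ss_trt = n * ((trt_means - grand_mean) ** 2).sum()     # SS_Treatments, df = a - 1
ss_err = ((y - trt_means[:, None]) ** 2).sum()         # SS_E,          df = N - a

assert np.isclose(ss_total, ss_trt + ss_err)           # SS_T = SS_Treatments + SS_E
print(ss_trt, ss_err, ss_total)
```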

  13. A large value of SSTreatments reflects large differences in treatment means • A small value of SSTreatments likely indicates no differences in treatment means • dfTotal = dfTreatments + dfError, i.e. N − 1 = (a − 1) + (N − a) • If there were no differences between the a treatment means, the common variance σ² could also be estimated by SSTreatments/(a − 1), in addition to the within-treatment estimate SSE/(N − a)

  14. 3.3.2 Statistical Analysis • Mean squares: MSTreatments = SSTreatments/(a − 1) and MSE = SSE/(N − a) • Assumption: the εij are normally and independently distributed with mean zero and variance σ² • Cochran’s Theorem (p. 69)

  15. Under H0, SST/σ² ~ Chi-square(N − 1), SSE/σ² ~ Chi-square(N − a), SSTreatments/σ² ~ Chi-square(a − 1), and SSE/σ² and SSTreatments/σ² are independent (Theorem 3.1) • Test H0: τ1 = … = τa = 0 v.s. H1: τi ≠ 0 for at least one i • Test statistic: F0 = MSTreatments/MSE, which under H0 follows the F distribution with a − 1 and N − a degrees of freedom

  16. Reject H0 if F0 > Fα, a−1, N−a (see the worked sketch below) • Computationally convenient rewriting of the sums of squares: SST = Σi Σj yij² − y..²/N, SSTreatments = (1/n) Σi yi.² − y..²/N, and SSE = SST − SSTreatments • See page 71 • Randomization test: the F distribution can also be justified as an approximation to the randomization distribution, without relying on the normality assumption
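A sketch of the fixed-effects F test in Python; the data are simulated and hypothetical (a = 4 treatments, n = 5 replicates), and scipy.stats.f_oneway is used only as a cross-check.

```python
import numpy as np
from scipy import stats

# Hypothetical balanced data, one row per treatment.
rng = np.random.default_rng(1)
y = rng.normal(loc=[[550.0], [590.0], [625.0], [705.0]], scale=20.0, size=(4, 5))
a, n = y.shape
N = a * n

ss_trt = n * ((y.mean(axis=1) - y.mean()) ** 2).sum()
ss_err = ((y - y.mean(axis=1, keepdims=True)) ** 2).sum()
f0 = (ss_trt / (a - 1)) / (ss_err / (N - a))           # MS_Treatments / MS_E

alpha = 0.05
f_crit = stats.f.ppf(1 - alpha, a - 1, N - a)          # F_{alpha, a-1, N-a}
print(f0 > f_crit, stats.f.sf(f0, a - 1, N - a))       # decision and P-value

# Cross-check: SciPy's built-in one-way ANOVA gives the same F and P-value.
print(stats.f_oneway(*y))
```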

  17. ANOVA Table of Example 3-1

  18. 3.3.3 Estimation of the Model Parameters • Model: yij = μ + τi + εij • Estimators: μ̂ = ȳ.., τ̂i = ȳi. − ȳ.., and μ̂i = μ̂ + τ̂i = ȳi. • A 100(1 − α)% confidence interval for μi: ȳi. ± tα/2, N−a √(MSE/n) • A 100(1 − α)% confidence interval for μi − μj: ȳi. − ȳj. ± tα/2, N−a √(2MSE/n) (see the sketch below)
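A sketch of the point estimates and per-treatment confidence intervals; y is again a small hypothetical balanced data set, not the textbook example.

```python
import numpy as np
from scipy import stats

y = np.array([[9.0, 12.0, 10.0, 8.0],
              [20.0, 21.0, 23.0, 17.0],
              [6.0, 5.0, 8.0, 16.0]])
a, n = y.shape
N = a * n

mu_hat = y.mean()                          # overall mean estimate
tau_hat = y.mean(axis=1) - mu_hat          # treatment effect estimates
mse = ((y - y.mean(axis=1, keepdims=True)) ** 2).sum() / (N - a)

t = stats.t.ppf(0.975, N - a)              # t_{alpha/2, N-a} for a 95% interval
half_width = t * np.sqrt(mse / n)
for i, m in enumerate(y.mean(axis=1)):
    print(f"mu_{i+1}: {m:.2f} +/- {half_width:.2f}")
```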

  19. Example 3.3 (page 74) • Simultaneous confidence intervals (Bonferroni method): to construct a set of r simultaneous confidence intervals on treatment means with joint confidence of at least 100(1 − α)%, build each individual interval at level 100(1 − α/r)% (see the sketch below) • 3.3.4 Unbalanced Data • Let ni observations be taken under treatment i, i = 1, 2, …, a, with N = Σi ni; the sums of squares use ni in place of n, e.g. SSTreatments = Σi yi.²/ni − y..²/N
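A sketch of Bonferroni-adjusted simultaneous intervals, here with hypothetical unbalanced samples (one list entry per treatment); the only change from the individual intervals is the α/r adjustment of the t quantile.

```python
import numpy as np
from scipy import stats

# Hypothetical unbalanced data: one array per treatment.
groups = [np.array([9.0, 12.0, 10.0, 8.0]),
          np.array([20.0, 21.0, 23.0, 17.0, 19.0]),
          np.array([6.0, 5.0, 8.0])]
a = len(groups)
n_i = np.array([len(g) for g in groups])
N = n_i.sum()

mse = sum(((g - g.mean()) ** 2).sum() for g in groups) / (N - a)
alpha, r = 0.05, a                          # one interval per treatment mean
t = stats.t.ppf(1 - alpha / (2 * r), N - a) # Bonferroni-adjusted t quantile
for i, g in enumerate(groups):
    hw = t * np.sqrt(mse / n_i[i])
    print(f"mu_{i+1}: {g.mean():.2f} +/- {hw:.2f}")
```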

  20. 1. The test statistic is relatively insensitive to small departures from the assumption of equal variance for the a treatments if the sample sizes are equal. 2. The power of the test is maximized if the samples are of equal size.

  21. 3.4 Model Adequacy Checking • Assumptions: yij ~ N(μ + τi, σ²) • The examination of residuals • Definition of a residual: eij = yij − ŷij = yij − ȳi. • The residuals should be structureless, i.e. contain no obvious patterns.

  22. 3.4.1 The Normality Assumption • Plot a histogram of the residuals • Plot a normal probability plot of the residuals • See Table 3-6

  23. Possible interpretation of the normal probability plot: • the left tail of the error distribution is thinner than the tail of the standard normal distribution • Outliers • Possible causes of outliers: calculation mistakes, data coding or copying errors, … • Sometimes outliers are more informative than the rest of the data.

  24. Detecting outliers: examine the standardized residuals dij = eij/√MSE; if the errors are normal, roughly 95% of the standardized residuals should fall within ±2, so values much larger than 3 in absolute value flag potential outliers (see the sketch below) • 3.4.2 Plot of Residuals in Time Sequence • Plotting the residuals in the time order of data collection is helpful in detecting correlation between the residuals • This checks the independence assumption
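A sketch of the residual and standardized-residual computation; the array y is hypothetical balanced data, and a plot against run order (not shown) would complete the time-sequence check.

```python
import numpy as np

y = np.array([[9.0, 12.0, 10.0, 8.0],
              [20.0, 21.0, 23.0, 17.0],
              [6.0, 5.0, 8.0, 16.0]])
a, n = y.shape

resid = y - y.mean(axis=1, keepdims=True)   # e_ij = y_ij - ybar_i.
mse = (resid ** 2).sum() / (a * (n - 1))    # MS_E
d = resid / np.sqrt(mse)                    # standardized residuals d_ij
print(np.abs(d).max())                      # largest |d_ij|; values >> 3 are suspect

# Plotting resid.ravel() against the recorded run order (e.g. with matplotlib)
# checks the independence assumption; trends or patterns suggest correlation.
```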

  25. Nonconstant variance: the variance of the observations increases as the magnitude of the observations increases, i.e. the spread of the yij grows with E(yij) • If the factor levels having the larger variances also have the smaller sample sizes, the actual type I error rate is larger than anticipated • Remedy: a variance-stabilizing transformation

  26. Statistical tests for equality of variance, H0: σ1² = σ2² = … = σa² v.s. H1: not all σi² are equal • Bartlett’s test: χ0² = 2.3026 q/c, where q = (N − a) log10 Sp² − Σi (ni − 1) log10 Si², c = 1 + [1/(3(a − 1))][Σi (ni − 1)⁻¹ − (N − a)⁻¹], and Sp² = Σi (ni − 1)Si²/(N − a) is the pooled variance • Reject the null hypothesis if χ0² > χ²α, a−1

  27. Example 3.4: the Bartlett test statistic is computed in the text • Bartlett’s test is very sensitive to the normality assumption • The modified Levene test: • Use the absolute deviation of the observations in each treatment from the treatment median • If the mean deviations are equal, the variances of the observations in all treatments will be the same • The test statistic for Levene’s test is simply the ANOVA F statistic for testing equality of means applied to these absolute deviations (see the SciPy sketch below)
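Both tests are available in SciPy; a minimal sketch on hypothetical samples (one array per treatment) is shown below, with the median-centred option giving the modified Levene (Brown-Forsythe) test.

```python
import numpy as np
from scipy import stats

# Hypothetical samples, one per treatment.
groups = [np.array([9.0, 12.0, 10.0, 8.0]),
          np.array([20.0, 21.0, 23.0, 17.0]),
          np.array([6.0, 5.0, 8.0, 16.0])]

stat_b, p_b = stats.bartlett(*groups)                   # sensitive to non-normality
stat_l, p_l = stats.levene(*groups, center='median')    # modified Levene test
print(p_b, p_l)   # small P-values indicate unequal variances
```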

  28. Example 3.5: • Four different methods of estimating flood flow frequency (data in Table 3.7) • ANOVA table (Table 3.8) • The plot of residuals v.s. fitted values (Figure 3.7) • Modified Levene’s test: F0 = 4.55 with P-value = 0.0137. Reject the null hypothesis of equal variances.

  29. Variance-stabilizing transformations • Let E(y) = μ and suppose the standard deviation of y is proportional to a power of the mean, σy ∝ μ^α • Find a power transformation y* = y^λ that yields a constant variance • Then σy* ∝ μ^(λ+α−1), so choosing λ = 1 − α makes the variance of y* approximately constant • Reference: http://www.stat.ufl.edu/~winner/sta6207/transform.pdf

  30. How to find α: • Empirically, plot log Si against log ȳi. for the a treatments; the slope of the resulting straight line estimates α, and then λ = 1 − α (see the sketch below) • See Figure 3.8, Table 3.10 and Figure 3.9
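A sketch of this empirical selection of λ; the array y is hypothetical data chosen so that the spread grows with the mean, and the regression is an ordinary least-squares line fit on the log scale.

```python
import numpy as np

# Hypothetical data whose spread increases with the treatment mean.
y = np.array([[9.0, 12.0, 10.0, 8.0],
              [20.0, 21.0, 23.0, 17.0],
              [60.0, 45.0, 80.0, 66.0]])
means = y.mean(axis=1)                     # ybar_i.
stds = y.std(axis=1, ddof=1)               # S_i

slope, intercept = np.polyfit(np.log(means), np.log(stds), 1)
lam = 1.0 - slope                          # lambda = 1 - alpha
print(f"alpha ~ {slope:.2f}, suggested lambda = {lam:.2f}")

# Apply the transformation (lambda near 0 corresponds to the log transform).
y_star = y ** lam if abs(lam) > 1e-8 else np.log(y)
print(y_star.std(axis=1, ddof=1))          # spreads should now be more nearly equal
```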

  31. 3.5 Practical Interpretation of Results • Conduct the experiment => perform the statistical analysis => investigate the underlying assumptions => draw practical conclusions 3.5.1 A Regression Model • Qualitative factor: compare the differences between the levels of the factor. • Quantitative factor: develop an interpolation equation for the response variable.

  32. The Regression Model • For a quantitative factor, fit an empirical model such as the quadratic y = β0 + β1x + β2x² + ε, where x is the factor level (see the sketch below)
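A sketch of fitting such an interpolation equation; the power levels mirror the slide, but the response values are simulated and hypothetical, and numpy.polyfit stands in for a full regression analysis.

```python
import numpy as np

# Hypothetical quantitative-factor data: 4 power levels, 5 replicates each.
power = np.repeat([160.0, 180.0, 200.0, 220.0], 5)
rate = 600 + 0.8 * (power - 190) + np.random.default_rng(2).normal(0, 15, 20)

coef = np.polyfit(power, rate, deg=2)       # quadratic: rate ~ b0 + b1*P + b2*P^2
fitted = np.polyval(coef, power)
print(np.round(coef, 4), np.abs(rate - fitted).max())

# The fitted curve can be used to choose a power setting for a target response,
# but only within the range of levels actually tested (no extrapolation).
```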

  33. 3.5.2 Comparisons Among Treatment Means • If the hypothesis of equal treatment means is rejected, we don’t know which specific means are different • Determining which specific means differ following an ANOVA is called the multiple comparisons problem 3.5.3 Graphical Comparisons of Means

  34. 3.5.4 Contrasts • A contrast: a linear combination of the treatment means of the form Γ = Σi ci μi with Σi ci = 0 • H0: Γ = 0 v.s. H1: Γ ≠ 0 • Two equivalent methods for this test.

  35. The first method (t test): t0 = Σi ci ȳi. / √[(MSE/n) Σi ci²]; reject H0 if |t0| > tα/2, N−a

  36. The second method (F test): F0 = t0² = (Σi ci ȳi.)² / [(MSE/n) Σi ci²]; reject H0 if F0 > Fα, 1, N−a

  37. The 100(1 − α)% C.I. for a contrast Γ = Σi ci μi: Σi ci ȳi. ± tα/2, N−a √[(MSE/n) Σi ci²] • Unequal sample sizes: require Σi ni ci = 0 and replace (1/n) Σi ci² by Σi ci²/ni (see the contrast sketch below)
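A sketch of both test statistics and the confidence interval for a single contrast in the balanced case; the data and the contrast coefficients c are hypothetical.

```python
import numpy as np
from scipy import stats

# Hypothetical balanced data (a = 4, n = 4) and a contrast with sum(c) = 0.
y = np.array([[9.0, 12.0, 10.0, 8.0],
              [20.0, 21.0, 23.0, 17.0],
              [6.0, 5.0, 8.0, 16.0],
              [14.0, 15.0, 13.0, 18.0]])
a, n = y.shape
N = a * n
c = np.array([1.0, -1.0, 1.0, -1.0])

means = y.mean(axis=1)
mse = ((y - means[:, None]) ** 2).sum() / (N - a)

est = c @ means                             # estimated contrast sum(c_i * ybar_i.)
se = np.sqrt(mse / n * (c ** 2).sum())
t0 = est / se                               # method 1: t test, df = N - a
f0 = t0 ** 2                                # method 2: equivalent F test, df = (1, N - a)
print(2 * stats.t.sf(abs(t0), N - a), stats.f.sf(f0, 1, N - a))   # identical P-values

hw = stats.t.ppf(0.975, N - a) * se         # 95% confidence interval for the contrast
print(est - hw, est + hw)
```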

  38. 3.5.5 Orthogonal Contrasts • Two contrasts with coefficients {ci} and {di} are orthogonal if Σi ci di = 0 (or Σi ni ci di = 0 for unbalanced data) • For a treatments, a set of a – 1 orthogonal contrasts partitions the sum of squares due to treatments into a – 1 independent single-degree-of-freedom components. Thus, tests performed on orthogonal contrasts are independent. • See Example 3.6 (Page 90)

  39. 3.5.6 Scheffe’s Method for Comparing All Contrasts • Scheffe (1953) proposed a method for comparing any and all possible contrasts between treatment means. • See Page 91 and 92

  40. 3.5.7 Comparing Pairs of Treatment Means • Compare all a(a − 1)/2 pairs of treatment means • Tukey’s Test: • The studentized range statistic: q = (ȳmax − ȳmin)/√(MSE/n) • Declare μi and μj different if |ȳi. − ȳj.| > Tα = qα(a, f)√(MSE/n), where f is the error degrees of freedom (see the sketch below) • See Example 3.7
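A sketch of Tukey's procedure for the balanced case, using the studentized range distribution in SciPy (scipy.stats.studentized_range, available in SciPy 1.7 and later); the data are hypothetical.

```python
import numpy as np
from scipy import stats

y = np.array([[9.0, 12.0, 10.0, 8.0],
              [20.0, 21.0, 23.0, 17.0],
              [6.0, 5.0, 8.0, 16.0],
              [14.0, 15.0, 13.0, 18.0]])
a, n = y.shape
f = a * (n - 1)                              # error degrees of freedom
means = y.mean(axis=1)
mse = ((y - means[:, None]) ** 2).sum() / f

q = stats.studentized_range.ppf(0.95, a, f)  # q_{0.05}(a, f)
t_alpha = q * np.sqrt(mse / n)               # Tukey critical difference
for i in range(a):
    for j in range(i + 1, a):
        diff = abs(means[i] - means[j])
        print(i + 1, j + 1, diff > t_alpha)  # True = significantly different
```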

  41. Sometimes the overall F test from the ANOVA is significant, but the pairwise comparisons of means fail to reveal any significant differences. • The F test simultaneously considers all possible contrasts involving the treatment means, not just pairwise comparisons. The Fisher Least Significant Difference (LSD) Method • For H0: μi = μj

  42. The least significant difference (LSD): LSD = tα/2, N−a √(2MSE/n), or tα/2, N−a √[MSE(1/ni + 1/nj)] for unequal sample sizes; declare μi and μj different if |ȳi. − ȳj.| > LSD (see the sketch below) • See Example 3.8 Duncan’s Multiple Range Test • The a treatment averages are arranged in ascending order, and the standard error of each average is Sȳi. = √(MSE/n)
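A sketch of the Fisher LSD comparisons for the balanced case; the data are hypothetical, and each pair is compared against the single LSD value.

```python
import numpy as np
from scipy import stats

y = np.array([[9.0, 12.0, 10.0, 8.0],
              [20.0, 21.0, 23.0, 17.0],
              [6.0, 5.0, 8.0, 16.0]])
a, n = y.shape
N = a * n
means = y.mean(axis=1)
mse = ((y - means[:, None]) ** 2).sum() / (N - a)

lsd = stats.t.ppf(0.975, N - a) * np.sqrt(2 * mse / n)   # LSD at alpha = 0.05
for i in range(a):
    for j in range(i + 1, a):
        print(i + 1, j + 1, abs(means[i] - means[j]) > lsd)
```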

  43. Assuming equal sample sizes, the significant ranges are Rp = rα(p, f) Sȳi., p = 2, 3, …, a, where rα(p, f) is the significant studentized range • In total a(a − 1)/2 pairs are tested • Example 3.9 The Newman–Keuls Test • Similar to Duncan’s multiple range test • The critical values: Kp = qα(p, f) Sȳi., p = 2, 3, …, a

  44. 3.5.8 Comparing Treatment Means with a Control • Assume one of the treatments is a control, say treatment a, and the analyst is interested in comparing each of the other a – 1 treatment means with the control. • Test H0: μi = μa v.s. H1: μi ≠ μa, i = 1, 2, …, a – 1 • Dunnett (1964) • Compute the observed differences |ȳi. − ȳa.|, i = 1, 2, …, a – 1 • Reject H0 if |ȳi. − ȳa.| > dα(a − 1, f) √[MSE(1/ni + 1/na)] • Example 3.9 (see the SciPy sketch below)
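Dunnett's procedure is available directly in SciPy as scipy.stats.dunnett (SciPy 1.11 and later); the samples below are hypothetical, with the last group playing the role of the control.

```python
import numpy as np
from scipy import stats

# Hypothetical samples: two treatments and a control group.
trt1 = np.array([9.0, 12.0, 10.0, 8.0])
trt2 = np.array([20.0, 21.0, 23.0, 17.0])
control = np.array([14.0, 15.0, 13.0, 18.0])

res = stats.dunnett(trt1, trt2, control=control)
print(res.statistic, res.pvalue)   # one adjusted comparison per treatment vs. control
```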

  45. 3.7 Determining Sample Size • Determine the number of replicates to run 3.7.1 Operating Characteristic Curves (OC Curves) • OC curve: a plot of the type II error probability β of a statistical test against a parameter that reflects the extent to which the null hypothesis is false (a power-based sketch follows below)
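In place of reading OC curves from charts, the power 1 − β of the fixed-effects F test can be computed from the noncentral F distribution with noncentrality parameter n Σi τi²/σ². The sketch below uses hypothetical planning values for the treatment effects τi and the error standard deviation σ.

```python
import numpy as np
from scipy import stats

a = 4
tau = np.array([-10.0, -5.0, 5.0, 10.0])    # anticipated treatment effects (hypothetical)
sigma = 25.0                                 # anticipated error standard deviation
alpha = 0.05

for n in range(2, 11):
    N = a * n
    nc = n * (tau ** 2).sum() / sigma ** 2              # noncentrality parameter
    f_crit = stats.f.ppf(1 - alpha, a - 1, N - a)
    power = stats.ncf.sf(f_crit, a - 1, N - a, nc)      # 1 - beta
    print(n, round(power, 3))   # choose the smallest n giving adequate power
```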
