
Section VII Comparing means & analysis of variance


Presentation Transcript


  1. Section VII Comparing means & analysis of variance

  2. How to display means - OK in simple situations

  3. Presenting means - ANOVA data One can also add “error bars” to these means. In analysis of variance, these error bars are based on the sample size and the pooled standard deviation, SDe. This SDe is the same residual SDe as in regression.
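
A rough illustration of such a display in Python: a matplotlib sketch of group means with error bars of ± SDe/√n, one common choice. The group labels, means, sample sizes and the pooled SDe value are all made up for the example; SDe here is the pooled residual SD defined on a later slide.

```python
import numpy as np
import matplotlib.pyplot as plt

# Hypothetical group means, a pooled SDe, and per-group sample sizes
groups = ["A", "B", "C", "D"]
means = np.array([23.1, 27.4, 21.8, 30.2])
ns = np.array([8, 8, 8, 8])
sd_e = 4.6                                   # pooled residual SD (see later slides)

err = sd_e / np.sqrt(ns)                     # error bar = SDe / sqrt(n) for each group
x = np.arange(len(groups))
plt.errorbar(x, means, yerr=err, fmt="o", capsize=4)
plt.xticks(x, groups)
plt.ylabel("mean response")
plt.show()
```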

  4. Don’t use bar graphs in complex situations 4

  5. Use a line graph

  6. Comparing means: two groups – t test (review)
Mean differences are “statistically significant” (different beyond chance) relative to their standard error (SEd), a measure of mean variability (“noise”).

t = (Ȳ1 − Ȳ2)/SEd = “signal”/“noise”

Ȳi = mean of group i, SEd = standard error of the mean difference. t is the mean difference in SEd units. As |t| increases, the p value gets smaller. Rule of thumb: p < 0.05 when |t| > 2. The difference is declared significant when

|Ȳ1 − Ȳ2| > tcr SEd ≈ 2 SEd = LSD

tcr SEd ≈ 2 SEd is the critical or least significant difference (LSD). So getting the correct SEd is crucial! SEd is the “yardstick” for significance.

  7. How to compute SEd?
SEd depends on n, SD and the study design (example: factorial or repeated measures). For a single mean, if n = sample size,

SEM = SD/√n = √(SD²/n)

For a mean difference (Ȳ1 − Ȳ2), the SE of the mean difference, SEd, is given by

SEd = √(SD1²/n1 + SD2²/n2)   or   SEd = √(SEM1² + SEM2²)

If the data are paired (before-after), first compute the differences di = Y2i − Y1i for each person. For paired data: SEd = SD(di)/√n.
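
A minimal Python sketch of these formulas. The measurements are hypothetical numbers invented for the example, not data from the slides.

```python
import numpy as np

# Two independent groups (hypothetical values)
y1 = np.array([12.1, 14.3, 11.8, 13.5, 12.9])
y2 = np.array([10.2, 11.7, 10.9, 12.0, 11.1])

sem1 = y1.std(ddof=1) / np.sqrt(len(y1))          # SEM = SD / sqrt(n)
sem2 = y2.std(ddof=1) / np.sqrt(len(y2))
sed_indep = np.sqrt(sem1**2 + sem2**2)            # SEd = sqrt(SEM1^2 + SEM2^2)
t_indep = (y1.mean() - y2.mean()) / sed_indep     # "signal" / "noise"

# Paired (before-after) data: work with the within-person differences
before = np.array([12.1, 14.3, 11.8, 13.5, 12.9])
after  = np.array([13.0, 14.9, 12.1, 14.2, 13.8])
d = after - before
sed_paired = d.std(ddof=1) / np.sqrt(len(d))      # SEd = SD(d) / sqrt(n)
t_paired = d.mean() / sed_paired

print(t_indep, t_paired)
```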

  8. 3 or more groups - analysis of variance (ANOVA), pooled SDs
What if we have many treatment groups, each with its own mean and SD?

Group   Mean   SD     sample size (n)
A       Ȳ1     SD1    n1
B       Ȳ2     SD2    n2
C       Ȳ3     SD3    n3
…       …      …      …
k       Ȳk     SDk    nk

  9. Check variance homogeneity

  10. The pooled SDe

SD²pooled error = SD²e = [(n1−1) SD1² + (n2−1) SD2² + … + (nk−1) SDk²] / [(n1−1) + (n2−1) + … + (nk−1)]

so SDe = √(SD²e)
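
A small Python sketch of this pooled-SD calculation. The group SDs and sample sizes are made up for illustration.

```python
import numpy as np

def pooled_sd(sds, ns):
    """Pooled SDe from per-group SDs and sample sizes (degrees-of-freedom weighted)."""
    sds = np.asarray(sds, dtype=float)
    ns = np.asarray(ns, dtype=float)
    var_e = np.sum((ns - 1) * sds**2) / np.sum(ns - 1)   # SD^2e
    return np.sqrt(var_e)                                # SDe

# Hypothetical summary statistics for k = 3 groups
print(pooled_sd(sds=[4.1, 3.8, 5.0], ns=[10, 12, 9]))
```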

  11. In ANOVA, we use the pooled SDe to compute SEd and to compute “post hoc” (post pooling) t statistics and p values.

SEd = √(SD1²/n1 + SD2²/n2) = SDe √(1/n1 + 1/n2)

SD1 and SD2 are replaced by SDe, a “common yardstick”. If n1 = n2 = n, then SEd = SDe √(2/n) = constant.
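
Continuing the sketch, a hedged example of a post hoc t statistic built from the pooled SDe. All means, SDs and sample sizes are hypothetical.

```python
import numpy as np
from scipy import stats

# Hypothetical group summaries: means, SDs, sample sizes for k = 3 groups
means = np.array([25.0, 21.5, 28.2])
sds   = np.array([4.1, 3.8, 5.0])
ns    = np.array([10, 12, 9])

df_error = int(np.sum(ns - 1))
sd_e = np.sqrt(np.sum((ns - 1) * sds**2) / df_error)     # pooled SDe

# Post hoc comparison of group 0 vs group 1 using the common yardstick SDe
sed = sd_e * np.sqrt(1/ns[0] + 1/ns[1])                  # SEd = SDe * sqrt(1/n1 + 1/n2)
t = (means[0] - means[1]) / sed
p = 2 * stats.t.sf(abs(t), df_error)                     # two-sided p on the error df
print(t, p)
```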

  12. Transformations There are two requirements for the analysis of variance (ANOVA) model. 1. Within any treatment group, the mean should be the middle value. That is, the mean should be about the same as the median. When this is true, the data can usually be reasonably modeled by a Gaussian (“normal”) distribution. 2. The SDs should be similar (variance homogeneity) from group to group. Can plot mean vs median & residual errors to check #1 and mean versus SD to check #2.
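
A short sketch of the two diagnostic plots mentioned above (group mean vs median, and group mean vs SD), using an invented dictionary of group measurements.

```python
import numpy as np
import matplotlib.pyplot as plt

# Hypothetical data: group name -> measurements
groups = {
    "A": np.array([12.0, 15.1, 13.2, 30.5, 14.0]),
    "B": np.array([22.3, 25.8, 21.0, 27.4, 23.9]),
    "C": np.array([41.2, 55.7, 44.0, 48.3, 60.1]),
}

means   = [g.mean() for g in groups.values()]
medians = [np.median(g) for g in groups.values()]
sds     = [g.std(ddof=1) for g in groups.values()]

fig, (ax1, ax2) = plt.subplots(1, 2, figsize=(8, 3))
ax1.scatter(means, medians)            # check #1: the mean should track the median
ax1.set_xlabel("group mean")
ax1.set_ylabel("group median")
ax2.scatter(means, sds)                # check #2: the SD should not grow with the mean
ax2.set_xlabel("group mean")
ax2.set_ylabel("group SD")
plt.tight_layout()
plt.show()
```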

  13. What if it’s not true? Two options: (a) find a transformed scale where it is true; (b) don’t use the usual ANOVA model (use non-constant-variance ANOVA models or nonparametric models). Option (a) is better if possible - more power.

  14. The most common transform is the log transformation. It usually works for:
1. Radioactive count data
2. Titration data (titers), serial dilution data
3. Cell, bacterial, viral growth, CFUs
4. Steroids & hormones (E2, testosterone, …)
5. Power data (decibels, earthquakes)
6. Acidity data (pH), …
7. Cytokines, liver enzymes (bilirubin, …)
In general, the log transform works when a multiplicative phenomenon is turned into an additive one.

  15. Compute stats on the log scale & back-transform the results to the original scale for the final report. Since log(A) − log(B) = log(A/B), differences on the log scale correspond to ratios on the original scale. Remember: 10^mean(log data) = geometric mean < arithmetic mean.
Monotone transformation ladder - try these: Y^2, Y^1.5, Y^1, Y^0.5 = √Y, Y^0 = log(Y), Y^-0.5 = 1/√Y, Y^-1 = 1/Y, Y^-1.5, Y^-2
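
A brief sketch of analyzing on the log scale and back-transforming to a geometric mean and a ratio. The titer-like values are hypothetical.

```python
import numpy as np

# Hypothetical titer data for two groups (multiplicative-looking values)
grp1 = np.array([20., 40., 80., 40., 160.])
grp2 = np.array([10., 20., 20., 40., 80.])

log1, log2 = np.log10(grp1), np.log10(grp2)

geo_mean1 = 10 ** log1.mean()            # geometric mean, smaller than the arithmetic mean
geo_mean2 = 10 ** log2.mean()
diff_log = log1.mean() - log2.mean()     # difference on the log scale ...
ratio = 10 ** diff_log                   # ... back-transforms to a ratio of geometric means

print(geo_mean1, grp1.mean())
print(ratio)
```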

  16. Multiplicity & F tests
Multiple testing can create “false positives”: we incorrectly declare means “significantly” different as an artifact of doing many tests, even if none of the means are truly different. Imagine we have k = 4 groups: A, B, C and D. There are six possible mean comparisons: A vs B, A vs C, A vs D, B vs C, B vs D, C vs D.

  17. If we use p < 0.05 as our “significance” criterion, we have a 5% chance of a “false positive” mistake on any one of the six comparisons, assuming that none of the groups are really different from each other. That is, each comparison has a 95% chance of no false positive when none of the groups really differ. So the chance of a “false positive” in at least one of the six comparisons is 1 − (0.95)^6 = 0.26, or 26%.

  18. To guard against this, we first compute the “overall” F statistic and its p value. The overall F statistic compares all the group means to the overall mean (M):

F = [Σ ni (Ȳi − M)² / (k−1)] / (SDe)² = MSx / MSerror = between-group variance / within-group variance

  = [n1(Ȳ1 − M)² + n2(Ȳ2 − M)² + … + nk(Ȳk − M)²] / [(k−1) (SDe)²]

If the “overall” p > 0.05, we stop. Only if the overall p < 0.05 will the pairwise post hoc (post overall) t tests and p values have no more than an overall 5% chance of a “false positive”.
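
A sketch of the overall F computed from group summaries, following the formula above. The means, SDs and sample sizes are hypothetical, not the slide data.

```python
import numpy as np
from scipy import stats

# Hypothetical summaries for k = 4 groups
means = np.array([25.0, 21.5, 28.2, 24.1])
sds   = np.array([4.1, 3.8, 5.0, 4.4])
ns    = np.array([10, 12, 9, 11])
k = len(means)

grand_mean = np.sum(ns * means) / np.sum(ns)           # overall (weighted) mean M
ms_between = np.sum(ns * (means - grand_mean)**2) / (k - 1)
df_error = int(np.sum(ns - 1))
ms_error = np.sum((ns - 1) * sds**2) / df_error        # (SDe)^2, pooled within-group variance

F = ms_between / ms_error
p = stats.f.sf(F, k - 1, df_error)                     # overall p value
print(F, p)
```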

  19. This criterion was suggested by R.A. Fisher and is called the Fisher LSD (least significant difference) criterion. It is less conservative (has fewer false negatives) than the very conservative Bonferroni criterion. Bonferroni criterion: if making m comparisons, declare significance only if p < 0.05/m. This overall F is the same as the overall F test in regression for testing β1 = β2 = … = βk = 0 (all regression coefficients = 0).
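
A small sketch of the multiplicity arithmetic from the previous slides and the Bonferroni adjustment; α and k here are just example values.

```python
from math import comb

alpha = 0.05
k = 4                          # number of groups
m = comb(k, 2)                 # number of pairwise comparisons: 6 for k = 4

fwer = 1 - (1 - alpha)**m      # chance of >= 1 false positive if nothing is real (~0.26)
bonferroni_cut = alpha / m     # Bonferroni: declare significant only if p < alpha/m

print(m, round(fwer, 3), bonferroni_cut)
```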

  20. Ex: Clond - time to fall off rod

  21. One way analysis of variance: time to fall data, k = 4 groups, df = k − 1

  22. Means & SDs in sec (JMP): no-model vs ANOVA-model summaries, pooled SDe = 10.986 sec. Why are the SEMs not the same? (Under the ANOVA model each group’s SEM is computed from the pooled SDe, SDe/√ni, rather than from that group’s own SD.)

  23. Mean comparisons - post hoc t. Means not connected by the same letter are significantly different.

  24. Multiple comparisons - Tukey’s q. As an alternative to the Fisher LSD, for pairwise comparisons of k means, Tukey computed percentiles of q = (largest mean − smallest mean)/SEd under the null hypothesis that all means are equal. If mean difference > q SEd is used as the significance criterion, the type I error is ≤ α for all comparisons. q > t > Z. One looks up “t” on the q table instead of the t table.

  25. t vs q for α = 0.05, large n

num means = k    t      q*
2              1.96    1.96
3              1.96    2.34
4              1.96    2.59
5              1.96    2.73
6              1.96    2.85

* Some tables give q for SE, not SEd, so one must multiply q by √2.
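
A hedged check of this table, assuming SciPy provides the studentized range distribution (scipy.stats.studentized_range). The √2 rescaling in the footnote converts the standard q (defined against SE) to the SEd scale used here; the printed values should come out close to the tabled t and q*.

```python
import numpy as np
from scipy.stats import norm, studentized_range

alpha, df = 0.05, 10_000          # very large df approximates the "large n" rows

t_crit = norm.ppf(1 - alpha / 2)  # ~1.96, the usual two-sided t/Z cutoff
for k in [2, 3, 4, 5, 6]:
    q_se = studentized_range.ppf(1 - alpha, k, df)  # standard q, for SE = SDe/sqrt(n)
    q_star = q_se / np.sqrt(2)                      # rescaled for SEd, as in the table
    print(k, round(t_crit, 2), round(q_star, 2))
```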

  26. Post hoc: t vs Tukey q, k=4

  27. Mean comparisons - Tukey. Means not connected by the same letter are significantly different.

  28. One way analysis of variance: comparing means across groups - ANOVA vs regression. Example: comparing mean birth weight by race.

  29. ANOVA via regression. Coding categorical variables - dummy vs effect coding. Below, we create two new variables, “af_am” and “other”, from the variable “Race”. Dummy coding - “white” is the referent category:

Race        af_am   other
White (1)     0       0
Black (2)     1       0
Other (3)     0       1

  30. Dummy (0,1) coded variables are usually correlated with each other even in balanced designs - not orthogonal. However, they are easier to interpret. Effect coding, “white” is the referent category:

Race        af_am   other
White (1)    -1      -1
Black (2)     1       0
Other (3)     0       1
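
A short sketch of the two coding schemes in Python. The race column is hypothetical; the column names follow the slides.

```python
import pandas as pd

race = pd.Series(["white", "black", "other", "white", "other"], name="race")

# Dummy (0,1) coding, "white" as the referent category
dummy = pd.DataFrame({
    "af_am": (race == "black").astype(int),
    "other": (race == "other").astype(int),
})

# Effect (-1, 0, 1) coding, "white" as the referent category
effect = dummy.copy()
effect.loc[race == "white", ["af_am", "other"]] = -1

print(pd.concat([race, dummy.add_prefix("d_"), effect.add_prefix("e_")], axis=1))
```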

  31. In balanced designs, effect coded (-1, 0, 1) variables have zero correlation = they are orthogonal. In balanced designs, effect coded variables have sum and mean zero and cross products of zero. Under effect coding, cell means correspond to Xi = -1 or 1 and marginal means correspond to Xi=0.

  32. ANOVA via regression (dummy vars), birth weight

Overall analysis of variance table:
Source   DF   Sum of Squares   Mean Square   F Value   p value
Model      2        5070608        2535304      4.97    0.0079
Error    186       94846445         509927
Total    188       99917053

Root MSE = SDe = 714.09 gm, R-Square = 0.0507, Dependent Mean = 2944.656

  33. ANOVA via regression - dummy coding

Variable    df   regr coef      SE       t      p value
Intercept    1    3103.74      72.88    42.59    <.0001
af_am        1    -384.05     157.87    -2.43    0.0159
other        1    -299.72     113.68    -2.64    0.0091

Birth wt = 3104 − 384 af_am − 300 other + error
With dummy coding, the regression coefficients are the mean differences from the referent group (white in this example).

  34. ANOVA via regression (cont.) - effect coding for race

Overall analysis of variance table:
Source   DF   Sum of Squares   Mean Square   F Value   p value
Model      2        5070608        2535304      4.97    0.0079
Error    186       94846445         509927
Total    188       99917053

Root MSE = 714.09, R-Square = 0.0507, Dependent Mean = 2944.656

  35. ANOVA via regression - effect coding

Variable    df   regr coef      SE       t      p value
Intercept    1    2875.82      60.13    47.83    <.0001
af_am        1    -156.12     100.76    -1.55    0.1230
other        1     -71.80      78.43    -0.92    0.3612

Birth wt = 2876 − 156 af_am − 72 other + error
With effect coding, the 2876 is the mean of the race means, the unweighted overall mean. The regression coefficients are the deviations of each group from this overall mean.
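
A hedged sketch of fitting the same one-way ANOVA by regression under both codings, assuming statsmodels’ formula interface. The data frame df, with columns bwt and race, is simulated only so the code runs; it is not the birth-weight data from the slides.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

# Hypothetical data: birth weight (g) by race
rng = np.random.default_rng(0)
race = rng.choice(["white", "black", "other"], size=200)
bwt = (3000
       + np.where(race == "black", -300, np.where(race == "other", -250, 0))
       + rng.normal(0, 700, size=200))
df = pd.DataFrame({"bwt": bwt, "race": race})

# Dummy (treatment) coding: coefficients = mean differences from the referent group
dummy_fit = smf.ols("bwt ~ C(race, Treatment(reference='white'))", data=df).fit()

# Effect (sum) coding: intercept = unweighted mean of the group means,
# coefficients = group deviations from that overall mean
effect_fit = smf.ols("bwt ~ C(race, Sum)", data=df).fit()

# Both codings give the same overall F; only the coefficients differ
print(dummy_fit.fvalue, effect_fit.fvalue)
print(dummy_fit.params, effect_fit.params, sep="\n")
```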

  36. Multiway ANOVA

  37. Balanced designs - ANOVA example: Brain Weight data, n = 7 x 4 = 28, nc = 7 obs/cell

  38. Mean brain weights (gms) in males and females with and without dementia. A balanced* 2 x 2 (ANOVA) design, nc = 7 obs per cell, n = 7 x 4 = 28 obs total. [Table of cell means and marginal means.]

  39. Terminology – cell means, marginal means

  40. Difference in marginal sex means (Male − Female): 1327.29 − 1210.79 = 116.50; 116.50/2 = 58.25
Difference in marginal dementia means (Yes − No): 1261.43 − 1276.64 = −15.21; −15.21/2 = −7.61
Difference in cell mean differences (the interaction):
(1321.14 − 1333.43) − (1201.71 − 1219.86) = 5.86
(1321.14 − 1201.71) − (1333.43 − 1219.86) = 5.86
note: 5.86/(2 x 2) = 1.46
* balanced = same sample size (nc) in every cell

  41. Brain weight via ANOVA - effect coding (−1, 1)
MODEL: brain wt = sex dementia sex*dementia

Class      Levels   Values
sex           2     -1 1
dementia      2     -1 1
28 observations

Source    DF   Sum of Squares   Mean Square      F Value   Pr > F
Model      3          96686       32228.70        451.05   <.0001
Error     24           1715          71.45 (= SD²e)
C Total   27          98402

R-Square   Coeff Var   Root MSE        Mean brain wt
0.9826     0.666092    8.453 (= SDe)   1269.04

Source          DF   Type III SS   Mean Square   F Value   Pr > F (= p value)
sex              1     95005.75      95005.75    1329.64   <.0001
dementia         1      1620.32       1620.32      22.68   <.0001
sex*dementia     1        60.04         60.04       0.84   0.3685

  42. Brain weight - via regression, effect coding

Source             DF   Sum of Squares   Mean Square   F Value   Pr > F
Model               3          96686       32228.70     451.05   <.0001
Error              24         1714.9          71.45
Corrected Total    27          98401

R-Square = 0.9826, Root MSE = 8.453, Mean = 1269.04

Variable     DF   Parameter Estimate   Standard Error   t Value   Pr > |t|
Intercept     1       1269.03571          1.59746        794.41    <.0001
sex           1         58.25000          1.59746         36.46    <.0001
dementia      1         -7.60714          1.59746         -4.76    <.0001
sexdem        1          1.46429          1.59746          0.92    0.3685

Brain wt = 1269 + 58.3 sex − 7.6 dementia + 1.46 sex*dementia

  43. Balanced designs and effect coding

Type of person          dementia   gender   dementia*gender
no dementia - Female       -1        -1            1
dementia - Female           1        -1           -1
no dementia - Male         -1         1           -1
dementia - Male             1         1            1
total                       0         0            0

Correlations among X1 = dementia, X2 = gender, X3 = dementia*gender.
Effect coding used with balanced data creates orthogonality:

                   Dementia   Gender   Dementia*gender
Dementia              1.0       0.0         0.0
Gender                0.0       1.0         0.0
Dementia*gender       0.0       0.0         1.0

  44. Relation between sums of squares (SS) and regression coefficients: SS = n b²

Factor             regr coefficient (b)   n b² = Sum of squares (n = 28)
Dementia                 -7.60714         28 (7.60714)²  = 1620.32
Gender                   58.25            28 (58.25)²    = 95005.75
Dementia*Gender           1.46429         28 (1.46429)²  = 60.036

The SS are functions of the squared regression coefficient and n. Dementia, Gender and the Dementia x Gender interaction are orthogonal. The statistical significance of each factor does not depend on whether the other factors are in the model, which makes evaluating each factor easy. Orthogonality holds if: 1. effect coding is used in the regression and 2. the design is balanced.
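
A small sketch verifying the SS = n·b² relation on a balanced, effect-coded 2 x 2 design. The data are simulated with made-up effect sizes, not the brain-weight data.

```python
import numpy as np

rng = np.random.default_rng(1)
n_per_cell = 7

# Balanced 2 x 2 design with effect coding (-1, 1)
sex      = np.repeat([-1, -1, 1, 1], n_per_cell)
dementia = np.repeat([-1, 1, -1, 1], n_per_cell)
inter    = sex * dementia
y = 1269 + 58 * sex - 7.6 * dementia + rng.normal(0, 8, size=sex.size)

X = np.column_stack([np.ones_like(sex), sex, dementia, inter]).astype(float)
b, *_ = np.linalg.lstsq(X, y, rcond=None)          # least-squares regression coefficients

n = sex.size                                       # 28 observations
for name, coef in zip(["sex", "dementia", "sex*dementia"], b[1:]):
    ss = n * coef**2                               # SS = n * b^2 under effect coding + balance
    print(name, round(coef, 3), round(ss, 2))
```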

  45. ANOVA tables as a compact regression. In general, if factor A has “a” levels (and “a” means), in a regression it must be represented by a−1 dummy or effect coded variables with a−1 corresponding regression coefficients. In the ANOVA table for factor A, the sum of squares for A (SSa) is made out of the sum of squares of the a−1 regression coefficients; DF = a−1.

  46. Ex: a = 4, a−1 = 3, three dummy vars: SSa = constant × (b1² + b2² + b3²). So, if factor A is NOT significant in the ANOVA table, we can conclude that β1 = β2 = … = βa-1 = 0 without looking at each one individually, a major simplification. If factor B has “b” levels, there are a x b possible combinations (cells) of A and B, and (a−1) + (b−1) + (a−1)(b−1) = ab − 1 dummy (or effect coded) variables / regression coefficients for A, B and the A x B interaction, respectively. The squared effects of A, B and A x B are represented in a “condensed” form in the ANOVA table.

  47. ANOVA table - summarizes the ab − 1 effects in three lines

Factor   df            Sum of Squares (SS)   Mean square = SS/df
A        a−1           SSa                   SSa/(a−1)
B        b−1           SSb                   SSb/(b−1)
AB       (a−1)(b−1)    SSab                  SSab/[(a−1)(b−1)]
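
A hedged sketch of producing such a table from a regression fit, assuming statsmodels’ anova_lm and a simulated balanced two-factor data set (factor names A and B and all numbers are invented for the example).

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf
from statsmodels.stats.anova import anova_lm

rng = np.random.default_rng(2)
# Simulated balanced design: factor A with 3 levels, factor B with 2 levels, 10 obs/cell
df = pd.DataFrame([(a, b) for a in "xyz" for b in "uv" for _ in range(10)],
                  columns=["A", "B"])
df["y"] = rng.normal(10, 2, size=len(df)) + (df["A"] == "x") * 1.5 + (df["B"] == "u") * 0.8

# One line per factor and interaction: df, SS, F, p
fit = smf.ols("y ~ C(A, Sum) * C(B, Sum)", data=df).fit()
print(anova_lm(fit, typ=2))
```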

  48. When is the ANOVA table useful? Dependent variable: depression score

Source             DF      SS       Mean Square   F Value   overall p value
Model             199    3387.41      17.02         4.42     <.0001
Error             400    1540.17       3.85
Corrected Total   599    4927.58

root MSE = 1.962, R² = 0.687

Source                  DF      SS       Mean Square   F Value   p value
gender                   1    778.084     778.084      202.08    <.0001
race                     3    229.689      76.563       19.88    <.0001
educ                     4    104.838      26.209        6.81    <.0001
occ                      4   1531.371     382.843       99.43    <.0001
gender*race              3      1.879       0.626        0.16    0.9215
gender*educ              4      3.575       0.894        0.23    0.9203
gender*occ               4      8.907       2.227        0.58    0.6785
race*educ               12     69.064       5.755        1.49    0.1230
race*occ                12     62.825       5.235        1.36    0.1826
educ*occ                16     60.568       3.786        0.98    0.4743
gender*race*educ        12     77.742       6.479        1.68    0.0682
gender*race*occ         12     59.705       4.975        1.29    0.2202
gender*educ*occ         16    100.920       6.308        1.64    0.0565
race*educ*occ           48    206.880       4.310        1.12    0.2792
gender*race*educ*occ    48     91.368       1.903        0.49    0.9982

  49. 8 graphs of 200 depression means: Y = depression score, X = occ (occupation) and X = educ; a separate graph for each gender & race combination. [Panels: Males and Females, by race W, B, H, A.]

  50. One of the 8 graphs. Note the parallelism, implying no interaction.
