
Problems in Data Analyses


Presentation Transcript


  1. Problems in Data Analyses

  2. General case in data analysis • Assumptions distortion • Missing data

  3. General Assumptions of ANOVA • The error terms are random and normally distributed: populations (for each condition) are normally distributed • The variances of the different populations are homogeneous (homoscedasticity): populations (for each condition) have equal variances • Variances and means of different populations are not correlated (independent) • The main effects are additive

  4. CRD ANOVA F-Test Assumptions • Randomness & Normality • Homogeneity of Variance • Independence of Errors • Additivity of Main Effects

  5. Randomized Block F Test Assumptions 1. Normality Populations are normally distributed 2. Homogeneity of Variance Populations have equal variances 3. Independence of Errors Independent random samples are drawn 4. The main effects are additive 5. No Interaction Between Blocks & Treatments

  6. Random, independent and normal distribution • The assumption of normality does not affect the validity of the analysis of variance too seriously • There are tests for normality, but it is rather pointless to apply them unless the number of samples we are dealing with is fairly large • Independence implies that there is no relation between the size of the error terms and the experimental grouping to which they belong • It is important to avoid having all plots receiving a given treatment occupying adjacent positions in the field • The best insurance against seriously violating the first assumption of the analysis of variance is to carry out the randomization appropriate to the particular design

  7. Normality • Reason: • ANOVA is an Analysis of Variance • An analysis of two variances, more specifically, the ratio of two variances • Statistical inference is based on the F distribution, which is given by the ratio of two chi-squared distributions • No surprise that each variance in the ANOVA ratio comes from a parent normal distribution • Calculations can always be derived no matter what the distribution is; the calculations are algebraic properties of separating sums of squares. Normality is only needed for statistical inference.

  8. Diagnosis: Normality • The points on the normality plot must more or less follow a line to claim that the residuals are normally distributed. • There are statistical tests to verify this formally. • The ANOVA method we learn here is not sensitive to the normality assumption. That is, a mild departure from the normal distribution will not change our conclusions much. Normality plot: normal scores vs. residuals

  9. Normality Tests • A wide variety of tests can be performed to test whether the data follow a normal distribution. • Mardia (1980) provides an extensive list for both the univariate and multivariate cases, categorized into two types: • Tests based on properties of the normal distribution, more specifically, the first four moments of the normal distribution • Shapiro-Wilk’s W (compares the ratio of the standard deviation to the variance, multiplied by a constant, to one) • Lilliefors-Kolmogorov-Smirnov test • Graphical methods based on residual error (residual plots) • Goodness-of-fit tests: • Kolmogorov-Smirnov D • Cramer-von Mises W2 • Anderson-Darling A2

  10. Checking for Normality Reminder: Normality of the RESIDUALS is assumed. The original data are assumed normal also, but each group may have a different mean if Ha is true. Practice is to first fit the model, THEN output the residuals, then test for normality of the residuals. This APPROACH is always correct. TOOLS • Histogram and/or box-plot of all residuals (eij). • Normal probability (Q-Q) plot. • Formal test for normality.

  11. Histogram of Residuals

  * Fit the one-way model and save residuals and predicted values;
  proc glm data=stress;
    class sand;
    model resistance = sand / solution;
    output out=resid r=r_resis p=p_resis;
    title1 'Compression resistance in concrete beams as';
    title2 'a function of percent sand in the mix';
  run;

  * Plot the residuals against a fitted normal distribution;
  proc capability data=resid;
    histogram r_resis / normal;
    ppplot r_resis / normal square;
  run;

  12. Formal Tests of Normality • Kolmogorov-Smirnov test; Anderson-Darling test (both based on the empirical CDF). • Shapiro-Wilk’s test; Ryan-Joiner test (both are correlation based tests applicable for n < 50). • D’Agostino’s test (n>=50). All quite conservative – they fail to reject the null hypothesis of normality more often than they should.
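
  A hedged sketch of how these formal tests might be requested in SAS on the residuals saved in the earlier slide (the dataset resid and variable r_resis are the hypothetical names used there):

  * The NORMAL option prints Shapiro-Wilk, Kolmogorov-Smirnov,
    Cramer-von Mises and Anderson-Darling tests for the residuals;
  proc univariate data=resid normal;
    var r_resis;
    qqplot r_resis / normal(mu=est sigma=est);  * normal Q-Q plot with estimated mean and sd;
  run;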

  13. Shapiro-Wilk’s W test Let e1, e2, …, en represent the data ranked from smallest to largest. H0: The population has a normal distribution. HA: The population does not have a normal distribution. The test statistic is W = [Σ ai(e(n+1−i) − ei)]² / Σ(ei − ē)², where the numerator sum runs over i = 1, …, n/2 if n is even and i = 1, …, (n−1)/2 if n is odd. Coefficients ai come from a table. R.R.: Reject H0 if W < W0.05. Critical values of Wα come from a table.

  14. Shapiro-Wilk Coefficients

  15. Shapiro-Wilk Coefficients

  16. Shapiro-Wilk W Table

  17. D’Agostino’s Test e1, e2, …, en represent the data ranked from smallest to largest. H0: The population has a normal distribution. Ha: The population does not have a normal distribution. Two-sided test: reject H0 if the observed Y statistic falls below Y0.025 or above Y0.975, where Y0.025 and Y0.975 come from a table of percentiles of the Y statistic.

  18. The Consequences of Non-Normality • F-test is very robust against non-normal data, especially in a fixed-effects model • Large sample size will approximate normality by Central Limit Theorem (recommended sample size > 50) • Simulations have shown unequal sample sizes between treatment groups magnify any departure from normality • A large deviation from normality leads to hypothesis test conclusions that are too liberal and a decrease in power and efficiency

  19. Remedial Measures for Non-Normality • Data transformation • Be aware: transformations may lead to a fundamental change in the relationship between the dependent and the independent variable and are not always recommended. • Don’t use the standard F-test. • Modified F-tests • Adjust the degrees of freedom • Rank F-test (capitalizes on the F-test’s robustness; see the sketch below) • Randomization test on the F-ratio • Other non-parametric tests if the distribution is unknown • Make up our own test using a likelihood ratio if the distribution is known
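
  A hedged sketch of the rank-based route in SAS, again using the hypothetical stress/resistance/sand names from the earlier slide: rank the response and run the usual ANOVA on the ranks, or request the Kruskal-Wallis test directly.

  * Rank-transform the response, then run the usual F-test on the ranks;
  proc rank data=stress out=ranked;
    var resistance;
    ranks rank_resis;
  run;

  proc glm data=ranked;
    class sand;
    model rank_resis = sand;
  run;

  * Or use the Kruskal-Wallis test (non-parametric) directly;
  proc npar1way data=stress wilcoxon;
    class sand;
    var resistance;
  run;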

  20. Homogeneity of Variances • Eisenhart (1947) describes the problem of unequal variances as follows • The ANOVA model is based on the ratio of the mean squares of the factors to the residual mean square • The residual mean square is the unbiased estimator of σ2, the variance of a single observation • The between-treatment mean square takes into account not only the differences between observations, σ2, just like the residual mean square, but also the variance between treatments • If there is non-constant variance among treatments, the residual mean square can be replaced with some overall variance, σa2, and a treatment variance, σt2, which is some weighted version of σa2 • The “neatness” of ANOVA is lost

  21. Homogeneity of Variances • The overall F-test is very robust against heterogeneity of variances, especially with fixed effects and equal sample sizes. • Tests for treatment differences, like t-tests and contrasts, are severely affected, resulting in inferences that may be too liberal or too conservative • Unequal variances can have a marked effect on the level of the test, especially if smaller sample sizes are associated with groups having larger variances • Unequal variances will lead to biased conclusions

  22. Ways to solve the problem of heterogeneous variances • The data can be separated into groups such that the variances within each group are homogeneous • A more advanced statistical test can be used rather than the analysis of variance • Transform the data in such a way that the data will be homogeneous

  23. Tests for Homogeneity of Variances (a SAS sketch of the built-in versions follows below) • Bartlett’s Test • Levene’s Test Computes a one-way ANOVA on the absolute value (or sometimes the square) of the residuals, |yij – ŷi|, with t − 1 and N − t degrees of freedom Considered robust to departures from normality, but too conservative • Brown-Forsythe Test A slight modification of Levene’s test, where the median is substituted for the mean (Kuehl (2000) refers to it as the Levene (med) test) • The Fmax Test (Hartley Test) Ratio of the largest variance of the treatment groups to the smallest, compared to a critical value table
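
  A hedged sketch of how several of these tests can be requested in SAS through the HOVTEST option of the MEANS statement in PROC GLM (the stress/resistance/sand names are the hypothetical ones from the earlier slide):

  proc glm data=stress;
    class sand;
    model resistance = sand;
    means sand / hovtest=levene(type=abs);  * Levene's test on absolute residuals;
    means sand / hovtest=bf;                * Brown-Forsythe (median-based) variant;
    means sand / hovtest=bartlett;          * Bartlett's test (assumes normality);
  run;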

  24. Bartlett’s Test • Allows unequal replication, but requires normality. • The test statistic is C = (N − t)·ln(s̄2) − Σ(ni − 1)·ln(si2), where s̄2 is the pooled variance and N = Σni. • If C > χ2(t−1),α, then apply the correction term CF = 1 + [Σ 1/(ni − 1) − 1/(N − t)] / [3(t − 1)]. More work, but better power. • Reject H0 if C/CF > χ2(t−1),α.

  25. Levene’s Test • More work, but a powerful result. • Compute zij = |yij − ỹi|, where ỹi is the sample median of the i-th group. • Let df1 = t − 1 and df2 = nT − t. • Reject H0 if the F statistic computed from the zij exceeds Fα,df1,df2. • Essentially an ANOVA on the zij (a hand-computed sketch follows below).
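
  As a hedged sketch, the median-based version of this test can be computed by hand in SAS roughly as follows, assuming the hypothetical stress/resistance/sand names used earlier:

  * Get the sample median of each treatment group;
  proc means data=stress noprint nway;
    class sand;
    var resistance;
    output out=med median=med_resis;
  run;

  proc sort data=stress; by sand; run;
  proc sort data=med; by sand; run;

  * Compute the absolute deviation from the group median;
  data levene;
    merge stress med(keep=sand med_resis);
    by sand;
    z = abs(resistance - med_resis);
  run;

  * One-way ANOVA on the z values: a significant F suggests heterogeneous variances;
  proc glm data=levene;
    class sand;
    model z = sand;
  run;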

  26. Hartley’s Test • A logical extension of the F test for t = 2. Requires equal replication, r, among groups. Requires normality. • Compute Fmax = s2max / s2min and reject H0 if Fmax exceeds the critical value Fα,t,r−1 from Hartley’s Fmax table. • Tabachnick and Fidell (2001) use the Fmax ratio more as a rule of thumb than with a table of critical values: variances are considered acceptably homogeneous if • the Fmax ratio is no greater than 10, and • sample sizes of the groups are approximately equal (ratio of smallest to largest is no greater than 4)

  27. Tests for Homogeneity of Variances • More importantly: VARIANCE TESTS ARE ONLY FOR ONE-WAY ANOVA WARNING: Homogeneity of variance testing is only available for un-weighted one-way models.

  28. Tests for Homogeneity of Variances (Randomized Complete Block Design and/or Factorial Design) • In a CRD, the variance of each treatment group is checked for homogeneity • In a factorial/RCBD, each cell’s variance should be checked: H0: σ2ij = σ2i’j’ for all i, j, i’, j’ where i ≠ i’, j ≠ j’

  29. Tests for Homogeneity of Variances(Latin-Squares/Split-Plot Design) • If there is only one score per cell, homogeneity of variances needs to be shown for the marginals of each column and each row • Each factor for a latin-square • Whole plots and subplots for split-plot • If there are repetitions, homogeneity is to be shown within each cell like RCBD

  30. Remedial Measures for Heterogeneous Variances • Studies that do not involve repeated measures • If normality is violated, the data transformation necessary to normalize data will usually stabilize variances as well • If variances are still not homogeneous, non-ANOVA tests might be an option

  31. Transformations to Achieve Homo-scedasticity • What can we do if the homo-scedasticity (equal variances) assumption is rejected? • Declare that the Anova model is not an adequate model for the data. Look for alternative models. • Try to “cheat” by forcing the data to be homo-scedastic through a transformation of the response variable Y. (Variance Stabilizing Transformations.)

  32. Independence • It is a special case and the most common cause of heterogeneity of variance • Independent observations • No correlation between error terms • No correlation between independent variables and errors • Positively correlated data inflates the standard error • The estimation of the treatment means is more accurate than the standard error shows.

  33. Independence Tests • If there is some notion of how the data were collected, a check can be made for autocorrelation. • The Durbin-Watson statistic looks at the correlation of each value and the value before it (see the sketch below) • Data must be sorted in the correct order for meaningful results • For example, samples collected over time should be ordered by collection time if the results are suspected to depend on time
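
  A hedged sketch of requesting the Durbin-Watson statistic in SAS with the DW option of PROC REG’s MODEL statement; the ordering variable obs_time is a hypothetical name added for illustration alongside the stress/resistance names used earlier:

  * Sort into the suspected correlation order (e.g., time of collection);
  proc sort data=stress;
    by obs_time;
  run;

  * The DW option prints the Durbin-Watson statistic for the model residuals;
  proc reg data=stress;
    model resistance = obs_time / dw;
  run;
  quit;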

  34. Independence • A positive correlation between means and variances is often encountered when there is a wide range of sample means • Data that often show a relation between variances and means are data based on counts and data consisting of proportions or percentages • Transforming the data can frequently solve the problem

  35. Remedial Measures for Dependent Data • The first defense against dependent data is proper study design and randomization • Designs could be implemented that take correlation into account, e.g., a crossover design • Look for environmental factors unaccounted for • Add covariates to the model if they are causing correlation, e.g., quantified learning curves • If no underlying factors can be found to account for the autocorrelation • Use a different model, e.g., a random effects model (see the sketch below) • Transform the independent variables using the correlation coefficient
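
  A hedged sketch of one way to model the correlation directly in SAS with PROC MIXED, fitting an AR(1) correlation structure among observations within an experimental unit; the variable plot is a hypothetical identifier added for illustration alongside the stress/resistance/sand names used earlier:

  * Model the treatment effect while allowing first-order autocorrelation
    among the ordered observations within each plot;
  proc mixed data=stress;
    class sand plot;
    model resistance = sand;
    repeated / subject=plot type=ar(1);
  run;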

  36. The main effects are additive • For each design, there is a mathematical model called a linear additive model. • It means that the value of an experimental unit is made up of the general mean plus the main effects plus an error term (e.g., for an RCBD, yij = μ + τi + βj + εij) • When the effects are not additive, there may be multiplicative treatment effects • In the case of multiplicative treatment effects, there are again transformations that will change the data to fit the additive model

  37. Data Transformation • There are two ways in which the ANOVA assumptions can be violated: 1. The data may consist of measurements on an ordinal or a nominal scale 2. The data may not satisfy at least one of the four requirements • Two options are available to analyze such data: 1. Use non-parametric data analysis 2. Transform the data before analysis

  38. Square Root Transformation • It is used when we are dealing with counts of rare events • The data tend to follow a Poisson distribution • If there are counts less than 10, it is better to add 0.5 to each value before taking the square root

  39. Square Root Transformation Response is positive and continuous. This transformation works when we notice the variance changes as a linear function of the mean: use √(Y + k), k > 0. • Useful for count data (Poisson distributed). • For small values of Y, use Y + 0.5. Typical use: Counts of items when counts are between 0 and 10.

  40. Logarithmic Transformation • It is used when the standard deviations of the samples are roughly proportional to the means • There is evidence of multiplicative rather than additive effects • Data with zero or negative values cannot be log-transformed; it is suggested to add 1 before the transformation

  41. Logarithmic Transformation Response is positive and continuous. This transformation tends to work when the variance is a linear function of the square of the mean: use log(Y + k), k > 0. • Replace Y by Y + 1 if zeros occur. • Useful if effects are multiplicative (later). • Useful if there is considerable heterogeneity in the data. Typical use: 1. Growth over time. 2. Concentrations. 3. Counts when counts are greater than 10.

  42. Arcsine (angular) Transformation • It is used when we are dealing with counts expressed as percentages or proportions of the total sample • Such data generally have a binomial distribution • Such data normally show typical characteristics in which the variances are related to the means

  43. Arcsine Square Root Response is a proportion. With proportions, the variance is a linear function of the mean times (1 − mean), where the sample mean is the expected proportion. • Y is a proportion (decimal between 0 and 1). • Zero counts should be replaced by 1/4, and N by N − 1/4, before converting to percentages. Typical use: 1. Proportion of seeds germinating. 2. Proportion responding.

  44. Reciprocal Transformation Response is positive and continuous. This transformation works when the variance is a linear function of the fourth power of the mean. • Use Y + 1 if zeros occur. • Useful if the reciprocal of the original scale has meaning. Typical use: Survival time.
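
  A hedged sketch of applying the four transformations above in a SAS data step before re-running the ANOVA; the dataset and variable names (stress, count_y, prop_y, time_y) are hypothetical:

  data transformed;
    set stress;
    sqrt_y   = sqrt(count_y + 0.5);   * square root, with +0.5 for small counts;
    log_y    = log(count_y + 1);      * natural log, with +1 to handle zeros;
    arcsin_y = arsin(sqrt(prop_y));   * arcsine square root for proportions in [0,1];
    recip_y  = 1 / (time_y + 1);      * reciprocal, with +1 to handle zeros;
  run;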

  45. Box-Cox Transformations (advanced) • The suggested transformation is Y(λ) = (Y^λ − 1) / (λ·GM^(λ−1)) for λ ≠ 0, and Y(λ) = GM·ln(Y) for λ = 0, where GM is the geometric mean of the original data. • The exponent λ is unknown; hence the model can be viewed as having an additional parameter which must be estimated (choose the value of λ that minimizes the residual sum of squares).
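
  A hedged sketch of estimating λ in SAS with the Box-Cox facility of PROC TRANSREG, again using the hypothetical stress/resistance/sand names:

  * Search a grid of candidate lambda values and report the best one;
  proc transreg data=stress;
    model boxcox(resistance / lambda=-2 to 2 by 0.25) = class(sand);
  run;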

  46. General case in data analysis • Assumptions distortion • Missing data

  47. Missing data Observations that were intended to be made but were not. Reasons for missing data: • An animal may die • An experimental plot may be flooded out • A worker may be ill and not turn up on the job • A jar of jelly may be dropped on the floor • The recorded data may be lost Since most experiments are designed with at least some degree of balance/symmetry, any missing observations will destroy that balance

  48. Missing data • In the presence of missing data, the research goal remains making inferences that apply to the population targeted by the complete sample, i.e. the goal remains what it would have been had we seen the complete data. • However, both making inferences and performing the analysis are now more complex. • It is necessary to make assumptions in order to draw inferences, and then to use an appropriate computational approach for the analysis • Consider the causes and pattern of the missing data in order to make appropriate changes in the planned analysis of the data

  49. Missing data • Avoid adopting computationally simple solutions (such as just analyzing complete data or carrying forward the last observation in a longitudinal study), which generally lead to misleading inferences. • In a one-factor experiment, the data analysis can be executed with a good estimated value for the missing observation, but a factorial experiment theoretically cannot be analyzed this way • In a CRD one-factor experiment with missing data, the data can be analyzed with different replication numbers per treatment • In an RCBD one-factor experiment, if 1–2 complete blocks or treatments are missing but at least 2 complete blocks remain, the data analysis can simply proceed
