
Analysis of Differences


Presentation Transcript


  1. Concept Map For Statistics as taught in IS271 (a work in progress) (Rashmi Sinha)
     Type of data: interval
     • Analysis of Relationships: Correlation (Pearson); one predictor → Regression; multiple predictors → Multiple Regression
     • Analysis of Differences:
       Between two groups: independent groups → Independent Samples t-test; dependent groups → Repeated Measures t-test
       Between multiple groups: independent groups → Independent Samples ANOVA; dependent groups → Repeated Measures ANOVA
     Type of data: nominal / ordinal
     • Correlation: Spearman
     • Some kinds of regression: Ordinal Regression
     • Frequency: Chi-Square

  2. Analysis of Variance or F test ANOVA is a technique for using differences between sample means to draw inferences about the presence or absence of differences between population means. • The logic • Calculations in SPSS • Magnitude of effect: eta squared, omega squared
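As a sketch of what the SPSS calculation is doing, here is the same one-way ANOVA in Python with SciPy; the three samples are made-up illustration data, not from the course:

    from scipy import stats

    # Hypothetical scores for three independent groups
    group1 = [3, 4, 5, 4, 3]
    group2 = [5, 6, 6, 7, 5]
    group3 = [2, 3, 2, 4, 3]

    # f_oneway tests H0: all population means are equal
    f_stat, p_value = stats.f_oneway(group1, group2, group3)
    print(f"F = {f_stat:.2f}, p = {p_value:.4f}")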

  3. Assumptions of ANOVA • Observations are normally distributed within each population • Population variances are equal (homogeneity of variance, or homoscedasticity) • Observations are independent
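The first two assumptions can be checked before running the ANOVA. A minimal sketch with SciPy, reusing the hypothetical groups above (Shapiro-Wilk for normality within each group, Levene for equal variances):

    from scipy import stats

    groups = [[3, 4, 5, 4, 3], [5, 6, 6, 7, 5], [2, 3, 2, 4, 3]]  # hypothetical data

    # Shapiro-Wilk: H0 is that the sample comes from a normal population
    for i, g in enumerate(groups, start=1):
        stat, p = stats.shapiro(g)
        print(f"group {i}: Shapiro-Wilk p = {p:.3f}")

    # Levene: H0 is that all population variances are equal
    stat, p = stats.levene(*groups)
    print(f"Levene p = {p:.3f}")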

  4. Assumptions--cont. • Analysis of variance is generally robust to violations of the first two assumptions • A robust test is one that is not greatly affected by violations of its assumptions.

  5. Logic of Analysis of Variance • Null hypothesis (H0): population means from the different conditions are equal • μ1 = μ2 = μ3 = μ4 • Alternative hypothesis (H1): not all population means are equal.

  6. Let's visualize the total amount of variance in an experiment • Total variance = Mean Square Total • Between-group differences = Mean Square Group • Error variance (individual differences + random variance) = Mean Square Error • The F ratio is the ratio MSgroup / MSerror: the larger the group differences, the bigger the F; the larger the error variance, the smaller the F

  7. Logic--cont. • Create a measure of variability among group means • MSgroup • Create a measure of variability within groups • MSerror

  8. Logic--cont. • Form ratio of MSgroup /MSerror • Ratio approximately 1 if null true • Ratio significantly larger than 1 if null false • “approximately 1” can actually be as high as 2 or 3, but not much higher
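A hand computation of the two mean squares and their ratio makes this logic concrete. A NumPy sketch using the same hypothetical data as above, assuming equal group sizes:

    import numpy as np

    groups = [np.array([3, 4, 5, 4, 3]),
              np.array([5, 6, 6, 7, 5]),
              np.array([2, 3, 2, 4, 3])]
    g, n = len(groups), len(groups[0])
    grand_mean = np.concatenate(groups).mean()

    # MSgroup: variability of the group means around the grand mean
    ss_group = n * sum((grp.mean() - grand_mean) ** 2 for grp in groups)
    ms_group = ss_group / (g - 1)

    # MSerror: variability of observations around their own group means
    ss_error = sum(((grp - grp.mean()) ** 2).sum() for grp in groups)
    ms_error = ss_error / (g * (n - 1))

    print(f"F = MSgroup / MSerror = {ms_group / ms_error:.2f}")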

  9. [Data table for the worked example; grand mean = 3.78]

  10. Calculations • Start with Sums of Squares (SS) • We need: SStotal, SSgroups, SSerror • Compute degrees of freedom (df) • Compute mean squares and F

  11. Calculations--cont.
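For reference, the standard one-way sums of squares (g groups of n observations each, with X̄.. the grand mean and X̄j the mean of group j) are:

    SStotal  = Σj Σi (Xij - X̄..)²   (every observation against the grand mean)
    SSgroups = n Σj (X̄j - X̄..)²    (group means against the grand mean)
    SSerror  = SStotal - SSgroups   (observations against their own group means)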

  12. Degrees of Freedom (df) • Number of “observations” free to vary • dftotal = N - 1 (N observations) • dfgroups = g - 1 (g means) • dferror = g(n - 1) (n observations in each group gives n - 1 df, times g groups)
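For example, with g = 3 groups of n = 5 observations each (N = 15): dftotal = 14, dfgroups = 2, and dferror = 3 × 4 = 12. As a check, 2 + 12 = 14, so the group and error df add up to the total.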

  13. Summary Table
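The summary table for a one-way ANOVA takes the standard form below; each MS is its SS divided by its df, and F = MSgroup / MSerror:

    Source   SS         df         MS                    F
    Group    SSgroups   g - 1      SSgroups / (g - 1)    MSgroup / MSerror
    Error    SSerror    g(n - 1)   SSerror / (g(n - 1))
    Total    SStotal    N - 1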

  14. When there are more than two groups • A significant F only shows that not all group means are equal • We want to know which groups differ, which calls for multiple-comparison procedures • Such procedures are designed to control the familywise error rate: the probability of making at least one Type I error anywhere in the family of tests • Contrast with the per-comparison error rate, which applies to each individual test

  15. Multiple Comparisons • The more tests we run, the more likely we are to make a Type I error. • A good reason to hold down the number of tests

  16. Bonferroni t Test • Run t tests between pairs of groups, as usual • Hold down the number of t tests • Reject if t exceeds the critical value in the Bonferroni table • Works by using a stricter level of significance for each comparison

  17. Bonferroni t--cont. • The critical value of α for each test is set at .05/c, where c = number of tests run • Assuming familywise α = .05 • e.g., with 3 tests, each t must be significant at the .05/3 = .0167 level. • With computer printout, just make sure the calculated probability < .05/c • The necessary table is in the book
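A minimal sketch of the idea in Python, using the hypothetical groups from earlier. Note it runs ordinary two-sample t tests with pairwise pooling, whereas the textbook's Bonferroni t pools MSerror across all groups, so the numbers are illustrative only:

    from itertools import combinations
    from scipy import stats

    groups = {"1": [3, 4, 5, 4, 3], "2": [5, 6, 6, 7, 5], "3": [2, 3, 2, 4, 3]}

    pairs = list(combinations(groups, 2))
    alpha_each = 0.05 / len(pairs)  # familywise alpha of .05 split over c tests

    for a, b in pairs:
        t, p = stats.ttest_ind(groups[a], groups[b])
        flag = "reject" if p < alpha_each else "retain"
        print(f"group {a} vs {b}: p = {p:.4f} -> {flag} at alpha = {alpha_each:.4f}")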

  18. Magnitude of Effect • Why you need to compute magnitude-of-effect indices • Eta squared (η²) • Easy to calculate • Somewhat biased on the high side • Percent of variation in the data that can be attributed to treatment differences • Formula: see below
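The formula itself is simple: η² = SSgroups / SStotal. Continuing the earlier NumPy sketch:

    eta_squared = ss_group / (ss_group + ss_error)  # SStotal = SSgroups + SSerror
    print(f"eta squared = {eta_squared:.2f}")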

  19. Magnitude of Effect--cont. • Omega squared (ω²) • Much less biased than η² • Not as intuitive • We adjust both numerator and denominator with MSerror • Formula below
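The usual estimator subtracts (g - 1)·MSerror from the numerator and adds MSerror to the denominator: ω² = (SSgroups - (g - 1)·MSerror) / (SStotal + MSerror). Continuing the earlier sketch:

    omega_squared = (ss_group - (g - 1) * ms_error) / (ss_group + ss_error + ms_error)
    print(f"omega squared = {omega_squared:.2f}")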

  20. η² and ω² for Foa et al. • η² = .18: 18% of the variability in symptoms can be accounted for by treatment • ω² = .12: this is a less biased estimate; note that it is 33% smaller.
