
CATEGORICAL DATA & χ2



  1. CATEGORICAL DATA & χ2 Chapter 6

  2. A Quick Look Back • Reminder about hypothesis testing: 1) Assume what you believe (H1) is wrong. • Construct H0 and accept it as a default. 2) Show that some event is of sufficiently low probability given H0***. 3) Reject H0. *** In order to do this, we need to know the distribution associated with H0, because we use that distribution as the basis for our probability calculation. Chapter 6

  3. z-score • Use when we have acquired some data set, then want to ask questions concerning the probability of certain specific data values (e.g., do certain values seem extreme?). • In this case, the distribution associated with H0 is described by X̄ and S², because the data points reflect a continuous variable that is normally distributed. Chapter 6
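A minimal sketch (not from the slides) of the z-score logic in Python; the sample and the queried value are hypothetical:

```python
# How extreme is one value, given a sample mean and standard deviation,
# assuming the variable is normally distributed under H0?
import numpy as np
from scipy import stats

sample = np.array([12, 15, 9, 11, 14, 10, 13, 12, 11, 16])  # hypothetical data
x = 19                                                       # value in question

z = (x - sample.mean()) / sample.std(ddof=1)   # z = (x - X-bar) / s
p = 2 * stats.norm.sf(abs(z))                  # two-tailed P(|Z| >= |z|)
print(f"z = {z:.2f}, two-tailed p = {p:.4f}")
```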

  4. Chi-square (χ2) test • The Chi-square test is a general-purpose test for use with discrete variables. • It has a number of uses, including the detection of bizarre outcomes given some a priori probability, in both binomial and multinomial situations. Chapter 6

  5. Chi-square (χ2) test • In addition, it allows us to go beyond questions of bizarreness and move into the question of whether pairs of variables are related (for example, whether gender is related to opinion about legalization, a data set we return to later). • It does so by mapping the discrete variables onto a continuous distribution assumed under H0: the chi-square distribution. Chapter 6

  6. The chi-square distribution • Let’s reconsider a simple binomial problem. Say, we have a batter who hits .300 [i.e., P(Hit)=0.30], and we want to know whether it is abnormal for him to go 6 for 10 (i.e., 6 hits in 10 at bats). • We could do this using the binomial stuff that I did not cover in Chapter 5 (and for which you are not responsible) • But we can also do it with a chi-square test Chapter 6

  7. The way of the chi2 • We can put our values into a contingency table as follows: observed, 6 hits and 4 outs; expected under H0 (Np), 3 hits and 7 outs. • Then consider the distribution of the following formula given H0: χ2 = Σ (O − E)² / E, where O is the observed count and E is the expected count in each cell. Chapter 6

  8. The way of the chi2 • For the baseball example: χ2 = (6 − 3)²/3 + (4 − 7)²/7 = 3.00 + 1.29 = 4.29. Chapter 6

  9. The way of the chi2 Chapter 6
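The table and formula on slides 7-9 are not reproduced in the transcript. A minimal sketch of the computation they describe, using the 6-hits-in-10-at-bats example:

```python
# Goodness-of-fit statistic for the .300 hitter going 6 for 10.
import numpy as np
from scipy import stats

observed = np.array([6, 4])             # hits, outs
expected = 10 * np.array([0.30, 0.70])  # Np under H0: 3 hits, 7 outs

chi2_obt = ((observed - expected) ** 2 / expected).sum()
print(f"chi-square obtained = {chi2_obt:.2f}")   # about 4.29

# scipy gives the same statistic (plus a p-value) directly:
print(stats.chisquare(observed, expected))
```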

  10. The way of the chi2 In-Class Example: • Note that while the observed values are discrete, the derived score is continuous. • If we calculated enough of these derived scores, we could plot a frequency distribution, which would be a chi-square distribution with 1 degree of freedom, or χ2(1). • Given this distribution and the appropriate tables, we can then find the probability associated with any particular χ2 value. Chapter 6
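A small simulation sketch (not from the slides) of the point on slide 10: compute the derived score for many samples generated under H0 and compare its upper tail to χ2 with 1 df.

```python
# Simulate many 10-at-bat samples for a true .300 hitter under H0, compute the
# derived score for each, and compare its upper tail to chi-square with 1 df.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
n, p, reps = 10, 0.30, 100_000

hits = rng.binomial(n, p, size=reps)
outs = n - hits
scores = (hits - n * p) ** 2 / (n * p) + (outs - n * (1 - p)) ** 2 / (n * (1 - p))

# With expected counts this small (3 and 7), the chi-square(1) approximation is
# only rough (see the later slides on the normality assumption).
print("simulated P(score >= 3.84):", (scores >= 3.84).mean())
print("chi-square(1) tail area:   ", stats.chi2.sf(3.84, df=1))
```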

  11. The way of the chi2 Continuing the Baseball Example: So if the probability of obtaining a χ2 of 4.29 or greater is less than α, then the observed outcome can be considered bizarre (i.e., the result of something other than a .300 hitter getting lucky). Chapter 6

  12. The way of the chi2 • Just like the t-test, the χ2 distribution is based on degrees of freedom; with 1 df and α = .05, the critical value is 3.84. • Thus, since our obtained χ2 value of 4.29 is greater than 3.84, we can reject H0 and assume that hitting 6 of 10 reflects more than just chance performance. Chapter 6
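A minimal sketch of the comparison on slides 11-12; the obtained value 4.29 and the α = .05 cutoff are taken from the slides:

```python
# Compare the obtained statistic to the critical value, or use the p-value.
from scipy import stats

chi2_obt = 4.29                          # from the 6-for-10 example
chi2_crit = stats.chi2.ppf(0.95, df=1)   # about 3.84 for alpha = .05
p_value = stats.chi2.sf(chi2_obt, df=1)  # P(chi-square(1) >= 4.29)

print(f"critical = {chi2_crit:.2f}, p = {p_value:.3f}")
print("reject H0" if chi2_obt > chi2_crit else "fail to reject H0")
```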

  13. The way of the chi2 Going a Step Further: • Suppose we complicate the previous example by taking walks and hit-by-pitches into account. That is, suppose the average batter gets a hit with a probability of .28, gets walked with a probability of .08, gets hit by a pitch (HBP) with a probability of .02, and gets out the rest of the time (i.e., with probability .62). Chapter 6

  14. The way of the chi2 • Now we ask: can you reject H0 (that this batter is typical of the average batter) given the following outcomes from 50 at bats? 1) Calculate the expected values (Np). 2) Calculate χ2 obtained. 3) Figure out the appropriate df (C − 1). 4) Find χ2 critical and compare χ2 obtained to it. (A sketch of these steps, with hypothetical counts, appears below.) Chapter 6

  15. The way of the chi2 Chapter 6
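The observed counts shown on slide 15 are not in the transcript, so the counts below are hypothetical; the probabilities and the four steps come from slides 13-14:

```python
# Goodness-of-fit test for the 4-category batter example (50 at bats).
import numpy as np
from scipy import stats

probs = np.array([0.28, 0.08, 0.02, 0.62])   # hit, walk, HBP, out (from slides)
observed = np.array([20, 5, 1, 24])          # hypothetical counts, sum = 50
expected = observed.sum() * probs            # Np: 14, 4, 1, 31

chi2_obt = ((observed - expected) ** 2 / expected).sum()
df = len(observed) - 1                       # C - 1 = 3
chi2_crit = stats.chi2.ppf(0.95, df=df)      # about 7.81

print(f"chi2 = {chi2_obt:.2f}, df = {df}, critical = {chi2_crit:.2f}")
# Note: two of these expected counts (4 and 1) fall below 5, which the later
# slides on assumptions warn about.
```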

  16. Two types of chi2 tests • So far, all the tests have been to assess whether some observation or set of observations seems out of line with some expected distribution; this is also known as the goodness-of-fit chi-square test. • However, the logic of the chi-square test can be extended to examine the issue of whether two variables are independent (i.e., not systematically related) or dependent (i.e., systematically related). Chapter 6

  17. χ2 test for independence • Consider the following data set again: • Are the variables of gender and opinion concerning the legalization of marijuana independent? Chapter 6

  18. χ2 test for independence Chapter 6

  19. χ2 test for independence • If these two variables are independent, then by the multiplicative law, we expect the probability of falling in any cell to equal the product of its row and column probabilities: P(row, column) = P(row) × P(column). The expected count for a cell is therefore E = (row total × column total) / N. Chapter 6

  20. χ2 test for independence • If we do this for all four cells, we get: Chapter 6
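The slide's table of expected values is not in the transcript. A sketch with hypothetical gender-by-opinion counts, showing how the multiplicative law gives an expected count for every cell:

```python
# Expected counts for a 2x2 table from row and column totals:
# E[i, j] = (row_i total * col_j total) / N
import numpy as np

observed = np.array([[30, 20],    # hypothetical counts: rows = gender,
                     [25, 25]])   # columns = opinion (favor / oppose)

N = observed.sum()
row_totals = observed.sum(axis=1, keepdims=True)
col_totals = observed.sum(axis=0, keepdims=True)
expected = row_totals * col_totals / N

print(expected)
```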

  21. χ2 test for independence • Are the observed values different enough from the expected values to reject the notion that the differences are due to chance variation? Chapter 6

  22. χ2 test for independence • The df associated with two-variable contingency tables can be calculated using the formula df = (R − 1)(C − 1), • where C is the number of columns and R is the number of rows. Chapter 6

  23. χ2 test for independence • Thus, to finish our previous example, the χ2 critical value with α = .05 and 1 df equals 3.84. Since our obtained χ2 (i.e., 3.6) is not bigger than that, we cannot reject H0. Chapter 6
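A minimal end-to-end sketch of the independence test on the same hypothetical table as above; scipy's chi2_contingency returns the statistic, p-value, df = (R − 1)(C − 1), and expected counts in one call:

```python
# Chi-square test of independence on a hypothetical 2x2 gender-by-opinion table.
import numpy as np
from scipy import stats

observed = np.array([[30, 20],
                     [25, 25]])   # hypothetical counts, not the slide's data

# correction=False gives the plain Pearson chi-square described on the slides
# (no Yates continuity correction).
chi2_obt, p, df, expected = stats.chi2_contingency(observed, correction=False)
chi2_crit = stats.chi2.ppf(0.95, df=df)

print(f"chi2 = {chi2_obt:.2f}, df = {df}, p = {p:.3f}, critical = {chi2_crit:.2f}")
print("reject H0" if chi2_obt > chi2_crit else "fail to reject H0")
```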

  24. Assumptions of χ2 Independence of observations: • Chi-square analyses are only valid when the actual observations within the cells are independent. • This independence of observations is different from the issue of whether the variables are independent, which is what the chi-square test is testing. Chapter 6

  25. Assumptions of χ2 Independence of observations: • You know your observations are not independent when the grand total is larger than the number of subjects. • Example: The activity level of 5 rats was tested over 4 days. Each rat contributes 4 observations, so the grand total (20) is larger than the number of subjects (5), and the observations are not independent. Chapter 6

  26. Assumptions of χ2 Normality: • Use of the chi-square distribution for finding critical values assumes that the expected values (i.e., Np) are normally distributed. • This assumption breaks down when the expected values are small (specifically, the distribution of Np becomes more and more positively skewed as Np gets small). Chapter 6

  27. Assumptions of χ2 Normality: • Thus, one should be cautious using the chi-square test when the expected values are small. • How small? This is debatable, but if expected values are as low as 5, you should be worried. Chapter 6
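A quick sketch (hypothetical table) of checking this rule of thumb before trusting the test:

```python
# Flag any cell whose expected count falls below the rule-of-thumb value of 5.
import numpy as np
from scipy import stats

observed = np.array([[4, 9],
                     [3, 14]])    # hypothetical small-sample table

_, _, _, expected = stats.chi2_contingency(observed, correction=False)
print(expected)
if (expected < 5).any():
    print("Warning: some expected counts are below 5; be cautious with chi-square.")
```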

  28. Assumptions of χ2 Inclusion of Non-Occurrences: • The chi-square test assumes that all outcomes (occurrences and non-occurrences) are considered in the contingency table. • As an example of a failure to include a non-occurrence, see page 160 of the text. Chapter 6

  29. A tale of tails • We only reject H0 when the obtained value of χ2 is larger than the critical value. • This suggests that the χ2 test is always one-tailed and, in terms of the rejection region, it is. • In a different sense, however, the test is actually multiple-tailed. Chapter 6

  30. A tale of tails • Reconsider the following “marking scheme” example: • If we do not specify how we expect the results to fall out, then any outcome with a high enough χ2 obtained can be used to reject H0. • However, if we specify our outcome in advance, we are allowed to increase our alpha; in the example, we could increase alpha to 0.30 if we had specified in advance the exact ordering that was observed. Chapter 6

  31. Measures of Association • The chi-square test only tells us whether two variables are independent; it does not say anything about the magnitude of the dependency if one is found to exist. • Stealing from the book, consider the following two cases, both of which produce a significant χ2 obtained, but which imply different strengths of relation: Chapter 6

  32. Measures of Association Chapter 6

  33. Measures of Association • There are a number of ways to quantify the strength of a relation (see sections in the text on the contingency coefficient, Phi, & Odds Ratios), but the two most relevant to psychologists are Cramer’s Phi and Cohen’s Kappa. Chapter 6

  34. Measures of Association • Cramer’s Phi (φc) can be used with any contingency table and is calculated as φc = √(χ2 / (N(k − 1))), where k is the smaller of the number of rows and the number of columns. • Values of φc range from 0 to 1. The values for the tables on the previous page are 0.12 and 0.60 respectively, indicating a much stronger relation in the second example. Chapter 6
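A sketch of Cramer's Phi on a hypothetical table; the formula used is the standard φc = √(χ2 / (N(k − 1))) given above:

```python
# Cramer's Phi: effect-size measure for a contingency table.
import numpy as np
from scipy import stats

observed = np.array([[30, 20],
                     [25, 25]])   # hypothetical counts

chi2_obt, _, _, _ = stats.chi2_contingency(observed, correction=False)
N = observed.sum()
k = min(observed.shape)           # smaller of R and C

cramers_phi = np.sqrt(chi2_obt / (N * (k - 1)))
print(f"Cramer's phi = {cramers_phi:.2f}")   # 0 = no relation, 1 = perfect
```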

  35. Measures of Association • Often, in psychology, we will ask some “judge” to categorize things into specific categories. • For example, imagine a beer brewing competition where we asked a judge to categorize beers as Yucky, OK, or Yummy. • Obviously, we are eventually interested in knowing something about the beers after they are categorized. Chapter 6

  36. Measures of Association • However, one issue that arises is the judges’ ability to tell the difference between the beers. • One way around this is to get two judges and show that a given beer is reliably rated across the judges (i.e., that both judges tend to categorize things in a similar way). Chapter 6

  37. Measures of Association • Such a finding would suggest that the judges are sensitive to some underlying quality of the beers as opposed to just guessing. Chapter 6

  38. Measures of Association • Note that if you just looked at the proportion of decisions that Judge 2 and I agreed on, it looks like we are doing OK: P(Agree) = 21/30 = 0.70, or 70% Chapter 6

  39. Measures of Association • There is a problem here, however: both judges are biased toward judging a beer as OK, so even if they were guessing, the agreement would seem high, because both would guess OK on a lot of trials and would therefore agree a lot. Chapter 6

  40. Measures of Association • Cohen’s Kappa (κ) corrects for this by comparing the observed agreement to the agreement expected if both judges were simply guessing according to their own biases: κ = (P(Agree observed) − P(Agree expected)) / (1 − P(Agree expected)). • A large κ would suggest that the judges are sensitive to some underlying quality of the beers, as opposed to just guessing. Chapter 6
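A minimal sketch of Cohen's Kappa; the 3 × 3 judge-by-judge counts below are hypothetical, chosen only to reproduce the 21/30 = 0.70 raw agreement from slide 38:

```python
# Cohen's Kappa: agreement between two judges, corrected for chance agreement.
import numpy as np

# Rows = Judge 1, columns = Judge 2; categories = Yucky, OK, Yummy.
counts = np.array([[1,  2, 0],
                   [2, 17, 2],
                   [0,  3, 3]])   # hypothetical: diagonal (agreements) sums to 21 of 30

N = counts.sum()
p_observed = np.trace(counts) / N                                 # 0.70
p_chance = (counts.sum(axis=1) * counts.sum(axis=0)).sum() / N**2 # agreement by guessing

kappa = (p_observed - p_chance) / (1 - p_chance)
print(f"agreement = {p_observed:.2f}, kappa = {kappa:.2f}")
```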

  41. Chapter 6
