
Presentation Transcript


  1. Lecture 7: Two-Way Tables. Slides available from the Statistics & SPSS page of www.gpryce.com. Social Science Statistics Module I, Gwilym Pryce

  2. Notices: • Register

  3. Aims and Objectives: • Aim: • This session introduces methods of examining relationships between categorical variables • Objectives: • By the end of this session you should be able to: • Understand how to examine relationships between categorical variables using: • two-way tables • the chi-square test for independence.

  4. Plan: • 1. Independent events • 2. Contingent events • 3. Chi-square test for independence • 4. Further Study

  5. 1. Probability of two independent events occurring • If knowing that one event occurs does not affect the probability that another event occurs, we say those two events are independent. • And if A and B are independent, and we know the probability of each of them occurring, we can calculate the probability of them both occurring.

  6. Example: You have a two-sided die and a coin; find Pr(1 and H). • Answer: ½ × ½ = ¼ • Rule: P(A ∩ B) = P(A) × P(B)

  7. e.g. You have one fair coin which you toss twice: what’s the probability of getting two heads? • Suppose: • A = 1st toss is a head • B = 2nd toss is a head • What is the probability of A ∩ B? • Answer: A and B are independent and are not disjoint (i.e. not mutually exclusive). P(A) = 0.5 and P(B) = 0.5, so P(A ∩ B) = 0.5 × 0.5 = 0.25.

  8. 2. Probability of two contingent events occurring • If knowing that one event occurs does change the probability that the other occurs, then the two events are not independent and are said to be contingent upon each other. • If events are contingent then we can say that there is some kind of relationship between them. • So testing for contingency is one way of testing for a relationship.

  9. Example of contingent events: • There is a 70% chance that a child will go to university if his/her parents are middle class, but only a 10% chance if his/her parents are working class. Given that there is a 60% chance of a child’s parents being working class: • What are the chances that a child will be born working class and go to university? • What proportion of people at university will be from working-class backgrounds?

  10. A tricky one...

  11. This diagram illustrates graphically how the probability of going to university is contingent upon the social class of your parents.

  12. 6% of all children are both working class and end up going to University

  13. (All percentages are expressed as a percentage of all children.)

  14. % at Uni from WC parents? • Of all children, only 34% end up at university (6% WC; 28% MC) • i.e. 6 out of every 34 University students are from WC parents: • 6/34 = 17.6% of University students are WC
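The percentages on the last few slides follow from the multiplication rule for contingent events. A worked sketch of the calculation, using the probabilities given on slide 9 (WC = working-class parents, MC = middle-class parents, Uni = goes to university):

\begin{align*}
P(\text{WC} \cap \text{Uni}) &= P(\text{WC}) \times P(\text{Uni} \mid \text{WC}) = 0.6 \times 0.1 = 0.06 \\
P(\text{MC} \cap \text{Uni}) &= P(\text{MC}) \times P(\text{Uni} \mid \text{MC}) = 0.4 \times 0.7 = 0.28 \\
P(\text{Uni}) &= 0.06 + 0.28 = 0.34 \\
P(\text{WC} \mid \text{Uni}) &= \frac{P(\text{WC} \cap \text{Uni})}{P(\text{Uni})} = \frac{0.06}{0.34} \approx 0.176
\end{align*}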

  15. Probability theory states that: • if x and y are independent, then the probability of x and y occurring simultaneously is simply the product of the probabilities of each occurring: P(x ∩ y) = P(x) × P(y) • But, if x and y are not independent, then: P(x ∩ y) = P(x) × P(y | x), where P(y | x) is the probability that y occurs given that x has occurred.
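Applying both rules to the class example above makes the contrast concrete (the figures are the same ones used in Example 1 below):

\begin{align*}
\text{If independent:}\quad & P(\text{Uni} \cap \text{WC}) = P(\text{Uni}) \times P(\text{WC}) = 0.34 \times 0.6 = 0.204 \\
\text{As actually calculated:}\quad & P(\text{Uni} \cap \text{WC}) = P(\text{WC}) \times P(\text{Uni} \mid \text{WC}) = 0.6 \times 0.1 = 0.06
\end{align*}

The two answers differ, so university attendance is contingent on parental class. The 0.204 figure is exactly the "expected under independence" quantity that the test below relies on: with 300 observations it implies an expected count of 0.204 × 300 = 61.2.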

  16. Test for independence • We can use these two rules to test whether events are independent • Does the distribution of observations across possible outcomes resemble the random distribution we would get if the events were independent? • I.e. if we assume independence and calculate the expected number of cases in each category, do these figures correspond fairly closely to the actual distribution of outcomes found in our data? • Or is the distribution of outcomes more akin to contingency, i.e. one event contingent on the other?

  17. Example 1: Is there a relationship between social class and education? We might test this by looking at the categories in our data: WC, MC, University, no University. Suppose we have 300 observations distributed as follows (observed counts):

                      University   No university   Total
    Working class         18            162         180
    Middle class          84             36         120
    Total                102            198         300

  Given this distribution, would you say these two variables are independent?

  18. To do the test for independence we need to compare expected with observed. • But how do we calculate ei, the expected number of observations in category i? • ei = number of cases expected in cell i, assuming that the two categorical variables are independent (i.e. no contingency). • ei is calculated simply as: the probability of an observation falling into category i under the independence assumption, multiplied by the total number of observations.

  19. So, if UNIY or UNIN and WC or MC are independent (i.e. assuming H0), then: Prob(UNIY ∩ WC) = Prob(UNIY) × Prob(WC), and likewise for the other three cells. The expected number of cases for each of the four mutually exclusive categories is then this probability multiplied by the total number of observations.

  20. But how do we work out Prob(UNIY) and Prob(WC), which are needed to calculate Prob(UNIY ∩ WC) = Prob(UNIY) × Prob(WC)? • Answer: we assume independence and estimate them from the data by simply dividing the total number of cases in the given category by the total number of observations: E.g. Prob(UNIY) = Total no. cases UNIY / All observations = (18 + 84) / 300 = 0.34 • Prob(WC) is calculated the same way: Prob(WC) = Total no. cases WC / All observations = (18 + 162) / 300 = 0.6 • So Prob(UNIY ∩ WC) = 0.34 × 0.6 = 0.204, and the expected count = 0.204 × 300 = 61.2
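Although the slides work via the estimated probabilities, the same expected count can be written in the textbook "row total times column total over grand total" form. Checking it for the working-class university cell of the table on slide 17:

\[
e_{ij} = \frac{(\text{row total})_i \times (\text{column total})_j}{n},
\qquad
e_{\text{WC, Uni}} = \frac{180 \times 102}{300} = 61.2
\]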

  21. Expected count in each category:

                      University   No university   Total
    Working class        61.2          118.8        180
    Middle class         40.8           79.2        120
    Total               102            198          300

  22. We have the actual count (i.e. from our data set): 18 WC and 84 MC at university; 162 WC and 36 MC not at university.

  23. And the expected count (i.e. the numbers we’d expect if we assume class & education to be independent of each other): 61.2 WC and 40.8 MC at university; 118.8 WC and 79.2 MC not at university.

  24. What does this comparison of expected and actual counts tell you?

  25. It tells you that if class and education were indeed independent of each other • I.e. the outcome of one does not affect the chances of the outcome of the other • Then you’d expect a lot more working-class people in the data to have gone to university than actually recorded (61 people, rather than 18) • Conversely, you’d expect far fewer middle-class people to have gone to university (about half the number actually recorded: 41 people rather than 84).

  26. But remember, all this is based on a sample, not the entire population… • Q/ Is this discrepancy due to sampling variation alone or does it indicate that we must reject the assumption of independence? • To answer this within the standardised hypothesis testing framework we need to know the chances of false rejection

  27. 3. Chi-square test for independence (non-parametric, i.e. no presuppositions about the distribution of the variables; no particular sample size is assumed, though see the caveat on cell counts below) (1) H0: expected = actual, i.e. x & y are independent (Prob(x) is not affected by whether or not y occurs); H1: expected ≠ actual, i.e. there is some relationship (Prob(x) is affected by y occurring). (2) α = 0.05. Test statistic: χ² = Σ (oi - ei)² / ei, summed over the k categories, with df = k - 1 - d, which for a two-way table works out as df = (r - 1)(c - 1), where: k = no. of categories; ei = expected (given H0) no. of sample observations in the ith category; oi = actual no. of sample observations in the ith category; d = no. of parameters that have to be estimated from the sample data; r = no. of rows in the table; c = no. of columns in the table.

  28. Chi-square distribution changes shape for different df:

  29. (3) Reject H0 iff P < α (4) Calculate P: • P = Prob(χ² > χ²c) • N.B. Chi-square tests are always upper-tail tests • χ² tables: these are usually set up like a t-table with df down the side and the probabilities listed along the top row, with values of χ²c actually in the body of the table. So look up χ²c in the body of the table for the relevant df and then find the upper-tail probability that heads that column. • SPSS: CDF.CHISQ(χ²c, df) calculates Prob(χ² < χ²c), so use the following syntax: • COMPUTE chi_prob = 1 - CDF.CHISQ(χ²c, df). • EXECUTE.
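As a quick check of that syntax, the runnable sketch below (a one-case dummy dataset is created first, since COMPUTE needs an active data file) plugs in the familiar 5% critical value for df = 1, χ²c = 3.84, and should return a probability of roughly 0.05:

  * Create a one-case dummy dataset so COMPUTE has something to work on.
  DATA LIST FREE / dummy.
  BEGIN DATA
  1
  END DATA.
  * Upper-tail probability for chi-square = 3.84 with df = 1 (approximately 0.05).
  COMPUTE chi_prob = 1 - CDF.CHISQ(3.84, 1).
  EXECUTE.
  LIST chi_prob.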

  30. Do a chi-square test on the following table (the observed class-by-education counts from Example 1: 18 WC and 84 MC at university; 162 WC and 36 MC not at university):

  31. H0: expected = actual, i.e. class and Higher Education are independent; H1: expected ≠ actual, i.e. there is some relationship between class and Higher Education

  32. (2) State the formula & calculate χ²: χ² = (18 - 61.2)²/61.2 + (84 - 40.8)²/40.8 + (162 - 118.8)²/118.8 + (36 - 79.2)²/79.2

  33. χ² = (18 - 61.2)²/61.2 + (84 - 40.8)²/40.8 + (162 - 118.8)²/118.8 + (36 - 79.2)²/79.2 = 30.49 + 45.74 + 15.71 + 23.56 = 115.51; df = (r - 1)(c - 1) = 1; Sig = P(χ² > 115.51) ≈ 0

  34. (3) Reject H0 iff P < α (4) Calculate P: COMPUTE chi_prob = 1 - CDF.CHISQ(115.51,1). EXECUTE. Sig = P(χ² > 115.51) ≈ 0, so reject H0
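The whole test can also be run directly from the table itself. The sketch below (the variable names class, uni and freq are illustrative, not from the original data file) enters the four observed cell counts as a weighted 2×2 table and asks CROSSTABS for the chi-square statistic and the expected counts; it should reproduce χ² ≈ 115.5 on 1 df together with the expected counts 61.2, 40.8, 118.8 and 79.2.

  * Enter the observed 2x2 table as aggregated (weighted) data.
  DATA LIST FREE / class uni freq.
  BEGIN DATA
  1 1 18
  1 2 162
  2 1 84
  2 2 36
  END DATA.
  VALUE LABELS class 1 'Working class' 2 'Middle class'
    /uni 1 'University' 2 'No university'.
  WEIGHT BY freq.
  * Chi-square test, with observed and expected counts shown in each cell.
  CROSSTABS
    /TABLES=class BY uni
    /STATISTICS=CHISQ
    /CELLS=COUNT EXPECTED.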

  35. Caveat: • As with the two-proportions tests, the chi-square test is, • “an approximate method that becomes more accurate as the counts in the cells of the table get larger” (Moore, Basic Practice of Statistics, 2000, p. 485) • Cell counts required for the chi-square test: • “You can safely use the chi-square test with critical values from the chi-square distribution when no more than 20% of the expected counts are less than 5 and all individual expected counts are 1 or greater. In particular, all four expected counts in a 2x2 table should be 5 or greater” (Moore, Basic Practice of Statistics, 2000, p. 485) • Note that in Example 1 all four expected counts (61.2, 40.8, 118.8 and 79.2) are comfortably above 5, so the test is safe to use here.

  36. Example 2: Is there a relationship between whether a borrower is a first-time buyer and whether they live in Durham or Cumberland? • The only real problem is how to calculate ei, the expected number of observations in category i • (i.e. the number of cases expected in cell i, assuming that the variables are independent) • the formula for ei is the probability of an observation falling into category i multiplied by the total number of observations.

  37. As noted earlier: • Probability theory states that: • if x and y are independent, then the probability of x and y occurring simultaneously is simply the product of the probabilities of each occurring: P(x ∩ y) = P(x) × P(y) • But, if x and y are not independent, then: P(x ∩ y) = P(x) × P(y | x)

  38. So, if FTB Y or N and County D or C are independent (i.e. assuming H0), then: Prob(FTBY ∩ CountyD) = Prob(FTBY) × Prob(CountyD) • so the expected number of cases for each of the four mutually exclusive categories is calculated as before: the cell probability under independence multiplied by the total number of observations.

  39. Prob(FTBN) = Total no. cases FTBN / All observations

  40. This gives us the expected count. To obtain this table in SPSS, go to Analyse, Descriptive Statistics, Crosstabs, Cells, and choose expected count rather than observed (equivalent syntax is sketched below).
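A minimal syntax equivalent of that menu path, assuming the two variables are named ftb and county (hypothetical names):

  CROSSTABS
    /TABLES=ftb BY county
    /CELLS=COUNT EXPECTED.

Adding /STATISTICS=CHISQ to the same command also produces the chi-square test summarised on slide 42.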

  41. What does this table tell you? • Does it suggest that the probability of being an FTB is independent of location? • Or does it suggest that the two factors are contingent on each other in some way? • Can it tell you anything about the direction of causation? • What about sampling variation?

  42. Summary of hypothesis test: • (1) H0: FTB and County are independent; H1: there is some relationship • (2) α = 0.05 • (3) Reject H0 iff P < α • (4) Calculate P: • P = Prob(χ² > χ²c) = 0.29557, so do not reject H0. I.e. if we were to reject H0, there would be roughly a 1 in 3 chance of us rejecting it incorrectly, and so we cannot do so. In other words, we have no evidence of a relationship: we cannot reject the hypothesis that FTB status and County are independent.

  43. Contingency Tables in SPSS:
