INTRODUCTION TO CATEGORICAL DATA ANALYSIS

ODDS RATIO, MEASURE OF ASSOCIATION, TEST OF INDEPENDENCE, LOGISTIC REGRESSION AND POLYTOMOUS LOGISTIC REGRESSION


DEFINITION

  • Categorical data are data whose measurement scale consists of a set of categories.

  • E.g., marital status: never married, married, divorced, widowed (nominal)

  • E.g., attitude toward some policy: strongly disapprove, disapprove, approve, strongly approve (ordinal)

  • SOME VISUALIZATION TECHNIQUES: jittering, mosaic plots, bar plots, etc.

  • Correlation between ordinal or nominal measurements is usually referred to as association.




ODDS RATIO - EXAMPLE

  • Chinook salmon captured in 1999

  • VARIABLES:

    - SEX: M or F (nominal)

    - MODE OF CAPTURE: hook & line or net (nominal)

    - RUN: early run (before July 1) or late run (after July 1) (ordinal)

    - AGE: interval (continuous variable)

    - LENGTH (eye to fork of tail, in mm): interval (continuous variable)

  • What are the odds that a captured fish is female?

    Consider success = female (because females are heavier).


CHINOOK SALMON EXAMPLE

For Hook&Line:

For Net:

The odds that a captured fish is female are 77% ((1.77 − 1) = 0.77) higher with hook & line compared to net.
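A minimal Python sketch of this calculation. The counts below are illustrative only (chosen so the ratio lands near the slide's 1.77); the actual capture counts appear only on the original slide image.

    import numpy as np

    # Hypothetical counts of captured fish by mode and sex: [female, male]
    hook_line = np.array([105, 60])   # illustrative, not the slide's data
    net       = np.array([80, 81])    # illustrative, not the slide's data

    odds_hook = hook_line[0] / hook_line[1]   # odds a hook & line fish is female
    odds_net  = net[0] / net[1]               # odds a net fish is female
    odds_ratio = odds_hook / odds_net         # about 1.77 with these counts
    print(odds_hook, odds_net, odds_ratio)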


ODDS RATIO

  • In general
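The slide's formula is an image; reconstructed from the standard definition, for success probabilities π1 and π2 under conditions 1 and 2:

    OR = [π1 / (1 − π1)] / [π2 / (1 − π2)]

and for a 2×2 table of counts nij (rows = conditions, columns = success/failure), the sample estimate is ÔR = (n11·n22) / (n12·n21).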


INTERPRETATION OF OR

  • What does OR = 1 mean?

The odds of success are equal under both conditions, e.g., no matter which mode of capture is used.


INTERPRETATION OF OR

  • OR > 1: the odds of success are higher under condition 1

  • OR < 1: the odds of success are lower under condition 1


SHAPE OF OR

  • The range of OR is 0 ≤ OR < ∞, and its sampling distribution is non-symmetric.

  • ln(OR) has a more symmetric distribution than OR (i.e., closer to a normal distribution).

  • OR = 1 ⟺ ln(OR) = 0

  • A (1 − α)100% confidence interval is first constructed for ln(OR) and then exponentiated to give a (1 − α)100% confidence interval for OR.
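The interval formulas are images on the slide; a standard version (the Woolf logit interval, based on the 2×2 counts nij) is:

    ln(ÔR) ± z(α/2) · SE,   with SE = √(1/n11 + 1/n12 + 1/n21 + 1/n22)

and the (1 − α)100% CI for OR is obtained by exponentiating both endpoints.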


CHINOOK SALMON EXAMPLE (Contd.)

  • The odds that a captured fish is female are about 30% to 140% greater with hook & line than with a net.



MEASURE OF ASSOCIATION FOR I×J TABLES

  • Pearson 2 in contingency tables:

  • EXAMPLE= Instrument Failure


PEARSON χ² IN CONTINGENCY TABLES

  • Question: Are the type of failure and location of failure independent?

    H0: Type and location are independent

    H1: They are not independent

  • We will compare sample frequencies (i.e. observed values) with the expected frequencies under H0.

  • Remember that if events A and B are independent, then P(A∩B) = P(A)P(B).

  • If type and location are independent, then

  • P(T1 and L1)=P(T1)P(L1)=(97/200)(111/200)


PEARSON χ² IN CONTINGENCY TABLES

  • Cells ~ Multinomial(n, p1, …, p6) ⟹ E(Xi) = n·pi

  • Expected frequency = E11 = n × prob. = 200(97/200)(111/200) = 53.84

  • E12=200(97/200)(42/200)=20.37

  • E13=(97*47/200)=22.8

  • E21=(103*111/200)=57.17

  • E22=(103*42/200)=21.63

  • E23=(103*47/200)=24.2
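A short Python sketch reproducing these expected frequencies from the marginal totals given above; the full χ² statistic would also need the observed cell counts, which are shown only on the original slide.

    import numpy as np

    # Marginal totals from the instrument-failure table (n = 200)
    row_totals = np.array([97, 103])       # failure types
    col_totals = np.array([111, 42, 47])   # failure locations
    n = row_totals.sum()

    # Expected frequencies under independence: E_ij = (row_i * col_j) / n
    expected = np.outer(row_totals, col_totals) / n
    print(np.round(expected, 2))   # approx. [[53.84 20.37 22.80], [57.17 21.63 24.20]]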


CRAMER'S V

  • It adjusts the Pearson χ² for n, I, and J.

    In the previous example (a 2×3 table with n = 200), V is obtained by plugging the table's χ² into the formula below, giving V = √(χ²/200).
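The formula itself is an image on the slide; the standard definition is:

    V = √( χ² / [ n · min(I − 1, J − 1) ] )

so V lies between 0 and 1 regardless of the table's dimensions or sample size.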


CORRELATION BETWEEN ORDINAL VARIABLES

  • Correlation coefficients are used to quantitatively describe the strength and direction of a relationship between two variables.

  • When both variables are at least interval measurements, one may report the Pearson product-moment coefficient of correlation, also known as the correlation coefficient and denoted by 'r'.

  • The Pearson correlation coefficient is appropriate only for describing linear correlation. Its appropriateness can be examined through scatter plots.

  • A statistic that measures the correlation between two ‘rank’ measurements is Spearman’s ρ , a nonparametric analog of Pearson’s r.

  • Spearman’s ρ is appropriate for skewed continuous or ordinal measurements. It can also be used to determine the relationship between one continuous and one ordinal variable.

  • Statistical tests are available to test hypotheses on ρ, e.g., H0: there is no correlation between the two variables (ρ = 0).
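A minimal Python illustration with made-up data (the values below are hypothetical, not from the presentation), using scipy's spearmanr and pearsonr:

    import numpy as np
    from scipy import stats

    # Hypothetical ordinal attitude scores (1-4) and a skewed continuous variable
    attitude = np.array([1, 2, 2, 3, 4, 4, 3, 1, 2, 4])
    income   = np.array([12, 18, 15, 30, 95, 60, 28, 10, 22, 80])

    rho, p_spearman = stats.spearmanr(attitude, income)   # rank-based correlation
    r, p_pearson    = stats.pearsonr(attitude, income)    # linear correlation, for comparison
    print(rho, p_spearman, r, p_pearson)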



  • Why are there multiple measures of association?

  • Statisticians over the years have thought of varying ways of characterizing what a perfect relationship is:

    tau-b = 1, gamma = 1   versus   tau-b < 1, gamma = 1

  • Either of these might be considered a perfect relationship, depending on one's reasoning about what relationships between variables look like.


I'm so confused!!


Rule of Thumb

  • Gamma tends to overestimate strength but gives an idea of upper boundary.

  • If the table is square, use tau-b; if rectangular, use tau-c.

  • Pollock (and we agree):

    τ < .1 is weak; .1 < τ < .2 is moderate; .2 < τ < .3 is moderately strong; .3 < τ < 1 is strong.


MEASUREMENT OF AGREEMENT FOR I×I TABLES

Cohen's kappa compares the observed probability of agreement with the probability of agreement expected under independence.
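Written out (the slide shows these quantities as images): with pij the cell proportions of the I×I agreement table,

    P_o = Σ_i p_ii   (observed agreement),   P_e = Σ_i p_i+ · p_+i   (agreement expected under independence)

    κ = (P_o − P_e) / (1 − P_e)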


EXAMPLE (COHEN'S KAPPA or Index of Inter-rater Reliability)

  • Two pathologists examined 118 samples and categorized them into 4 groups. Below is the 4×4 table of their decisions.
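A Python sketch of the calculation. The 4×4 counts below are illustrative only (chosen to total 118 and give κ near the value of roughly 0.5 quoted on the next slide); the actual table is only on the slide image.

    import numpy as np

    # Illustrative 4x4 agreement table (rows: pathologist A, columns: pathologist B)
    table = np.array([[22,  2,  2,  0],
                      [ 5,  7, 14,  0],
                      [ 0,  2, 36,  0],
                      [ 0,  1, 17, 10]])
    p = table / table.sum()

    p_o = np.trace(p)                        # observed agreement
    p_e = p.sum(axis=1) @ p.sum(axis=0)      # agreement expected under independence
    kappa = (p_o - p_e) / (1 - p_e)
    print(round(kappa, 3))                   # about 0.49 with these counts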


EXAMPLE (Contd.)

The difference between the observed agreement and the agreement expected under independence is about 50% of the maximum possible difference.


EVALUATION OF KAPPA

  • If the obtained κ is less than .70, conclude that the inter-rater reliability is not satisfactory.

  • If the obtained κ is greater than .70, conclude that the inter-rater reliability is satisfactory.

  • Interpretation of kappa, after Landis and Koch (1977)


PROBABILITY MODELS FOR CATEGORICAL DATA

  • Bernoulli/Binomial

  • Multinomial

  • Poisson


TEST ON PROPORTIONS AND CONFIDENCE INTERVALS

  • You are already familiar with tests for proportions:

  • CI for Y=0

Pearson 2 or Deviance G2 test


CONFIDENCE INTERVAL FOR A PROPORTION

  • For large sample sizes, we can use the normal approximation to the binomial (np ≥ 5 and n(1 − p) ≥ 5).

  • If np < 5 or n(1 − p) < 5, the normal approximation is not realistic.
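For reference (the slide's formula is an image), the normal-approximation interval is:

    p̂ ± z(α/2) · √( p̂(1 − p̂) / n )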


CONFIDENCE INTERVAL FOR A PROPORTION

  • Consider Y = 0 successes in n trials. Then p̂ = Y/n = 0.

  • Normal-approximation CI: 0 ± z(α/2)·√(0(1 − 0)/n) = [0, 0]

    No matter what n is! But observing 0 successes in 1 trial or in 100 trials is different. Note that np̂ = 0 < 5.


EXACT CONFIDENCE INTERVALS (Collette, 1991, Modeling Binary Data)

  • Lower Limit:

  • Upper Limit:
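The slide's limits are images. An equivalent way to write the exact (Clopper-Pearson) limits is via beta quantiles; the slide's version uses the F distribution with the degrees of freedom ν1-ν4 shown on the next slide:

    Lower = Beta⁻¹(α/2; Y, n − Y + 1)   (equals 0 when Y = 0)

    Upper = Beta⁻¹(1 − α/2; Y + 1, n − Y)   (equals 1 when Y = n)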


EXACT CONFIDENCE INTERVALS

  • Going back to Example with Y=0.

  • Let n=5.

  • Y = 0 ⟹ ν1 = 0, ν2 = 2(5 + 1) = 12, ν3 = 2, ν4 = 2(5) = 10
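A small Python sketch using the beta-quantile form given earlier (equivalent to the F-based limits with these degrees of freedom):

    from scipy import stats

    def exact_ci(y, n, alpha=0.05):
        # Clopper-Pearson exact interval via beta quantiles
        lower = 0.0 if y == 0 else stats.beta.ppf(alpha / 2, y, n - y + 1)
        upper = 1.0 if y == n else stats.beta.ppf(1 - alpha / 2, y + 1, n - y)
        return lower, upper

    print(exact_ci(0, 5))   # about (0.0, 0.522): informative even though y = 0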


LOGISTIC REGRESSION

  • Used to analyze the relationship between a binary outcome Y and a set of explanatory variables.

  • Assumptions of linear models do not hold.

  • Assume Yi ~ Ber(πi). Then E(Yi) = πi = P(Yi = 1) and P(Yi = 0) = 1 − πi.

  • Logistic regression is defined as:

the log odds are expressed as a function of the x's (written out below)
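The slide's definition is an image; written out, the model is:

    logit(πi) = ln[ πi / (1 − πi) ] = β0 + β1·xi1 + … + βp·xip

equivalently, πi = exp(β0 + β1·xi1 + … + βp·xip) / [1 + exp(β0 + β1·xi1 + … + βp·xip)].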


Binary Logistic Regression

  • Logistic distribution: P(Y = 1) is an S-shaped function of x.

  • Transformed, however, the "log odds" ln[p/(1 − p)] are linear in x.

[Figure: P(Y = 1) versus x (S-shaped curve) and ln[p/(1 − p)] versus x (straight line).]


INTERPRETATION OF PARAMETERS

  • Consider p = 1 (a single predictor X). Let X* = X + 1 (i.e., a one-unit increase in X). Then the odds ratio is:

  • exp(β1): the odds ratio for a 1-unit change in X

  • β1: the log-odds ratio for a 1-unit change in X
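The step behind this (shown as an image on the slide): with odds(X) = exp(β0 + β1X),

    odds(X + 1) / odds(X) = exp(β0 + β1(X + 1)) / exp(β0 + β1X) = exp(β1)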



ESTIMATION OF PARAMETERS

Yi~Ber(i).

Nonlinear equations in s. No closed form. Need iterative methods in computer!
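The likelihood being maximized (shown as an image on the slide) is, in log form:

    ℓ(β) = Σ_i [ yi·ln(πi) + (1 − yi)·ln(1 − πi) ],   with πi = exp(xi′β) / (1 + exp(xi′β))

Setting ∂ℓ/∂β = 0 gives the nonlinear equations referred to above; they are solved iteratively (e.g., Newton-Raphson / Fisher scoring).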


MODEL CHECK

  • Since the errors, εi, take only two values in logistic regression, the "usual" residuals will not help with model checks. But there are "deviance residuals" in this case.


MODEL CHECK

  • You can plot devi versus i, which is called an index plot of deviance residuals, to identify outlying residuals. But this plot does not indicate whether these residuals should be treated as outliers.

  • There are also analogues of common methods used for linear regression, such as leverage values and influence diagnostics (DFFITS, Cook's distance)…

  • NOTE: An alternative for predicting binary response is discriminant analysis. However, this approach assumes X’s are jointly distributed as multivariate normal distribution. So, it is more reasonable when X’s are continuous. Otherwise, logistic regression should be preferred.


Binary Logistic Regression

  • A researcher is interested in the likelihood of gun ownership in the US, and what would predict that.

  • He uses the 2002 GSS to test the following research hypotheses:

    • Men are more likely to own guns than women

    • The older persons are, the more likely they are to own guns

    • White people are more likely to own guns than those of other races

    • The more educated persons are, the less likely they are to own guns


Binary Logistic Regression

  • Variables are measured as such:

    Dependent:

    Havegun: no gun = 0, own gun(s) = 1

    Independent:

    • Sex: men = 0, women = 1

    • Age: entered as number of years

    • White: all other races = 0, white =1

    • Education: entered as number of years

      SPSS: Analyze → Regression → Binary Logistic

      Enter your variables; for the output below, under Options, "iteration history" was checked.
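For readers without SPSS, a rough Python equivalent with statsmodels is sketched below; the file name and column names are assumptions, not the presentation's materials.

    import numpy as np
    import pandas as pd
    import statsmodels.formula.api as smf

    # Hypothetical data frame mirroring the variables described above:
    # havegun (0/1), sex (0 = male, 1 = female), age (years), white (0/1), educ (years)
    df = pd.read_csv("gss2002_guns.csv")   # placeholder file name

    model = smf.logit("havegun ~ sex + age + white + educ", data=df).fit()
    print(model.summary())        # coefficients, Wald tests, log-likelihood
    print(np.exp(model.params))   # Exp(B): odds ratios for a 1-unit change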


Binary Logistic Regression

SPSS Output: Some descriptive information first…


Binary Logistic Regression

The maximum likelihood process stops at the third iteration and yields an intercept (−.625) for a model with no predictors.

A measure of fit, −2 log likelihood, is generated. The equation producing it is:

−2 ∑ ( Yi·ln[P(Yi)] + (1 − Yi)·ln[1 − P(Yi)] )

This is simply the relationship between the observed values for each case in your data and the model's prediction for each case. The "negative 2" makes this number distribute as a χ² distribution.

In a perfect model, -2 log likelihood would equal 0. Therefore, lower numbers imply better model fit.



Binary Logistic Regression

Originally, the "best guess" for each person in the data set is 0: has no gun!

This is the model for the log odds when every other potential variable equals zero (the null model). It predicts P = .651, as above: 1/(1 + e^a) = 1/(1 + .535).

Real P = .349.

If you added each…


Binary Logistic Regression

Next are iterations for our full model…


Binary Logistic Regression

Goodness-of-fit statistics for new model come next…

Test of the new model vs. the intercept-only model (the null model), based on the difference of the −2LL of each. The difference has a χ² distribution. Is the new −2LL significantly smaller?

−2 ∑ ( Yi·ln[P(Yi)] + (1 − Yi)·ln[1 − P(Yi)] )

The −2LL number is "ungrounded," but it has a χ² distribution. Smaller is better. In a perfect model, −2 log likelihood would equal 0.

These are attempts to replicate R² using information based on −2 log likelihood (the Cox & Snell R² cannot equal 1).

Assessment of new model’s predictions
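A sketch of the model-vs-null comparison in Python; the two −2LL values are placeholders for illustration, not the numbers from the SPSS output.

    from scipy import stats

    # Hypothetical -2 log-likelihoods for the null (intercept-only) and full models
    neg2ll_null = 1267.29   # assumed value
    neg2ll_full = 1232.77   # assumed value

    lr_chi2 = neg2ll_null - neg2ll_full   # difference is chi-square distributed
    df = 4                                # number of added predictors
    p_value = stats.chi2.sf(lr_chi2, df)
    print(lr_chi2, p_value)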


Binary Logistic Regression

Interpreting Coefficients…

ln[p/(1-p)] = a + b1X1 + b2X2 + b3X3 + b4X4

[Slide graphic: the model's intercept a and slopes b1-b4 on X1-X4, with e^b (Exp(B)) shown for each coefficient.]

Which b’s are significant?

Being male, getting older, and being white have a positive effect on the likelihood of owning a gun. On the other hand, education does not significantly affect owning a gun.


Binary Logistic Regression

  • ln[p/(1 − p)] = a + b1X1 + … + bkXk is the power to which you need to raise e to get the odds p/(1 − p). So:

    p/(1 − p) = e^(a + b1X1 + … + bkXk)

  • Plug in values of x to get the odds (= p/(1 − p)).

The coefficients can be manipulated as follows:

Odds = p/(1 − p) = e^(a + b1X1 + b2X2 + b3X3 + b4X4) = e^a · (e^b1)^X1 · (e^b2)^X2 · (e^b3)^X3 · (e^b4)^X4

Odds = p/(1 − p) = e^(a + .898X1 + .008X2 + 1.249X3 − .056X4) = e^(−1.864) · (e^.898)^X1 · (e^.008)^X2 · (e^1.249)^X3 · (e^−.056)^X4


Binary Logistic Regression

The coefficients can be manipulated as follows:

Odds = p/(1 − p) = e^(a + b1X1 + b2X2 + b3X3 + b4X4) = e^a · (e^b1)^X1 · (e^b2)^X2 · (e^b3)^X3 · (e^b4)^X4

Odds = p/(1 − p) = e^(−2.246 − .780X1 + .020X2 + 1.618X3 − .023X4) = e^(−2.246) · (e^−.780)^X1 · (e^.020)^X2 · (e^1.618)^X3 · (e^−.023)^X4

Each coefficient changes the odds by a multiplicative amount, e^b: every unit increase in X multiplies the odds by e^b.

In the example above, e^b = Exp(B) in the last column.
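A quick Python check of the Exp(B) column using the coefficients quoted above:

    import numpy as np

    # Slide coefficients: intercept and slopes for (sex, age, white, educ)
    a = -2.246
    b = np.array([-0.780, 0.020, 1.618, -0.023])

    exp_b = np.exp(b)             # Exp(B): multiplicative change in the odds
    pct = (exp_b - 1) * 100       # percent change in the odds per 1-unit increase
    print(np.round(exp_b, 3))     # approx. [0.458, 1.020, 5.043, 0.977]
    print(np.round(pct, 1))       # approx. [-54.2, 2.0, 404.3, -2.3]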


Binary Logistic Regression


For Sex: e^(−.780) = .458 … If you subtract 1 from this value, you get the proportionate change in the odds associated with being female (Sex = 1), −.542. In percent terms, the odds of owning a gun decrease by 54.2% for women.

Age: e^(.020) = 1.020. A one-year increase in age increases the odds of owning a gun by 2%.

White: e^(1.618) = 5.044 … Being white increases the odds of owning a gun by 404%.

Educ: e^(−.023) = .977 … not significant.


Binary Logistic Regression


How would a 10-year increase in age affect the odds? Recall that (e^b)^X is the equation component for a variable. For 10 years, (1.020)^10 = 1.219. The odds jump by 22% for a ten-year increase in age.

Note: You’d have to know the current prediction level for the dependent variable to know if this percent change is actually making a big difference or not!


Binary Logistic Regression

For our problem, P = e^(−2.246 − .780X1 + .020X2 + 1.618X3 − .023X4) / [1 + e^(−2.246 − .780X1 + .020X2 + 1.618X3 − .023X4)]

For a man, age 30, Latino, with 12 years of education, what does P equal?

Let's solve for e^(−2.246 − .780X1 + .020X2 + 1.618X3 − .023X4) = e^(−2.246 − .780(0) + .020(30) + 1.618(0) − .023(12))

= e^(−2.246 − 0 + .6 + 0 − .276) = e^(−1.922) = 2.71828^(−1.922) = .146

Therefore,

P = .146 / 1.146 = .127. The probability that the 30-year-old Latino with 12 years of education will own a gun is .127!!! Or you could say there is a 12.7% chance.
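The same worked example in Python, using the coefficients and profile exactly as above:

    import math

    # Profile: male (X1 = 0), age 30 (X2 = 30), not white (X3 = 0), 12 years of education (X4 = 12)
    eta = -2.246 - 0.780 * 0 + 0.020 * 30 + 1.618 * 0 - 0.023 * 12   # linear predictor
    p = math.exp(eta) / (1 + math.exp(eta))
    print(round(eta, 3), round(p, 3))   # eta = -1.922, p about 0.13 (the slide's .127, up to rounding)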


Binary Logistic Regression

Inferential statistics are as before:

  • In model fit, if the χ² test is significant, the expanded model (with your variables) improves prediction.

  • This Chi-squared test tells us that as a set, the variables improve classification.


Binary Logistic Regression

Inferential statistics are as before:

  • The significance of the coefficients is determined by a "Wald test." Wald is χ² with 1 df; it equals the square of the t (z) statistic and has exactly the same two-tailed p-value.
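Concretely (not shown on the slide in this form): for a coefficient b with standard error s.e.,

    Wald = ( b / s.e. )² ~ χ² with 1 df,   equivalent to comparing z = b / s.e. with ±1.96 at α = .05.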


Binary Logistic Regression

So how would I do hypothesis testing? An Example:

  • Significance test at α-level = .05

  • Critical χ² (df = 1) = 3.84

  • To find if there is a significant slope in the population,

    H0: β = 0

    Ha: β ≠ 0

  • Collect Data

  • Calculate Wald, which works like t (z): z = (b − β0) / s.e., and Wald = z² (note that 1.96 × 1.96 = 3.84)

  • Make decision about the null hypothesis

  • Find P-value

Reject the null for Male, age, and white. Fail to reject the null for education. There is a 24.2% chance that the sample came from a population where the education coefficient equals 0.



MULTINOMIAL LOGISTIC REGRESSION

  • There are many ways of constructing polytomous regression.

  • 1. Logistic regression with respect to a baseline category (e.g., the last category).

    For nominal response:
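The slide's equations are images; the standard baseline-category form, with category J as the baseline, is:

    ln( πij / πiJ ) = αj + βj′·xi,   j = 1, …, J − 1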


MULTINOMIAL LOGISTIC REGRESSION

2. Adjacent categories logits (for ordinal data):
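In the standard adjacent-categories form (the slide's equation is an image):

    ln( πij / πi,j+1 ) = αj + βj′·xi,   j = 1, …, J − 1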


MULTINOMIAL LOGISTIC REGRESSION

3. Cumulative logits for ordinal variables.

4. Continuation-ratio logits for ordinal variables.

5. Proportional odds model for ordinal variables.

(See Agresti!)