
A Review of Widely-Used Statistical Methods



  1. A Review of Widely-Used Statistical Methods

  2. REVIEW OF FUNDAMENTALS • When testing hypotheses, all statistical methods will always be testing the null. • Null Hypothesis? • No difference/no relationship • If we do not reject the null, conclusion? • Found no difference/no relationship • If we do decide to reject the null, conclusion? • A significant relationship/difference is found and reported • The observed relationship/difference is too large to be attributable to chance/sampling error.

  3. How do we decide to reject/not reject the null? • Statistical tests of significance always test the null and always report α. • α (sig. level)—the probability of erroneously rejecting a true null based on sample data. • α represents the odds of being wrong if we decide to reject the null: • the probability that the null is in fact true and that any apparent relationship/difference is a result of chance/sampling error and, thus • the odds of being wrong if we report a significant relationship/difference. • Rule of thumb for deciding to reject/not reject the null?

  4. STATISTICAL DATA ANALYSIS COMMON TYPES OF ANALYSIS? • Examine Strength and Direction of Relationships • Bivariate (e.g., Pearson Correlation—r) • Between one variable and another: Y = a + b1x1 • Multivariate (e.g., Multiple Regression Analysis) • Between one dep. var. and an independent variable, while holding all other independent variables constant: Y = a + b1x1 + b2x2 + b3x3 + … + bkxk • Compare Groups • Between Proportions (e.g., Chi-Square Test—χ²) H0: P1 = P2 = P3 = … = Pk • Between Means (e.g., Analysis of Variance) H0: µ1 = µ2 = µ3 = … = µk Let's first review some fundamentals.

  5. Remember: Level of measurement determines choice of statistical method. Statistical Techniques and Levels of Measurement:

                           INDEPENDENT VARIABLE
  DEPENDENT    NOMINAL/CATEGORICAL      METRIC (ORDERED METRIC or HIGHER)
  NOMINAL      Chi-Square               Discriminant Analysis
               Fisher's Exact Prob.     Logit Regression
  METRIC       T-Test                   Correlation (and Covariance) Analysis
               Analysis of Variance     Regression Analysis

  6. Correlation and Covariance: Measures of Association Between Two Variables Often we are interested in the strength and nature of the relationship between two variables. Two indices that measure the linear relationship between two continuous/metric variables are: Covariance Correlation Coefficient (Pearson Correlation)

  7. Covariance Covariance is a measure of the linear association between two metric variables (i.e., ordered metric, interval, or ratio variables). Covariance (for a sample) is computed as follows: sxy = Σ(xi − x̄)(yi − ȳ) / (n − 1) Positive values indicate a positive relationship. Negative values indicate a negative (inverse) relationship.
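
A minimal Python sketch of the sample-covariance formula above (the deck itself works in SPSS; this is only an illustration of the arithmetic):

```python
def sample_covariance(x, y):
    """Sample covariance: s_xy = sum((x_i - x_bar) * (y_i - y_bar)) / (n - 1)."""
    n = len(x)
    x_bar = sum(x) / n
    y_bar = sum(y) / n
    return sum((xi - x_bar) * (yi - y_bar) for xi, yi in zip(x, y)) / (n - 1)
```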

  8. Covariance (sxy) of Two Variables • Example: Golfing Study A golf enthusiast is interested in investigating the relationship, if any, between golfers' driving distance (x) and their 18-hole score (y). He uses the following sample data (i.e., data from n = 6 golfers) to examine the issue:

  x = Average Driving Distance (yards)    y = Golfer's Average 18-Hole Score
  277.6                                   69
  259.5                                   71
  269.1                                   70
  267.0                                   70
  255.6                                   71
  272.9                                   69

  9. Covariance (sxy) of two variables • Example: Golfing Study (n = 6)

    x        y      x − x̄     y − ȳ     (x − x̄)(y − ȳ)
  277.6     69      10.65     −1.0        −10.65
  259.5     71      −7.45      1.0         −7.45
  269.1     70       2.15      0             0
  267.0     70       0.05      0             0
  255.6     71     −11.35      1.0        −11.35
  272.9     69       5.95     −1.0         −5.95
  Average:  x̄ = 267.0, ȳ = 70.0          Total: −35.40
  Std. Dev.: sx = 8.2192, sy = 0.8944

  10. Covariance • Example: Golfing Study Covariance: sxy = −35.40 / (6 − 1) = −7.08 • What can we say about the relationship between the two variables? • The relationship is negative/inverse. • That is, the longer a golfer's driving distance is, the lower (better) his/her score is likely to be. • How strong is the relationship between x and y? • Hard to tell; there is no standard metric to judge it by! • Values of covariance depend on the units of measurement for x and y. • WHAT DOES THIS MEAN?

  11. Covariance • It means: • If driving distance (x) were measured in feet, rather than yards, even though it is the same relationship (using the same data), the covariance sxy would have been much larger. WHY? • Because x-values would be much larger, and thus the (xi − x̄) deviations would be much larger which, in turn, would make sxy much larger. • SOLUTION: Correlation Coefficient comes to the rescue! • Correlation Coefficient (r) is a standard measure/metric for judging strength of linear relationship that, unlike covariance, is not affected by the units of measurement for x and y. • This is why the correlation coefficient (r) is much more widely used than covariance.

  12. Correlation Coefficient • Correlation Coefficient rxy (Pearson/simple correlation) is a measure of linear association between two variables. • It may or may not represent causation. The correlation coefficient rxy (for sample data) is computed as follows: rxy = sxy / (sx sy) where sxy = Covariance of x & y, sx = Std. Dev. of x, sy = Std. Dev. of y
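
A small sketch of this formula, computing r from the sample covariance and standard deviations (equivalent, up to rounding, to numpy's np.corrcoef):

```python
import math

def pearson_r(x, y):
    """Pearson correlation: r_xy = s_xy / (s_x * s_y), all with n - 1 denominators."""
    n = len(x)
    x_bar, y_bar = sum(x) / n, sum(y) / n
    s_xy = sum((a - x_bar) * (b - y_bar) for a, b in zip(x, y)) / (n - 1)
    s_x = math.sqrt(sum((a - x_bar) ** 2 for a in x) / (n - 1))
    s_y = math.sqrt(sum((b - y_bar) ** 2 for b in y) / (n - 1))
    return s_xy / (s_x * s_y)
```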

  13. Correlation Coefficient = r • Francis Galton • (English researcher, inventor of fingerprinting, and cousin of Charles Darwin) • In 1888, plotted lengths of forearms and head sizes to see to what degree one could be predicted by the other. • Stumbled upon the mathematical properties of correlation plots (e.g., y intercept, size of slope, etc.). • RESULT: An objective measure of how two variables are "co-related"--the CORRELATION COEFFICIENT (Pearson Correlation), r. • Assesses the strength of a relationship based strictly on empirical data, and independent of human judgment or opinion.

  14. Correlation Coefficient (Pearson Correlation) = r (Karl Pearson, a Galton student and the founder of modern statistics) What do you use it for? To examine: a. Whether a relationship exists between two metric variables • e.g., income and education, or workload and job satisfaction and b. What the nature and strength of that relationship may be. Range of Values for r?

  15. Correlation Coefficient (Pearson Correlation) rxy −1 ≤ r ≤ +1 • r-values closer to −1 or +1 indicate stronger linear relationships. • r-values closer to zero indicate a weaker relationship. • NOTE: Once rxy is calculated, we need to see whether it is statistically significant (if using sample data). • Null Hypothesis when using r? • H0: ρ = 0 There is no linear relationship between the two variables.

  16. Correlation Coefficient (Pearson Correlation) rxy • Example: Golfing Study A golf enthusiast is interested in investigating the relationship, if any, between golfers' driving distance (x) and their 18-hole score (y). He uses the following sample data (i.e., data from n = 6 golfers) to examine the issue:

  x = Average Driving Distance (yards)    y = Average 18-Hole Score
  277.6                                   69
  259.5                                   71
  269.1                                   70
  267.0                                   70
  255.6                                   71
  272.9                                   69

  17. Correlation Coefficient (Pearson Correlation) rxy • Example: Golfing Study (n = 6)

    x        y      x − x̄     y − ȳ     (x − x̄)(y − ȳ)
  277.6     69      10.65     −1.0        −10.65
  259.5     71      −7.45      1.0         −7.45
  269.1     70       2.15      0             0
  267.0     70       0.05      0             0
  255.6     71     −11.35      1.0        −11.35
  272.9     69       5.95     −1.0         −5.95
  Average:  x̄ = 267.0, ȳ = 70.0          Total: −35.40
  Std. Dev.: sx = 8.2192, sy = 0.8944

  18. Correlation Coefficient (Pearson Correlation) rxy • Example: Golfing Study We had calculated the sample Covariance sxy to be: sxy = −35.40 / (6 − 1) = −7.08 Correlation Coefficient (Pearson Correlation): rxy = sxy / (sx sy) = −7.08 / (8.2192 × 0.8944) = −0.96 Conclusion? Not only is the relationship negative, but it is also extremely strong!
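
To sanity-check the hand computation, a quick numpy sketch with the six golfers' data from the slides:

```python
import numpy as np

x = np.array([277.6, 259.5, 269.1, 267.0, 255.6, 272.9])  # avg driving distance (yards)
y = np.array([69, 71, 70, 70, 71, 69])                    # avg 18-hole score

s_xy = np.cov(x, y, ddof=1)[0, 1]                   # sample covariance: -7.08
r = s_xy / (np.std(x, ddof=1) * np.std(y, ddof=1))  # about -0.96
print(s_xy, r, np.corrcoef(x, y)[0, 1])             # corrcoef agrees with the manual r
```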

  19. Correlation Coefficient (Pearson Correlation): • To understand the practical meaning of r, we can square it. • What would r² mean/represent? • e.g., r = 0.96 → r² = 0.92, or 92% • r² represents the proportion (%) of the total/combined variation in both x and y that is accounted for by the joint variation (covariation) of x and y together (x with y and y with x) • r² always represents a % • Why do we show more interest in r, rather than r²?

  20. Correlation Coefficient: Computation r² = (Covariation of X and Y together) / (All of the variation of X & Y combined) Example: X = Age, Y = Blood Pressure

   X    Y    X − X̄   Y − Ȳ   (X − X̄)(Y − Ȳ)   (X − X̄)²   (Y − Ȳ)²
   4   12     −3      −4          12              9          16
   6   19     −1       3          −3              1           9
   9   14      2      −2          −4              4           4
   …    …      …       …           …              …           …
  X̄=7  Ȳ=16                 Σ(X − X̄)(Y − Ȳ)   Σ(X − X̄)²   Σ(Y − Ȳ)²

NOTE: Once r is calculated, we need to see if it is statistically significant (if sample data). That is, we need to test H0: ρ = 0

  21. Correlation Coefficient? Suppose the correlation between X (say, Students' GMAT Scores) and Y (their 1st year GPA in MBA program) is r = +0.48 and is statistically significant. How would we interpret this? GMAT score and 1st year GPA are positively related so that as values of one variable increase, values of the other also tend to increase, and r² = (0.48)² = 23% of variations/differences in students' GPAs are explained by (or can be attributed to) variations/differences in their GMAT scores. Let's now practice on SPSS Menu Bar: Analyze, Correlate, Bivariate, Pearson EXAMPLE: Using data in SPSS File Salary.sav we wish to see if beginning salary is related to seniority, age, work experience, and education
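
The slide uses SPSS menus; for readers working outside SPSS, scipy offers an equivalent significance test. The data frame below is a hypothetical miniature stand-in for Salary.sav (the column names and values are assumptions, not the file's actual contents):

```python
import pandas as pd
from scipy.stats import pearsonr

# Hypothetical stand-in for Salary.sav; column names are assumed for illustration
df = pd.DataFrame({
    "begin_salary": [12000, 13500, 11000, 15000, 14200, 12800],
    "education":    [12, 16, 12, 18, 16, 14],
})

# pearsonr returns r and the two-sided p-value for H0: rho = 0
r, p = pearsonr(df["begin_salary"], df["education"])
print(f"r = {r:.2f}, p = {p:.4f}")  # report significant if p < alpha (e.g., 0.05)
```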

  22. STATISTICAL DATA ANALYSIS COMMON TYPES OF ANALYSIS: • Examine Strength and Direction of Relationships • Bivariate (e.g., Pearson Correlation—r) • Between one variable and another: Y = a + b1x1 • Multivariate (e.g., Multiple Regression Analysis) • Between one dep. var. and an independent variable, while holding all other independent variables constant: Y = a + b1x1 + b2x2 + b3x3 + … + bkxk • Compare Groups • Between Proportions (e.g., Chi-Square Test—χ²) H0: P1 = P2 = P3 = … = Pk • Between Means (e.g., Analysis of Variance) H0: µ1 = µ2 = µ3 = … = µk

  23. STATISTICAL DATA ANALYSIS Chi-Square Test of Independence? • Developed by Karl Pearson in 1900. • Is used to compare two or more groups regarding a categorical characteristic. • That is, to compare proportions/percentages: • Examines whether proportions of different groups of subjects (e.g., managers vs professionals vs operatives) are equal/different across two or more categories (e.g., males vs females). • Examines whether or not a relationship exists between two categorical/nominal variables (e.g., employee status and gender) • A categorical DV and a categorical IV. • EXAMPLE? • Is smoking a function of gender? That is, is there a difference between the percentages of males and females who smoke?

  24. Chi-Square Test of Independence • Research Sample (n = 100):

   ID   Gender   Smoking Status
    1     0          1
    2     1          0
    3     1          1
    4     1          0
    5     0          0
    …     …          …
  100     1          0

  Coding: Gender: 0 = Male, 1 = Female; Smoking Status: 1 = Smoker, 0 = Non-Smoker

• The dependent variable (smoking status) and the independent variable (gender) are both categorical. • Null Hypothesis? H0: There is no difference in the percentages of males and females who smoke/don't smoke (i.e., Smoking is not a function of gender). QUESTION: Logically, what would be the first thing you would do?

  25. Chi-Square Test of Independence H0: There is no difference in the percentages of males and females who smoke (Smoking is not a function of gender). H1: The two groups are different with respect to the proportions who smoke. TESTING PROCEDURE AND THE INTUITIVE LOGIC: • Construct a contingency table: Cross-tabulate the observations and compute the Observed (actual) Frequencies (Oij):

               Male        Female      TOTAL
  Smoker       O11 = 15    O12 = 25     40
  Nonsmoker    O21 = 5     O22 = 55     60
  TOTAL        20          80           n = 100

  26. Chi-Square Test of Independence • Next, ask yourself: What numbers would you expect to find in the table if you were certain that there was absolutely no difference between the percentages of males and females who smoked (i.e., if you expected the Null to be true)? That is, compute the Expected Frequencies (Eij). • Hint: What % of all the subjects are smokers/non-smokers?

               Male        Female      TOTAL
  Smoker       O11 = 15    O12 = 25     40
  Nonsmoker    O21 = 5     O22 = 55     60
  TOTAL        20          80           n = 100

  27. Chi-Square Test of Independence • If there were absolutely no differences between the two groups with regard to smoking, you would expect 40% of individuals in each group to be smokers (and 60% non-smokers). • Compute and place the Expected Frequencies (Eij) in the appropriate cells:

               Male                 Female               TOTAL
  Smoker       O11 = 15, E11 = 8    O12 = 25, E12 = 32    40
  Nonsmoker    O21 = 5,  E21 = 12   O22 = 55, E22 = 48    60
  TOTAL        20                   80                    n = 100

NOW WHAT? What is the next logical step?
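
The expected counts come straight from the margins: Eij = (row total × column total) / n. A minimal sketch of that computation:

```python
import numpy as np

observed = np.array([[15, 25],   # smokers:    male, female
                     [ 5, 55]])  # nonsmokers: male, female

row_totals = observed.sum(axis=1, keepdims=True)  # [[40], [60]]
col_totals = observed.sum(axis=0, keepdims=True)  # [[20, 80]]
n = observed.sum()                                # 100

expected = row_totals * col_totals / n            # E_ij = row_i * col_j / n
print(expected)                                   # [[ 8. 32.], [12. 48.]]
```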

  28. Chi-Square Test of Independence Compare the Observed and Expected frequencies--i.e., examine the (Oij − Eij) discrepancies.

               Male                 Female               TOTAL
  Smoker       O11 = 15, E11 = 8    O12 = 25, E12 = 32    40
  Nonsmoker    O21 = 5,  E21 = 12   O22 = 55, E22 = 48    60
  TOTAL        20                   80                    n = 100

QUESTION: What can we infer if the observed/actual frequencies happen to be reasonably close (or identical) to the expected frequencies?

  29. Chi-Square Test of Independence So, the key to answering our original question lies in the size of the discrepancies between observed and expected frequencies. If the observed frequencies were reasonably close to the expected frequencies: • Reasonably certain that no difference exists between percentages of males and females who smoke, • Good chance that H0 is true • That is, we would be running a large risk of being wrong if we decide to reject it. On the other hand, the farther apart the observed frequencies happen to be from their corresponding expected frequencies: • The greater the chance that percentages of males and females who smoke would be different, • Good chance that H0 is false and should be rejected • That is, we would run a relatively small risk of being wrong if we decide to reject it. What is, then, the next logical step?

  30. Chi-Square Test of Independence • Compute an Overall Discrepancy Index: One way to quantify the total discrepancy between observed (Oij) and expected (Eij) frequencies is to add up all cell discrepancies--i.e., compute Σ(Oij − Eij). • Problem? Positive and negative values of the (Oij − Eij) RESIDUALS for different cells will cancel out. • Solution? Square each (Oij − Eij) and then sum them up--compute Σ(Oij − Eij)². • Any Other Problems? The value of Σ(Oij − Eij)² is impacted by sample size (n). • For example, if you double the number of subjects in each cell, even though cell discrepancies remain proportionally the same, the above discrepancy index will be much larger and may lead to a different conclusion. Solution?

  31. Chi-Square Test of Independence • Divide each (Oij − Eij)² value by its corresponding Eij value before summing them up across all cells • That is, compute an index for average discrepancy per subject. You have just developed the formula for the χ² Statistic: χ² = Σ (Oij − Eij)² / Eij χ² can be intuitively viewed as: An index that shows how much the observed frequencies are in agreement with (or apart from) the expected frequencies (for when the null is assumed to be true). So, let's compute the χ² statistic for our example:

  32. Chi-Square Test of Independence

               Male                 Female               TOTAL
  Smoker       O11 = 15, E11 = 8    O12 = 25, E12 = 32    40
  Nonsmoker    O21 = 5,  E21 = 12   O22 = 55, E22 = 48    60
  TOTAL        20                   80                    n = 100

χ² = (15 − 8)²/8 + (25 − 32)²/32 + (5 − 12)²/12 + (55 − 48)²/48 = 12.76
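
A quick check of the hand computation; with Yates' continuity correction turned off, scipy reproduces the plain Pearson χ² from the formula above:

```python
import numpy as np
from scipy.stats import chi2_contingency

observed = np.array([[15, 25],
                     [ 5, 55]])

# correction=False gives the uncorrected Pearson chi-square computed by hand above
chi2_stat, p, dof, expected = chi2_contingency(observed, correction=False)
print(chi2_stat, dof, p)  # about 12.76 with df = 1, p well below 0.001
```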

  33. Chi-Square Test of Independence Let’s Review: • Obtaining a small c2value means? • Observed frequencies are in close agreement with what we would expect them to be if there were no differences between our comparison groups. • That is, there is a strong likelihood that no difference exists between the percentages of males and females who smoke. • Hence, we would be running a significant risk of being wrong if we were to reject the null hypothesis. That is, ais expected to be relatively large. • Therefore, we should NOT reject the null. • NOTE: Smaller c2 values result in larger alevels (if n remains the same). • A large c2value means?

  34. Chi-Square Test of Independence • A large χ² value means: • Observed frequencies are far apart from what they ought to be if the null hypothesis were true. • That is, there is a strong likelihood for the existence of a difference in the percentages of male and female smokers. • Hence, we would be running a small risk of being wrong if we were to reject the null hypothesis. That is, α is likely to be small. • Thus, we should reject the null. • NOTE: Larger χ² values result in smaller α levels (if n remains the same). But, how large is large? For example, does χ² = 12.76 represent a large enough departure (of observed frequencies) from expected frequencies to warrant rejecting the null? Check out the associated α level! α reflects whether χ² is large enough to warrant rejecting the null.

  35. Chi-Square Test of Independence • Answer: • Consult the table of the probability distribution for the χ² statistic to see what the actual value of α is (i.e., what is the probability that our χ² value is not large enough to be considered significant). • That is, look up the α-level associated with your χ² value (under the appropriate degrees of freedom). • Degrees of Freedom: df = (r − 1)(c − 1) = (2 − 1)(2 − 1) = 1, where r and c are the # of rows and columns of the contingency table.

  36. [Table: χ² probability distribution—critical values by α level and degrees of freedom]

  37. Chi-Square Test of Independence • From the table, the α level for χ² = 10.83 (with df = 1) is 0.001. • Our χ² = 12.76 > 10.83 • QUESTION: For our χ² = 12.76, will α be smaller or greater than 0.001? • Smaller than 0.001 • Therefore, if we reject the null, the odds of being wrong will be even smaller than 1 in 1000. • Can we afford to reject the null? Is it safe to do so? • CONCLUSION? • The % of males and females who smoke are not equal. • That is, smoking is a function of gender. • Can we be more specific? • The percentage of males who smoke is significantly larger than that of the females (75% vs. 31%, respectively) • CAUTION: Select the appropriate percentages to report (Row% vs. Column%)
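
Instead of a printed χ² table, the tail probability can be computed directly. A small sketch using scipy's survival function (1 minus the CDF):

```python
from scipy.stats import chi2

# P(chi-square >= value) under H0, with df = 1
print(chi2.sf(10.83, df=1))  # ~0.001, matching the table entry cited above
print(chi2.sf(12.76, df=1))  # ~0.00035, so alpha < 0.001, as the slide argues
```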

  38. Chi-Square Test of Independence Phi (a non-parametric correlation for categorical data): Φ = √(χ²/n) = √(12.76/100) = 0.357 (Note: the sign is NA)

               Male        Female      TOTAL
  Smoker       O11 = 15    O12 = 25     40
  Nonsmoker    O21 = 5     O22 = 55     60
  TOTAL        20          80           n = 100

Column %: Males who smoke: 15/20 = 75%; Females who smoke: 25/80 = 31%
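
Phi follows directly from the χ² statistic; a one-line sketch of the formula above:

```python
import math

chi2_stat, n = 12.76, 100
phi = math.sqrt(chi2_stat / n)  # 0.357; magnitude only -- the sign is not meaningful
print(phi)
```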

  39. Chi-Square Test of Independence VIOLATION OF ASSUMPTIONS: The χ² test requires expected frequencies (Eij) to be reasonably large. If this requirement is violated, the test may not be applicable. SOLUTION: • For 2 x 2 contingency tables (df = 1), use the Fisher's Exact Probability Test results (automatically reported by SPSS). • That is, look up the α of the Fisher's exact test to arrive at your conclusion. • For larger tables (df > 1), eliminate small cells by combining their corresponding categories in a meaningful way. • That is, recode the variable that is causing small cells into a new variable with fewer categories and then use this new variable to redo the Chi-Square test.
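
For the 2 x 2 case, scipy also provides Fisher's exact test. A hedged sketch on the smoking table (here the expected counts are actually large enough for χ², so this is only to show the call):

```python
from scipy.stats import fisher_exact

observed = [[15, 25],
            [ 5, 55]]

# Exact test suited to 2 x 2 tables with small expected frequencies
odds_ratio, p = fisher_exact(observed)
print(odds_ratio, p)  # odds ratio = 6.6; reject H0 if p < alpha
```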

  40. Chi-Square Test of Independence Let’s now use SPSS to do the same analysis! Menu Bar: Analyze, Descriptive Statistics, Crosstabs Statistics: Chi-Square, Contingency Coefficient. Cells: Observed, Row/Column percentages (for the independent variable) SPSS File: smoker SPSS File: GSS93 Subset

  41. Chi-Square Test of Independence • Suppose we wish to examine the validity of the “gender gap hypothesis” for the 1992 presidential election between Bill Clinton, George Bush, and Ross Perot. SPSS File: Voter

  42. Correlation Coefficient (Pearson Correlation) = r (Karl Pearson, a Galton student and the founder of modern statistics) What do you use it for? To examine whether a relationship exists between two metric variables (e.g., income and education, or workload and job satisfaction) and what the nature and strength of that relationship may be. Range of Values for r? −1 ≤ r ≤ +1 Null Hypothesis when using r? H0: ρ = 0 (There is no relationship between the two variables.)

  43. Correlation Coefficient: • To understand the practical meaning of r, we can square it. • What would r² mean/represent? r² represents the proportion (%) of the total/combined variation in both x and y that is accounted for by the joint variation (covariation) of x and y together (x with y and y with x) • How is it calculated? r² = (Covariation of X and Y together) / (Total variation of X & Y combined) How do we measure/quantify variations? • r² always represents a % • Why do we show more interest in r, rather than r²?

  44. Correlation Coefficient: Computation r² = (Covariation of X and Y together) / (All of the variation of X & Y combined) Example: X = Age, Y = Blood Pressure

   X    Y    X − X̄   Y − Ȳ   (X − X̄)(Y − Ȳ)   (X − X̄)²   (Y − Ȳ)²
   4   12     −3      −4          12              9          16
   6   19     −1       3          −3              1           9
   9   14      2      −2          −4              4           4
   …    …      …       …           …              …           …
  X̄=7  Ȳ=16                 Σ(X − X̄)(Y − Ȳ)   Σ(X − X̄)²   Σ(Y − Ȳ)²

NOTE: Once r is calculated, we need to see if it is statistically significant (if sample data). That is, we need to test H0: ρ = 0

  45. Correlation Coefficient? Suppose the correlation between X (say, Students' GMAT Scores) and Y (their 1st year GPA in MBA program) is r = +0.48 and is statistically significant. How would we interpret this? GMAT score and 1st year GPA are positively related so that as values of one variable increase, values of the other also tend to increase, and r² = (0.48)² = 23% of variations/differences in students' GPAs are explained by (or can be attributed to) variations/differences in their GMAT scores. Let's now practice on SPSS Menu Bar: Analyze, Correlate, Bivariate, Pearson Using data in SPSS File Salary.sav we wish to see if beginning salary is related to seniority, age, work experience, and education

  46. STATISTICAL DATA ANALYSIS COMMON TYPES OF ANALYSIS: • Examine Strength and Direction of Relationships • Bivariate (e.g., Pearson Correlation—r) • Between one variable and another: Y = a + b1x1 • Multivariate (e.g., Multiple Regression Analysis) • Between one dep. var. and an independent variable, while holding all other independent variables constant: Y = a + b1x1 + b2x2 + b3x3 + … + bkxk • Compare Groups • Proportions (e.g., Chi-Square Test—χ²) • Means (e.g., Analysis of Variance)

  47. STATISTICAL DATA ANALYSIS Chi-Square Test of Independence? • To examine whether proportions of different groups of subjects (e.g., managers vs operatives) are equal/different across two or more categories (e.g., males vs females). • To examine whether or not a relationship exists between two categorical/nominal variables (e.g., employee status and gender)--categorical dependent variable, categorical independent variable. • EXAMPLE? • Is smoking a function of gender? That is, is there a difference between the percentages of males and females who smoke?

  48. Chi-Square Test of Independence • Research Sample (n = 100):

   ID   Gender   Smoking Status
    1     0          1
    2     1          0
    3     1          1
    4     1          0
    5     0          0
    …     …          …
  100     1          0

  Coding: Gender: 0 = Male, 1 = Female; Smoking Status: 1 = Smoker, 0 = Non-Smoker

• The dependent variable (smoking status) and the independent variable (gender) are both categorical. • Null Hypothesis? H0: There is no difference in the percentages of males and females who smoke (Smoking is not a function of gender). QUESTION: Logically, what would be the first thing you would do?
