
Chapter 12: Testing Hypotheses

Presentation Transcript


  1. Chapter 12: Testing Hypotheses • Overview • Research and null hypotheses • One- and two-tailed tests • Errors • Testing the difference between two means • t tests

  2. Overview • You already know how to deal with two nominal variables. The grid crosses the independent variable's level of measurement (nominal or interval) with the dependent variable's:
     • Nominal dependent / nominal independent: considers the distribution of one variable across the categories of another variable
     • Nominal dependent / interval independent: considers how a change in a variable affects a discrete outcome
     • Interval dependent / nominal independent: considers the difference between the mean of one group on a variable and the mean of another group
     • Interval dependent / interval independent: considers the degree to which a change in one variable results in a change in another

  3. Overview • TODAY! Testing the differences between groups. You already know how to deal with two nominal variables:
     • Nominal dependent / nominal independent: Lambda
     • Nominal dependent / interval independent: considers how a change in a variable affects a discrete outcome
     • Interval dependent / nominal independent: considers the difference between the mean of one group on a variable and the mean of another group (today's topic)
     • Interval dependent / interval independent: considers the degree to which a change in one variable results in a change in another

  4. Overview • TODAY! Testing the differences between groups. You already know how to deal with two nominal variables:
     • Nominal dependent / nominal independent: Lambda
     • Nominal dependent / interval independent: considers how a change in a variable affects a discrete outcome
     • Interval dependent / nominal independent: Confidence intervals, t-test (today's topic)
     • Interval dependent / interval independent: considers the degree to which a change in one variable results in a change in another

  5. General Examples • Is one group scoring significantly higher on average than another group? • Is a group statistically different from another on a particular dimension? • Is Group A’s mean higher than Group B’s?

  6. Specific Examples Do people living in rural communities live longer than those in urban or suburban areas? Do students from private high schools perform better in college than those from public high schools? Is the average number of years with an employer lower or higher for large firms (over 100 employees) compared to those with fewer than 100 employees?

  7. Testing Hypotheses • Statistical hypothesis testing – A procedure that allows us to evaluate hypotheses about population parameters based on sample statistics. • Research hypothesis (H1) – A statement reflecting the substantive hypothesis. It is always expressed in terms of population parameters, but its specific form varies from test to test. • Null hypothesis (H0) – A statement of “no difference,” which contradicts the research hypothesis and is always expressed in terms of population parameters.

  8. Research and Null Hypotheses
     • One Tail — specifies the hypothesized direction
       • Research Hypothesis: H1: μ2 > μ1, or μ2 − μ1 > 0
       • Null Hypothesis: H0: μ2 = μ1, or μ2 − μ1 = 0
     • Two Tail — direction is not specified (more common)
       • Research Hypothesis: H1: μ2 ≠ μ1, or μ2 − μ1 ≠ 0
       • Null Hypothesis: H0: μ2 = μ1, or μ2 − μ1 = 0

  9. One-Tailed Tests • One-tailed hypothesis test – A hypothesis test in which the alternative is stated in such a way that the probability of making a Type I error is entirely in one tail of a sampling distribution. • Right-tailed test – A one-tailed test in which the sample outcome is hypothesized to be at the right tail of the sampling distribution. • Left-tailed test – A one-tailed test in which the sample outcome is hypothesized to be at the left tail of the sampling distribution.

  10. Two-Tailed Tests • Two-tailed hypothesis test – A hypothesis test in which the region of rejection falls equally within both tails of the sampling distribution.

  11. Probability Values • Z statistic (obtained) – The test statistic computed by converting a sample statistic (such as the mean) to a Z score. The formula for obtaining Z varies from test to test. • P value – The probability associated with the obtained value of Z.
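  As a rough illustration of these two definitions, here is a minimal Python sketch that converts a hypothetical sample mean to an obtained Z and looks up its two-tailed P value; the numbers are invented for illustration, not taken from the chapter.

```python
from math import sqrt
from scipy.stats import norm

# Hypothetical figures: sample mean, hypothesized population mean,
# population standard deviation, and sample size.
y_bar, mu, sigma, n = 52.0, 50.0, 8.0, 100

# Z (obtained): convert the sample mean to a Z score using the standard error.
z_obtained = (y_bar - mu) / (sigma / sqrt(n))

# P value: the probability of a result at least this extreme (two-tailed).
p_value = 2 * norm.sf(abs(z_obtained))

print(f"Z = {z_obtained:.2f}, p = {p_value:.4f}")  # Z = 2.50, p = 0.0124
```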

  12. Probability Values

  13. Probability Values • Alpha (α) – The level of probability at which the null hypothesis is rejected. It is customary to set alpha at the .05, .01, or .001 level.

  14. Five Steps to Hypothesis Testing
     (1) Making assumptions
     (2) Stating the research and null hypotheses and selecting alpha
     (3) Selecting the sampling distribution and specifying the test statistic
     (4) Computing the test statistic
     (5) Making a decision and interpreting the results
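  A hedged sketch of the five steps applied to a one-sample t test in Python; the data, hypothesized mean, and alpha below are invented for illustration.

```python
from scipy.stats import ttest_1samp

# (1) Assumptions: a random sample measured at the interval level, with an
#     approximately normal sampling distribution of the mean.
hours = [6.1, 7.3, 5.8, 6.9, 7.0, 6.4, 5.5, 7.8, 6.2, 6.6]  # hypothetical data

# (2) Hypotheses and alpha: H0: mu = 7, H1: mu != 7 (two-tailed), alpha = .05.
mu_0, alpha = 7.0, 0.05

# (3) Sampling distribution / test statistic: t distribution with df = N - 1.
# (4) Compute the test statistic.
t_obtained, p_value = ttest_1samp(hours, popmean=mu_0)

# (5) Decision: reject H0 if p < alpha, otherwise fail to reject.
decision = "reject H0" if p_value < alpha else "do not reject H0"
print(f"t = {t_obtained:.2f}, p = {p_value:.4f} -> {decision}")
```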

  15. Type I and Type II Errors
     Based on sample results, the decision made is to reject H0 or not to reject H0. In the population, H0 is either true or false:
     • H0 is true, we reject H0: Type I error (α)
     • H0 is true, we do not reject H0: correct decision
     • H0 is false, we reject H0: correct decision
     • H0 is false, we do not reject H0: Type II error
     • Type I error (false rejection error) – the probability (equal to α) associated with rejecting a true null hypothesis.
     • Type II error (false acceptance error) – the probability associated with failing to reject a false null hypothesis.
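  To see why the Type I error rate equals α, a quick simulation (not from the slides) can draw many samples from a population where H0 is true and count how often a test at α = .05 rejects it anyway.

```python
import numpy as np
from scipy.stats import ttest_1samp

rng = np.random.default_rng(0)
alpha, n_samples, n = 0.05, 10_000, 30

# Draw repeated samples from a population where H0 (mu = 0) is TRUE,
# and count how often the test (wrongly) rejects H0.
rejections = 0
for _ in range(n_samples):
    sample = rng.normal(loc=0.0, scale=1.0, size=n)
    _, p = ttest_1samp(sample, popmean=0.0)
    rejections += p < alpha

print(f"Observed Type I error rate: {rejections / n_samples:.3f}")  # close to 0.05
```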

  16. t Test • t statistic (obtained) – The test statistic computed to test the null hypothesis about a population mean when the population standard deviation is unknown and is estimated using the sample standard deviation. • t distribution – A family of curves, each determined by its degrees of freedom (df). It is used when the population standard deviation is unknown and the standard error is estimated from the sample standard deviation. • Degrees of freedom (df) – The number of scores that are free to vary in calculating a statistic.

  17. t distribution

  18. t distribution table
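  The critical values in a printed t table can be reproduced with scipy; a small sketch, assuming a two-tailed test at α = .05, shows how the critical t shrinks toward the Z critical value of 1.96 as the degrees of freedom grow.

```python
from scipy.stats import t, norm

alpha = 0.05  # two-tailed

print("  df  critical t")
for df in (10, 30, 60, 120, 1000):
    # Two-tailed critical value: the t score cutting off alpha/2 in each tail.
    t_crit = t.ppf(1 - alpha / 2, df)
    print(f"{df:>4}  {t_crit:.3f}")

# For comparison, the normal (Z) critical value at the same alpha:
print(f"   Z  {norm.ppf(1 - alpha / 2):.3f}")  # 1.960
```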

  19. t-test for the difference between two means
     Is the value of Ȳ2 − Ȳ1 significantly different from 0? This test gives you the answer:
     t (obtained) = (the difference between the two means) ÷ (the estimated standard error of the difference)
     If the obtained t is greater than 1.96 in absolute value, the difference between the means is significantly different from zero at an alpha of .05 (or a 95% confidence level). The critical value of t will be higher than 1.96 if the total N is less than 122; see Appendix C for exact critical values when N < 122.
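  For raw data, scipy's two-sample t test runs this comparison directly; a minimal sketch with invented scores for two groups, using the unequal-variance (Welch) option to match the standard error on the next slide.

```python
from scipy.stats import ttest_ind

# Hypothetical raw scores for two groups (not from the chapter).
group_a = [10.1, 9.8, 11.2, 10.5, 9.9, 10.7, 10.3]
group_b = [9.2, 9.6, 9.0, 9.8, 9.4, 9.1, 9.7]

# equal_var=False gives the unequal-variance (Welch) version of the test.
t_obtained, p_value = ttest_ind(group_a, group_b, equal_var=False)

print(f"t = {t_obtained:.2f}, p = {p_value:.4f}")
print("significant at alpha = .05" if p_value < 0.05 else "not significant at alpha = .05")
```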

  20. Estimated Standard Error of the difference between two means, assuming unequal variances
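  The slide's formula appears as an image in the original deck; assuming it is the usual unequal-variance estimate, s(Ȳ1 − Ȳ2) = √(s1²/n1 + s2²/n2), it can be written as a small helper.

```python
from math import sqrt

def se_diff(s1, n1, s2, n2):
    """Estimated standard error of the difference between two means,
    assuming unequal population variances: sqrt(s1^2/n1 + s2^2/n2)."""
    return sqrt(s1 ** 2 / n1 + s2 ** 2 / n2)
```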

  21. t-test and Confidence Intervals
     The t-test essentially builds a confidence interval around the difference between the two means. Rearranging the t formula gives that interval: the difference between the two means, plus or minus the critical t value times the estimated standard error of the difference. If this confidence interval contains zero, we cannot conclude that there is a difference between the means for the two samples.
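  A sketch of the corresponding 95% confidence interval around the difference between two means. The helper repeats the unequal-variance standard error and, as an assumption, approximates the degrees of freedom as n1 + n2 − 2 (the deck may use a different adjustment); the summary statistics in the usage line are invented.

```python
from math import sqrt
from scipy.stats import t

def ci_diff(mean1, s1, n1, mean2, s2, n2, alpha=0.05):
    """Confidence interval around the difference between two means,
    using the unequal-variance standard error sqrt(s1^2/n1 + s2^2/n2).
    The df of n1 + n2 - 2 is an approximation assumed here."""
    diff = mean1 - mean2
    se = sqrt(s1 ** 2 / n1 + s2 ** 2 / n2)
    t_crit = t.ppf(1 - alpha / 2, df=n1 + n2 - 2)
    return diff - t_crit * se, diff + t_crit * se

# Invented summary statistics; if the interval contains zero, the
# difference is not significant at the chosen alpha.
low, high = ci_diff(10.5, 1.2, 40, 9.9, 1.1, 45)
print(f"95% CI for the difference: ({low:.2f}, {high:.2f})")
```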

  22. Why a t score and not a Z score? • Use of the Z distribution assumes the population standard error of the difference is known. In practice, we have to estimate it, and so we use a t score. • When N gets larger than about 50, the t distribution converges on the Z distribution, so the results are essentially identical whether you use t or Z. • In most sociological studies, you will not need to worry about the distinction between Z and t.

  23. t-Test Example 1
     Mean pay according to gender:
              N    Mean Pay   S.D.
     Women    46   $10.29     0.8766
     Men      54   $10.06     0.9051
     What can we conclude about the difference in wages?
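  One way to work this example, assuming the unequal-variance t test from the previous slides: scipy's ttest_ind_from_stats accepts the summary statistics straight from the table.

```python
from scipy.stats import ttest_ind_from_stats

# Summary statistics taken from the slide (Example 1).
t_obtained, p_value = ttest_ind_from_stats(
    mean1=10.29, std1=0.8766, nobs1=46,   # women
    mean2=10.06, std2=0.9051, nobs2=54,   # men
    equal_var=False,                      # unequal-variance (Welch) version
)
print(f"t = {t_obtained:.2f}, p = {p_value:.3f}")
# The obtained t falls well below the 1.96 cutoff, so this wage
# difference is not statistically significant at alpha = .05.
```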

  24. t-Test Example 2
     Mean pay according to gender:
              N    Mean Pay   S.D.
     Women    57   $9.68      1.0550
     Men      51   $10.32     0.9461
     What can we conclude about the difference in wages?
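  The same sketch applied to Example 2's summary statistics:

```python
from scipy.stats import ttest_ind_from_stats

# Summary statistics taken from the slide (Example 2).
t_obtained, p_value = ttest_ind_from_stats(
    mean1=9.68, std1=1.0550, nobs1=57,    # women
    mean2=10.32, std2=0.9461, nobs2=51,   # men
    equal_var=False,
)
print(f"t = {t_obtained:.2f}, p = {p_value:.3f}")
# Here the obtained t exceeds 1.96 in absolute value, so the difference
# in mean pay is statistically significant at alpha = .05.
```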

  25. In-Class Exercise Using these GSS income data, calculate a t-test statistic to determine if the difference between the two group means is statistically significant.
