
Introduction to Testing a Hypothesis





Presentation Transcript


  1. Introduction to Testing a Hypothesis. Testing a treatment. Descriptive statistics cannot determine whether observed differences are due to chance. Sampling error occurs when apparent differences arise by chance alone. Examples of differences due to chance alone follow on the next slide.

  2. Examples: We know that the mean IQ of the population is 100. We selected 50 people and gave them our new IQ-boosting program. This sample, when tested after the treatment, had a mean of 110. Did we boost IQ? We also selected a sample of college students and a sample of university students and found that the mean of the college students was 109 and the mean of the university students was 113. Is there a real difference in the IQs of college and university students? Or are both cases simply due to sampling error? Remember, the sample mean is rarely exactly the population mean, and the means of two randomly selected samples rarely turn out to be exactly the same number. Sampling distribution: describes the amount of sample-to-sample variability to expect for a given statistic. Sampling error (standard error) of the mean: the standard deviation of the sampling distribution of the mean, equal to the population standard deviation divided by the square root of the sample size (σ/√N).
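A minimal sketch of the sampling-error idea, assuming a population with mean 100 and standard deviation 15 (the usual IQ scaling); the sample size of 50 matches the example above, and everything else is illustrative:

import numpy as np

# Illustrative population: IQ with mean 100 and SD 15 (assumed values).
rng = np.random.default_rng(0)
pop_mean, pop_sd, n = 100, 15, 50

# Draw many random samples of 50 people and record each sample mean.
sample_means = np.array(
    [rng.normal(pop_mean, pop_sd, n).mean() for _ in range(10_000)]
)

print("Mean of the sample means:", round(sample_means.mean(), 2))   # close to 100
print("SD of the sample means:", round(sample_means.std(), 2))      # close to 15 / sqrt(50), about 2.12

Individual sample means a few points above or below 100 are routine even though nothing was done to the samples, which is exactly why descriptive statistics alone cannot tell us whether a treated mean of 110 reflects a real effect.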

  3. Simplifying Hypothesis Testing: 1. Develop a research (experimental) hypothesis. 2. Obtain a sample (or samples) of observations. 3. Construct a null hypothesis. 4. Obtain an appropriate sampling distribution. 5. Reject or fail to reject the null hypothesis.
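A sketch of the five steps applied to the hypothetical IQ-boosting example, using SciPy's one-sample t-test; the treated scores are simulated here purely for illustration:

import numpy as np
from scipy import stats

# Steps 1-3: research hypothesis "the program raises IQ"; a sample of treated
# scores; null hypothesis H0: mu = 100 (the known population mean).
rng = np.random.default_rng(1)
treated_scores = rng.normal(110, 15, 50)   # hypothetical post-treatment sample

# Steps 4-5: the t distribution supplies the sampling distribution under H0;
# reject H0 if p < 0.05, otherwise fail to reject.
t_stat, p_value = stats.ttest_1samp(treated_scores, popmean=100)
print(f"t = {t_stat:.2f}, p = {p_value:.4f}")
print("Reject H0" if p_value < 0.05 else "Fail to reject H0")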

  4. Null Hypothesis Assume that the sample comes from the same population, and that the two sample means (even though they may be different) are estimating the same value (the population mean). Why? Method of contradiction: we can only demonstrate that a hypothesis is false. If we thought that the IQ-boosting program worked, what would we actually test? What value of IQ would we test?

  5. Rejection and Non-Rejection of the Null Hypothesis If we reject, we say that we have evidence for our experimental hypothesis, e.g., that our IQ-boosting program works. If we fail to reject, we do NOT prove the null to be true. Fisher: we choose either to reject or to suspend judgment. Neyman and Pearson argued for a pragmatic approach: do we spend money on our IQ-boosting program or not? We must accept or reject the null. But still, accepting does not equal proving it to be true: failing to reject the null hypothesis is not the same as proving the null hypothesis true.

  6. Type I & Type II Errors amounts to the same things Example: the IQ boosting program We test: or Type I Error: the null hypothesis is true, but we reject it. The probability of a Type I error is set at 0.05 and is called alpha Type II Error: the null hypothesis is false, but we fail to reject it. The probability of a Type II Error is called

  7. How sure are we of our decisions? The decision based on our calculations (reject or fail to reject the null hypothesis) is compared with the real-world state of the null hypothesis. If the null is true and we reject it, we make a Type I error (α); if the null is false and we fail to reject it, we make a Type II error (β); in the other two cases we make a correct decision.

  8. Power & [a ] [ ------ b --------][ --- power ----] Note: The figure is based on the null hypothesis being false and represents the sampling distribution of the means.

  9. One-Tailed and Two-Tailed Tests of Significance [Figure: sampling distribution of the mean.]
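A sketch contrasting the two kinds of test on the same hypothetical sample, assuming SciPy 1.6 or later, where ttest_1samp accepts an alternative argument:

import numpy as np
from scipy import stats

# Hypothetical sample of treated IQ scores, tested against H0: mu = 100.
rng = np.random.default_rng(4)
sample = rng.normal(104, 15, 50)

# Two-tailed test: H1 is mu != 100 (a difference in either direction counts).
t_two, p_two = stats.ttest_1samp(sample, popmean=100)

# One-tailed test: H1 is mu > 100 (only an increase counts).
t_one, p_one = stats.ttest_1samp(sample, popmean=100, alternative='greater')

print(f"two-tailed p = {p_two:.4f}, one-tailed p = {p_one:.4f}")
# When t > 0, the one-tailed p-value is half the two-tailed p-value.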
