
One-Sample Hypothesis Testing




  1. Chapter 9: One-Sample Hypothesis Testing • Logic of Hypothesis Testing • Statistical Hypothesis Testing • Testing a Mean: Known Population Variance • Testing a Mean: Unknown Population Variance • Testing a Proportion • Power Curves and OC Curves (Optional) • Tests for One Variance (Optional)

  2. Logic of Hypothesis Testing • Steps in Hypothesis Testing Step 1: State the assumption to be tested Step 2: Specify the decision rule Step 3: Collect the data to test the hypothesis Step 4: Make a decision Step 5: Take action based on the decision

  3. Logic of Hypothesis Testing State the Hypothesis • Hypotheses are a pair of mutually exclusive, collectively exhaustive statements about the world. • One statement or the other must be true, but they cannot both be true. • H0: Null Hypothesis; H1: Alternative Hypothesis. • These two statements are hypotheses because the truth is unknown.

  4. Logic of Hypothesis Testing State the Hypothesis • Efforts will be made to reject the null hypothesis. • If H0 is rejected, we tentatively conclude H1 to be the case. • H0 is sometimes called the maintained hypothesis. • H1 is called the action alternative because action may be required if we reject H0 in favor of H1.

  5. Logic of Hypothesis Testing Can Hypotheses Be Proved? • We cannot prove a null hypothesis; we can only fail to reject it. Role of Evidence • The null hypothesis is assumed true and a contradiction is sought.

  6. Logic of Hypothesis Testing Types of Error • Type I error: rejecting the null hypothesis when it is true. This occurs with probability α. • Type II error: failing to reject the null hypothesis when it is false. This occurs with probability β.

  7. Statistical Hypothesis Testing • A statistical hypothesis is a statement about the value of a population parameter θ. • A hypothesis test is a decision between two competing, mutually exclusive, and collectively exhaustive hypotheses about the value of θ. • The test may be a Right-Tailed Test, a Left-Tailed Test, or a Two-Tailed Test.

  8. Statistical Hypothesis Testing • The direction of the test is indicated by H1: > indicates a right-tailed test < indicates a left-tailed test ≠ indicates a two-tailed test

  9. Statistical Hypothesis Testing When to Use a One- or Two-Sided Test • A two-sided hypothesis test (i.e., θ ≠ θ0) is used when direction (< or >) is of no interest to the decision maker. • A one-sided hypothesis test is used when the consequences of rejecting H0 are asymmetric, or where one tail of the distribution is of special importance to the researcher. • Rejection in a two-sided test guarantees rejection in a one-sided test, other things being equal.

  10. Statistical Hypothesis Testing Decision Rule • A test statistic shows how far the sample estimate is from its expected value, in terms of its own standard error. • The decision rule uses the known sampling distribution of the test statistic to establish the critical value that divides the sampling distribution into two regions. • Reject H0 if the test statistic lies in the rejection region.

  11. Statistical Hypothesis Testing Decision Rule for Two-Tailed Test • Reject H0 if the test statistic < left-tail critical value or if the test statistic > right-tail critical value. • Figure 9.2 shows the rejection regions beyond −critical value and +critical value.

  12. Statistical Hypothesis Testing Decision Rule for Left-Tailed Test • Reject H0 if the test statistic < left-tail critical value. • Figure 9.2 shows the rejection region below −critical value.

  13. Statistical Hypothesis Testing Decision Rule for Right-Tailed Test • Reject H0 if the test statistic > right-tail critical value. • Figure 9.2 shows the rejection region above +critical value.

  14. Statistical Hypothesis Testing Type I Error • α, the probability of a Type I error, is the level of significance (i.e., the probability that the test statistic falls in the rejection region even though H0 is true). α = P(reject H0 | H0 is true) • A Type I error is sometimes referred to as a false positive. • For example, if we choose α = .05, we expect to commit a Type I error about 5 times in 100.

  15. Statistical Hypothesis Testing Type I Error • A small α is desirable, other things being equal. • Chosen in advance, common choices for α are .10, .05, .025, .01, and .005 (i.e., 10%, 5%, 2.5%, 1%, and .5%). • The α risk is the area under the tail(s) of the sampling distribution. • In a two-sided test, the α risk is split with α/2 in each tail, since there are two ways to reject H0.

  16. Statistical Hypothesis Testing Type II Error • β, the probability of a Type II error, is the probability that the test statistic falls in the acceptance region even though H0 is false. β = P(fail to reject H0 | H0 is false) • β cannot be chosen in advance because it depends on α, the sample size, and the true value of the parameter. • A small β is desirable, other things being equal.

  17. Statistical Hypothesis Testing Power of a Test • The power of a test is the probability that a false hypothesis will be rejected. • Power = 1 − β. • A low β risk means high power. • Larger samples lead to increased power. Power = P(reject H0 | H0 is false) = 1 − β

  18. Statistical Hypothesis Testing Relationship Between α and β • Both a small α and a small β are desirable. • For a given type of test and fixed sample size, there is a trade-off between α and β. • The larger critical value needed to reduce α risk makes it harder to reject H0, thereby increasing β risk. • Both α and β can be reduced simultaneously only by increasing the sample size.

  19. Statistical Hypothesis Testing Consequences of a Type II Error • Firms are increasingly wary of Type II error (failing to recall a product as soon as sample evidence begins to indicate potential problems). Significance versus Importance • The standard error of most sample estimators approaches 0 as sample size increases. • As a result, even a tiny difference θ − θ0 will be statistically significant if the sample size is large enough. • Therefore, expect significant effects even when an effect is too slight to have any practical importance.

  20. Testing a Mean: Known Population Variance • The hypothesized mean μ0 that we are testing is a benchmark. • The value of μ0 does not come from a sample. • The test statistic compares the sample mean x̄ with the hypothesized mean μ0: the difference between x̄ and μ0 is divided by the standard error of the mean (denoted σx̄). • The test statistic is z_calc = (x̄ − μ0) / σx̄, where σx̄ = σ/√n.
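The statistic above can be sketched in a few lines. This is a minimal sketch (Python and the sample numbers are assumptions, not from the slides):

```python
from math import sqrt

def z_statistic(xbar, mu0, sigma, n):
    """z_calc = (xbar - mu0) / (sigma / sqrt(n)): the standardized
    distance of the sample mean from the benchmark mu0."""
    return (xbar - mu0) / (sigma / sqrt(n))

# Hypothetical numbers: sample mean 218, benchmark 216, sigma 12, n = 100
z = z_statistic(218, 216, 12, 100)   # (218 - 216) / (12 / 10) ≈ 1.667
```

The sign of z tells you which tail the sample mean falls in; its magnitude is measured in standard errors.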

  21. Testing a Mean: Known Population Variance Testing the Hypothesis • Step 1: State the hypotheses. For example, H0: μ ≤ 216 mm vs. H1: μ > 216 mm. • Step 2: Specify the decision rule. For example, for α = .05 in the right tail: reject H0 if z > 1.645; otherwise do not reject H0.

  22. Testing a Mean: Known Population Variance Testing the Hypothesis • For a two-tailed test, we split the risk of Type I error by putting α/2 in each tail. For example, for α = .05: reject H0 if z > 1.96 or z < −1.96.

  23. Testing a Mean: Known Population Variance Testing the Hypothesis • Step 3: Calculate the test statistic. • Step 4: Make the decision. If the test statistic falls in the rejection region as defined by the critical value, we reject H0 and conclude H1.
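Steps 2–4 for the right-tailed case can be combined into one small function. A sketch, assuming Python's stdlib `statistics.NormalDist` for the critical value; the function name and the 218.1 sample mean are hypothetical:

```python
from math import sqrt
from statistics import NormalDist

def right_tailed_z_test(xbar, mu0, sigma, n, alpha=0.05):
    """Steps 2-4 for a right-tailed z test of a mean with known sigma.
    Returns (z, reject); reject=True means z falls in the rejection region."""
    z = (xbar - mu0) / (sigma / sqrt(n))
    z_crit = NormalDist().inv_cdf(1 - alpha)   # 1.645 for alpha = .05
    return z, z > z_crit

# Hypothetical data against the 216 mm benchmark from Step 1
z, reject = right_tailed_z_test(218.1, 216, 12, 100)   # z = 1.75 > 1.645
```

With z = 1.75 exceeding the 1.645 critical value, this hypothetical sample would lead us to reject H0.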

  24. Testing a Mean: Known Population Variance Analogy to Confidence Intervals • A two-tailed hypothesis test at the 5% level of significance (a = .05) is exactly equivalent to asking whether the 95% confidence interval for the mean includes the hypothesized mean. • If the confidence interval includes the hypothesized mean, then we cannot reject the null hypothesis.
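The equivalence with confidence intervals can be checked directly. A minimal sketch, assuming the known-σ z interval; the helper name is hypothetical:

```python
from math import sqrt
from statistics import NormalDist

def ci_contains(xbar, mu0, sigma, n, conf=0.95):
    """True if the conf-level CI for the mean contains mu0 -- equivalent to
    failing to reject H0 in a two-tailed test at alpha = 1 - conf."""
    half = NormalDist().inv_cdf(0.5 + conf / 2) * sigma / sqrt(n)
    return xbar - half <= mu0 <= xbar + half
```

For example (hypothetical numbers), a sample mean of 218 with σ = 12 and n = 100 gives a 95% interval of roughly 218 ± 2.35, which contains 216, so a two-tailed test at α = .05 would not reject H0: μ = 216.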

  25. Testing a Mean: Known Population Variance Using the p-Value Approach • The p-value is the probability of the sample result (or one more extreme) assuming that H0 is true. • The p-value can be obtained using Excel's cumulative standard normal function =NORMSDIST(z); for a right-tailed test, p-value = 1 − NORMSDIST(z). • The p-value can also be obtained from Appendix C-2. • Using the p-value, we reject H0 if p-value < α.
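Outside Excel, the same p-values can be computed with Python's stdlib `statistics.NormalDist`, whose `cdf` plays the role of NORMSDIST. The helper below is a hypothetical sketch:

```python
from statistics import NormalDist

def p_value(z, tail="right"):
    """p-value from the standard normal CDF (the stdlib analogue of
    Excel's NORMSDIST)."""
    cdf = NormalDist().cdf
    if tail == "right":
        return 1 - cdf(z)        # area beyond z in the upper tail
    if tail == "left":
        return cdf(z)            # area below z in the lower tail
    return 2 * (1 - cdf(abs(z))) # two-tailed: double the outer tail
```

As the slide states, H0 is rejected whenever this p-value falls below the chosen α.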

  26. Testing a Mean: Unknown Population Variance Using Student's t • When the population standard deviation σ is unknown and the population may be assumed normal, the test statistic follows the Student's t distribution with ν = n − 1 degrees of freedom. • The test statistic is t_calc = (x̄ − μ0) / (s/√n).

  27. Testing a Mean: Unknown Population Variance Testing a Hypothesis • Step 1: State the hypotheses. For example, H0: μ = 142 vs. H1: μ ≠ 142. • Step 2: Specify the decision rule. For example, for α = .10 two-tailed: reject H0 if t > 1.714 or t < −1.714; otherwise do not reject H0.

  28. Testing a Mean: Unknown Population Variance Testing a Hypothesis • Step 3: Calculate the test statistic. • Step 4: Make the decision. If the test statistic falls in the rejection region as defined by the critical values, we reject H0 and conclude H1.
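The t statistic can be sketched as follows (Python assumed; the sample data are hypothetical, and in practice the result would be compared against a critical t for ν = n − 1 from a table or software, since the stdlib has no t distribution):

```python
from math import sqrt
from statistics import mean, stdev

def t_statistic(sample, mu0):
    """t_calc = (xbar - mu0) / (s / sqrt(n)), with df = n - 1.
    stdev() uses the n - 1 denominator, as the t test requires."""
    n = len(sample)
    return (mean(sample) - mu0) / (stdev(sample) / sqrt(n))

sample = [142, 146, 144, 148]    # hypothetical data, n = 4, df = 3
t = t_statistic(sample, 142)     # ≈ 2.32; compare with critical t for df = 3
```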

  29. Testing a Mean: Unknown Population Variance Confidence Intervals versus Hypothesis Test • A two-tailed hypothesis test at the 10% level of significance (a = .10) is equivalent to a two-sided 90% confidence interval for the mean. • If the confidence interval does not include the hypothesized mean, then we reject the null hypothesis.

  30. Testing a Proportion • To conduct a hypothesis test, we need to know: the parameter being tested, the sample statistic, and the sampling distribution of the sample statistic. • The sampling distribution tells us which test statistic to use. • A sample proportion p estimates the population proportion π. • Remember that for a large sample, p can be assumed to follow a normal distribution. If so, the test statistic is z.

  31. Testing a Proportion • If nπ0 > 10 and n(1 − π0) > 10, then z_calc = (p − π0) / σp, where σp = √(π0(1 − π0)/n) and p = x/n = (number of successes)/(sample size).
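The formula above translates directly into code. A sketch (Python assumed; the helper name is hypothetical):

```python
from math import sqrt

def proportion_z(x, n, pi0):
    """z_calc = (p - pi0) / sqrt(pi0 * (1 - pi0) / n) with p = x / n.
    Valid only when n*pi0 > 10 and n*(1 - pi0) > 10."""
    assert n * pi0 > 10 and n * (1 - pi0) > 10, "normal approximation not safe"
    p = x / n
    return (p - pi0) / sqrt(pi0 * (1 - pi0) / n)
```

Note that the standard error uses the benchmark π0, not the sample proportion p, because the sampling distribution is computed under the assumption that H0 is true.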

  32. Testing a Proportion • The value of π0 that we are testing is a benchmark such as past experience, an industry standard, or a product specification. • The value of π0 does not come from a sample.

  33. Testing a Proportion Critical Value • The test statistic is compared with a critical value from a table. • The critical value shows the range of values for the test statistic that would be expected by chance if H0 were true.

  34. Testing a Proportion Steps in Testing a Proportion • Step 1: State the hypotheses. For example, H0: π ≥ .13 vs. H1: π < .13. • Step 2: Specify the decision rule. For example, for α = .05 in the left tail: reject H0 if z < −1.645; otherwise do not reject H0 (Figure 9.12).

  35. Testing a Proportion Steps in Testing a Proportion • For a two-tailed test, we split the risk of Type I error by putting α/2 in each tail. For example, for α = .05 (Figure 9.14).

  36. Testing a Proportion Steps in Testing a Proportion • Now, check the normality assumption: nπ0 > 10 and n(1 − π0) > 10. • Step 3: Calculate the test statistic z_calc = (p − π0) / σp, where σp = √(π0(1 − π0)/n). • Step 4: Make the decision. If the test statistic falls in the rejection region as defined by the critical value, we reject H0 and conclude H1.

  37. Testing a Proportion Using the p-Value • The p-value is the probability of the sample result (or one more extreme) assuming that H0 is true. • The p-value can be obtained using Excel's cumulative standard normal function =NORMSDIST(z). • The p-value can also be obtained from Appendix C-2. • Using the p-value, we reject H0 if p-value < α.

  38. Testing a Proportion Using the p-Value • The p-value is a direct measure of the level of significance at which we could reject H0. • Therefore, the smaller the p-value, the more we want to reject H0.

  39. Testing a Proportion Calculating a p-Value for a Two-Tailed Test • For a two-tailed test, we divide the risk into equal tails. So, to compare the p-value to α, first combine the p-values in the two tail areas. • For example, if our test statistic were −1.975, then 2 × P(z < −1.975) = 2 × .02413 = .04826. At α = .05, we would reject H0 since p-value = .04826 < α.
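The doubling of the tail area in this example can be verified in code (Python assumed; `NormalDist` plays the role of Appendix C-2):

```python
from statistics import NormalDist

def two_tailed_p(z):
    """Double the tail area beyond |z|, as in the slide's example."""
    return 2 * NormalDist().cdf(-abs(z))

p = two_tailed_p(-1.975)   # about .0483, so reject H0 at alpha = .05
```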

  40. Testing a Proportion Effect of α • No matter which level of significance you use, the test statistic remains the same. For example, a test statistic of z = 2.152 falls beyond the critical value 1.645 but not beyond 2.576.

  41. Testing a Proportion Small Samples and Non-Normality • In the case where nπ0 < 10, use MINITAB to test the hypotheses by finding the exact binomial probability of a sample proportion p. For example, see Figure 9.19.
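For readers without MINITAB, the exact binomial left-tail probability can be computed directly from the binomial formula. A sketch in Python (an assumption; `math.comb` supplies the binomial coefficient):

```python
from math import comb

def binomial_left_p(x, n, pi0):
    """Exact left-tail p-value P(X <= x) under H0: pi = pi0 -- the exact
    calculation used when n*pi0 < 10 and the normal approximation fails."""
    return sum(comb(n, k) * pi0**k * (1 - pi0)**(n - k) for k in range(x + 1))
```

This sums P(X = k) for k = 0, ..., x, exactly what a left-tailed small-sample proportion test needs.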

  42. Power Curves and OC Curves (Optional) Power Curves for a Mean • Power depends on how far the true value of the parameter is from the null hypothesis value. • The further the true population value is from the assumed value, the easier it is for your hypothesis test to detect it, and the more power the test has. • Remember that β = P(accept H0 | H0 is false) and Power = P(reject H0 | H0 is false) = 1 − β.

  43. Power Curves and OC Curves Power Curves for a Mean • We want power to be as close to 1 as possible. • The values of β and power will vary, depending on: the difference between the true mean μ and the hypothesized mean μ0, the standard deviation σ, the sample size n, and the level of significance α. Power = f(μ − μ0, σ, n, α)

  44. Power Curves and OC Curves Power Curves for a Mean (Table 9.8) • We can get more power by increasing α, but we would then increase the probability of a Type I error. • A better way to increase power is to increase the sample size n.

  45. Power Curves and OC Curves Calculating Power • For any given values of μ, σ, n, and α, and the assumption that X is normally distributed, use the following steps to calculate β and power. • Step 1: Find the left-tail critical value for the sample mean x̄.

  46. Power Curves and OC Curves Calculating Power • The decision rule is: reject H0 if x̄ < x̄_critical. • The probability of β error is the area to the right of the critical value x̄_critical, which represents P(x̄ > x̄_critical | μ), evaluated at the true mean μ.

  47. Power Curves and OC Curves Calculating Power • Step 2: Express the difference between the critical value x̄_critical and the true mean μ as a z-value: z = (x̄_critical − μ) / (σ/√n). • Step 3: Find the β risk and power as areas under the normal curve using Appendix C-2 or Excel: β = P(x̄ > x̄_critical | μ) and Power = P(x̄ < x̄_critical | μ) = 1 − β.
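Steps 1–3 can be sketched for a left-tailed test (Python assumed; the function name and the example values are hypothetical):

```python
from math import sqrt
from statistics import NormalDist

def power_left_tail(mu0, mu, sigma, n, alpha=0.05):
    """Power of a left-tailed test of H0: mu >= mu0 when the true mean is mu.
    Follows Steps 1-3: find xbar_critical, standardize against mu, take the area."""
    nd = NormalDist()
    se = sigma / sqrt(n)
    xbar_crit = mu0 + nd.inv_cdf(alpha) * se   # Step 1 (inv_cdf(alpha) < 0)
    power = nd.cdf((xbar_crit - mu) / se)      # Steps 2-3: P(xbar < crit | mu)
    return power                               # beta = 1 - power
```

When the true mean equals μ0 the "power" collapses to α, and the further μ drops below μ0 (or the larger n gets), the closer power climbs toward 1, matching the discussion above.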

  48. Power Curves and OC Curves Effect of Sample Size • Other things being equal, if sample size were to increase, β risk would decline and power would increase, because the critical value x̄_critical would move closer to the hypothesized mean μ0.

  49. Power Curves and OC Curves Relationship of the Power and OC Curves • Here is a family of power curves. Figure 9.22

  50. Power Curves and OC Curves Relationship of the Power and OC Curves • The graph of β risk against this same X-axis is called the operating characteristic (OC) curve. Figure 9.23
