
Section 6.2 Tests of Significance

Our goal is to assess the evidence provided by the data in favor of some claim about the population.

Hypotheses are statements about the parameter.

Ho is called the “Null Hypothesis”: it is the statement being tested, usually a statement of “no effect” or “no difference.” It always includes equality.

Ha is called the “Alternative Hypothesis”: it is the statement we suspect or hope is true, expressing the effect we hope to find evidence for. It never includes equality, and it may be one-sided (“<” or “>”) or two-sided (“≠”).

We use the sample data to draw a conclusion about the hypotheses. If the sample data results are quite different from what the null hypothesis Ho claims, then we suspect that the difference is due to some other effect than just random chance. In this case, we reject Ho and say that the data are statistically significant. If we do not reject the null hypothesis, we say that the data are not statistically significant.

We use the test statistic (in this case a z-score) to calculate the probability that we could get a statistic as extreme as or more extreme than the one we got from our sample data if Ho were true.

This probability is called the p-value of the test. The smaller the p-value, the stronger the evidence against Ho provided by the data.

There will be five steps in doing such a test:

1. State the hypotheses.
2. Calculate the value of the test statistic.
3. Calculate the p-value of the test.
4. Make a decision about the null hypothesis (we will decide whether or not to reject Ho).
5. State a conclusion (the conclusion will address Ha).

The One-Sample z Test for a Population Mean

To test the hypothesis Ho: µ = µo based on an SRS of size n from a population with unknown mean µ and known standard deviation σ, compute the test statistic

z = (x̄ − µo) / (σ / √n)

In terms of a standard normal random variable Z, the P-value for a test of Ho against

Ha: µ > µo is P(Z ≥ z)

Ha: µ < µo is P(Z ≤ z)

Ha: µ ≠ µo is 2 · P(Z ≥ |z|)
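The test statistic and all three p-value cases above can be sketched in Python using only the standard library. The helper name `one_sample_z_test` and the example numbers are ours, not from the slides; `statistics.NormalDist` (Python 3.8+) supplies the standard normal CDF:

```python
from math import sqrt
from statistics import NormalDist

def one_sample_z_test(xbar, mu0, sigma, n, alternative="two-sided"):
    """One-sample z test for a population mean, sigma known.

    alternative: 'greater' (Ha: mu > mu0), 'less' (Ha: mu < mu0),
    or 'two-sided' (Ha: mu != mu0). Returns (z, p_value).
    """
    z = (xbar - mu0) / (sigma / sqrt(n))   # test statistic
    N = NormalDist()                       # standard normal Z
    if alternative == "greater":
        p = 1 - N.cdf(z)                   # P(Z >= z)
    elif alternative == "less":
        p = N.cdf(z)                       # P(Z <= z)
    else:
        p = 2 * (1 - N.cdf(abs(z)))        # 2 * P(Z >= |z|)
    return z, p

# Illustrative values: xbar = 52, mu0 = 50, sigma = 10, n = 25 gives z = 1.0
z, p = one_sample_z_test(52, 50, 10, 25, "two-sided")
print(round(z, 4), round(p, 4))
```

Since the two-sided p-value doubles the one-tail area, `one_sample_z_test(52, 50, 10, 25, "greater")` returns exactly half the p-value of the two-sided call for the same data.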

Each p-value corresponds to a shaded area under the standard normal curve:

Ha: µ > µo — P(Z ≥ z): the shaded area to the right of z is the p-value of this “one-sided” test.

Ha: µ < µo — P(Z ≤ z): the shaded area to the left of z is the p-value of this “one-sided” test.

Ha: µ ≠ µo — 2 · P(Z ≥ |z|): the total shaded area in both tails, beyond −|z| and |z|, is the p-value of this “two-sided” test.

P-values are exact when the population is normally distributed. In other cases they are approximate when n is large (at least 30).

If we reject Ho when Ho is in fact true, this is a Type I error. If we fail to reject Ho when in fact Ho is false, this is a Type II error.

The significance level, α, is the probability of making a Type I error – of rejecting Ho when Ho is really true.

α = P(rejecting Ho when it’s really true)
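As a sanity check on this definition, a short simulation (the population values 50 and 10 and the sample size 25 are illustrative choices, not from the slides) draws many samples with Ho actually true and confirms that a two-sided z test at α = 0.05 commits a Type I error in roughly 5% of trials:

```python
import random
from math import sqrt
from statistics import NormalDist

random.seed(1)                                # reproducible trials
mu0, sigma, n, alpha = 50.0, 10.0, 25, 0.05
z_crit = NormalDist().inv_cdf(1 - alpha / 2)  # two-sided cutoff, about 1.96

trials = 20000
rejections = 0
for _ in range(trials):
    # Ho is true: the data really come from a population with mean mu0
    sample = [random.gauss(mu0, sigma) for _ in range(n)]
    xbar = sum(sample) / n
    z = (xbar - mu0) / (sigma / sqrt(n))
    if abs(z) >= z_crit:                      # reject Ho -> Type I error here
        rejections += 1

print(rejections / trials)                    # close to alpha = 0.05
```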

We never “prove” that Ho is true – we are only unable to find enough evidence to indicate that Ho should be rejected. In our court system, a defendant is considered to be innocent until proven guilty (beyond a reasonable doubt). O.J. Simpson was not convicted. Does this prove that he is innocent? No – it just means that the jury did not find enough evidence to convict him.

We never can “prove” anything – we can only assess probabilities and make decisions based on those probabilities.

The p-value of the test gives us the smallest significance level α for which the sample data tell us to reject Ho; i.e., the smallest level at which the data are statistically significant. The advantage of knowing the p-value is that we know all levels of significance for which the observed sample statistic tells us to reject Ho. Many research journals require authors to include the p-value of the observed sample statistic. Then readers will have more information and will know the test conclusion for any pre-set level of significance.

If the p-value is ≤ level α, we say the data are statistically significant at level α, and we reject Ho in favor of Ha.

Our conclusion: there is sufficient evidence at the α level of significance to support Ha.

If the p-value is > α, we say the data are not statistically significant at level α, and we fail to reject Ho in favor of Ha. Our conclusion: there is not sufficient evidence at the α level of significance to support Ha.
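The two cases above reduce to a single comparison of the p-value with α. A minimal sketch (the function name `decide` is ours):

```python
def decide(p_value, alpha=0.05):
    """Compare the p-value with the significance level alpha."""
    if p_value <= alpha:
        return "reject Ho: data are statistically significant at level alpha"
    return "fail to reject Ho: data are not statistically significant at level alpha"

print(decide(0.012))   # small p-value: reject Ho
print(decide(0.37))    # large p-value: fail to reject Ho
```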
