By Daniel Park and Zach Ney

Presentation Transcript


  1. Significance Tests By Daniel Park and Zach Ney

  2. Logic of significance testing
  • Significance tests are a procedure for comparing observed data with a claim (hypothesis) about a population.
  • The null hypothesis (H0) states that there is no difference between the true value of the parameter and the claimed (expected) value.
  • The alternative hypothesis (Ha) states that there is a difference between the true value and the claimed value.
  • In a significance test we look for evidence against the null hypothesis in favor of the alternative.

  3. Steps for making a significance test
  • Step 1: State the parameter. For example, suppose we are testing Shaq's claim that he makes 80 percent (16 of 20) of his free throws at the gym, given that when he tried it again he made only 8 of 20. The parameter here is p, the true proportion of free throws he makes.

  4. Steps for making a significance test
  • Step 2: Define your null and alternative hypotheses.
  • The alternative can be one-sided or two-sided: one-sided means you use only one tail probability, two-sided means you use both tails.
  • In this case the null hypothesis is that the proportion of free throws made is 0.80, i.e. H0: p = 0.8.
  • The alternative is that the proportion is less than 0.8, so this is a one-sided test: Ha: p < 0.8.
  • If it were a two-sided test, the alternative would be Ha: p ≠ 0.8.

  5. Steps for making a significance test
  • Step 3: Check your conditions.
  • A. Random: the data should come from an SRS. The problem should state whether it is an SRS.
  • B. Normal (or approximately normal). For proportions this requires n·p0 ≥ 10 and n·(1 − p0) ≥ 10. For means, n ≥ 30 is enough; n ≥ 15 is acceptable if there are no outliers or strong skewness.
  • C. Independent: the 10% rule says the sample should be less than 10% of the population. You do not need to check this when observations are independent by design, such as births: knowing the gender of one baby does not affect the gender of the next baby born.
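
As a rough illustration (not from the slides), here is a minimal Python check of the Normal and 10% conditions for the free-throw example, assuming n = 20 attempts and the claimed p0 = 0.8:

```python
# Checking the conditions for the one-proportion z test in the free-throw example.
n = 20      # free throws attempted in the new sample
p0 = 0.8    # claimed proportion under the null hypothesis

expected_successes = n * p0        # 16.0
expected_failures = n - n * p0     # 4.0

# Normal condition: both expected counts should be at least 10.
print("n*p0 >= 10:", expected_successes >= 10)     # True
print("n*(1-p0) >= 10:", expected_failures >= 10)  # False: with only 20 shots this
                                                   # condition is not strictly met

# 10% condition: the 20 shots should be less than 10% of all free throws he takes.
```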

  6. Steps for making a significance test
  • Step 4: Calculate the test statistic and p-value. The first way is to do it by hand.
  • Test statistic = (statistic − parameter) / standard deviation of the statistic.
  • For proportions: z = (p̂ − p0) / √(p0(1 − p0) / n)
  • For means: t = (x̄ − μ0) / (s / √n)
  • Once you have a z or t value, use the tables in the back of the book or your calculator to find the p-value. On the calculator, for z use normalcdf(min, max); for t use tcdf(min, max, df).
  • The p-value tells you whether you can reject the null hypothesis. If the p-value is smaller than your significance level, you reject the null and conclude the alternative.
  • When p is less than the significance level: "We have sufficient evidence to reject the null hypothesis and can conclude the alternative."
  • When p is greater: "We do not have sufficient evidence to reject the null hypothesis and cannot conclude the alternative."
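
Here is a minimal sketch of that calculation for the free-throw example in Python (using scipy's Normal CDF; the numbers 8/20 and 0.8 come from the example above):

```python
from math import sqrt
from scipy.stats import norm

n = 20              # attempts in the new sample
p_hat = 8 / 20      # observed proportion made (0.40)
p0 = 0.80           # claimed proportion under H0

# Test statistic: z = (p_hat - p0) / sqrt(p0 * (1 - p0) / n)
z = (p_hat - p0) / sqrt(p0 * (1 - p0) / n)

# One-sided alternative Ha: p < 0.8, so the p-value is the lower-tail area.
p_value = norm.cdf(z)

print(f"z = {z:.2f}")              # about -4.47
print(f"p-value = {p_value:.6f}")  # far below a 0.05 significance level
```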

  7. Conclusion
  • When p is less than the significance level: "We have sufficient evidence to reject the null hypothesis and can conclude the alternative."
  • When p is greater: "We do not have sufficient evidence to reject the null hypothesis and cannot conclude the alternative."
  • For example, in the basketball problem, if p is less than .05 we would say: "We have sufficient evidence to reject the null hypothesis and can conclude that his free-throw percentage is indeed less than 80%."
  • If p is greater than .05 we would say: "We do not have sufficient evidence to reject the null hypothesis and cannot conclude that the percentage is less than 80%."

  8. Extras
  • You can perform these tests in other situations, such as significance tests for comparing two proportions or two means. In those cases you follow the same basic four steps.
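
For instance, a two-proportion z test compares two independent groups using a pooled standard error. A quick sketch with made-up counts (the data here are placeholders, not from the slides):

```python
from math import sqrt
from scipy.stats import norm

# Hypothetical counts: x successes out of n trials in each group.
x1, n1 = 45, 100
x2, n2 = 30, 100

p1, p2 = x1 / n1, x2 / n2
p_pool = (x1 + x2) / (n1 + n2)    # pooled proportion under H0: p1 = p2

# Two-proportion z statistic with the pooled standard error.
z = (p1 - p2) / sqrt(p_pool * (1 - p_pool) * (1 / n1 + 1 / n2))

# Two-sided alternative Ha: p1 != p2.
p_value = 2 * norm.cdf(-abs(z))
print(f"z = {z:.2f}, p-value = {p_value:.4f}")
```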

  9. Types of Errors
  • Type I Error: reject H0 when H0 is actually true.
  • "α" is the probability of making a Type I Error.
  • Type II Error: fail to reject H0 when H0 is false.
  • "β" is the probability of making a Type II Error.
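
One way to see what α means in practice is to simulate many samples in which H0 is actually true and count how often the test rejects anyway. A rough sketch with made-up numbers (n = 200, p0 = 0.5, one-sided test), chosen so the Normal approximation is reasonable:

```python
import numpy as np
from math import sqrt
from scipy.stats import norm

rng = np.random.default_rng(0)
n, p0, alpha = 200, 0.5, 0.05
sims = 100_000

# Simulate samples where H0 is true (the true proportion really is p0).
successes = rng.binomial(n, p0, size=sims)
p_hat = successes / n
z = (p_hat - p0) / sqrt(p0 * (1 - p0) / n)

# One-sided test Ha: p < p0; a rejection here is a Type I error by definition.
reject = norm.cdf(z) < alpha
print("simulated Type I error rate:", reject.mean())   # close to alpha = 0.05
```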

  10. Power of the Test
  • Power is the probability of correctly rejecting H0 when H0 is false.
  • Power = 1 − β.
  • Higher values of power are better; lower values mean a higher chance of a Type II Error.
  • Larger sample size = higher power.
  • The standard desired level of power is 80%.
  • A larger α also increases power.
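
A sketch of a power calculation for the one-sided free-throw test, assuming a hypothetical true proportion of 0.6 (that alternative value and the n = 50 comparison are illustrative, not from the slides):

```python
from math import sqrt
from scipy.stats import norm

p0, alpha, p_true = 0.8, 0.05, 0.6   # p_true is a hypothetical alternative

def power(n):
    # Reject H0 when p_hat falls below this cutoff (one-sided z test, Ha: p < p0).
    cutoff = p0 - norm.ppf(1 - alpha) * sqrt(p0 * (1 - p0) / n)
    # Power = chance of landing below the cutoff when the true proportion is p_true.
    return norm.cdf((cutoff - p_true) / sqrt(p_true * (1 - p_true) / n))

print(f"power with n = 20: {power(20):.2f}")   # roughly 0.7
print(f"power with n = 50: {power(50):.2f}")   # larger sample -> higher power
```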

  11. Sources • Chapters 6 and 7 notes • Chapters 6 and 7 from the AP statistics book
