
The Student’s T-test and other tests for significance


Presentation Transcript


  1. The Student’s T-test and other tests for significance

  2. The t-test assesses whether the means of two groups are statistically different from each other. This analysis is appropriate whenever you want to compare the means of two groups.

  3. What does it mean to say that the averages for two groups are statistically different? • Notice that in all three situations the difference between the means is the same. • But the three situations don't look the same -- they tell very different stories. • The top example shows a case with moderate variability of scores within each group. • The second situation shows the high-variability case. • The third shows the case with low variability. • The two groups appear most different or distinct in the bottom, or low-variability, case. • Why? Because there is relatively little overlap between the two bell-shaped curves.

  4. The formula for the t-test is a ratio. The top part of the ratio is just the difference between the two means or averages. The bottom part is a measure of the variability or dispersion of the scores.

  5. The top part of the formula is easy to compute -- just find the difference between the means. The bottom part is called the standard error of the difference. • To compute it, we take the variance for each group and divide it by the number of samples in that group. We add these two values and then take their square root. • The formula for the standard error of the difference between the means is: SE(difference) = sqrt( var_1 / n_1 + var_2 / n_2 )

  6. Remember, the variance is simply the square of the standard deviation. • The final formula for the t-test is: t = ( mean_1 - mean_2 ) / sqrt( var_1 / n_1 + var_2 / n_2 )
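The computation described on slides 4-6 can be sketched in a few lines of Python. This is an illustrative sketch rather than part of the original slides: the two groups of scores are made-up numbers, and scipy is assumed to be available only to cross-check the hand computation. Note that dividing each group's variance by its own sample size, as the slide's formula does, corresponds to the unpooled (Welch) version of the test.

```python
import math
from scipy import stats  # used only to cross-check the hand computation

# Hypothetical scores for two groups -- made-up numbers, not from the slides
group_a = [23, 25, 28, 30, 32, 35, 36, 40]
group_b = [18, 20, 22, 24, 27, 28, 30, 31]

def t_statistic(x, y):
    """t = (difference between the means) / (standard error of the difference)."""
    n_x, n_y = len(x), len(y)
    mean_x, mean_y = sum(x) / n_x, sum(y) / n_y
    var_x = sum((v - mean_x) ** 2 for v in x) / (n_x - 1)   # sample variance of group x
    var_y = sum((v - mean_y) ** 2 for v in y) / (n_y - 1)   # sample variance of group y
    se_diff = math.sqrt(var_x / n_x + var_y / n_y)          # standard error of the difference
    return (mean_x - mean_y) / se_diff

print("hand-computed t:", t_statistic(group_a, group_b))
# Welch's t-test (equal_var=False) uses the same unpooled standard error as above
print("scipy result:", stats.ttest_ind(group_a, group_b, equal_var=False))
```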

  7. Cedar-apple rust is a (non-fatal) disease that affects apple trees. Its symptom is rust-colored spots on apple leaves. Red cedar trees are the immediate source of the fungus that infects the apple trees. If you could remove all red cedar trees within a few miles of the orchard, you should eliminate the problem. In the first year of this experiment the number of affected leaves on 8 trees was counted; the following winter all red cedar trees within 100 yards of the orchard were removed and the following year the same trees were examined for affected leaves. The results are recorded on the next panel:

  8. Data

  Tree           rusted leaves: year 1   rusted leaves: year 2   difference (year 1 - year 2)
  1              38                      32                       6
  2              10                      16                      -6
  3              84                      57                      27
  4              36                      28                       8
  5              50                      55                      -5
  6              35                      12                      23
  7              73                      61                      12
  8              48                      29                      19
  average        46.8                    36.2                    10.5
  standard dev   23                      19                      12

  Determine whether there was a significant change in the number of rusted leaves between years 1 and 2. Did the treatment cure the problem?
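One way to answer the question on this slide is a paired (related-samples) t-test on the year-to-year differences, since the same eight trees were counted in both years. A minimal sketch, assuming scipy is installed; the counts are taken straight from the table above.

```python
from scipy import stats

# Rusted-leaf counts for the same eight trees, from the table above
year1 = [38, 10, 84, 36, 50, 35, 73, 48]
year2 = [32, 16, 57, 28, 55, 12, 61, 29]

# Paired (related-samples) t-test: the same trees were counted in both years
result = stats.ttest_rel(year1, year2)
print("t =", result.statistic, "p =", result.pvalue)

# Equivalent view: a one-sample t-test of the differences against zero
diffs = [a - b for a, b in zip(year1, year2)]
print(stats.ttest_1samp(diffs, 0.0))
```

With these counts the t statistic comes out around 2.4 with 7 degrees of freedom, just past the two-tailed 5% critical value (about 2.36), so the average drop of 10.5 leaves per tree is statistically significant at that level; whether that counts as "curing" the problem is a separate, practical judgement.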

  9. Significant????? • In a scientific study, a theory is proposed, then data are collected and analyzed. The statistical analysis of the data produces a probability; the result is called statistically significant if that probability falls below 5%, the conventional significance level. In other words, if a result is statistically significant, the researcher can be 95% confident that it did not happen by chance. • Sometimes, when the significance of an experiment is very important, such as the safety of a drug meant for humans, the threshold is set lower, for example 3%. In that case, a researcher could be 97% sure that a particular drug is safe for human use. The threshold can be lowered or raised to match the importance and desired certainty of the result being correct. • Statistical significance is used to reject or accept what is called the null hypothesis. A hypothesis is a statement of the theory that a researcher is trying to prove. The null hypothesis holds that the factors the researcher is looking at have no effect on differences in the data. Statistical significance is usually written, for example, t=.02, p<.05. Here, "t" stands for the test statistic and "p<.05" means that the probability of the result occurring by chance is less than 5%. These numbers would cause the null hypothesis to be rejected, supporting the particular theory.

  10. Other Statistical Tests (www.wikipedia.org)

  11. Standard Deviation • The standard deviation of a probability distribution, random variable, population, or multiset of values is a measure of the spread of its values (wiki). • The standard deviation is the most common measure of statistical dispersion, measuring how widely spread the values in a data set are. If the data points are close to the mean, then the standard deviation is small. Conversely, if many data points are far from the mean, then the standard deviation is large. If all the data values are equal, then the standard deviation is zero. • www.wiki.com

  12. 68-95-99.7 rule • For the normal distribution, values within one standard deviation of the mean account for 68.27% of the set; values within two standard deviations account for 95.45%; and values within three standard deviations account for 99.73%. (The original slide shows these bands shaded in dark, medium, and light blue on a normal curve.)
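A quick way to see both the standard deviation (slide 11) and the 68-95-99.7 rule (slide 12) in action is to simulate draws from a normal distribution and count how many fall within one, two, and three standard deviations of the mean. A sketch, assuming numpy is available; the mean, spread, and sample size are arbitrary choices.

```python
import numpy as np

rng = np.random.default_rng(0)
data = rng.normal(loc=100.0, scale=15.0, size=100_000)  # arbitrary mean and spread

mean = data.mean()
sd = data.std(ddof=1)   # sample standard deviation

for k in (1, 2, 3):
    fraction = np.mean(np.abs(data - mean) <= k * sd)
    print(f"within {k} standard deviation(s): {fraction:.4f}")  # expect ~0.6827, 0.9545, 0.9973
```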

  13. Why??? • The standard deviation can also help you evaluate the worth of all those so-called "studies" that seem to be released to the press every day. A large standard deviation (greater than 10% of the mean) in a study that claims to show a relationship between eating Twinkies and better SAT scores, for example, might tip you off that the study's claims aren't all that trustworthy.

  14. Coefficient of Variation • The coefficient of variation (CV) measures the precision of the person(s) during a set of individual tests (replicates) performed for one specific water quality parameter. As an example, if a team has just finished collecting data on 5 replicates of dissolved oxygen data, the team can use the coefficient of variation formula to determine how precisely the measurements were performed. The higher the precision (the lower the %), the higher the likelihood that there was no difference in the way each replicate/individual test was performed. In other words, data within a data set that is collected perfectly consistently would, hypothetically, have a coefficient of variation of zero percent!

  15. The Formula • Calculating the coefficient of variation: CV (%) = ( s / X ) × 100, where • s = standard deviation • X = average (mean) • You can calculate the CV for the 3-5 replicates for a single sampling. • Distributions with CV < 1 (i.e., below 100%) are considered low-variance (that's good), while those with CV > 1 are considered high-variance (that's bad).

  16. Percent Error • The percent error can be determined when the true value is compared to the observed value according to the equation below: • % error = ( | your result - accepted value | / accepted value ) × 100% • Less than 5% error is acceptable.
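Both formulas on slides 15 and 16 reduce to one line of arithmetic each. A minimal sketch in Python; the replicate values and the observed/accepted pair below are placeholders, not real dissolved-oxygen readings.

```python
import statistics

def coefficient_of_variation(values):
    """CV (%) = standard deviation / mean * 100."""
    return statistics.stdev(values) / statistics.mean(values) * 100

def percent_error(observed, accepted):
    """% error = |observed - accepted| / accepted * 100."""
    return abs(observed - accepted) / accepted * 100

replicates = [8.2, 8.4, 8.1, 8.3, 8.2]   # placeholder dissolved-oxygen replicates (mg/L)
print(f"CV = {coefficient_of_variation(replicates):.1f}%")
print(f"% error = {percent_error(observed=8.2, accepted=8.0):.1f}%")
```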

  17. Additional Slides

  18. A Tale of Two Tails • Directional hypotheses are called one-tailed -- we are only interested in deviations at one tail of the distribution. • Non-directional hypotheses are called two-tailed -- we are usually interested in any significant deviations from the null hypothesis.

  19. How do you decide to use a one-or two-tailed approach?

  20. One Tail or Two? The moderate approach: • If there's a strong prior theoretical expectation that the effect will be in a particular direction (A>B), then you may use a one-tailed approach. Otherwise, use a two-tailed test. • Because only an A>B result is interesting, concentrate your attention on whether there is evidence for a difference in that direction. • E.g., does this new educational reform improve students' test scores? • Does this drug reduce depression?

  21. One tail or two? The more conservative approach: • The problem with the moderate approach is that you probably would actually find it interesting if the result went the other way, in many cases. – If the new educational reform leads to worse test scores, we’d want to know! – If the new drug actually increases symptoms of depression, we’d want to know!

  22. One tail or two? The more conservative approach: • Only use a one-tailed test if you have a strong hypothesis about the directionality of the results (A>B) AND it could also be argued that a result in the “wrong tail” (A<B) is meaningless, and might as well be due to chance.

  23. One tail or two: The most conservative approach • Always use two-tailed tests! • Correcting for one- vs. two-tailed tests: • If you think a researcher has run the wrong kind of test, it's easy to recalculate the p-value yourself: • P (one-tailed) = ½ P (two-tailed)
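The conversion on slide 23 is easy to check with scipy, which can report either kind of p-value directly. A sketch, assuming a scipy version recent enough to accept the `alternative` argument (added in 1.6); the two groups of scores are made up, and the halving relationship holds only when the observed effect is in the predicted direction.

```python
from scipy import stats

a = [12.1, 13.4, 11.8, 14.0, 12.9, 13.6]   # made-up scores, group A
b = [11.0, 12.2, 10.9, 12.5, 11.7, 11.4]   # made-up scores, group B

two_tailed = stats.ttest_ind(a, b)
one_tailed = stats.ttest_ind(a, b, alternative='greater')   # directional hypothesis: A > B

print("two-tailed p:", two_tailed.pvalue)
print("one-tailed p:", one_tailed.pvalue)   # half the two-tailed p when the effect is in the predicted direction
```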

  24. Was the result significant? • There is no true sharp dividing line between probable and improbable results. • There’s little difference between p=0.051 and p=0.049, except that some journals will not publish results at p=0.051, and some readers will accept results at p=0.049 but not at p=0.051. • In any case it does not tell us if the result is IMPORTANT!

  25. Decision theory and tradeoffs between types of errors • Think of a household smoke detector. • Sometimes it goes off and there's no fire (you burn some toast, or take a shower). – A false alarm. – A Type I error. • Easy to avoid this type of error: take out the batteries! • However, this increases the chance of a Type II error: there's a fire, but no alarm.

  26. Decision theory and tradeoffs between types of errors • Similarly, one could reduce the chances of a Type II error by making the alarm hypersensitive to smoke. • Then the alarm will very likely go off in a fire. • But you’ll increase your chances of a false alarm = Type I error. (The alarm is more likely to go off because someone sneezed.) • There is typically a tradeoff of this sort between Type I and Type II errors.
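The smoke-detector tradeoff on slides 25-26 can be made concrete with a small simulation: tightening the significance threshold (alpha) lowers the Type I error rate but raises the Type II error rate when a real effect exists. A sketch, assuming numpy and scipy are available; the effect size of 0.8 standard deviations, group size of 20, and number of trials are arbitrary choices.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
n_trials, n = 2000, 20

for alpha in (0.10, 0.05, 0.01):
    type1 = type2 = 0
    for _ in range(n_trials):
        # Null hypothesis true: both groups drawn from the same distribution
        a0, b0 = rng.normal(0, 1, n), rng.normal(0, 1, n)
        if stats.ttest_ind(a0, b0).pvalue < alpha:
            type1 += 1                     # false alarm (Type I error)
        # Real effect present: the groups differ by 0.8 standard deviations
        a1, b1 = rng.normal(0.8, 1, n), rng.normal(0, 1, n)
        if stats.ttest_ind(a1, b1).pvalue >= alpha:
            type2 += 1                     # missed a real effect (Type II error)
    print(f"alpha={alpha:.2f}  Type I rate={type1 / n_trials:.3f}  Type II rate={type2 / n_trials:.3f}")
```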
