
Significance, Importance, and Undetected Differences




Presentation Transcript


  1. Significance, Importance, and Undetected Differences Real Importance versus Statistical Significance • A statistically significant relationship or difference does not necessarily mean an important one. • Whether or not results are statistically significant, it is helpful to examine a confidence interval so that you can determine the magnitude of the effect. • The width of the confidence interval also tells you how much uncertainty there was in the sample results. • Example: Is Aspirin Worth the Effort? A study examined the relationship between taking aspirin and the incidence of heart attack. Testing the null hypothesis (no relationship) against the alternative (a relationship exists) gave a chi-squared test statistic over 25, with a p-value < 0.00001. • The Magnitude of the Effect: The test statistic and p-value alone provide no information about the magnitude of the effect. • Representing the Size of the Effect: The rates of heart attack were 9.4 per 1000 in the aspirin group and 17.1 per 1000 in the placebo group, a difference of fewer than 8 people per 1000, or about 1 fewer heart attack for every 125 people who took aspirin. • Relative risk: The aspirin group had roughly half as many heart attacks, so aspirin could cut the risk almost in half. The estimated relative risk was 0.53, with a 95% confidence interval extending from 0.42 to 0.67.
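The relative risk and its confidence interval on the slide can be reproduced from group counts with a log-scale Wald interval. A minimal sketch in Python: the counts below are hypothetical (not from the slide), chosen so that the event rates roughly match the quoted 9.4 and 17.1 per 1000 with about 11,000 men per group, so the result (about 0.55, CI roughly 0.43 to 0.70) only approximates the slide's published 0.53 (0.42 to 0.67).

```python
from math import exp, log, sqrt
from statistics import NormalDist

def relative_risk_ci(a, n1, b, n2, conf=0.95):
    """Relative risk of group 1 vs. group 2, with a log-scale
    Wald confidence interval (a, b = event counts; n1, n2 = group sizes)."""
    rr = (a / n1) / (b / n2)
    se = sqrt(1 / a - 1 / n1 + 1 / b - 1 / n2)   # SE of log(rr)
    z = NormalDist().inv_cdf(1 - (1 - conf) / 2)
    lo, hi = exp(log(rr) - z * se), exp(log(rr) + z * se)
    return rr, lo, hi

# Hypothetical counts chosen to roughly match the quoted rates:
rr, lo, hi = relative_risk_ci(104, 11037, 189, 11034)
```

Because the interval excludes 1.0, the reduction in risk is statistically significant, and its endpoints show how large or small the true effect might plausibly be.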

  2. Significance, Importance, and Undetected Differences Role of Sample Size in Width of Confidence Intervals • The precision of an estimate depends on the variation in the data divided by the square root of the sample size. • In a study of a continuous variable such as age, height, or blood pressure, that variation is expressed as a standard deviation: a large sample standard deviation of, say, age means that the individuals in the study span all sorts of ages from young to old (e.g., a study of voters); a small sample standard deviation means that most of those in the study are of similar age (e.g., a study of professional football players). • So if you want to make your results more precise, you need to increase your sample size: for example, enroll more patients in a clinical trial. • But precision is related to the square root of the sample size, so to double your precision you must quadruple your sample size.
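The square-root relationship above can be seen directly in the standard error of a sample mean, sd / sqrt(n): quadrupling n halves the standard error. A minimal sketch (the standard deviation of 12 years is a made-up illustrative value):

```python
from math import sqrt

def standard_error(sd, n):
    """Standard error of a sample mean: sd / sqrt(n)."""
    return sd / sqrt(n)

sd = 12.0                          # hypothetical standard deviation of age
se_100 = standard_error(sd, 100)   # n = 100 -> se = 1.2
se_400 = standard_error(sd, 400)   # n = 400 -> se = 0.6, half as large
```

A 95% confidence interval is roughly the estimate plus or minus two standard errors, so halving the standard error halves the interval's width.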

  3. Significance, Importance, and Undetected Differences Role of Sample Size in Statistical Significance There is almost always at least a slight relationship between two variables, or a slight difference between two groups, and if you collect enough data you will find it. Conversely, if the sample size is too small, an important relationship or difference can go undetected. In that case, we say that the power of the test is too low.
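Both claims above follow from how power grows with sample size. A sketch computing the approximate power of a two-sided two-sample z-test for a tiny standardized effect (d = 0.05 standard deviations is an illustrative value, not from the slide): the effect is almost never detected at n = 100 per group, yet is detected almost surely at n = 100,000 per group.

```python
from math import sqrt
from statistics import NormalDist

def power_two_sample(d, n, alpha=0.05):
    """Approximate power of a two-sided two-sample z-test for a
    standardized mean difference d, with n subjects per group
    (the negligible far-tail rejection probability is ignored)."""
    z = NormalDist()
    return z.cdf(d * sqrt(n / 2) - z.inv_cdf(1 - alpha / 2))

# Tiny effect, three sample sizes: power climbs from near zero to near one.
powers = {n: power_two_sample(0.05, n) for n in (100, 1_000, 100_000)}
```

So "statistically significant" with a huge sample can describe a trivially small effect, while "not significant" with a small sample can hide a real one.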

  4. Significance, Importance, and Undetected Differences Type I, Type II Errors and Power of a Test • Type I error: rejecting the null hypothesis when it is actually true; its probability is the significance level, alpha. • Type II error: failing to reject the null hypothesis when the alternative is true; its probability is beta. • Power: the probability of correctly rejecting a false null hypothesis, 1 - beta.

  5. Significance, Importance, and Undetected Differences The Power of a Test - Calculating Sample Size • The power of a test is the probability of making the correct decision when the alternative hypothesis is true. • Medical testing example: detecting that a person has the disease when they truly have it, a true positive. • The typical question scientists ask isn't, "OK, I've got 100 people here. What are my chances of a positive result?" but rather, "I want to test a hypothesis. How many people will I need?" • This sort of question is particularly important in medical research. • In a trial of a new drug, for example, you want a good chance of a statistically significant result if the drug is effective, because it would be great to have another way to help sick people. • But you also can't have too large a sample size: drugs often have side effects, so you don't want to give a new drug to lots of people if it doesn't work.

  6. Significance, Importance, and Undetected Differences The Power of a Test - Calculating Sample Size • A typical question asked of statisticians in medical research might be something like: about 20% of patients recover from a typical cold within 36 hours, and we think our new cold treatment might increase this to 30%. • How big a sample do we need to avoid a type II error (failing to reject the null hypothesis when it is false) and detect this true difference? • As a general rule, we use the usual alpha of 5% and a power of 80%; that is, if the drug is effective, we want an 80% chance of showing that it does indeed work. • How many patients do we need?
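One standard way to answer the slide's question is the normal-approximation sample-size formula for comparing two proportions. A minimal sketch (the function name is mine, and I use the pooled-proportion variant of the formula; other variants give slightly different answers):

```python
from math import ceil, sqrt
from statistics import NormalDist

def n_per_group(p1, p2, alpha=0.05, power=0.80):
    """Per-group sample size for a two-sided two-proportion z-test,
    using the normal approximation with a pooled-variance null term."""
    z = NormalDist()
    z_a = z.inv_cdf(1 - alpha / 2)   # critical value for two-sided alpha
    z_b = z.inv_cdf(power)           # quantile corresponding to the power
    p_bar = (p1 + p2) / 2
    top = (z_a * sqrt(2 * p_bar * (1 - p_bar))
           + z_b * sqrt(p1 * (1 - p1) + p2 * (1 - p2))) ** 2
    return ceil(top / (p1 - p2) ** 2)

n = n_per_group(0.20, 0.30)   # about 294 patients per group
```

For 20% vs. 30% recovery at alpha = 5% and 80% power, this gives roughly 294 patients per group, about 588 in total.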

  7. Significance, Importance, and Undetected Differences
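Power can also be estimated by simulation rather than a formula: generate many trials under the alternative hypothesis (20% vs. 30% recovery) and count how often a two-proportion z-test rejects the null. A sketch, assuming roughly 294 patients per group (the figure the standard normal-approximation formula gives for 80% power in this setting); the rejection rate should land near 0.80.

```python
import random

def simulated_power(p_control, p_treat, n, sims=2000, seed=1):
    """Estimate the power of a two-sided two-proportion z-test by
    simulation: the fraction of trials generated under the alternative
    in which the test rejects the null at the 5% level."""
    rng = random.Random(seed)
    z_crit = 1.959964  # two-sided 5% critical value
    rejections = 0
    for _ in range(sims):
        x1 = sum(rng.random() < p_control for _ in range(n))
        x2 = sum(rng.random() < p_treat for _ in range(n))
        p1, p2 = x1 / n, x2 / n
        pooled = (x1 + x2) / (2 * n)
        se = (2 * pooled * (1 - pooled) / n) ** 0.5
        if se > 0 and abs(p2 - p1) / se > z_crit:
            rejections += 1
    return rejections / sims

power = simulated_power(0.20, 0.30, n=294)
```

Simulation is handy when no closed-form power formula applies, at the cost of Monte Carlo noise in the estimate.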

  8. META-ANALYSIS SOME BENEFITS OF META-ANALYSIS 1. Detecting small or moderate relationships 2. Obtaining a more precise estimate of a relationship 3. Determining future research 4. Finding patterns across studies CRITICISMS OF META-ANALYSIS 1. Simpson’s Paradox 2. Confounding variables 3. Subtle differences in treatments of the same name 4. The file drawer problem 5. Biased or flawed original studies 6. Statistical significance versus practical importance
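Benefit 2 above, obtaining a more precise estimate, can be illustrated with inverse-variance (fixed-effect) pooling, in which each study's estimate is weighted by the reciprocal of its variance. A minimal sketch; the three studies' log relative risks and variances below are invented for illustration:

```python
def fixed_effect_meta(estimates, variances):
    """Inverse-variance (fixed-effect) pooling: weight each study's
    estimate by 1/variance; the pooled variance is 1/sum(weights)."""
    weights = [1 / v for v in variances]
    pooled = sum(w * e for w, e in zip(weights, estimates)) / sum(weights)
    return pooled, 1 / sum(weights)

# Three invented studies reporting log relative risks and their variances:
log_rrs = [-0.60, -0.40, -0.75]
variances = [0.04, 0.10, 0.02]
pooled, pooled_var = fixed_effect_meta(log_rrs, variances)
# The pooled variance is smaller than any single study's variance,
# i.e., the combined estimate is more precise than any one study.
```

This fixed-effect model assumes every study estimates the same underlying effect; criticisms 1 to 3 above (Simpson's paradox, confounding, differing treatments) are exactly the ways that assumption can fail.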
