
To P or not to P???: Does a statistical test hold what it promises?



Presentation Transcript


1. To P or not to P???: Does a statistical test hold what it promises? "There is increasing concern that in modern research, false findings may be the majority or even the vast majority of published research claims." Ioannidis (2005, PLoS Medicine). Probably more than 70% of all medical and biological scientific studies are irreproducible!

2. Typical testing situations:
• We compare the effects of two drugs to control blood pressure in two groups of patients.
• We test the effect of a drug to control blood pressure against a null control group.
• We test for a significant correlation.
• We compare an observation against a null expectation.
• We use the t-test to assess the validity of a hypothesis.
• We compare two observations.
• We use the t-test to calculate a probability of difference.
• We compare an observed statistic against an unobserved null assumption.

3. An intuitive significance test. Formally, we test H1: r² = 0.57 against the alternative H0: r² = 0. We test an observation against a specific null assumption; we compare two specific values of r². This is not the same as testing the hypothesis that X and Y are correlated against the hypothesis that X and Y are not correlated: X and Y might not be causally linked and still have r² ≠ 0. This happens when X and Y are jointly constrained by marginal settings.

4. Number of storks and reproductive rates in Europe (Matthews 2000): r² = 0.25; P < 0.05. (Excel gets plots at log scales wrong.) Pseudocorrelations between X and Y arise when X = f(U), Y = g(U), and f = h(g): the sample spaces of both variables are constrained by one or more hidden variables that are themselves correlated. [Figure: birth rate vs. number of storks; both are driven by urbanisation.]
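
A minimal simulation of such a pseudocorrelation (Python with numpy/scipy; all variable names and effect sizes are illustrative, not from the slide):

```python
# X and Y are not causally related, but both are functions of a hidden
# variable U (here standing in for "urbanisation").
import numpy as np
from scipy import stats

rng = np.random.default_rng(42)
u = rng.uniform(0, 1, 200)             # hidden common driver U
x = 2.0 * u + rng.normal(0, 0.3, 200)  # X = f(U) + noise ("storks")
y = 1.5 * u + rng.normal(0, 0.3, 200)  # Y = g(U) + noise ("birth rate")

r, p = stats.pearsonr(x, y)
print(f"r^2 = {r**2:.2f}, P = {p:.2g}")  # a highly 'significant' X-Y correlation
```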

5. Some basic knowledge. The sum of k squared standard normal variates is χ² distributed with k degrees of freedom. The standardized sum of many independently distributed variates is approximately normally distributed (central limit theorem). A Poisson distribution has σ² = μ. The χ² test builds on these facts. The likelihood of a function equals the probability of obtaining the observed data with respect to certain parameter values. The maximum likelihood refers to those parameter values of a function (model) that maximize the likelihood given the data.
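
In compact form (the binomial case reappears in the coin example on slide 7):

$$\mathcal{L}(\theta \mid x) = P(x \mid \theta), \qquad \hat{\theta}_{ML} = \arg\max_{\theta} \mathcal{L}(\theta \mid x); \qquad \text{e.g. } \mathcal{L}(p \mid k, n) = \binom{n}{k} p^{k} (1-p)^{n-k}, \quad \hat{p} = k/n.$$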

6. Likelihood ratios (odds). Sir Ronald Fisher. The sum of two χ²-distributed variates is χ² distributed with k = k1 + k2 degrees of freedom. Minus twice the log-quotient of two likelihoods is approximately χ² distributed (Wilks' theorem). Fisher argued that hypotheses can be tested using likelihood ratios.

7. Classical frequentist hypothesis testing. We toss a coin 100 times and get 59 heads. Is the coin fair? Fisher would contrast the two likelihoods of this binomial outcome: one under p = 1/2, one under p = 59/100 (the maximum likelihood estimate).
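
A minimal sketch of Fisher's comparison (Python, scipy assumed); it reproduces the likelihoods quoted on the next slide:

```python
# Likelihoods of 59 heads in 100 tosses under the two candidate values of p.
from scipy.stats import binom

L0 = binom.pmf(59, 100, 0.5)       # likelihood under H0: p = 1/2
L1 = binom.pmf(59, 100, 59 / 100)  # likelihood under H1: p = 59/100 (ML estimate)

print(f"L(p=0.50) = {L0:.3f}")        # ~0.016
print(f"L(p=0.59) = {L1:.3f}")        # ~0.081
print(f"odds L0/L1 = {L0 / L1:.2f}")  # ~0.2, i.e. H1 about five times more probable
```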

8. The odds in favour of H0 are 0.016/0.081 = 0.2; H1 is five times more probable than H0. The probability that the observed binomial probability q1 = 59/100 differs from q0 = 1/2 given the observed data is Pobs = 0.93. The probability in favour of the null assumption is therefore P0 = 1 − 0.93 = 0.07. According to Fisher, the test failed to reject the hypothesis of p = 59/100.
• Fisher: The significance P of a test is the probability of the hypothesis given the data!
• The significance P of a test refers to a hypothesis to be falsified.
• It is the probability of obtaining an effect in comparison to a random assumption.
• As a hypothesis, P is part of the discussion section of a publication.
In Fisher's view a test should falsify a hypothesis with respect to a null assumption given the data. This is in accordance with the Popper–Lakatos approach to scientific methodology.

9. The Pearson–Neyman framework. Egon Pearson, Jerzy Neyman. Where Fisher asked for the likelihood of the hypothesis, Pearson and Neyman asked: what is the probability of the data given the model? The significance value α of a test is the probability (the evidence) against the null assumption. P is the probability of rejecting H0 given that H0 is true (the type I error rate). It must not be equated with Q, the probability of rejecting H1 given that H1 is true (the type II error rate).
• If H0 is true: we reject H0 with probability P (type I error) and reject H1 with probability 1 − P.
• If H1 is true: we reject H1 with probability Q (type II error) and reject H0 with probability 1 − Q.
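
A small simulation of the two error rates (Python; the effect size, sample size, and seed are illustrative choices, not from the slide):

```python
# Estimate the type I and type II error rates of a two-sample t-test
# by repeated sampling under H0 (equal means) and H1 (shifted mean).
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
alpha, n, reps = 0.05, 30, 10_000

type1 = type2 = 0
for _ in range(reps):
    a = rng.normal(0, 1, n)
    b0 = rng.normal(0, 1, n)    # H0 true: same mean
    b1 = rng.normal(0.5, 1, n)  # H1 true: mean shifted by 0.5
    if stats.ttest_ind(a, b0).pvalue < alpha:
        type1 += 1              # H0 rejected although true
    if stats.ttest_ind(a, b1).pvalue >= alpha:
        type2 += 1              # H0 retained although false

print(f"type I error rate  ~ {type1 / reps:.3f}")  # close to alpha = 0.05
print(f"type II error rate ~ {type2 / reps:.3f}")  # Q depends on effect size and n
```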

10. Classical frequentist hypothesis testing. [Figure: distribution and cumulative distribution of the test statistic b under H0, with the test value and P(H1) marked.] For 50 years Pearson and Neyman won, because their approach is simpler in most applications.
• Pearson–Neyman: The significance P of a test is the probability that our null hypothesis is true in comparison to a precisely defined alternative hypothesis.
• This approach does not raise concerns if we have two and only two contrary hypotheses (tertium non datur).
• As a result, P is part of the results section of a publication.
In the view of Pearson and Neyman a test should falsify a null hypothesis with respect to the observation.

11. Fisher versus Pearson–Neyman:
• Fisher: A test aims at falsifying a hypothesis. Pearson–Neyman: A test aims at falsifying a null assumption.
• Fisher: We test for differences in the model parameters. Pearson–Neyman: We test against assumed data that have not been measured.
• Fisher: P values are part of the hypothesis development. Pearson–Neyman: P values are central to hypothesis testing.
• Fisher: We test the observed data. Pearson–Neyman: We test against something that has not been measured.
In both frameworks:
• P is not the probability that H0 is true!
• 1 − P is not the probability that H1 is true!
• Rejecting H0 does not mean that H1 is true; rejecting H1 does not mean that H0 is true.
• The test does not rely on prior information.
• It does not consider additional hypotheses.
• The result is invariant of the way of testing.

12. A word on logic: modus tollens. If Ryszard is from Poland, he is probably not a member of the Sejm. Ryszard is a member of the Sejm. Thus he is probably not a citizen of Poland. Likewise: if P(H1) > 0.95, H0 is probably false. But if P(H1) < 0.95, this does not mean that H0 is probably true, nor that H1 is probably false; it only means that we do not know. If multiple null assumptions are possible, the results of classical hypothesis testing are difficult to interpret. If multiple hypotheses are contrary to a single null hypothesis, the results are equally difficult to interpret. Pearson–Neyman and Fisher testing always works properly only if there are two and only two truly contrasting alternatives.

13. Examples:
• "The pattern of co-occurrences of the two species appeared to be random (P(H0) > 0.3)." (We cannot test for randomness.)
• "We reject our hypothesis about antibiotic resistances in the Bacillus thuringiensis strains (P(H1) > 0.1)." (We can only reject null hypotheses.)
• "The two enzymes did not differ in substrate binding efficacy (P > 0.05)." (We do not know.)
• "Time of acclimation and type of injection significantly affected changes in Tb within 30 min after injection (three-way ANOVA: F5,461 = 2.29; P < 0.05)." (With n = 466, time explains 0.5% of the variation.)
• "The present study has clearly confirmed the hypothesis that non-native gobies are much more aggressive fish than are bullheads of comparable size... This result is similar to those obtained for invasive round goby in its interactions with the native North American cottid (F1,14 = 37.83)." (If others have found the same, we should rather test for lack of difference; the present null assumption is only a straw man.)

14. The Bayesian philosophy. Abraham de Moivre (1667-1754), Thomas Bayes (1702-1761). The law of conditional probability: P(A|B) = P(A∩B) / P(B). Theorem of Bayes: P(H|D) = P(D|H) P(H) / P(D), with P(D) = Σi P(D|Hi) P(Hi).
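
A one-line application of the theorem (Python; the numbers are illustrative, not from the slide):

```python
# Posterior probability of a hypothesis H given data D, from an assumed
# prior P(H) and the two conditional probabilities P(D|H) and P(D|~H).
def posterior(p_d_given_h, p_d_given_not_h, prior):
    """P(H|D) = P(D|H) P(H) / [P(D|H) P(H) + P(D|~H) (1 - P(H))]."""
    num = p_d_given_h * prior
    return num / (num + p_d_given_not_h * (1 - prior))

# A result that is five times more probable under H than under ~H
# barely helps a sceptical prior of 0.1.
print(f"{posterior(0.05, 0.01, 0.1):.2f}")  # ~0.36
```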

15. A Bayesian interpretation of probability. Under a frequentist interpretation a statistical test provides a precise estimate of the probability in favour of our null hypothesis: the posterior is independent of the prior. In the frequentist interpretation probability is an objective reality. Under a Bayesian interpretation a statistical test provides an estimate of how much the test shifted an initial assumption about the level of probability in favour of our hypothesis (ΔP) towards statistical significance: the posterior is mediated by the prior. Significance is the degree of belief based on prior knowledge. [Figure: posterior probability as a function of the prior, for priors of 0, 0.1, 0.5, 0.9, and 0.99.]

16. The earth is round: P < 0.05 (Goodman 1995). We perform a test in our bathroom to see whether the water in the filled bathtub is curved according to a globe or to a three-dimensional heart. Our test gives P = 0.98 in favour of earth-like curvature (P(H0) < 0.05). Does this change our view about the geometry of the earth? Does this mean that the heart model has 2% support? Often null hypotheses serve only as a straw man to "support" our hypothesis (fictional testing). [Figure: frequentist vs. Bayesian probability levels in favour of the H0 that the earth is a heart.] The higher our initial guess about the probability of our hypothesis, the less does any new test contribute to further evidence. Frequentist tests are not as strong as we think.

17. Confirmatory studies. A study reports that potato chips increase the risk of cancer (P < 0.01); the test alone suggests P(H1) = 0.99. However, previous work did not find a relationship, so we believe that p(H1) < 0.5. Our test then returns a posterior probability of roughly P = prior × 0.99 < 0.5. The posterior test is not as significant as we believe. Tests in confirmatory studies must consider prior information. Our test provides a significance level independent of prior information only if we are quite sure about the hypothesis to be tested. Bayesian prior and conditional probabilities are often not known and have to be guessed. Frequentist inference did a good job; we have scientific progress.

18. Bayesian inference: Bayes factor (odds). We have 59 heads and 41 tails. Does this mean that heads have a higher probability? The frequentist approach compares the observed estimate against the fixed null value. The Bayes approach asks what the probability of our model is with respect to any other possible model. Under Bayesian logic the observed result is only about 5 times more probable than the null; the odds for a deviation are 4.44, i.e. 1/4.44 = 0.23 in favour of H0.

19. How to recalculate frequentist probabilities in a Bayesian framework. The Bayes factor gives the odds in favour of H0. For tests approximately based on the normal distribution (Z, t, F, χ²) Goodman defined the minimal Bayes factor as BF = e^(−Z²/2). A factor of 1/10 means that H0 is ten times less probable than H1. For large n, χ² is approximately normally distributed. For a hypothesis to be 100 times more probable than the alternative model we need a parametric significance level of P < 0.0024! Bayesian statisticians call for using P < 0.001 as the upper limit of significance!!
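
A sketch of this recalculation (Python, scipy assumed), illustrating why P < 0.0024 is needed before H0 becomes about 100 times less probable than H1:

```python
# Goodman's minimum Bayes factor BF = exp(-Z^2 / 2) for a two-sided p-value.
import numpy as np
from scipy.stats import norm

def min_bayes_factor(p_two_sided):
    z = norm.isf(p_two_sided / 2)  # Z score for a two-sided p-value
    return np.exp(-z**2 / 2)

for p in (0.05, 0.01, 0.0024, 0.001):
    print(f"P = {p:<7} minimum BF = 1/{1 / min_bayes_factor(p):.0f}")
# P = 0.05 corresponds to a minimum BF of only about 1/7;
# P = 0.0024 corresponds to about 1/100.
```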

20. "All models are wrong but some are useful." Hirotugu Akaike; William Ockham. Occam's razor: Pluralitas non est ponenda sine necessitate (plurality should not be posited without necessity). Any test for goodness of fit will eventually become significant if we only enlarge the number of free parameters. The sample-size-corrected Akaike criterion of model choice: AICc = −2 ln L + 2k + 2k(k+1)/(n−k−1), where k is the total number of model parameters + 1, n the sample size, and L the maximum likelihood estimate of the model. [Figure: bias and explained variance/significance trade off against the number of variables; the optimum of information content lies between many and few.]

21. Maximum likelihood can be estimated from χ² or from r². The lower the AIC, the more parsimonious the model. We choose the model with the lowest AIC ("the most useful model"); this is often not the model with the lowest P-value. AIC model selection serves to find the best descriptor of observed structure. It is a hypothesis-generating method; it does not test for significance. Model selection using significance levels is a hypothesis-testing method. When to apply AIC: general linear modelling (regression models, ANOVA, MANCOVA), regression trees, path analysis, time series analysis, null model analysis. A sketch follows below.
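
A sketch of AICc-based model choice on simulated data (Python; the least-squares shortcut −2 ln L = n ln(RSS/n) + const is a standard identity, while the data and the two candidate models are illustrative):

```python
# Compare a linear and a 5th-degree polynomial regression by AICc.
import numpy as np

def aicc(rss, n, k):
    """k counts all estimated parameters including the error variance."""
    aic = n * np.log(rss / n) + 2 * k
    return aic + 2 * k * (k + 1) / (n - k - 1)

rng = np.random.default_rng(7)
x = rng.uniform(0, 10, 50)
y = 1.0 + 0.8 * x + rng.normal(0, 1, 50)  # the true model is linear

for deg, k in ((1, 3), (5, 7)):           # linear vs. 5th-degree polynomial
    resid = y - np.polyval(np.polyfit(x, y, deg), x)
    print(f"degree {deg}: AICc = {aicc(np.sum(resid**2), 50, k):.1f}")
# The linear model should win despite the polynomial's higher r^2.
```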

22. Model selection using significance levels is a hypothesis-testing method. Significance levels and AIC must not be used together. AIC should be used together with r².

23. Large data sets: the relationship between P, r², and sample size. Using an F-test at r² = 0.01 (regression analysis) we need 385 data points to get a significant result at P < 0.05. At very large sample sizes (N >> 100) classical statistical tests break down: any statistical test will eventually become significant if we only enlarge the sample size. [Figure: F-test P as a function of sample size at r² = 0.01.]
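
The 385 can be verified directly (Python, scipy assumed):

```python
# How large must n be before r^2 = 0.01 becomes 'significant' at P < 0.05?
# For a simple regression, F = r^2 / (1 - r^2) * (n - 2) with (1, n - 2) df.
from scipy.stats import f

r2 = 0.01
for n in (100, 385, 1000):
    F = r2 / (1 - r2) * (n - 2)
    p = f.sf(F, 1, n - 2)
    print(f"n = {n:>4}: F = {F:.2f}, P = {p:.4f}")
# Around n = 385 the test crosses P = 0.05 although the effect is trivial.
```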

24. 3000 replicates of pairs of Excel random numbers, contaminated with pairs of zeroes and ones:
• N = 100, one pair of zeroes and ones: 7.5% significant correlations.
• N = 1000, 10 pairs of zeroes and ones: 16% significant correlations.
• N = 10000, 100 pairs of zeroes and ones: 99.9% significant correlations.
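
A sketch of this experiment (Python; exactly how the zero/one pairs were added to the random data is an assumption here, so the percentages will differ in detail, but the fraction of significant correlations grows with N in the same way):

```python
# Uniform random pairs plus a handful of identical (0,0)/(1,1) pairs
# (1% of N of each kind); count nominally significant Pearson correlations.
import numpy as np
from scipy import stats

rng = np.random.default_rng(3)

def frac_significant(n, n_fixed, reps=300):
    hits = 0
    for _ in range(reps):
        x = np.concatenate([rng.uniform(0, 1, n - 2 * n_fixed),
                            np.zeros(n_fixed), np.ones(n_fixed)])
        y = np.concatenate([rng.uniform(0, 1, n - 2 * n_fixed),
                            np.zeros(n_fixed), np.ones(n_fixed)])
        if stats.pearsonr(x, y)[1] < 0.05:
            hits += 1
    return hits / reps

for n, k in ((100, 1), (1000, 10), (10000, 100)):
    print(f"N = {n:>5}: {100 * frac_significant(n, k):.1f}% significant")
```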

25. Number of species co-occurrences in comparison to a null expectation (data are simple random numbers). The null model relies on a randomisation of 1s and 0s in the matrix. [Figure: null distribution with the observed number Nobs in the tail.] The variance of the null space decreases due to statistical averaging. Any test that involves randomisation of a compound metric will eventually become significant due to the decrease of the standard error; this reduction is due to statistical averaging. At very large sample sizes (N >> 100) classical statistical tests break down. Instead of using a predefined significance level, use a predefined effect size or r² level.

26. "The T-test of Wilcoxon revealed a statistically significant difference in pH of surface water between the lagg site (Sphagno-Juncetum) and the two other sites." Every statistical analysis must at least present sample sizes, effect sizes, and confidence limits. Multiple independent testing needs independent data.

27. Pattern seeking, or P-fishing. Simple linear random numbers: of 12 trials, four gave significant results. False discovery rates (false detection error rates): the proportion of erroneously declared significances. Using the same test several times with the same data needs a Bonferroni correction: for a single test the significance level is α; for n independent tests the corrected level is α/n. The Bonferroni correction is very conservative.
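
A minimal P-fishing demonstration with the Bonferroni correction (Python; the 12 predictors and n = 30 are illustrative):

```python
# Test 12 random 'predictors' against a random response: some will be
# nominally significant by chance; Bonferroni removes them.
import numpy as np
from scipy import stats

rng = np.random.default_rng(11)
y = rng.uniform(0, 1, 30)
pvals = np.array([stats.pearsonr(rng.uniform(0, 1, 30), y)[1]
                  for _ in range(12)])

alpha = 0.05
print("nominal hits:   ", np.sum(pvals < alpha))
print("Bonferroni hits:", np.sum(pvals < alpha / len(pvals)))  # alpha/n
```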

28. False discovery rates (false detection error rates): the proportion of erroneously declared significances. A sequential Bonferroni correction: order the K contrasts by their p-values and test the i-th smallest at the level α/(K − i + 1), where K is the number of contrasts.
• What is multiple testing?
• A single analysis?
• A single data set?
• A single paper?
• A single journal?
• A lifetime's work?
There are no established rules!
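
A sketch of the sequential procedure (Python; this is the Holm variant of the sequential Bonferroni correction):

```python
# The smallest p-value is tested at alpha/K, the next at alpha/(K-1), etc.
import numpy as np

def holm(pvals, alpha=0.05):
    order = np.argsort(pvals)
    K = len(pvals)
    significant = np.zeros(K, dtype=bool)
    for rank, idx in enumerate(order):
        if pvals[idx] <= alpha / (K - rank):
            significant[idx] = True
        else:
            break  # stop at the first non-rejection
    return significant

print(holm(np.array([0.001, 0.011, 0.02, 0.04, 0.3])))
# -> [ True  True False False False ]
```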

29. A data set on the health status and reproductive success of Polish storks. N: 866 stork chicks. K: 104 physiological and environmental variables.

30. P-fishing in practice: no clear hypothesis, P < 0.000001, and possibly non-independent data due to the sampling sequence.
• Common practice is to screen the data for significant relationships and publish these significances.
• The respective paper does not mention how many variables have been tested.
• Hypotheses are constructed post factum to match the "findings".
• "Results" are discussed as if they corroborate the hypotheses.
Instead:
• Hypotheses must come from theory (deduction), not from the data. Inductive hypothesis testing is critical.
• If the hypotheses are intended as a simple description, don't use P-values.
• If the data set is large, divide the records at random into two or more parts: use one part for hypothesis generation and the other parts for testing (see the sketch below).
• Always use multiple-testing-corrected significance levels.
• Take care of non-independence of data; try reduced degrees of freedom.
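
A split-sample sketch matching the stork data dimensions (Python; the data here are pure noise, so the screening "finding" should evaporate on the held-out half):

```python
# Use one random half of the records for hypothesis generation (screening)
# and the held-out half for the confirmatory test.
import numpy as np
from scipy import stats

rng = np.random.default_rng(5)
X = rng.normal(size=(866, 104))  # 866 records, 104 variables (as in the stork data)
y = rng.normal(size=866)         # response; pure noise here

idx = rng.permutation(866)
train, test = idx[:433], idx[433:]

# Screening half: pick the variable with the smallest p-value ...
p_train = [stats.pearsonr(X[train, j], y[train])[1] for j in range(104)]
best = int(np.argmin(p_train))

# ... confirmation half: test that single variable on fresh data.
print(f"variable {best}: screening P = {p_train[best]:.4f}, "
      f"confirmation P = {stats.pearsonr(X[test, best], y[test])[1]:.3f}")
```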

31. Final guidelines. Don't mix data description, classification, and hypothesis testing. Always provide sample sizes and effect sizes; if possible provide confidence limits.
• Data description and model selection: Rely on AIC, effect sizes, and r² only. Do not use P-values. Check for logic and reason.
• Hypothesis testing: Be careful with hypothesis induction; hypotheses should stem from theory, not from the data. Do not develop and test hypotheses using the same data. Do not use significance testing without a priori defined, theory-derived hypotheses. Check for logic and reason. Check whether results can be reproduced. Do not develop hypotheses post factum (telling just-so stories).
• Testing for simple differences and relationships: Be careful in the interpretation of P-values. P does not provide the probability that a certain observation is true, nor the probability that the alternative observation is true. Check for logic and reason. Don't use simple tests on very large data sets; use effect sizes only. Use predefined effect sizes and explained variances. If possible use a Bayesian approach.
