
THE RISK OF RANDOM ERROR (PLAY OF CHANCE)


Presentation Transcript


  1. THE RISK OF RANDOM ERROR (PLAY OF CHANCE) Goran Poropat

  2. Introduction No physical quantity can be measured with perfect certainty. All measurements are prone to errors.

  3. Experimental errors Do not refer to mistakes, blunders, or miscalculations (e.g. measuring a width when the length should have been measured). They are inherent in the measurement process.

  4. Experimental errors Are measured by • Accuracy – how close a measured value is to the true value or accepted value • Precision – how closely two or more measurements agree with each other (repeatability, reproducibility)

  5. Experimental errors Three dimensions particularly influence the reliability of our observations in clinical research: • RANDOM ERRORS (PLAY OF CHANCE) • SYSTEMATIC ERRORS (BIAS) • DESIGN ERRORS

  6. Bias A systematic error – a deviation from the truth in results or inferences. Leads to overestimation or underestimation of the true intervention effect.

  7. Bias Affects the accuracy of a measurement. Should not be confused with imprecision. Multiple replications of the same study give the wrong answer on average.
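A minimal simulation sketch of this distinction (not from the slides; the true value, offset, and spreads are illustrative assumptions): a biased measurement stays wrong on average no matter how often it is replicated, while pure random error averages out.

    import random

    random.seed(0)
    true_value = 50.0

    # Imprecise but unbiased: random error averages out over replications
    unbiased = [true_value + random.gauss(0, 5) for _ in range(10_000)]
    # Precise but biased: a systematic +3 offset never averages out
    biased = [true_value + 3 + random.gauss(0, 0.5) for _ in range(10_000)]

    print(f"unbiased mean: {sum(unbiased) / len(unbiased):.2f}")  # about 50
    print(f"biased mean:   {sum(biased) / len(biased):.2f}")      # about 53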

  8. Random error Imprecision refers to random error – the unpredictable variation between observed values and some “true” value. A possible source of misleading results in RCTs and meta-analyses.

  9. Random error Affects the precision of a measurement. Multiple replications of the same study yield different effect estimates: SAMPLING VARIABILITY.

  10. Sampling variability The actual study result will vary depending on who is actually in the study sample. A sample – a subset of a population of manageable size.
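A short sketch of sampling variability (assumed numbers: a population with a “true” mean of 50, samples of 100): each sample of manageable size gives a different estimate of the same underlying quantity.

    import random

    random.seed(1)
    # Assumed population: "true" mean 50, SD 10 (illustrative numbers only)
    population = [random.gauss(50, 10) for _ in range(100_000)]

    # Each sample of manageable size gives a different estimate of the mean
    for i in range(5):
        sample = random.sample(population, 100)
        print(f"sample {i + 1}: mean = {sum(sample) / len(sample):.2f}")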

  11. Epidemiological studies It is impossible to evaluate every member of the entire population. The relationship between exposure and a health-related event is therefore judged from observations on a SAMPLE of the population: STATISTICAL INFERENCES (extrapolations).

  12. Sampling variability [Figure: small samples (N = 4) drawn repeatedly from the same population contain different members and give different results]

  13. Sampling variability Different inferences arise from the various possible samples. For a given hypothesis, the information size determines the probability of drawing a bad sample: random errors tend to decrease as information size increases.
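A sketch of that last point (the event risk, arm sizes, and replication count are made-up illustrations): simulating many replications of a trial with no true effect, the spread of the risk-difference estimates shrinks as the number of patients per arm grows.

    import random
    import statistics

    random.seed(2)

    def risk_difference(n):
        # Two arms of n patients; assumed true event risk 0.3 in both arms,
        # so the true risk difference is 0 and any deviation is random error
        events_a = sum(random.random() < 0.3 for _ in range(n))
        events_b = sum(random.random() < 0.3 for _ in range(n))
        return events_a / n - events_b / n

    # Spread of the estimates (random error) shrinks as information size grows
    for n in (25, 100, 400, 1600):
        estimates = [risk_difference(n) for _ in range(2000)]
        print(f"n = {n:4d} per arm: SD of estimates = {statistics.stdev(estimates):.3f}")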

  14. Sampling variability [Figure: relative risk (RR) estimate plotted against the number of patients randomised; before the required sample size is reached, the estimate can show a clinically important overestimate]

  15. Epidemiological studies The measure of association we observe (inference) in our data may differ from the “true” measure of association by chance alone. What is the probability that the observed difference is due to the play of chance?

  16. Hypothesis testing Quantify the degree to which sampling variability (chance) can explain the observed association. Assume H0 is true, not Ha. The probability of obtaining the observed effect (or larger) under the null hypothesis is used for assessing H0: the P-value.

  17. P-value The likelihood of observing the data given that the null hypothesis is true. The P-value threshold of 0.05 is arbitrary. Data yielding a P-value of 0.05 imply a 5% chance of obtaining the observed (or a more extreme) result if no real effect exists.

  18. P-value A P-value is the probability of an observed (or more extreme) result arising by chance
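One way to make this definition concrete is a permutation test; the sketch below uses made-up 2x2 trial counts (30/100 events on intervention vs 40/100 on control, not from the slides) and asks how often relabelling the groups at random produces a difference at least as large as the one observed.

    import random

    random.seed(3)
    # Hypothetical counts (assumptions, not from the slides):
    # 30/100 events on intervention, 40/100 on control
    n_int, n_ctl = 100, 100
    outcomes = [1] * 30 + [0] * 70 + [1] * 40 + [0] * 60
    observed = abs(sum(outcomes[:n_int]) / n_int - sum(outcomes[n_int:]) / n_ctl)

    # Under H0 the group labels are exchangeable: shuffle them repeatedly and
    # count how often a difference at least as extreme arises by chance alone
    extreme, reps = 0, 20_000
    for _ in range(reps):
        random.shuffle(outcomes)
        diff = abs(sum(outcomes[:n_int]) / n_int - sum(outcomes[n_int:]) / n_ctl)
        if diff >= observed:
            extreme += 1
    print(f"permutation P-value is about {extreme / reps:.3f}")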

  19. Misinterpretations P>0.05 does not mean “the intervention has no effect”; it means there is not strong evidence that the intervention has an effect. Likewise, P<0.05 does not mean “the intervention has a strong benefit”. The P value addresses only the question of whether the intervention effect is precisely nil.

  20. Confidence intervals Another approach to quantifying sampling variability. The range within which the true magnitude of effect lies with a stated probability, or a certain degree of assurance (usually 95%).

  21. Confidence intervals Point estimate – the actual measure of association given by the data (OR, RR, RD). The best guess of the magnitude and direction of the experimental intervention’s effect compared with the control intervention.
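A minimal sketch of computing a point estimate and its 95% confidence interval for a relative risk, using the standard log-RR (Katz) standard error; the 2x2 counts are illustrative assumptions.

    import math

    # Hypothetical 2x2 counts (assumptions for illustration): events / total
    a, n1 = 30, 100   # intervention arm
    c, n2 = 40, 100   # control arm

    rr = (a / n1) / (c / n2)                        # point estimate of the RR
    se_log_rr = math.sqrt(1/a - 1/n1 + 1/c - 1/n2)  # Katz standard error of log(RR)
    lower = math.exp(math.log(rr) - 1.96 * se_log_rr)
    upper = math.exp(math.log(rr) + 1.96 * se_log_rr)
    print(f"RR = {rr:.2f}, 95% CI {lower:.2f} to {upper:.2f}")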

  22. Confidence intervals Wider intervals – greater imprecision. CI width depends on • Sample size • Precision of individual study estimates • Number of studies combined

  23. Confidence intervals P-value – the extent to which the null hypothesis is compatible with the data. CI – the range of hypotheses compatible with the data.

  24. Sum up • Random error (due to ’play of chance’) is the unpredictable variation between observed values and some ’true’ value • Everything we attempt to estimate may be subject to some degree of random error

  25. Sum up • Random error affects statistical significance, estimated treatment effects, and heterogeneity estimates • Only a sufficient number of trials and patients will ensure an acceptable risk of random error

  26. Thank you for your attention!
