
How to read an RCT: introduction, workshop, wrap-up



  1. How to read an RCT: introduction, workshop, wrap-up Martin Gallagher

  2. Overview • Checklists • CONSORT Statement & others • Important elements to consider • Randomisation • Powering • Bias • Secondary analyses • Exercises • Causality and importance • Still going on now: ACT and HDF studies

  3. RCT vs non-RCT • Ioannidis, JAMA 2001 • 45 topics • 408 studies (240 randomised + 168 non-randomised)

  4. Critical appraisal tools • CONSORT Guidelines • Consolidated Standards of Reporting Trials • Endorsed by >50% of core medical journals • Extensions for different types of trials: • Cluster randomised • Non-inferiority trials • >25 items on checklist

  5. Other appraisal tools • Centre for Evidence Based Medicine (Oxford) • 2 page document • http://www.cebm.net/index.aspx?o=1157 • Critical Appraisal Skills Program (CASP) tool • Public Health Resource Unit, NHS • http://www.sph.nhs.uk/sph-files/rct%20appraisal%20tool.pdf

  6. Cochrane risk of bias tool • Were methods for sequence generation reported? • Was there adequate allocation concealment? • Was there complete outcome data? • Was there intention-to-treat analysis? • Was there blinding of participants, investigators, outcome assessors, data analysts? • Was there selective outcome reporting? • Other sources of bias? • Co-interventions • Commercial • Each domain rated Yes, No, or Unclear

  7. Elements of appraisal • Did the study ask a clearly focussed question? • Was it an RCT and an appropriate design for the question? • Were subjects appropriately allocated to intervention/control groups? • Were subjects/staff/others blinded to allocation? • Were all who entered the trial accounted for at the end? • Were all subjects followed up and measured the same way? • Did the study have enough subjects to minimise the play of chance? • How are the results presented and what is the main result? • How precise are these results? • Were all important outcomes considered so the results can be applied?

  8. Lack of randomisation concealment Chalmers et al. N Engl J Med 1983;309:1358-61

  9. Concealment of allocation

  10. Concealment of allocation

  11. Randomisation • The play of chance • Ideally central/independent rather than local • Separate preparation of the agents • E.g. pharmacy, with numbered/coded bottles • Serially numbered, opaque, sealed envelopes • Beware: alternation, dates of arrival • Note: • Permuted blocks (see the sketch below) • Stratification • Put yourself in the place of the local PI…
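As a rough illustration of the permuted-block idea noted above, here is a minimal Python sketch; the block size of 4, the arm labels "A"/"B", and the function name are illustrative choices, not taken from the slides or from any trial's actual scheme.

import random

def permuted_block_sequence(n_blocks, block_size=4, arms=("A", "B")):
    # Each block holds an equal number of each arm in random order,
    # so the two groups never drift far apart during recruitment.
    per_arm = block_size // len(arms)
    sequence = []
    for _ in range(n_blocks):
        block = list(arms) * per_arm      # e.g. ["A", "B", "A", "B"]
        random.shuffle(block)             # random order within the block
        sequence.extend(block)
    return sequence

# 5 blocks of 4 -> 20 allocations, exactly 10 per arm
print(permuted_block_sequence(5))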

  12. Outcome of effective randomisation • Table 1 • Assurance that the groups were equivalent at baseline • Accounts for both measured and (more importantly) unmeasured confounders

  13. Power • Not a part of all appraisal tools • Not many treatments have an effect size (RRR) of >30-40% • A clue to study quality • Why did they choose that effect size? • What evidence supports such a view? • Is that consistent with my perception of risk in this population? • (A worked sample-size calculation follows below)
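To make the link between the assumed effect size (RRR) and trial size concrete, here is a sketch of a standard normal-approximation sample-size calculation for two proportions; the 20% control event rate, 30% RRR, 80% power and the function name are illustrative assumptions, not figures from the slides.

from statistics import NormalDist

def n_per_arm(p_control, rrr, alpha=0.05, power=0.80):
    # Approximate patients needed per arm to detect a relative risk
    # reduction (rrr) from a control event rate, two-sided test on
    # two proportions using the normal approximation.
    p_treat = p_control * (1 - rrr)
    z_alpha = NormalDist().inv_cdf(1 - alpha / 2)
    z_beta = NormalDist().inv_cdf(power)
    variance = p_control * (1 - p_control) + p_treat * (1 - p_treat)
    return (z_alpha + z_beta) ** 2 * variance / (p_control - p_treat) ** 2

# 20% control event rate, hoped-for RRR of 30%, 80% power: roughly 610 patients per arm
print(round(n_per_arm(0.20, 0.30)))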

  14. Bias • Definition: • When an estimated measure of frequency or association differs systematically from the true value. • Random samples will differ from the true population because of random sampling variability • The bigger the sample, the closer it sits to the underlying population (illustrated in the sketch below) • Selection bias • Confounding • Measurement bias
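A quick simulation of the sampling-variability point; the 30% "true" event rate and the sample sizes are made-up values for illustration only.

import random

TRUE_RATE = 0.30   # event rate in the hypothetical underlying population

for n in (50, 500, 5000):
    # Draw a random sample of size n and estimate the event rate from it
    sample = [random.random() < TRUE_RATE for _ in range(n)]
    estimate = sum(sample) / n
    print(f"n={n:5d}  estimated event rate = {estimate:.3f}")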

  15. Selection bias • Not usually so much of an issue in RCTs • Except: • Through the treatment of missing participants • Loss to follow-up • Other selective exclusions (non-compliant, intolerant)

  16. Confounding • A situation in which a measure of the effect of exposure on disease is distorted because of the association of the study factor with other factors that influence the outcome. • Three criteria: • An independent risk factor for the outcome of interest • Not an intervening variable • Unevenly distributed in study groups • In RCTs should be fixed by adequate randomisation (see the simulation below) • Look to Table 1
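A small simulated example of why randomisation deals with confounders, measured or not; the age distribution, sample size and arm labels are invented for illustration only.

import random
from statistics import mean

# 1,000 patients with a baseline characteristic (a made-up age distribution);
# simple randomisation balances it across arms without it ever being
# measured or adjusted for.
ages = [random.gauss(62, 12) for _ in range(1000)]
arms = [random.choice("AB") for _ in range(1000)]

for g in "AB":
    group = [a for a, x in zip(ages, arms) if x == g]
    print(f"arm {g}: n = {len(group)}, mean age = {mean(group):.1f}")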

  17. Measurement bias • Distortion in the measure of frequency or association due to inaccuracy in measurement • Minimise in RCTs by: • Use of placebo • Keeping measurements ‘blind’ to intervention • Avoiding differential treatment of the study groups

  18. Blinding • Not always possible • Try to blind • Participants • Clinicians • Outcome assessment • Colorectal surgery example

  19. Secondary analyses • Should not overshadow the primary outcome • Greater validity if pre-specified • Beware • 1/20 chance of a statistically significant finding by chance alone • The more analyses that are done, the more likely a ‘significant’ finding will arise (worked figures below)
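The arithmetic behind that warning, sketched in Python and assuming independent tests at the conventional 0.05 level (the numbers of analyses shown are arbitrary examples).

# Chance of at least one spuriously 'significant' result (p < 0.05)
# when k independent analyses are run and no true effect exists anywhere.
for k in (1, 5, 10, 20):
    print(f"{k:2d} analyses -> {1 - 0.95 ** k:.0%} chance of a false positive")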

  20. Critical appraisal exercises

  21. Ronco et al: HDF dosing in AKI

  22. Ronco et al: HDF dosing in AKI • Very influential trial; it has driven the research agenda • Heavily cited • Outcome measure? • Single centre, long recruitment time, surgical patient spectrum • Outcomes of subsequent studies?

  23. SAFE Study

  24. SAFE Study • Fundamental question in ICU • Blinding of study treatment?

  25. Tepel: NAC for contrast nephropathy

  26. Tepel: NAC for contrast nephropathy • Very influential and highly cited study • How were they randomised? • Power? • Blinding? • Is it important?

  27. Sulfinpyrazone post MI

  28. Sulfinpyrazone post MI • FDA: • We do not believe that either reported outcome can be accepted for the following reasons: • Assignment of patients often inaccurate and failed to conform to criteria set forth at the outset • Errors in assignment nearly all favoured the conclusion that sulfinpyrazone decreased sudden death • Mortality classification system had no clear logic • Reported effect upon overall mortality heavily dependent upon after-the-fact exclusion from the analysis of certain patients • The exclusions virtually all favoured sulfinpyrazone

  29. SHARP Study

  30. SHARP Study • Change in the primary outcome? • Press release: “During this long trial, the proportion of patients who stopped taking their allocated treatment was about one third, but this was not generally due to side-effects and was the same for both real and dummy treatments. If taken without interruption, however, ezetimibe plus simvastatin could have even larger effects than were seen in SHARP, potentially reducing risk by about one quarter.” How will/does it change your treatment?

  31. Renal nerve ablation

  32. Renal nerve ablation • Randomisation? • Table 1 • Blinding? • Sham operation? • Outcome assessment?

  33. Sevelamer trial

  34. Sevelamer trial • Power? • Blinding? • Bias? • What about patients over 65?

  35. But is it still an issue? • ACT • NAC trial from South America • 2308 patients, randomisation • Blinding of study treatment? • HDF Studies • CONTRAST & Turkish studies • Designed to answer one question but conclude: • “treatment with…HDF does not seem to offer a survival benefit…However, subgroup analysis suggested benefit among patients treated with high convection volumes on all cause mortality” • “Composite for death from any cause and non-fatal CV events is not different between post-dilution on-line HDF and high flux HD. HDF treatment with substitution volume over 17.4L provides better CV and overall survival compared to HD”
