Sources of Bias in Randomised Controlled Trials

Presentation Transcript


  1. Sources of Bias in Randomised Controlled Trials

  2. REMEMBER Randomised Trials are the BEST way of establishing effectiveness.

  3. All RCTs are NOT the same. • Although the RCT is rightly regarded by the cognoscenti as the premier research method, some trials are better than others. • In this lecture we will look at sources of bias in trials and how these can be avoided.

  4. Selection Bias - A reminder • Selection bias is one of the main threats to the internal validity of an experiment. • Selection bias occurs when participants are SELECTED for an intervention on the basis of a variable that is associated with outcome. • Randomisation and similar methods abolish selection bias.
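
As a minimal sketch (not part of the original slides) of why randomisation removes selection bias: the allocation below depends only on a random number generator and never on any characteristic of the participant, so it cannot be associated with outcome. The participant IDs and seed are invented for illustration.

```python
import random

def simple_randomise(participant_ids, seed=2024):
    """Allocate each participant to 'intervention' or 'control' purely at random.

    Because the allocation ignores every participant characteristic, it cannot
    be correlated with prognosis - which is what abolishes selection bias.
    """
    rng = random.Random(seed)
    return {pid: rng.choice(["intervention", "control"]) for pid in participant_ids}

print(simple_randomise([f"P{i:03d}" for i in range(1, 9)]))
```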

  5. After Randomisation • Once we have randomised participants we eliminate selection bias, but the validity of the experiment can still be threatened by other forms of bias, which we must guard against.

  6. Forms of Bias • Subversion Bias • Technical Bias • Attrition Bias • Consent Bias • Ascertainment Bias • Dilution Bias • Recruitment Bias

  7. Bias (cont) • Resentful demoralisation • Delay Bias • Chance Bias • Hawthorne effect • Analytical Bias.

  8. Subversion Bias • Subversion bias occurs when a researcher or clinician manipulates participant recruitment so that the groups formed at baseline are NOT equivalent. • Anecdotal or qualitative evidence (i.e. gossip) suggests that this is a widespread phenomenon. • Statistical evidence also indicates that it has occurred widely.

  9. Subversion - qualitative evidence • Schulz has described, anecdotally, a number of incidents of researchers subverting allocation, for example by holding sealed envelopes up to X-ray lights to read the allocation. • Researchers have confessed to breaking open filing cabinets to obtain the randomisation code. Schulz JAMA 1995;274:1456.

  10. Quantitative Evidence • Trials with adequately concealed allocation show systematically smaller effect sizes than trials with poor concealment, which would not happen if allocation were never being subverted. • Some trials using simple randomisation report baseline groups that are too similar for the balance to have arisen by chance.

  11. Poor concealment • Schulz et al. examined 250 RCTs and classified them as having adequate concealment (where subversion was difficult), unclear concealment, or inadequate concealment (where subversion could take place). • They found that badly concealed allocation led to increased effect sizes – suggesting CHEATING by researchers.

  12. Comparison of concealment Schulz et al. JAMA 1995;273:408.

  13. Small VS Large Trials • Small trials tend to give greater effect sizes than large trials, which should not happen. • Kjaergard et al. showed this was due to poor allocation concealment in small trials: when trials were grouped by allocation method, 'secure' allocation reduced the effect by 51%. Kjaergard et al. Ann Intern Med 2001;135:982.

  14. Case Study • Subversion is rarely reported for individual studies. • One study where it has been reported was a large, multicentre surgical trial. • Participants were being randomised across 5+ centres using sealed envelopes.

  15. Case study (cont) • Subversion was detected and the trial changed to a telephone allocation system.

  16. Case study (cont) • After several hundred participants had been allocated, the study statistician noticed that there was an imbalance in age. • This age imbalance was occurring in 3 out of the 5 centres. • Independently, 3 clinical researchers were subverting the allocation.

  17. Mean ages of groups

  18. Example of Subversion

  19. Using Telephone Allocation

  20. Subversion - summary • Appears to be widespread. • Secure allocation usually prevents this form of bias. • Need not be too expensive. • Essential to prevent cheating.

  21. Secure allocation • Can be achieved using telephone allocation from a dedicated unit. • Can be achieved using an independent person to undertake the allocation.

  22. Technical Bias • This occurs when the allocation system breaks down, often due to a computer fault. • A great example is the COMET I trial (COMET II had to be done because COMET I suffered this bias).

  23. COMET I • A trial of two types of epidural anaesthetic for women in labour. • The trial was using MINIMISATION via a computer programme. • The groups were minimised on the age of the mother and her ethnicity. • The programme had a fault. COMET Lancet 2001;358:19.
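
The slide says COMET used minimisation on the mother's age and ethnicity. As a rough sketch of how a minimisation programme of that general kind works (this is not the COMET code; the arm names, factor levels and deterministic tie-breaking are assumptions for illustration):

```python
def minimise(new_participant, allocated, arms=("A", "B"),
             factors=("age_group", "ethnicity")):
    """Pocock-Simon style minimisation (deterministic sketch).

    For each candidate arm, pretend the new participant joins it, then sum the
    imbalance (max count - min count) across arms for the participant's level
    of each factor. The arm giving the smallest total imbalance is chosen.
    """
    best_arm, best_score = None, None
    for candidate in arms:
        score = 0
        for factor in factors:
            level = new_participant[factor]
            counts = {arm: sum(1 for person, assigned in allocated
                               if assigned == arm and person[factor] == level)
                      for arm in arms}
            counts[candidate] += 1  # hypothetical allocation to this arm
            score += max(counts.values()) - min(counts.values())
        if best_score is None or score < best_score:
            best_arm, best_score = candidate, score
    return best_arm

# toy usage: two mothers already allocated, choose the arm for a third
allocated = [({"age_group": "<25", "ethnicity": "white"}, "A"),
             ({"age_group": "<25", "ethnicity": "asian"}, "B")]
print(minimise({"age_group": "<25", "ethnicity": "white"}, allocated))  # -> B
```

A fault anywhere in a routine like this (for example in how the counts are updated) silently unbalances the groups, which is exactly the technical bias the slide describes.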

  24. COMET I – Technical Bias

  25. COMET II • This new study had to be undertaken and another 1000 women recruited and randomised. • LESSON – Always check the balance of your groups as you go along if computer allocation is being used.
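
The 'lesson' above can be automated with a very simple running check. A minimal sketch, assuming the trial keeps the ages of participants allocated so far in each arm (the numbers and the warning threshold are invented):

```python
import statistics

def check_balance(ages_by_arm, max_age_gap=2.0):
    """Report mean age per arm and warn if the gap suggests the allocation
    system (or someone subverting it) is producing unbalanced groups."""
    means = {}
    for arm, ages in ages_by_arm.items():
        means[arm] = statistics.mean(ages)
        print(f"{arm}: n={len(ages)}, mean age={means[arm]:.1f}")
    if max(means.values()) - min(means.values()) > max_age_gap:
        print("WARNING: age imbalance - investigate the allocation system")

check_balance({"intervention": [28, 31, 26, 33, 29],
               "control": [34, 35, 33, 36, 32]})
```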

  26. Attrition Bias • Most trials lose some participants after randomisation. This can cause bias, particularly if attrition differs between groups. • If a treatment has side-effects, drop-outs may be higher among the less well participants, which can make a treatment appear to be effective when it is not.

  27. Attrition Bias • We can avoid some of the problems with attrition bias by using Intention to Treat analysis, where we keep as many of the patients in the study as possible even if they are no longer 'on treatment'.

  28. Sensitivity analysis • Analysis of trial results can be subjected to a sensitivity analysis whereby those who drop out in one arm are assumed to have the worst possible outcome, whilst those who drop out in the other arm are assumed to have the best possible outcome. If the findings are the same under these extreme assumptions, we are reassured.
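
A minimal sketch of that best-case / worst-case sensitivity analysis for a binary 'success' outcome (all counts are invented; the worst possible outcome is coded as failure and the best possible as success):

```python
def risk_difference(successes_a, n_a, successes_b, n_b):
    """Proportion of successes in arm A minus the proportion in arm B."""
    return successes_a / n_a - successes_b / n_b

# observed (invented) data: arm A 40/60 successes with 10 drop-outs,
#                           arm B 30/60 successes with 15 drop-outs
observed = risk_difference(40, 60, 30, 60)

# extreme scenario 1: A's drop-outs all fail, B's drop-outs all succeed
pessimistic = risk_difference(40, 70, 30 + 15, 75)

# extreme scenario 2: A's drop-outs all succeed, B's drop-outs all fail
optimistic = risk_difference(40 + 10, 70, 30, 75)

print(f"observed {observed:.2f}; under extreme assumptions the difference "
      f"ranges from {pessimistic:.2f} to {optimistic:.2f}")
```

If even the pessimistic scenario leaves the conclusion unchanged, attrition is unlikely to explain the result.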

  29. Consent Bias • This occurs when consent to take part in the trial is sought AFTER randomisation. • It is a particular danger in cluster trials. • For example, Graham et al. randomised schools to a teaching package for emergency contraception. More children took part in the intervention arm than in the control arm. Graham et al. BMJ 2002;324:1179.

  30. Consent bias?

  31. Consent Bias? • Because more children consented in the intervention group, we would expect its measured knowledge to be lower (as it includes children less likely to know the answers). • Conversely, the control group shows a volunteer or consent effect, with only the most knowledgeable children agreeing to take part.

  32. Ascertainment Bias • This occurs when the person assessing or reporting the outcome knows the allocation and can, consciously or not, be influenced by it. • It is a particular problem when outcomes are not 'objective' and there is uncertainty as to whether an event has occurred.

  33. Example • A group of students' essays were randomly assigned photographs purporting to be of the student. The photos were of people judged to be 'attractive', 'average' or 'below average'. The average mark was significantly HIGHER for the average-looking students. • Why? Markers were biased into marking higher for students whom they believed were average-looking (like themselves).

  34. Another example • A homeopathic dilution of histamine was shown in an RCT of cell cultures to have significant effects on cell motility. • Ascertainment was not blind. • The study was repeated with assessors blind to which petri dishes contained distilled water and which contained the homeopathic dilution of histamine. The effect, like snow in the Arabian Desert, disappeared.

  35. Dilution Bias • This occurs when the intervention or control group receives the opposite treatment. It affects all trials where there is non-adherence to the intervention. • For example, in a trial of calcium and vitamin D, about 4% of the controls obtain the treatment and 35% of the intervention group stop taking their treatment. This will 'dilute' any apparent treatment effect.
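
As a rough worked example of how the figures on this slide water an effect down (the 'true' effect and the simple linear attenuation used here are assumptions for illustration, not results from the trial):

```python
true_risk_reduction = 0.20     # assumed effect of actually taking the supplement

crossover_in_controls = 0.04   # ~4% of controls obtain the treatment anyway
non_adherence = 0.35           # ~35% of the intervention arm stop taking it

# crude approximation: only the net difference in exposure between the arms
# shows up in the intention-to-treat comparison
exposure_difference = (1 - non_adherence) - crossover_in_controls
observed_risk_reduction = true_risk_reduction * exposure_difference

print(f"observed effect ~ {observed_risk_reduction:.3f} "
      f"instead of the true {true_risk_reduction:.3f}")
```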

  36. Effect of dilution bias

  37. Sources of dilution • In the calcium and vitamin D trial: controls buying calcium supplements, or intervention patients not taking them. • In a hip protector trial: control patients MAKING their own padded knickers from bubble wrap, and intervention patients not wearing theirs.

  38. Dilution Bias • This can be partly prevented by refusing the controls access to the experimental treatment. • It will always be a problem where control participants can actively seek out the treatment for themselves.

  39. Resentful Demoralisation • This can occur when participants are randomised to a treatment they do not want. • This may lead them to report their outcomes badly in 'revenge'. • This can lead to bias.

  40. Resentful Demoralisation • One solution is to use a patient preference design, where only participants who are 'indifferent' to the treatment they receive are randomised. • This should remove its effects.

  41. Hawthorne Effect • This is an effect that occurs through being part of the study rather than through the treatment itself. Interventions that involve more TLC than the control could show an effect due to the TLC rather than to the drug or surgical procedure. • Placebos largely eliminate this; otherwise TLC should be given to the controls as well.

  42. Delay bias • This can occur if there is a delay between randomisation and the intervention. • In the GRIT trial of early delivery, some women allocated to immediate delivery were delayed. This will dilute the effects of treatment.

  43. Delay bias • Similarly, in the calcium and vitamin D trial there was a delay of months between allocation and receipt of treatment. • This can sometimes be dealt with by starting the analysis for both active and control participants from the time the treatment was received.

  44. Chance Bias • Groups can be uneven in important variables purely by chance. • This can be reduced by stratification or, possibly better, by using ANCOVA. • Stratification, of course, can itself lead to TECHNICAL or SUBVERSION bias.
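
A minimal sketch of the ANCOVA option mentioned above, adjusting the outcome for a baseline covariate that happens to be imbalanced by chance (the data frame and column names are invented; this uses the statsmodels formula API):

```python
import pandas as pd
import statsmodels.formula.api as smf

# invented toy data: outcome, randomised group and a baseline covariate (age)
df = pd.DataFrame({
    "outcome": [12.1, 9.8, 11.5, 8.9, 13.0, 10.2, 12.4, 9.5],
    "group": ["treat", "control"] * 4,
    "baseline_age": [61, 70, 58, 72, 60, 68, 63, 71],
})

# ANCOVA: the treatment effect is estimated while adjusting for baseline age,
# soaking up any chance imbalance in that variable
model = smf.ols("outcome ~ C(group) + baseline_age", data=df).fit()
print(model.params)
```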

  45. Analytical Bias • Once a trial has been completed and the data gathered in, it is still possible to arrive at the wrong conclusions by analysing the data incorrectly. • Most IMPORTANT is intention to treat (ITT) analysis. • Inappropriate sub-group analyses are also a common practice.

  46. Intention To Treat • The main analysis of the data must be by groups as randomised. Per-protocol or active-treatment analysis can lead to a biased result. • Patients not taking the full treatment are usually quite different from those who are, and restricting the analysis to the latter can lead to bias.
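
A minimal sketch contrasting intention-to-treat with a per-protocol analysis for a binary outcome (the toy records are invented; note how dropping non-adherers inflates the apparent effect when the non-adherers differ from the rest):

```python
# each record: (randomised arm, adhered to treatment?, outcome was a success?)
participants = [
    ("treat", True, True), ("treat", True, True),
    ("treat", False, False), ("treat", False, False),
    ("control", True, True), ("control", True, False),
    ("control", True, False), ("control", True, False),
]

def success_rate(records, arm, intention_to_treat=True):
    """ITT keeps everyone as randomised; per-protocol drops non-adherers."""
    rows = [r for r in records if r[0] == arm and (intention_to_treat or r[1])]
    return sum(1 for r in rows if r[2]) / len(rows)

for label, itt in (("intention-to-treat", True), ("per-protocol", False)):
    diff = (success_rate(participants, "treat", itt)
            - success_rate(participants, "control", itt))
    print(f"{label}: risk difference = {diff:.2f}")
```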

  47. Sub-Group Analyses • Once the main analysis has been completed it is tempting to look to see if the effect differs by group. • Is treatment more or less effective in women? • Is it better or worse among older people? • Is treatment better among people at greater risk?

  48. Sub-Groups • All of these are legitimate questions. The problem is that the more subgroups one looks at, the greater the chance of finding a spurious effect. • Sample size estimations and statistical tests are based on one comparison only.
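
The inflation from multiple comparisons is easy to quantify under the textbook assumption of independent tests, each at the 5% level (real subgroups are rarely independent, so this is only indicative):

```python
# chance of at least one spurious 'significant' finding when there is no
# true effect and k independent subgroup tests are each run at the 5% level
for k in (1, 5, 10, 20):
    print(f"{k:>2} comparisons -> {1 - 0.95 ** k:.0%} chance of a false positive")
```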

  49. Sub-group example • In a large RCT of aspirin for myocardial infarction, a sub-group analysis showed that aspirin was INEFFECTIVE in people born under the star signs Gemini and Libra. • This is complete NONSENSE! • It shows the dangers of subgroup analyses. Lancet 1988;ii:349-60.

  50. More Seriously • Sub-group analyses have led to: • the wrong finding that tamoxifen was ineffective among women < 50 years; • that streptokinase was ineffective > 6 hours after MI; • that aspirin for secondary prevention in women is ineffective; • that antihypertensive treatment for primary prevention in women is ineffective; • that beta-blockers are ineffective in older people. • And so on…
