
Quality issues in policy trials

Quality issues in policy trials. Dr Carole Torgerson, Reader, Institute for Effective Education & Dr Amanda Perry, Senior Research Fellow, Centre for Criminal Justice Economics and Psychology. 14th September 2009. RCTs in the Social Sciences Conference Workshop, 2009.

Presentation Transcript


  1. Quality issues in policy trials Dr Carole Torgerson, Reader, Institute for Effective Education & Dr Amanda Perry, Senior Research Fellow, Centre for Criminal Justice Economics and Psychology. 14th September 2009. RCTs in the Social Sciences Conference Workshop, 2009

  2. Why do we need to quality appraise trials? • In reading individual trials: • to differentiate between and within trials on quality issues; • In a systematic review: • To investigate whether the individual studies in the review are affected by bias. Systematic errors in a study can bias its results by overestimating or underestimating effects. Bias in an individual study, or in several studies in the review, can in turn bias the results of the review • To make a judgement about the weight of evidence that should be placed on each study (higher weight given to studies of higher quality in a meta-analysis) • To investigate differences in effect sizes between high and low quality studies in a meta-regression • In designing trials: • To use quality criteria to ensure that the trial is methodologically rigorous.

  3. “A careful look at randomized experiments will make clear that they are not the gold standard. But then, nothing is. And the alternatives are usually worse.” Berk RA. (2005) Journal of Experimental Criminology 1, 417-433.

  4. Characteristics of a rigorous trial • Once randomised, all participants are included within their allocated groups. • Random allocation is undertaken by an independent third party. • Outcome data are collected blindly. • Sample size is sufficient to exclude an important difference. • A single analysis is pre-specified before data analysis.

  5. Education: comparison with health education Torgerson CJ, Torgerson DJ, Birks YF, Porthouse J. (2005) A comparison of randomised controlled trials in health and education. British Educational Research Journal, 31: 761-785. (based on n = 168 trials)

  6. Problems with RCTs • Failure to keep to random allocation • Attrition can introduce selection bias • Unblinded ascertainment can lead to ascertainment bias • Small samples can lead to Type II error • Multiple statistical tests can give Type I errors • Poor reporting of uncertainty (e.g., lack of confidence intervals).

  7. Independent assignment • “Randomisation by centre was conducted by personnel who were not otherwise involved in the research project” [1] • Distant assignment was used to: “protect overrides of group assignment by the staff, who might have a concern that some cases receive home visits regardless of the outcome of the assignment process”[2] [1] Cohen et al. (2005) J of Speech Language and Hearing Res. 48, 715-729. [2] Davis RG, Taylor BG. (1997) Criminology 35, 307-333.

  8. Attrition • Attrition can lead to bias; a high quality trial will have maximal follow-up after allocation. • It can be difficult to ascertain the amount of attrition and whether or not attrition rates are comparable between groups. • A good trial reports low attrition with no between-group differences. • Rule of thumb: 0-5%, not likely to be a problem; 6-20%, worrying; >20%, likely selection bias.
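The rule of thumb above is simple enough to encode directly. A minimal sketch in Python: the function name and example figures are illustrative, only the thresholds come from the slide.

```python
def attrition_concern(n_randomised, n_followed_up):
    """Classify attrition using the slide's rule of thumb (hypothetical helper)."""
    rate = 1 - n_followed_up / n_randomised
    if rate <= 0.05:
        return "not likely to be a problem"
    elif rate <= 0.20:
        return "worrying"
    else:
        return "likely selection bias"

# e.g. 500 randomised, 430 followed up -> 14% attrition
print(attrition_concern(500, 430))  # -> "worrying"
```

Note that the overall rate is only half the story: as the second bullet says, attrition should also be compared between arms, which a helper like this would need to be called on separately for each group.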

  9. Concealed allocation – why is it important? • Good evidence from multiple sources shows that effect sizes for RCTs where randomisation was not independently conducted were larger than those for RCTs that used independent assignment methods. • A wealth of evidence indicates that unless random assignment was undertaken by an independent third party, subversion of the allocation may have occurred (leading to selection bias and exaggeration of any differences between the groups).

  10. Allocation concealment: a meta-analysis • Schulz and colleagues took a database of 250 randomised trials in the field of pregnancy and childbirth. • The trials were divided into 3 groups with respect to concealment: • Good concealment (difficult to subvert); • Unknown (not enough detail in the paper); • Poor (e.g., randomisation list on a public notice board). • They found exaggerated effect sizes for poorly concealed compared with well concealed randomisation.

  11. Comparison of adequate, unclear and inadequate concealment Schulz et al. JAMA 1995;273:408.

  12. Summary of assignment and concealment • Judge whether the study is an RCT, a quasi-experiment or other. • There is increasing evidence to suggest that subversion of random allocation is a problem in randomised trials. The ‘gold-standard’ method of random allocation is the use of a secure third-party method. • Judge whether or not the trial reports that an independent method of allocation was used. Poor quality trials: use sealed envelopes; do not specify the allocation method; or use allocation methods within the control of the researcher (e.g., tossing a coin). • Judge whether there are assignment discrepancies, e.g. failure to keep to random allocation.

  13. Other design issues • Unblinded ascertainment (outcome measurement) can lead to ascertainment bias • Small samples can lead to Type II error (concluding there is no difference when there is a difference) • Attrition (drop-out) can introduce selection bias • Multiple statistical tests can give Type I errors (concluding there is a difference when this is due to chance) • Poor reporting of uncertainty (e.g., lack of confidence intervals).
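The multiple-testing point can be made concrete with the familywise error rate for k independent tests at level α, 1 − (1 − α)^k. A minimal sketch (the choice of k values is illustrative):

```python
alpha = 0.05
for k in (1, 5, 10, 20):
    # chance of at least one spurious "significant" result across k independent tests
    fwer = 1 - (1 - alpha) ** k
    print(f"{k:2d} tests: P(at least one Type I error) = {fwer:.2f}")
```

With 20 outcome measures tested at the 5% level, the chance of at least one false positive is about 64%, which is why a single pre-specified analysis (slide 4) matters.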

  14. Blinding of participants and investigators • Participants can be blinded to: • Research hypotheses • The nature of the control or experimental condition • Whether or not they are taking part in a trial • This may help to reduce bias from resentful demoralisation • Investigators should be blinded (if possible) to follow-up tests, as this eliminates ‘ascertainment’ bias: consciously or unconsciously, investigators may ascribe a better outcome than is true, based on knowledge of the assigned groups.

  15. Blinding of Outcome Assessment • Ascertainment bias can result when the assessor is not blind to group assignment, e.g., homeopathy study of histamine showed an effect when researchers were not blind to the assignment but no effect when they were. • Example of outcome assessment blinding: Study “was implemented with blind assessment of outcome by qualified speech language pathologists who were not otherwise involved in the project”[1] [1] Cohen et al. (2005) J of Speech Language and Hearing Res. 48, 715-729.

  16. Blinding of outcome assessment • Judge whether or not post-tests were administered by someone who is unaware of the group allocation. Ascertainment bias can result when the assessor is not blind to group assignment, e.g., a homeopathy study of histamine showed an effect when researchers were not blind to the assignment but no effect when they were. • Example of outcome assessment blinding: Study “was implemented with blind assessment of outcome by qualified speech language pathologists who were not otherwise involved in the project” Cohen et al. (2005) J of Speech Language and Hearing Res. 48, 715-729.

  17. Intention to treat (ITT) • Randomisation abolishes selection bias at baseline; after randomisation some participants may cross over into the opposite treatment group (e.g., fail to take the allocated treatment, or obtain the experimental intervention elsewhere). • There is often a temptation for trialists to analyse the groups as treated rather than as randomised. • This is incorrect and can introduce selection bias.
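The bias from analysing "as treated" can be shown with a toy simulation (all numbers are invented for illustration, not drawn from any study). Here the intervention has no true effect, but the most able control-group members cross over to it; the as-treated contrast then looks beneficial, while the ITT contrast correctly stays near zero.

```python
import random
from statistics import mean

random.seed(1)

rows = []  # (allocated, received, outcome)
for i in range(2000):
    ability = random.gauss(0, 1)  # prognostic baseline factor
    allocated = "treat" if i % 2 == 0 else "control"
    # crossover: the most able controls obtain the intervention anyway
    received = "treat" if allocated == "control" and ability > 1.0 else allocated
    outcome = ability  # true intervention effect is zero
    rows.append((allocated, received, outcome))

def contrast(index):
    """Treat-minus-control mean outcome, grouping by column index
    (0 = as allocated, 1 = as received)."""
    t = mean(row[2] for row in rows if row[index] == "treat")
    c = mean(row[2] for row in rows if row[index] == "control")
    return t - c

itt = contrast(0)         # analysed as randomised (ITT): near zero
as_treated = contrast(1)  # analysed as treated: spuriously positive
print(f"ITT: {itt:+.3f}   as-treated: {as_treated:+.3f}")
```

The as-treated estimate is biased upwards because the crossover is driven by the same prognostic factor that drives the outcome, which is exactly the selection bias the slide warns about.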

  18. ITT analysis: examples • Seven participants allocated to the control condition (1.6%) received the intervention, whilst 65 allocated to the intervention (15%) failed to receive treatment. The authors, however, analysed by randomised group (the CORRECT approach). • “It was found in each sample that approximately 86% of the students with access to reading supports used them. Therefore, one-way ANOVAs were computed for each school sample, comparing this subsample with subjects who did not have access to reading supports.” (the INCORRECT approach) Davis RG, Taylor BG. (1997) Criminology 35, 307-333. Feldman SC, Fish MC. (1991) Journal of Educational Computing Research 7, 25-36.

  19. Statistical power • Few effective educational interventions produce large effect sizes, especially when the comparator group is an ‘active’ intervention. In a tightly controlled setting, 0.5 of a standard deviation difference at post-test is good. Smaller effect sizes in field trials are to be expected (e.g. 0.25). To detect an effect size of 0.5 with 80% power (two-sided α = 0.05), we need 128 participants for an individually randomised experiment.

  20. Survey of trial quality Torgerson CJ, Torgerson DJ, Birks YF, Porthouse J. (2005) A comparison of randomised controlled trials in health and education. British Educational Research Journal, 31: 761-785. (based on n = 168 trials)

  21. CONSORT • Because the majority of health care trials were badly reported, a group of health care trial methodologists developed the CONSORT statement, which indicates key methodological items that must be reported in a trial report. • This has now been adopted by all major medical journals and some psychology journals.

  22. The CONSORT guidelines, adapted for trials in educational research • Was the target sample size adequately determined? • Was intention to teach analysis used? (i.e. were all children who were randomised included in the follow-up and analysis?) • Were the participants allocated using random number tables, coin flip, computer generation? • Was the randomisation process concealed from the investigators? (i.e. were the researchers who were recruiting children to the trial blind to the child’s allocation until after that child had been included in the trial?) • Were follow-up measures administered blind? (i.e. were the researchers who administered the outcome measures blind to treatment allocation?) • Was precision of effect size estimated (confidence intervals)? • Were summary data presented in sufficient detail to permit alternative analyses or replication? • Was the discussion of the study findings consistent with the data?

  23. Flow Diagram • In health care trials reported in the main medical journals authors are required to produce a CONSORT flow diagram.

  24. Revision of the CONSORT Statement for Non-Pharmacological Trials • To what extent has the CONSORT Statement been adapted for use with trials other than medical trials? • NPT – Adoption of the CONSORT statement with non-pharmacological trials: • What do we mean by non-pharmacological trials? • Evaluations of surgery • Use of technical devices • Therapy

  25. NPT Checklist • Includes a ten-item checklist • Includes issues relating to: • The standardisation of the intervention • Care provider influences • Measures to minimise bias from a lack of blinding (Boutron et al., 2005, Journal of Clinical Epidemiology, 58(12), 1233-1240)

  26. Other applications of the CONSORT Statement in Criminal Justice • RCTs have been conducted in criminal justice since the 1930s • Numerous reports lack key methodological information • The emergence of meta-analytic techniques • A requirement to compare like with like across studies

  27. Reporting transparency or descriptive validity (continued) • Checklist of standardised items [see Farrington, 2003] • Application of the CONSORT Statement (Perry & Johnson, 2008, Journal of Experimental Criminology, 4, 165-185; Perry, Weisburd & Hewitt, in press, Journal of Experimental Criminology)

  28. Allocation Concealment Deschenes, E.P., Turner, S., & Petersilia, J. (1995). A dual experiment in intensive community supervision: Minnesota’s prison diversion and enhanced supervised release programs. Prison Journal, 75, 330-356.

  29. Comparing criminal justice trials and health care trials: Allocation concealment

  30. Examples of Blinding Lao, L., Bergman, S., Hamilton, G.R., Langenberg, P., Berman, B. (1999). Evaluation of acupuncture for pain control after oral surgery. Archives of Head and Neck Surgery, 125, 567-572. Latessa, E.J. & Moon, M.M. (1992). The effectiveness of acupuncture in an outpatient drug treatment program. Journal of Contemporary Criminal Justice, 8, 317-331.

  31. Comparing criminal justice trials and health care trials: Blinding

  32. Examples of Intention to Treat Campbell, F.A., Ramey, C.T., Pungello, E., Sparling, J. & Miller-Johnson, S. (2002). Early childhood education: Young adult outcomes from the Abecedarian project. Applied Developmental Science, 6, 42-57.

  33. Examples of Statistical power Morrell CJ, Spiby H, Stewart P, Walters S, Morgan A. (2000). Costs and effectiveness of community postnatal support workers: randomised controlled trial. British Medical Journal, 321, 593-8.

  34. Summary • A requirement to improve the uptake of quality assessment tools in the social sciences • Crucial to report information in a transparent manner • Evidence suggests that without this, bias may be introduced into: • Study outcomes • Meta-analyses used to combine studies

  35. Summary • No RCT is perfect • BUT • Quality tools will aid the interpretation of evidence and reporting of results for: • Researchers • Policy makers • Reviewers
