
Introduction to Critical Appraisal



Presentation Transcript


  1. Introduction to Critical Appraisal Debra Thornton Knowledge and Library Services Manager Blackpool, Fylde and Wyre Hospitals NHS Trust

  2. Objectives • Introduction to critical appraisal • Definition, differences, strengths and weaknesses of systematic reviews and meta-analyses • Sources of systematic reviews/meta-analyses • Levels of Evidence • Interpretation of basic statistics in meta-analyses – confidence intervals, forest plots • Critical appraisal of systematic reviews/meta-analyses

  3. What is critical appraisal? • Balanced assessment of the benefits/strengths and flaws/weaknesses of a study • Assessment of research process and results • Consideration of quantitative and qualitative aspects

  4. Critical appraisal is not • Negative dismissal of any piece of research • Assessment on results alone • Based entirely on statistical analysis • Undertaken by experts only

  5. Why critically appraise? • To find out the validity of the study • are the methods robust? • To find out the reliability of the study • what are the results and are they credible? • To find out the applicability of the study • is it important enough to change my practice?

  6. How do I critically appraise the research? • Believe everything • Believe papers in high quality journals • Read & decide yourself • Let other people read and decide for you • Read for yourself and make a structured appraisal

  7. What do I need to know? • Awareness of study designs • Levels of evidence • Statistics!! • CA checklists • CA resources

  8. What is a systematic review? • A review that has been prepared using some kind of systematic approach to minimising biases and random errors, and in which the components of the approach are documented in a materials and methods section Chalmers et al, 1995

  9. What is a systematic review? (Diagram: systematic reviews shown as a subset of all reviews)

  10. Rationale for systematic reviews • Information overload • Publication bias • Poor quality of reviews • Vitamin C and the prevention of the common cold (Pauling 1986) • Missing link • Inhalation of hexamethonium (comment by Clark et al, 2001)

  11. Sources of systematic reviews • The Cochrane Library • www.library.nhs.uk • DARE (in Cochrane Library ‘Other reviews’) • Health Technology Assessments (in Cochrane Library ‘Technology Assessments’) • Medline, Cinahl, Embase search on ‘systematic review’ in title, abstract • PubMed – Systematic Review in Limits > Topic • TRIP • www.tripdatabase.com

  12. Format of a systematic review • Formulation of a review question • Define inclusion/exclusion criteria • Locate studies • Select studies (inclusion/exclusion) • Assess study quality • Data extraction • Analyse and present results • Interpretation of results Egger et al, 2001

  13. Formulation of review question • Is the question focused in terms of • Population studied • Intervention/exposure given • Outcomes considered • Do anticoagulants prevent strokes in patients with atrial fibrillation?

  14. Define inclusion/exclusion criteria • Were the right types of studies included to answer the question? Depends on the question. • Can have observational studies (cohort, case-control), diagnostic/screening tests, prognostic, non-randomised trials • Studies should be defined according to their design, participant characteristics, interventions and outcomes

  15. Locate studies • Comprehensive search • Databases • Conference proceedings • Hand searching • Grey literature (reports, research registers) • Foreign language • Follow-up references • Contacting experts/authors • Publication bias – unpublished studies • Explicit

  16. Select and Assess Studies • Eligibility criteria for study selection can be applied • More than one reviewer can help reduce bias • Checklists/scoring systems

  17. What do the findings mean? • Effect measures – odds ratios, relative risk, mean difference • P-values • Confidence intervals

  18. Using statistics • Assess the weight of the evidence that a treatment works (or doesn’t) • Give an estimate (and likely range) of the treatment effect • Test to see how likely it is that this effect would have been seen by chance

  19. Odds ratio (OR) • The odds of an event are the chance of having the event divided by the chance of not having it • The odds ratio compares the odds in two different groups: OR = odds in the treated group / odds in the control group

  20. OR=1 treatment has identical effect to control • OR<1 the event is less likely in the treated group than in the control group (i.e. the treatment reduces the odds of having the event) • OR>1 the event is more likely in the treated group than in the control group (the treatment increases the odds of having the event) • Clinical trials typically look for treatments which reduce event rates, and which therefore have odds ratios of less than one
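To make the arithmetic concrete, here is a minimal Python sketch (not part of the original slides) that computes an odds ratio from a 2×2 table; all the counts are invented for illustration.

```python
# Minimal sketch: an odds ratio from a hypothetical 2x2 table.
# All counts below are invented for illustration.
events_treated, no_events_treated = 20, 80   # treated group: 20 events in 100 patients
events_control, no_events_control = 40, 60   # control group: 40 events in 100 patients

odds_treated = events_treated / no_events_treated   # 0.25
odds_control = events_control / no_events_control   # about 0.67

odds_ratio = odds_treated / odds_control
print(f"OR = {odds_ratio:.2f}")  # OR ≈ 0.38, i.e. < 1: the event is less likely with treatment
```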

  21. Importance of defining the outcome

  22. P-values – significance test • A p-value is a measure of statistical significance: it tells us the probability of seeing a difference at least as large as the one observed if chance alone were operating (i.e. if the two interventions really had the same effect) • P-value results range from 0 to 1 • The closer the p-value is to zero, the less likely it is that chance alone explains the difference between the two interventions

  23. Statistical significance • In general, a p-value of either 0.05 or 0.01 is used as the cut-off, although the choice of value is arbitrary • Results larger than the cut-off are considered compatible with chance, while results smaller than the cut-off suggest a real difference (i.e. the result is less likely to be due to chance) • P-value of <0.05 indicates the result is unlikely to be due to chance • P-value of >0.05 indicates the result might have occurred by chance
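As an illustration of how such a p-value might be obtained in practice, the sketch below applies Fisher's exact test (via SciPy) to the same invented 2×2 table used in the odds ratio example; the counts and the choice of test are assumptions for the example, not part of the slides.

```python
# Minimal sketch: is the observed difference compatible with chance?
# Fisher's exact test on the invented counts from the odds ratio example.
from scipy.stats import fisher_exact

table = [[20, 80],   # treated: events, no events
         [40, 60]]   # control: events, no events

odds_ratio, p_value = fisher_exact(table)
print(f"p = {p_value:.4f}")
# By the usual convention, p < 0.05 is read as "unlikely to be due to chance",
# p > 0.05 as "the result might have occurred by chance".
```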

  24. Significant at cut-off? • P=0.045 • P<0.001 • P>0.5

  25. Be careful… • A p-value in the non-significant range tells you either that there is no difference between the groups or that there were too few subjects to demonstrate such a difference (ideally confidence intervals should also be reported) • There is not much difference between p=0.049 and p=0.051 • P-values do not indicate the magnitude of the difference between treatments, which is what is needed to judge clinical significance

  26. Interpretation of Confidence Intervals • A confidence interval is the range within which we have a stated degree of certainty that the true population value lies • Put another way: the confidence interval around a result obtained from a study sample (point estimate) indicates the range of values within which there is a specific certainty (usually 95%) that the true population value for that result lies. (MeReC Briefing 2005)

  27. What can a CI tell us? • Tells us whether the result is statistically significant or not • The width of the interval indicates precision: wider intervals suggest less precision • Indicates whether the evidence is strong or weak • The most commonly used confidence level is 95%; the 95% CI is therefore the range within which we are 95% certain that the true population value lies

  28. Confidence Intervals reported on Ratios (odds ratio, etc) • For ratio measures the ‘line of no effect’ is at 1 • If a CI for an RR or OR includes 1 (the line of no effect) then we are unable to demonstrate a statistically significant difference between the two groups
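A hedged sketch of how a 95% CI around an odds ratio is commonly approximated, using the standard error of the log odds ratio; the 2×2 counts are again invented for illustration.

```python
# Minimal sketch: 95% confidence interval around an odds ratio using the
# usual log-odds approximation. Counts are hypothetical.
import math

a, b = 20, 80   # treated: events, no events
c, d = 40, 60   # control: events, no events

or_point = (a / b) / (c / d)
se_log_or = math.sqrt(1/a + 1/b + 1/c + 1/d)              # SE of ln(OR)
lower = math.exp(math.log(or_point) - 1.96 * se_log_or)
upper = math.exp(math.log(or_point) + 1.96 * se_log_or)

print(f"OR = {or_point:.2f}, 95% CI {lower:.2f} to {upper:.2f}")
# If the interval included 1 (the line of no effect), we could not demonstrate
# a statistically significant difference between the groups.
```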

  29. CIs around an Odds Ratio Trial to examine the effect of probiotics on the risk of antibiotic associated diarrhoea D'Souza, A. L et al. BMJ 2002;324:1361

  30. What is a meta-analysis? • A statistical analysis of the results from independent studies, which generally aims to produce a single estimate of the treatment effect Egger et al, 2001
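To illustrate what "a single estimate of the treatment effect" means in practice, here is a minimal fixed-effect (inverse-variance) pooling sketch; the trial odds ratios and standard errors are invented, and a real meta-analysis would normally use dedicated software and consider heterogeneity.

```python
# Minimal sketch: fixed-effect, inverse-variance pooling of log odds ratios
# from three hypothetical trials.
import math

trials = [(0.40, 0.35), (0.55, 0.30), (0.70, 0.25)]   # (OR, SE of ln(OR)) per invented trial

weights = [1 / se**2 for _, se in trials]              # inverse-variance weights
pooled_log_or = sum(w * math.log(or_) for (or_, _), w in zip(trials, weights)) / sum(weights)
pooled_se = math.sqrt(1 / sum(weights))

pooled_or = math.exp(pooled_log_or)
lower = math.exp(pooled_log_or - 1.96 * pooled_se)
upper = math.exp(pooled_log_or + 1.96 * pooled_se)
print(f"Pooled OR = {pooled_or:.2f} (95% CI {lower:.2f} to {upper:.2f})")
# For these invented trials this gives roughly 0.57 (0.41 to 0.79).
```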

  31. Interpretation of forest plots • Look at the title of the forest plot, the intervention, the outcome effect measure of the investigation and the scale • The names on the left are the authors of the primary studies included in the MA • The small squares represent the individual trial results (point estimates) • The size of each square represents the weight given to that study in the meta-analysis • The horizontal lines associated with each square represent the confidence interval associated with each result • The vertical line represents the line of no effect, i.e. where there is no statistically significant difference between the treatment/intervention group and the control group • The pooled analysis is shown as a diamond; the horizontal width of the diamond is its confidence interval
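The features listed above can be reproduced in a simple plot. The sketch below draws a bare-bones forest plot with matplotlib from the same invented trial data as the pooling example; the study names, estimates and intervals are all hypothetical.

```python
# Minimal sketch of a forest plot: squares and horizontal lines for each study,
# a dashed vertical line of no effect at 1, and a diamond for the pooled result.
import matplotlib.pyplot as plt

studies = ["Trial A", "Trial B", "Trial C"]   # hypothetical study IDs
ors     = [0.40, 0.55, 0.70]                  # point estimates (odds ratios)
lowers  = [0.20, 0.31, 0.43]                  # lower 95% CI limits
uppers  = [0.79, 0.99, 1.14]                  # upper 95% CI limits
pooled  = (0.57, 0.41, 0.79)                  # pooled OR and its 95% CI

fig, ax = plt.subplots()
ys = range(len(studies), 0, -1)               # list studies from top to bottom
for y, or_, lo, hi in zip(ys, ors, lowers, uppers):
    ax.plot([lo, hi], [y, y], color="black")    # confidence interval line
    ax.plot(or_, y, marker="s", color="black")  # point estimate square
ax.plot([pooled[1], pooled[2]], [0, 0], color="black")
ax.plot(pooled[0], 0, marker="D", color="black")  # pooled estimate diamond
ax.axvline(1, linestyle="--")                 # line of no effect for ratio measures
ax.set_xscale("log")                          # ratio measures are usually plotted on a log scale
ax.set_yticks(list(ys) + [0])
ax.set_yticklabels(studies + ["Pooled"])
ax.set_xlabel("Odds ratio (log scale)")
plt.show()
```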

  32. Effect of probiotics on the risk of antibiotic associated diarrhoea D'Souza, A. L et al. BMJ 2002;324:1361

  33. The label tells you what the comparison and outcome of interest are Effect of probiotics on the risk of antibiotic associated diarrhoea

  34. Scale measuring treatment effect. Take care when reading labels! Effect of probiotics on the risk of antibiotic associated diarrhoea

  35. Each study has an ID (author) Effect of probiotics on the risk of antibiotic associated diarrhoea

  36. Treatment effect sizes for each study (plus 95% CI) Effect of probiotics on the risk of antibiotic associated diarrhoea

  37. Horizontal lines are confidence intervals. Diamond shape is the pooled effect; horizontal width of the diamond is its confidence interval Effect of probiotics on the risk of antibiotic associated diarrhoea

  38. The vertical line in the middle is the line of no effect. For ratios this is 1, for means this is 0 Effect of probiotics on the risk of antibiotic associated diarrhoea

  39. Rationale for meta-analysis Conventional and cumulative meta-analysis of 33 trials of intravenous streptokinase for acute myocardial infarction. Mulrow, C D BMJ 1994;309:597-599

  40. Advantages of a systematic review/meta-analysis • Limits bias in identifying and excluding studies • Objective • Good quality evidence, more reliable and accurate conclusions • Added power by synthesising individual study results • Control over the volume of literature

  41. Drawbacks to systematic reviews/meta-analyses • Can be done badly • Two systematic reviews on the same topic can reach different conclusions • Inappropriate aggregation of studies • A meta-analysis is only as good as the papers included • Tend to look at ‘broad questions’ that may not be immediately applicable to individual patients

  42. Conclusion • Critical appraisal of systematic reviews and other research is well within your capabilities • Use a recognised checklist (e.g. CASP) • Update your literature searching skills regularly (contact your library skills trainer)

  43. Acknowledgement • With thanks to Michelle Maden, Clinical Information Specialist, University Hospital Aintree

  44. Critical appraisal checklists • CASP (Critical Appraisal Skills Programme) • http://www.phru.nhs.uk/casp/critical_appraisal_tools.htm • JAMA Users’ Guides to the Medical Literature • http://www.cche.net/usersguides/main.asp • Crombie I (1996) The Pocket Guide to Critical Appraisal, BMJ Books, London • Greenhalgh T (2001) How to Read a Paper, BMJ Books, London • BestBETs CA database • http://www.bestbets.org/cgi-bin/browse.pl?~show=appraisal

  45. References • Systematic reviews in health care [electronic resource]: meta-analysis in context / edited by Matthias Egger, George Davey Smith, and Douglas G. Altman. BMJ Books 2001 (ebook) • What is a systematic review?, What is a meta-analysis?, What are confidence intervals? • http://www.evidence-based-medicine.co.uk/what_is_series.html • Understanding systematic reviews and meta-analysis. Akobeng AK. Archives of Disease in Childhood 2005;90:845-848.

  46. References • Cochrane Open Learning Material: Systematic Reviews and Meta-analyses (useful Forest Plot interpretation PDF) • http://www.cochrane-net.org/openlearning/HTML/mod3-2.htm • Funnel plots • Bias in meta-analysis detected by a simple, graphical test. Egger M, et al BMJ 1997 (315):629-634 • The case of the misleading funnel plot. Lau J, et al. BMJ 2006 (333):597-600 • Heterogeneity • What is heterogeneity and is it important? Fletcher J BMJ 2007;334:94-6
