Clinical trials and pitfalls in planning a research project - PowerPoint PPT Presentation

Presentation Transcript
slide1

Clinical trials and pitfalls in planning a research project

Dr. D. W. Green

Consultant Anaesthetist

King's College Hospital

Denmark Hill

London SE5 9RS

with grateful thanks to Professor Alan Aitkenhead

slide2

Seven deadly scientific sins

  • insufficient information
  • poor research
  • inadequate sample size
    • no power analysis
  • no confidence intervals
  • bias
  • confounding factors, e.g. mixed sexes for PONV
  • vague end points
    • e.g. severity of pain not clearly defined
  • straying from the hypothesis
slide3

New Drugs: Types of study

  • Laboratory .... structure/activity analysis
  • Animal .... does it work in animals? Is it toxic?
  • Human volunteers
      • Phase 1 .... Is it toxic?
      • Phase 2 .... Does it work?
      • Phase 3 .... Does it work better than existing drugs?
      • Phase 4 .... Post-marketing surveillance: what is it like in the real world?
slide4

Background

  • has it been done before?
  • is it worth doing?
    • clinically or scientifically: an essential step
  • has anything similar been done before?
  • what methods have others used?
slide5

Protocol

  • Introduction
    • background information
    • justification: why, what gap will it fill, what benefits
    • succinct, but don't omit relevant information
slide6

Methodology: Ethics and consent

  • Crucial
  • Declaration of Helsinki
    • benefit to patients
    • benefit to society
  • Information to patients
    • purpose and what it involves
    • potential benefits, ability to withdraw
    • risks and disadvantages without prejudice
  • children and incompetent adults
slide7

Selection of patients

  • Age
    • efficacy and current disease
  • ASA status
  • Sex
    • pharmacokinetics and pharmacodynamics
    • e.g. PONV
  • Type of surgery
    • applicability and availability
  • Ability to give consent e.g. ICU
  • Pregnancy
slide8

Designs

  • prospective vs retrospective
  • open vs blind (single or double)
  • randomisation
    • acceptable methods, e.g. envelopes opened only after entry into the trial
  • use of placebo
    • ethics and other treatments
  • block design
    • blocks of patients: analyse after each block, so the trial can stop as soon as results are available
  • stratification
  • sequential analysis
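The randomisation and block-design bullets above can be sketched in a few lines. This is a generic permuted-block scheme, not the sealed-envelope procedure the slide mentions; the function name, arm labels, and fixed seed are illustrative assumptions:

```python
import random

def block_randomise(n_blocks, block_size=4, arms=("A", "B")):
    """Permuted-block randomisation: each block contains equal numbers of each arm."""
    allocation = []
    for _ in range(n_blocks):
        block = list(arms) * (block_size // len(arms))
        random.shuffle(block)               # random order within the block
        allocation.extend(block)
    return allocation

random.seed(42)                             # fixed seed for a reproducible illustration
seq = block_randomise(3)
print(seq)   # a sequence like ['B', 'A', 'A', 'B', ...], balanced in every block of 4
```

Balance within every block is what lets an interim analysis be run after each block without the arms drifting out of step.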
slide9

Pitfalls

  • Funding: salaries, drugs, equipment and investigations, e.g. NHS costs
  • statistics and data collection design
  • time .... how long do we go on for?
  • negative result .... do (should) we publish?
  • contradictory results vs other studies
  • statistical vs clinical effects
  • rival investigators
slide10

Assessment and measurements

  • which techniques
    • validity, accuracy, objectivity, analysis
  • which observer
    • blinded? nurses? how many make the measurement? are they trained?
  • how often
    • science, statistics, practicality over long periods, placebo effect of frequent assessments
  • number of variables .... the fewer the better
  • availability of the test, e.g. troponin T
slide11

Documentation

  • Ethics committee approval
  • patient information
  • data collection forms
    • data type, storage, security, confidentiality, safety
  • consent forms
Disproving the null hypothesis
  • The ‘null’ hypothesis is that there is no difference between the treatments
  • the probability value ‘p’ tells you how often a difference as large as the one observed could have occurred by chance alone
  • p < 0.05 is 1 in 20 or less (statistically significant)
  • p < 0.01 is 1 in 100 or less (highly statistically significant)
Disproving the null hypothesis
  • Type I error is where a difference is shown which could have occurred by chance
  • 1 in 20 trials will show a difference where none exists if ‘p’ is reported at the 0.05 level
  • multiple subgroup analysis in a trial may also give subgroup treatment differences
  • a statistically significant result is more likely to be reported!
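The 1-in-20 figure is easy to demonstrate by simulation. The sketch below (function names are mine, and it assumes a pooled two-proportion z-test with the normal approximation) repeatedly compares two arms that share an identical event rate and counts how often p < 0.05 arises by chance:

```python
import random
from math import sqrt
from statistics import NormalDist

def two_proportion_p(x1, x2, n):
    """Two-sided p-value from a pooled two-proportion z-test (normal approximation)."""
    pooled = (x1 + x2) / (2 * n)
    se = sqrt(pooled * (1 - pooled) * 2 / n)
    z = abs(x1 / n - x2 / n) / se
    return 2 * (1 - NormalDist().cdf(z))

random.seed(1)                              # fixed seed so the sketch is reproducible
trials, n, rate = 2000, 100, 0.3            # both arms share the same 30% event rate
false_positives = 0
for _ in range(trials):
    x1 = sum(random.random() < rate for _ in range(n))
    x2 = sum(random.random() < rate for _ in range(n))
    if two_proportion_p(x1, x2, n) < 0.05:
        false_positives += 1

print(false_positives / trials)             # close to 0.05, i.e. about 1 in 20
```

The same mechanism explains the subgroup-analysis hazard: run twenty subgroup comparisons under the null and one "significant" difference is expected by chance.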
Disproving the null hypothesis
  • Type II error is showing no difference where one actually exists
  • almost always due to insufficient numbers
  • can mask beneficial treatment effects
  • BUT! if trial is large enough it may produce a statistically significant effect where the clinical significance is marginal
slide15

Size of study

  • Power of the study to show a difference in Rx
    • (e.g. 70% chance of demonstrating a 15% difference at p < 0.05)
  • able to disprove the null hypothesis with minimal or no Type II error
  • may require a pilot study to estimate treatment differences
  • requires large numbers if differences are small or if there is great variability in treatment outcomes
  • lower power (smaller numbers) may be acceptable if the outcome is important (e.g. leukaemia)
Assessment of population size

15% of patients die within one year of admission to hospital for suspected myocardial infarction. Preventing 1/3rd of these deaths would be a major advance. Roughly, how many patients are needed for a clinical trial if doctors want to be 90% sure that a difference between treatments as large as the prevention of 1/3rd of deaths will not be missed at the p < 0.05 level?
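A rough answer can be obtained from the standard normal-approximation sample-size formula for comparing two proportions. In the sketch below (the function name is mine), preventing a third of a 15% mortality is read as reducing it to 10%:

```python
from math import sqrt, ceil
from statistics import NormalDist

def sample_size_two_proportions(p1, p2, alpha=0.05, power=0.90):
    """Patients per arm to detect p1 vs p2 (normal approximation, two-sided alpha)."""
    z_a = NormalDist().inv_cdf(1 - alpha / 2)   # 1.96 for alpha = 0.05
    z_b = NormalDist().inv_cdf(power)           # 1.28 for 90% power
    p_bar = (p1 + p2) / 2
    num = (z_a * sqrt(2 * p_bar * (1 - p_bar))
           + z_b * sqrt(p1 * (1 - p1) + p2 * (1 - p2))) ** 2
    return ceil(num / (p1 - p2) ** 2)

# 15% mortality, reduced by a third to 10%, 90% power at p < 0.05:
n = sample_size_two_proportions(0.15, 0.10)
print(n)   # roughly 900 patients per arm, i.e. over 1,800 in total
```

The size of the answer illustrates the slide's point: even a clinically major effect needs a large trial when the absolute difference between arms is only a few percent.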

Presentation of results
  • Significance: clinical versus statistical
  • p values
  • confidence intervals (95%) (+/- 2 SE)
  • risk reduction (relative and absolute)
  • numbers needed to treat
  • odds ratios
Measures of risk reduction
  • Relative risk reduction .... is it meaningful?
  • Headline: “50% reduction in mortality”
    • if baseline mortality is 50/100 this is great (down to 25/100)
    • if baseline mortality is 1/100 it falls only to 0.5/100 (1 in 200)
  • Number needed to treat is a better measure
    • reciprocal of the absolute risk reduction, e.g. 4 in the first case (ARR 25/100)
    • 200 in the second case (ARR 0.5/100)
  • If the cost of treatment is £10,000 ………. !!
Number needed to treat
  • Control event rate is 9 cases in 30 (0.3)
  • Experimental event rate is 1 case in 29 (0.033)

Then, NNT = 1/(CER - EER)

= 1/(0.3 - 0.033)

≈ 4 (3.7, rounded up)

  • This measure corrects for both relative and absolute risk by relating the effect to the control event rate
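The formula above can be checked in a couple of lines (a minimal sketch; names are illustrative):

```python
from math import ceil

def nnt(cer, eer):
    """Number needed to treat: reciprocal of the absolute risk reduction."""
    return 1 / (cer - eer)

value = nnt(9 / 30, 1 / 29)   # CER = 0.30, EER = 0.034
print(ceil(value))            # prints 4
```

Rounding up is conventional, since NNT counts whole patients.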
Number needed to treat
  • Diabetic neuropathy 6.5 year prospective trial
    • 9.6% developed DN (conventional)
    • 2.8% developed DN (intensive treatment)
  • Relative risk reduction = (9.6 - 2.8)/9.6 = 71%
  • Absolute risk reduction = 9.6 - 2.8 = 6.8%
  • Number needed to treat = 1/0.068 ≈ 15: treat 15 people for 6.5 years to prevent one case of DN
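The same three measures can be recomputed from the diabetic neuropathy percentages (a sketch using the slide's figures):

```python
cer, eer = 0.096, 0.028   # 9.6% vs 2.8% developed diabetic neuropathy

rrr = (cer - eer) / cer   # relative risk reduction
arr = cer - eer           # absolute risk reduction
nnt = 1 / arr             # number needed to treat

print(f"RRR = {rrr:.0%}, ARR = {arr:.1%}, NNT = {nnt:.0f}")
# RRR = 71%, ARR = 6.8%, NNT = 15
```

Note how an impressive-sounding 71% relative reduction corresponds to a modest 6.8% absolute one, which is exactly the contrast the previous slide warns about.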
Odds ratios
  • ORs are used where it is difficult to calculate the relative risk, e.g. case-control studies
  • A value greater than 1 indicates increased risk
  • Confidence intervals (95%) give the overall picture (e.g. if the CI crosses 1 then the result may not be significant)
Odds ratio calculation
  • Calculated as the odds of the event in one group divided by the odds in the other
  • here, the experimental odds (1/28) divided by the control odds (9/21) ≈ 0.08
  • The relationship between OR and NNT is not linear and is very confusing .... even to statisticians!
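For the NNT example earlier (1 event among 29 experimental patients, 9 among 30 controls), the odds ratio and a 95% confidence interval can be sketched using the standard error of ln(OR) (Woolf's method); note that odds put non-events in the denominator, so the experimental odds are 1/28, not 1/29:

```python
from math import exp, log, sqrt

def odds_ratio(events_exp, total_exp, events_ctl, total_ctl):
    """Odds ratio with a 95% CI from the standard error of ln(OR) (Woolf's method)."""
    a, b = events_exp, total_exp - events_exp   # experimental events / non-events
    c, d = events_ctl, total_ctl - events_ctl   # control events / non-events
    or_ = (a / b) / (c / d)
    se = sqrt(1 / a + 1 / b + 1 / c + 1 / d)
    lo, hi = exp(log(or_) - 1.96 * se), exp(log(or_) + 1.96 * se)
    return or_, lo, hi

or_, lo, hi = odds_ratio(1, 29, 9, 30)
print(f"OR = {or_:.2f}, 95% CI {lo:.2f} to {hi:.2f}")
# OR = 0.08, 95% CI 0.01 to 0.71
```

Here the whole interval lies below 1, so by the rule on the previous slide the result would be regarded as significant.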
Evidence based medicine

The process of systematically finding, appraising and using contemporaneous research findings as a basis for clinical decisions

Evidence based medicine
  • Accurate identification of the clinical question to be investigated
  • a search of the literature to select relevant articles
  • evaluation of the evidence
  • implementation of the findings into clinical practice