Statistics for the Terrified: Trial Design and Sample Size



Andrea Marshall

Mark Williams

Learning Objectives
By the end of this session you will be aware of

The different types of trial designs and why we use them

Importance of sample size and the requirements needed to be able to calculate it

Types of statistical errors

Statistical power

Clinical Trial
  • A prospective experiment
  • Designed to assess the effect of one or more interventions in a group of patients
Phase II Studies
  • Generally small studies
  • 1 or more interventions
  • Provide initial assessment of efficacy
  • Identify promising new interventions for further evaluation and screen out ineffective interventions
  • Not able to compare interventions
Choice of control
  • Uncontrolled – no control group – may be useful for small phase II studies looking to show an intervention is feasible or effective
  • Historical control – trial compares patients treated with new intervention with earlier series of patients – but may not be truly comparable
  • Controlled trial –intervention is compared to standard in patients over the same period of time
  • Matched controls – patients allocated to intervention are matched to those on the standard intervention according to specific characteristics
Types of Phase III trial design
  • Parallel group
  • Factorial
  • Cross-over
  • Cluster-randomised
  • Adaptive designs
Parallel group design
  • The simplest design
  • Compares at least one intervention with the standard in patients at the same time
  • Can have more than two arms, i.e. 2 or more intervention options, but you have to be confident you can recruit enough patients
  • Patients on each arm are similar and only the allocated interventions differ
  • Allocation to trial arms is made at random
Example - Parallel group design
  • SARAH trial: Assessing the effectiveness of an exercise programme over and above usual care
  • Sample: People with rheumatoid arthritis of the hand
  • Randomised between two arms: Usual care, or Usual care + exercise

Factorial design
  • Allows more than one question to be answered using the same patients, and therefore fewer patients are needed
  • Useful if interested in two types of therapy
  • An alternative to running 2 separate parallel group trials
Factorial design (2)
  • 1. Looking at role of sunscreen, i.e. sunscreen versus no sunscreen
  • 2. Looking at role of betacarotene, i.e. betacarotene versus no betacarotene
Factorial design (4) Example



The trial answers both questions at once; each patient falls into one of four groups:

                   Betacarotene tabs   Placebo tabs   Total
  Sunscreen        n=404               n=408          n=812
  No sunscreen     n=416               n=393          n=809
  Total            n=820               n=801

  • 1. Role of sunscreen: Sunscreen (n=812, Sunscreen+BC and Sunscreen+placebo) versus No sunscreen (n=809, BC and placebo tabs only)
  • 2. Role of betacarotene: BC (n=820, Sunscreen+BC and BC tabs only) versus No BC (n=801, Sunscreen+placebo and placebo tabs only)

Crossover designs
  • Patients are allocated to a sequence of two or more interventions
  • Classified by number of interventions allocated to each patient
    • i.e. 2-period crossover = each person receives each of the 2 interventions in a random order


Crossover designs (2)
  • Limited to situations where:
    • Disease is chronic and stable
    • Different interventions can be administered to same patient for a short period of time
    • Patients return to the same state after each intervention to avoid a carry-over effect
  • Wash out period should be long enough for complete reversibility of intervention effect
  • Generally fewer patients required as interventions evaluated within the same patients
Cross over design (3)- example
  • Engleman et al (1999) Randomised placebo-controlled cross-over trial of CPAP for mild sleep apnea/hypopnea syndrome


Cluster design
  • Patients within any one cluster are often more likely to respond in a similar manner
  • Intervention is delivered to or affects groups
    • E.g. Group exercise program
  • Intervention is targeted at health professionals
    • E.g. Education regarding disease management
  • Risk of contamination between individuals
Cluster design (2) E.g. OPERA

  • Clusters: Care homes
  • Intervention: Physical activation programme
  • Control: Depression awareness

Adaptive designs
  • Ability to modify the study without undermining the validity and integrity of the trial
  • Can save time and money
  • Need short-term evaluable outcomes
  • Possible modifications include:
    • Re-estimating sample size
    • Stopping trial early
    • Dropping interventions
    • Changing the eligibility criteria
Size of trial
  • Important to determine precisely at the design stage
  • Minimum number of subjects
    • To reliably answer the question
    • Avoid patients unnecessarily receiving an inferior treatment
  • Want trials to have
    • High chance of detecting clinically important differences
    • Small chance of observing differences when they do not exist
Sample Size
In order to calculate the required sample size we need to know:

Trial design

Primary outcome and how it will be analysed

Hypothesis being tested – superiority/non-inferiority/equivalence

Significance level (generally set to 5%)

Power (generally 80–90%)

Additional information
  • What is expected on the standard arm (based on previous trials or experience)
  • Size of a clinically relevant difference
  • Expected dropout/loss to follow-up rate
Primary outcome
  • Ideally only one
  • Must be pre-defined
  • Validated
  • Most clinically relevant outcome
  • Able to provide convincing evidence to answer the primary objective
Types of primary outcomes
  • Binary – yes/no,
    • E.g. toxicity suffered or response to intervention
  • Continuous
    • E.g. Quality of life scores from the SF-12
  • Time to event
    • E.g. Overall survival or time to hospital discharge
Difference/Superiority trials
  • To determine the effectiveness of a new intervention relative to a control
  • Need to know what is a clinically relevant improvement/difference
  • If no statistically significant difference is found, we cannot conclude the interventions are equivalent, only that there is NOT sufficient evidence to prove a difference

“Absence of evidence is not evidence of absence”

Non-inferiority trials
  • To determine if a new intervention is no worse (by a certain amount) than the control, but may be associated with e.g. less severe toxicity or better quality of life
  • Need to know the largest difference to be judged clinically acceptable
  • One sided test as only looking at whether no worse
  • Generally need more patients than superiority trials
Equivalence trials
  • Trying to show that a new intervention only differs by a clinically unimportant difference
  • Need to know the size of equivalence margin, i.e. a difference which is clinically unimportant
  • Two sided test
  • Generally need more patients than superiority trials
Statistical errors
  • Type I error (α) – the chance of detecting a difference when there is none
  • Type II error (β) – the chance of failing to detect a difference when it DOES exist
  • Power (1 − β) – the chance of detecting the difference if it DOES exist
Sample size calculations

Important to

  • Allow for dropouts/loss to follow-up in calculations
  • Investigate sensitivity of sample size estimates to deviations from assumptions used
  • Be able to answer secondary outcomes
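The dropout allowance mentioned above is usually a simple inflation; a minimal sketch (the 10% dropout rate is an illustrative assumption, not a figure from the slides):

```python
import math

def inflate_for_dropout(n_required, dropout_rate):
    """Recruit enough patients that, after the expected dropout/loss to
    follow-up, the calculated sample size is still achieved."""
    return math.ceil(n_required / (1 - dropout_rate))

# Illustrative: 304 evaluable patients needed, 10% dropout expected
print(inflate_for_dropout(304, 0.10))  # 338 patients to recruit
```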
Sample size for binary outcome
  • Proportion of successes on control arm (P0)
  • Difference you want to detect or the proportion (P1) expected with the new intervention
  • E.g. With a 5% two-sided significance level (α = 0.05),

80% power for a standard 2-arm parallel group trial with 1:1 allocation, P0 = 0.25 and P1 = 0.40, i.e. a 15% absolute difference in the proportion of responses

Sample size required is 152 patients in each arm giving a minimum total of 304 patients
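The figure can be reproduced with the arcsine (variance-stabilising) approximation for comparing two proportions. This is an assumption about the method used, though it does match the quoted 152 per arm; other standard formulas differ by a few patients:

```python
import math
from statistics import NormalDist

def n_per_arm_binary(p0, p1, alpha=0.05, power=0.80):
    """Per-arm sample size for comparing two proportions in a 1:1 parallel
    group trial, via the arcsine (variance-stabilising) approximation."""
    z_a = NormalDist().inv_cdf(1 - alpha / 2)   # e.g. 1.96 for alpha = 0.05
    z_b = NormalDist().inv_cdf(power)           # e.g. 0.84 for 80% power
    h = 2 * math.asin(math.sqrt(p1)) - 2 * math.asin(math.sqrt(p0))
    return math.ceil(2 * (z_a + z_b) ** 2 / h ** 2)

# Slide example: P0 = 0.25, P1 = 0.40, 5% two-sided significance, 80% power
n = n_per_arm_binary(0.25, 0.40)
print(n, "per arm,", 2 * n, "in total")  # 152 per arm, 304 in total
```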

Changes to the assumptions
  • Numbers given are for the sample size in each arm
Sample size for continuous outcome
  • Difference you want to detect in the means of the outcome and Standard deviation (SD)
  • E.g. In MINT,
    • 1% two-sided significance level (α = 0.01),
    • 90% Power
    • To detect a difference of 3 points on the Neck Disability Index with an estimated SD of 8

Sample size required is 211 patients in each arm

giving a minimum total of 422 patients
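A sketch of the standard normal-approximation formula for comparing two means. Note it gives 212 per arm for the MINT assumptions, one more than the slide's 211; small discrepancies like this arise from rounding conventions and the exact formula used:

```python
import math
from statistics import NormalDist

def n_per_arm_continuous(diff, sd, alpha, power):
    """Per-arm sample size for comparing two means (normal approximation):
    n = 2 * (z_{1-alpha/2} + z_{power})**2 * (sd / diff)**2"""
    z_a = NormalDist().inv_cdf(1 - alpha / 2)
    z_b = NormalDist().inv_cdf(power)
    return math.ceil(2 * (z_a + z_b) ** 2 * (sd / diff) ** 2)

# MINT example: 3-point difference on the Neck Disability Index, SD = 8,
# 1% two-sided significance level, 90% power
print(n_per_arm_continuous(3, 8, 0.01, 0.90))  # 212 per arm
```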

Sample size for continuous outcome (2)
  • Or standardised difference/effect size
    • difference in means divided by the SD
    • small (0.2), medium (0.3) or large (0.6)
Sample size for time to event outcome
  • Survival rate at a particular time-point, e.g. 2 years, or the median survival time (time at which 50% of patients have experienced the event) for those on the control arm
  • Difference wanting to detect
  • Also can depend on
    • Expected recruitment rate
    • Duration of follow-up after recruitment closes
Sample size for time to event outcome (2)

E.g. In COUGAR-2,

  • To detect a 2 month improvement in median overall survival from 4 months on the control arm to 6 months on the intervention arm
  • 5% two-sided significance level (α = 0.05),

90% Power

  • 2 year recruitment period with analysis 6 months after completed recruitment

Sample size required is 146 patients in each arm giving a minimum total of 292 patients
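For time-to-event outcomes the calculation is driven by the number of events rather than patients. A sketch using Schoenfeld's formula (an assumed method, not necessarily the one used in COUGAR-2), which converts the two medians into a hazard ratio under exponential survival:

```python
import math
from statistics import NormalDist

def required_events(median_control, median_new, alpha=0.05, power=0.90):
    """Schoenfeld's formula for the number of EVENTS needed to detect the
    hazard ratio implied by two median survival times, assuming exponential
    survival and 1:1 allocation:
        d = 4 * (z_{1-alpha/2} + z_{power})**2 / ln(HR)**2"""
    z_a = NormalDist().inv_cdf(1 - alpha / 2)
    z_b = NormalDist().inv_cdf(power)
    hr = median_control / median_new   # exponential survival: HR is the inverse ratio of medians
    return math.ceil(4 * (z_a + z_b) ** 2 / math.log(hr) ** 2)

# COUGAR-2 example: median overall survival 4 months (control) vs 6 months
print(required_events(4, 6))  # about 256 events
```

The 292 patients quoted on the slide exceed this event count because patients recruited late or still alive at the analysis contribute no event; the recruitment rate and follow-up duration bullets above determine that gap.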

Sample size for cluster trials
  • Need to inflate the sample size for the primary outcome to take into account clustering, by the design effect = 1 + (n − 1)ρ, where:
    • n = average number to be recruited per cluster
    • ρ = Intracluster correlation coefficient (ICC)
      • Statistical measure of the level of between-cluster variation
      • Values between 0 and 1 (higher values represent greater between cluster variability)
      • 0.05 is often used
Sample size for cluster trials (2)
  • Number of clusters
    • Better to have more clusters than a large number of patients in fewer clusters
    • Even if the overall number of patients is large, the trial will be underpowered if the number of clusters is inadequate
      • E.g. A trial with only 2 clusters is effectively equivalent to a trial with 2 patients
    • Absolute minimum of 4 clusters per arm
Sample size for cluster trials (3) Example
  • E.g. In OPERA
    • Clusters: Residential and nursing accommodation (RNH)
    • Control = depression awareness programme versus Intervention = exercise programme
    • Primary Outcome = Proportion of residents depressed at end of trial (Geriatric depression scale 15 score <5)
    • Clinical important benefit = 15% reduction from 40% for controls to 25% for the intervention
    • 80% power and 5% significance level
    • Allocation of 1.5 controls to 1 intervention

A total sample size of 343 is needed for a patient randomised trial

Sample size for cluster trials (4)
  • To adjust for clustering
    • ICC=0.05
    • Average cluster size =15
    • Design effect = 1.7
    • So Total sample size = 343 * 1.7 = 583
  • A total sample size of 583 patients with assessments at end of the trial is needed (233 in intervention arm and 350 in control arm)
  • At least 39 clusters (RNH) are required (=583/15)
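The OPERA arithmetic above can be checked with a small helper (a sketch of the design-effect inflation, using only numbers from the slides):

```python
import math

def cluster_adjusted(n_individual, cluster_size, icc):
    """Inflate an individually randomised sample size by the design effect
    1 + (n - 1) * ICC, and derive the number of clusters needed."""
    design_effect = 1 + (cluster_size - 1) * icc
    total = round(n_individual * design_effect)
    clusters = math.ceil(total / cluster_size)
    return design_effect, total, clusters

# OPERA: 343 patients if individually randomised, 15 per cluster, ICC = 0.05
de, total, clusters = cluster_adjusted(343, 15, 0.05)
print(round(de, 2), total, clusters)  # 1.7 583 39
```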
Sample size for cluster trials (5)
  • Effect of varying ICC and number of patients per cluster
Summary
  • Phase of clinical trial depends on the evidence already acquired
  • Phase III trials = RCT
  • Trial design depends on
    • Outcome and questions to be answered
    • Number and type of Interventions being compared
    • Unit of allocation and controlling of bias
  • Bigger sample sizes give us increased confidence in our results (but there is a limit!)
  • Always consult a statistician!!!
In which type of trial would you need a washout period incorporated into the design?

  • Parallel design
  • Factorial design
  • Cross-over design
  • Cluster design




Which type of trial is likely to require the largest sample size?

  • Phase I trial
  • Phase II trial
  • Phase III (parallel) trial
  • Phase III (cluster) trial




Which of the following do you NOT need in order to calculate the sample size for a binary outcome?

  • Clinically relevant difference
  • Standard deviation
  • Significance level




If the sample size needed to detect a 10% difference in a binary outcome with a 5% significance level and 80% power is not obtainable, how could you decrease the sample size required?

  • Detect differences of 15%
  • Use 85% power
  • Use 1% significance level
  • All of the above
  • None of the above



If wanting to detect a difference with a continuous outcome in:

  • Trial A: difference in means of 5, SD = 10
  • Trial B: difference in means of 4, SD = 8
  • Trial C: standardised difference of 0.5

would you need ... ?

  • More patients with Trial A
  • More patients with Trial B
  • More patients with Trial C
  • Same for all trials




References
  • The Handbook of Clinical Trials and Other Research. Ed. Alan Earl-Slater.
  • Sample Size Tables for Clinical Studies. Machin, Campbell, Fayers and Pinol.