
Fundamental Principles of Epidemiologic Study Design





Presentation Transcript


  1. Fundamental Principles of Epidemiologic Study Design F. Bruce Coles, D.O. Assistant Professor Louise-Anne McNutt, PhD Associate Professor University at Albany School of Public Health Department of Epidemiology and Biostatistics

  2. What are the primary goals of the Epidemiologist? To describe the frequency of disease and its distribution To assess the determinants and possible causes of disease To identify/develop effective interventions to prevent or control disease

  3. The Basic Question… Are exposure and disease linked? E → D (Does the exposure cause the disease?)

  4. What is the true effect of the exposure on the occurrence of the disease?
Epidemiologist: Oh, Great God of Epidemiology… What is the true effect of the exposure on the occurrence of the disease? I want to know the real effect of the exposure isolated from all other possible causes… the true risk if a person is exposed versus if they are not exposed… a true relative risk… Please! Show me!
God of Epi: My son, what do you mean by the true effect? So, you want a descriptive incidence proportion ratio? For whom do you want it? An individual? A population? If the latter, which one? And for what time period?
Epidemiologist: No… causal!
God of Epi: Okay. Comparing what two exposure levels?
Epidemiologist: What? I… Uh… Exposed versus unexposed… Everyone in the entire population… And calculate it for one year…
God of Epi: What do you mean by exposed and unexposed? Exposed how much, for how long, and in what time period? There are a lot of different ways you could define exposed and unexposed… and each of the corresponding possible ratios can have a different true value, you know. The value of a causal incidence proportion ratio can be different for different groups of people and for different time periods. It is not necessarily a biological constant, you know…
Epidemiologist: Oh please, Great God of Epidemiology… Why does it have to be so hard? I'll take any exposure, for any amount of time, versus no exposure at all. How's that?
God of Epi: My son, you want an absolute counterfactual… I'm only the God of Epi… not a miracle worker.
Based upon: Maldonado G, Greenland S. Estimating causal effects. Int J Epidemiol 2002;31:422-29.

  5. Counterfactual Analysis “We may define a cause to be an object followed by another, and where all the objects, similar to the first, are followed by objects similar to the second. Or, in other words, where, if the first object had not been, the second never had existed.” David Hume, Philosopher (1748)

  6. Counterfactual Model of Causation E is a cause of D if… under actual (factual) conditions, when E occurs, D follows… and… if, under conditions contrary to the actual conditions (counterfactual), E does not occur, D does not occur
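The counterfactual definition can be made concrete with a small sketch. This is an illustration, not from the slides: the potential-outcome names `y1` (disease status if exposed) and `y0` (disease status if unexposed) and the four-person dataset are hypothetical.

```python
# Counterfactual (potential-outcomes) sketch with hypothetical data.
# Each person has two potential outcomes: disease if exposed (y1) and
# disease if unexposed (y0). E is a cause of D for a person when y1 = 1 but y0 = 0.
people = [
    {"y1": 1, "y0": 0},  # exposure causes disease in this person
    {"y1": 1, "y0": 1},  # "doomed": diseased either way
    {"y1": 0, "y0": 0},  # "immune": healthy either way
    {"y1": 1, "y0": 0},  # exposure causes disease in this person too
]

# Causal risk ratio for this group: risk if everyone were exposed
# versus risk if everyone were unexposed.
risk_exposed = sum(p["y1"] for p in people) / len(people)
risk_unexposed = sum(p["y0"] for p in people) / len(people)
causal_risk_ratio = risk_exposed / risk_unexposed
```

In reality only one of the two potential outcomes is ever observed for each person, which is exactly why this causal quantity is unobservable and a substitute comparison group is needed (the "Substitution!" of the next slides).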

  7. If we can turn back time… We can answer the question!

  8. But, Oh Great God of Epidemiology… If I can’t go back in time and observe the unobservable…what CAN I do to determine the cause of a disease?

  9. You must, my son, move from the dream of theoretical perfection to the next best thing… Substitution!!! Once you clearly define your study question, choose a target population that corresponds to that question… …then choose a study design and sample subjects from that target population to balance the tradeoffs in bias, variance, and loss to follow-up…

  10. Epidemiologic Study Designs • Experimental (Randomized Controlled Trials) • Observational: Descriptive or Analytical • Analytical: Case-Control, Cohort (+ cross-sectional & ecologic)

  11. Epidemiologic Study Designs Descriptive Studies Examine patterns of disease Analytical studies Studies of suspected causes of diseases Experimental studies Compare treatment or intervention modalities

  12. Epidemiologic Study Designs Grimes & Schulz, 2002

  13. Hierarchy of Epidemiologic Study Design Tower & Spector, 2007

  14. When considering any etiologic study, keep in mind two issues related to participant (patient) heterogeneity: the effect of chance and the effect of bias… We will descend the ladder of perfection. So we begin with… Randomized trials

  15. Randomized Controlled Trials • Recommended to achieve a valid determination of the comparative benefit of competing intervention strategies: - Prevention - Screening - Treatment - Management Green SB. Design of Randomized Trials. Epidemiol Reviews 2002;24:4-11

  16. RCT: Participant Characteristics • Continuum: healthy → elevated risk → precursor abnormality (preclinical) → disease • Prevention or Screening trial: • drawn from “normal” healthy population • may be selected due to elevated (“high”) risk • Treatment or Management trial: • clinical trials • diseased patients Green SB. Design of Randomized Trials. Epidemiol Reviews 2002;24:4-11

  17. RCT: Phase Objectives • Phase I • Safety test: investigate dosage, route of administration, and toxicity • Usually not randomized • Phase II • Look for evidence of “activity” of an intervention, e.g., evidence of tumor shrinkage, change in biomarker • Tolerability • May be small randomized, blinded or non-randomized • Phase III • Randomized design to investigate “efficacy” i.e., most ideal conditions • Phase IV • Designed to assess “effectiveness” of proven intervention in wide-scale (“real world”) conditions Green SB. Design of Randomized Trials. Epidemiol Reviews 2002;24:4-11

  18. RCT: Phases When we talk about a “clinical trial” of medications we almost always mean a Phase III clinical trial.

  19. Efficacy vs. Effectiveness • Efficacy – does the intervention work in tightly controlled conditions? • Strict inclusion/exclusion criteria • Highly standardized treatments • Explicit procedures for ensuring compliance • Focus on direct outcomes

  20. Efficacy vs. Effectiveness • Effectiveness – does the intervention work in ‘real world’ conditions? • Looser inclusion/exclusion criteria • Treatments carried out by typical clinical personnel • Little or no provision for ensuring compliance • Focus on less direct outcomes (e.g., quality of life)

  21. RCT: Advantages • investigator controls the predictor variable (intervention or treatment) • randomization controls unmeasured confounding • ability to assess causality much greater than in observational studies

  22. RCT: Design Study population → RANDOMIZATION → Intervention group (outcome / no outcome) vs. Control group (outcome / no outcome). The study begins at baseline; participants are followed forward in time for outcomes.

  23. RCT: Steps in study procedures 1. Select participants • at high risk for the outcome (high incidence) • likely to benefit and not be harmed • likely to adhere • Pre-trial run-in period?

  24. RCT: Pre-trial “Run-in” Period • Pro • Provides stabilization and baseline • Tests endurance/reliability of subjects • Con • Can be perceived as too demanding

  25. RCT: Steps in study procedures 2. Measure baseline variables 3. Randomize • Eliminates baseline confounding • Types: Simple (two-arm), Stratified (multi-arm; factorial), Block (group)
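The randomization types named on this slide can be sketched in a few lines of Python. This is a minimal illustration, not from the slides: the function names, seeds, and the fixed "intervention"/"control" arms are assumptions for the example.

```python
import random

def simple_randomize(ids, arms=("intervention", "control"), seed=0):
    """Simple randomization: each participant is assigned independently."""
    rng = random.Random(seed)
    return {pid: rng.choice(arms) for pid in ids}

def block_randomize(ids, block_size=4, seed=0):
    """Block randomization: within each block of `block_size` participants,
    exactly half go to each arm, keeping the arms balanced over time."""
    rng = random.Random(seed)
    assignments = {}
    for start in range(0, len(ids), block_size):
        block = ids[start:start + block_size]
        arms = (["intervention"] * (block_size // 2)
                + ["control"] * (block_size // 2))
        rng.shuffle(arms)
        assignments.update(zip(block, arms))
    return assignments

def stratified_randomize(strata, block_size=4, seed=0):
    """Stratified randomization: block-randomize separately within each stratum
    (e.g., site or sex), so arms are balanced within every stratum."""
    assignments = {}
    for offset, ids in enumerate(strata.values()):
        assignments.update(block_randomize(ids, block_size, seed + offset))
    return assignments
```

Simple randomization can leave arms unbalanced by chance in small trials; blocking and stratification trade a little predictability for guaranteed balance, which is why the allocation sequence must stay concealed from enrolling staff.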

  26. RCT: Steps in study procedures 4. Assess need for blinding the intervention • Can be as important as randomization • Eliminates: co-intervention, biased outcome ascertainment, biased measurement of outcome 5. Follow subjects • Adherence to protocol • Loss to follow-up 6. Measure outcome • Clinically important measures • Adverse events

  27. RCT: Design Concepts (5 questions) • Why is the study being done? - the objectives should be clearly defined - objectives help determine the outcome measures - a single primary outcome with limited secondary outcome measures • What is being compared to what? - two-arm trial: experimental intervention vs. nothing, placebo, the standard intervention, or a different dose or duration - multi-arm - factorial - groups Green SB. Design of Randomized Trials. Epidemiol Reviews 2002;24:4-11

  28. RCT: Design Concepts (5 questions) • Which type of intervention is being assessed? - well-defined - tightly controlled: a new intervention - flexible: assessing one already in use - multifaceted? • Who is the target population? - eligibility = restriction: enhances statistical power by giving a more homogeneous group, a higher rate of outcome events, and a higher rate of benefit = practical consideration: accessible - include: potential to benefit; effect can be detected; those most likely to adhere - exclude: unacceptable risk; competing risk (condition) Green SB. Design of Randomized Trials. Epidemiol Reviews 2002;24:4-11

  29. RCT: Design Concepts (5 questions) • How many should be enrolled? - ensure power to detect the intervention effect - increase sample size to increase the precision of the estimate of the intervention effect (decreases variability, i.e., the standard error) - subgroups: require increased sample size; the risk of spurious results increases with a greater number of subgroup analyses Green SB. Design of Randomized Trials. Epidemiol Reviews 2002;24:4-11
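The sample-size point can be made concrete with the standard normal-approximation formula for comparing two proportions. This is a common textbook formula, not given in the slides, and the example event rates (20% vs. 10%) are hypothetical.

```python
from math import sqrt, ceil
from statistics import NormalDist

def n_per_arm(p1, p2, alpha=0.05, power=0.80):
    """Approximate sample size per arm to detect a difference between two
    event proportions p1 and p2 (two-sided test, normal approximation)."""
    z_a = NormalDist().inv_cdf(1 - alpha / 2)   # e.g. 1.96 for alpha = 0.05
    z_b = NormalDist().inv_cdf(power)           # e.g. 0.84 for 80% power
    p_bar = (p1 + p2) / 2
    numerator = (z_a * sqrt(2 * p_bar * (1 - p_bar))
                 + z_b * sqrt(p1 * (1 - p1) + p2 * (1 - p2))) ** 2
    return ceil(numerator / (p1 - p2) ** 2)
```

Note how the required n grows as the expected difference shrinks: halving the difference roughly quadruples the sample size, which is why subgroup analyses (smaller n, smaller detectable effects) demand so much more enrollment.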

  30. RCT: Factorial Design • answers more than one question by addressing more than one comparison of interventions • 2x2 layout: Intervention A vs. not-A crossed with Intervention B vs. not-B (four cells: A+B, A+not-B, not-A+B, not-A+not-B) • important that the two interventions can be given together (mechanisms of action differ) - no serious interactions expected - interaction effect is of interest Green SB. Design of Randomized Trials. Epidemiol Reviews 2002;24:4-11
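A 2x2 factorial assignment can be sketched as follows. This is a hypothetical illustration (the function name and arm labels are assumptions): each participant lands in one of four cells crossing A/not-A with B/not-B, so the effect of A is estimated by pooling over B and vice versa.

```python
from itertools import product
import random

def factorial_assign(ids, seed=0):
    """2x2 factorial randomization: each participant is assigned to one of
    four cells crossing Intervention A/not-A with Intervention B/not-B."""
    cells = list(product(("A", "not-A"), ("B", "not-B")))
    rng = random.Random(seed)
    return {pid: rng.choice(cells) for pid in ids}

# The factorial design answers two questions from one trial:
# compare all A vs. all not-A (pooling over B), and all B vs. all not-B
# (pooling over A) — valid when the interventions do not interact.
assignments = factorial_assign(range(40))
a_group = [pid for pid, (a, b) in assignments.items() if a == "A"]
```

This pooling is what makes the design efficient, and it is also why a strong interaction between A and B (unless it is itself the question of interest) undermines the simple main-effect comparisons.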

  31. RCT: Group Randomization • settings: communities, villages, workplaces, schools or classrooms, religious institutions, social organizations, families, clinics • concerns - less efficient statistically than individual randomization - must account for correlation of individuals within a cluster - must assure adequate sample size Green SB. Design of Randomized Trials. Epidemiol Reviews 2002;24:4-11

  32. RCT: Group Randomization • advantages - feasibility of delivering the intervention - avoids contamination of those assigned to different interventions - decreased cost - possibly greater generalizability • intervention applications - behavioral and lifestyle interventions - infectious disease interventions (vaccines) - studies of screening approaches - health services research - studies of new drugs (or other agents) in short supply Green SB. Design of Randomized Trials. Epidemiol Reviews 2002;24:4-11
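The statistical-efficiency concern with group randomization is usually quantified with the design effect, DEFF = 1 + (m - 1) x ICC, where m is the average cluster size and ICC the intraclass correlation. The formula is a standard cluster-trial result, not from the slides, and the example values below are hypothetical.

```python
from math import ceil

def design_effect(cluster_size, icc):
    """Design effect for cluster randomization: DEFF = 1 + (m - 1) * ICC.
    Correlation of individuals within a cluster inflates the variance of
    the treatment-effect estimate by this factor."""
    return 1 + (cluster_size - 1) * icc

def inflate_n(n_individual, cluster_size, icc):
    """Inflate the sample size needed under individual randomization to the
    total needed when randomizing clusters (hypothetical ICC values)."""
    return ceil(n_individual * design_effect(cluster_size, icc))
```

Even a small ICC matters when clusters are large: with clusters of 5 and an ICC of 0.25, the design effect is 2.0, doubling the required enrollment relative to individual randomization.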

  33. RCT: Maintaining the Integrity of Randomization • The procedure for randomization should be: - unbiased - unpredictable (for participants and for the study personnel recruiting and enrolling them) • Timing - randomize after determining eligibility - avoid delays in implementation to minimize the possibility of participants becoming non-candidates • Run-in period - brief - all participants started on the same intervention - those who comply are enrolled Green SB. Design of Randomized Trials. Epidemiol Reviews 2002;24:4-11

  34. RCT: Blinding (Masking) • Single blind – participants are unaware of their treatment group • Double blind – both participants and investigators are unaware • Triple blind – various meanings: persons performing tests, outcome auditors, safety monitoring groups* (*some clinical trials experts oppose this practice – it inhibits the ability to weigh benefits against adverse effects and to assure ethical standards are maintained)

  35. RCT: Blinding (Masking) Why blind? … To avoid biased outcome ascertainment or adjudication • If group assignment is known - participants may report symptoms or outcomes differently - physicians or investigators may elicit symptoms or outcomes differently - study staff or adjudicators may classify similar events differently in treatment groups • Problematic with “soft” outcomes - investigator judgment - participant-reported symptoms, scales

  36. RCT: Why Blind? … Co-interventions • Unintended effective interventions • participants use other therapy or change behavior • study staff, medical providers, family or friends treat participants differently • Nondifferential - decreases power • Differential - causes bias

  37. RCT: Blinding (Masking) • Feasibility depends upon study design - yes: drug vs. placebo trial - no: surgical vs. medical intervention - no: drug with obvious side effects • trials with survival as an outcome are little affected by the inability to mask the observer • an independent, masked observer may be used for: - studies with subjective outcome measures - studies with objective endpoints (scans, photographs, histopathology slides, cardiograms) Green SB. Design of Randomized Trials. Epidemiol Reviews 2002;24:4-11

  38. RCT: Intention-to-treat Analysis • includes all participants regardless of what occurs after randomization • maintains comparability in expectation across intervention groups • excluding participants after randomization introduces the bias that randomization was designed to avoid • may not be necessary with trials using a pre-randomization screening test with results available after the intervention begins, provided eligibility is not influenced by the randomized assignment • must consider the impact of noncompliance Green SB. Design of Randomized Trials. Epidemiol Reviews 2002;24:4-11
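Intention-to-treat analysis can be illustrated with a toy dataset: every participant is analyzed in the arm they were randomized to, regardless of whether they actually received the assigned treatment. The records below are hypothetical, invented for the sketch.

```python
# Hypothetical trial records: assigned arm, whether the assigned treatment
# was actually received, and the outcome (1 = event occurred).
records = [
    {"assigned": "intervention", "received": True,  "event": 0},
    {"assigned": "intervention", "received": False, "event": 1},  # non-compliant, still counted as intervention
    {"assigned": "intervention", "received": True,  "event": 0},
    {"assigned": "control",      "received": False, "event": 1},
    {"assigned": "control",      "received": False, "event": 1},
    {"assigned": "control",      "received": True,  "event": 0},  # crossover, still counted as control
]

def itt_risk(records, arm):
    """Intention-to-treat: event risk among everyone randomized to `arm`,
    ignoring the 'received' field entirely."""
    group = [r for r in records if r["assigned"] == arm]
    return sum(r["event"] for r in group) / len(group)

risk_ratio_itt = itt_risk(records, "intervention") / itt_risk(records, "control")
```

Dropping the non-compliant and crossover participants (a "per-protocol" analysis) would compare groups that are no longer exchangeable, reintroducing exactly the confounding that randomization was designed to remove.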

  39. RCT: Accounting for loss to follow-up (LTF) • decreases power - remedy: inflate sample size to account for expected LTF • increases bias - difficult: design the study (and consent process) to follow participants who drop out Green SB. Design of Randomized Trials. Epidemiol Reviews 2002;24:4-11
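The suggested remedy for the power loss, inflating sample size for expected LTF, is commonly done with n_adjusted = n / (1 - LTF). This is a rule-of-thumb sketch, not from the slides, and the example numbers are hypothetical. Note it addresses only the power problem: no amount of inflation fixes the bias if dropout differs between arms.

```python
from math import ceil

def adjust_for_ltf(n_planned, expected_ltf):
    """Inflate a planned sample size for an expected proportion lost to
    follow-up (rule of thumb: n_adjusted = n / (1 - LTF)).
    E.g. 200 planned with 25% expected LTF -> enroll 267."""
    return ceil(n_planned / (1 - expected_ltf))
```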

  40. RCT: Analysis • Intention to treat analysis • Most conservative interpretation • Include all persons assigned to intervention group (including those who did not get treatment or dropped out) • Subgroup analysis • Groups identified pre-randomization

  41. The Ideal Randomized Trial • Tamper-proof randomization • Blinding of participants, study staff, lab staff, outcome ascertainment and adjudication • Adherence to study intervention and protocol • Complete follow-up
