Biases and errors in Epidemiology

Presentation Transcript


  1. Biases and errors in Epidemiology Anchita Khatri

  2. Definitions ERROR: • A false or mistaken result obtained in a study or experiment • Random error is the portion of variation in a measurement that has no apparent connection to any other measurement or variable, generally regarded as due to chance • Systematic error often has a recognizable source (e.g., a faulty measuring instrument) or pattern (e.g., it is consistently wrong in a particular direction) (Last)

  3. Relationship b/w Bias and Chance [Figure: frequency distribution (no. of observations) of diastolic blood pressure (mm Hg) measured by sphygmomanometer around the true BP measured by intra-arterial cannula; the spread of readings around their mean reflects chance, while the shift of the whole distribution away from the true value (e.g., from 80 to 90 mm Hg) reflects bias.]

  4. Validity • Validity: The degree to which a measurement measures what it purports to measure (Last) • Degree to which the data measure what they were intended to measure – that is, the results of a measurement correspond to the true state of the phenomenon being measured (Fletcher) • Also known as ‘Accuracy’

  5. Reliability • The degree of stability expected when a measurement is repeated under identical conditions; degree to which the results obtained from a measurement procedure can be replicated (Last) • Extent to which repeated measurements of a stable phenomenon – by different people and instruments, at different times and places – get similar results (Fletcher) • Also known as ‘Reproducibility’ and ‘Precision’

  6. Validity and Reliability

  7. Bias • Deviation of results or inferences from the truth, or processes leading to such deviation. Any trend in the collection, analysis, interpretation, publication, or review of data that can lead to conclusions that are systematically different from the truth. (Last) • A process at any stage of inference tending to produce results that depart systematically from true values (Fletcher)

  8. Types of biases • Selection bias • Measurement / (mis)classification bias • Confounding bias

  9. Selection bias • Errors due to systematic differences in characteristics between those who are selected for study and those who are not. (Last; Beaglehole) • When comparisons are made between groups of patients that differ in ways other than the main factors under study, that affect the outcome under study. (Fletcher)

  10. Examples of Selection bias • Subjects: hospital cases under the care of a physician • Excluded: • Die before admission – acute/severe disease. • Not sick enough to require hospital care • Do not have access due to cost, distance etc. • Result: conclusions cannot be generalized • Also known as ‘Ascertainment Bias’ (Last)

  11. Ascertainment Bias • Systematic failure to represent equally all classes of cases or persons supposed to be represented in a sample. This bias may arise from the nature of the sources from which the persons come (e.g., a specialized clinic), or from a diagnostic process influenced by culture, custom, or idiosyncrasy. (Last)

  12. Selection bias with ‘volunteers’ • Also known as ‘response bias’ • Systematic error due to differences in characteristics b/w those who choose or volunteer to take part in a study and those who do not

  13. Examples …response bias • Volunteer either because they are unwell, or worried about an exposure • Respondents to ‘effects of smoking’ surveys are usually not as heavy smokers as non-respondents • In a cohort study of newborn children, the proportion successfully followed up for 12 months varied according to the income level of the parents

  14. Examples…. (Assembly bias) • Study: ? association b/w reserpine and breast cancer in women • Design: Case Control • Cases: Women with breast cancer Controls: Women without breast cancer who were not suffering from any cardio-vascular disease (frequently associated with HT) • Result: controls likely to be on reserpine were systematically excluded → a spurious association between reserpine and breast cancer was observed

  15. Examples…. (Assembly bias) • Study: effectiveness of OCP1 vs. OCP2 • Subjects: on OCP1 – women who had given birth at least once (→ known to be able to conceive); on OCP2 – women who had never become pregnant • Result: if OCP2 is found to be better, is the inference correct? (The OCP2 group may include women unable to conceive.)

  16. Susceptibility Bias • Groups being compared are not equally susceptible to the outcome of interest, for reasons other than the factors under study • Comparable to ‘Assembly Bias’ • In prognosis studies; cohorts may differ in one or more ways – extent of disease, presence of other diseases, the point of time in the course of disease, prior treatment etc.

  17. Examples…..(Susceptibility Bias) • Background: for colorectal cancer, CEA levels correlated with extent of disease (Duke’s classification) • Duke’s classification and CEA levels strongly predicted disease relapse • Question: Does CEA level predict relapse independent of Duke’s classification, or was susceptibility to relapse explained by Duke’s classification alone?

  18. Example… CEA levels (contd.) • Answer: on stratification, the association of pre-op CEA levels with disease relapse was observed within each category of Duke’s classification

  19. Disease-free survival according to CEA levels in colorectal cancer pts. with similar pathological staging (Duke’s B) [Figure: % disease-free vs. months of follow-up (0–24), plotted separately for CEA levels <2.5, 2.5–10.0, and >10.0 ng.]

  20. Selection bias with ‘Survival Cohorts’ • Patients are included in study because they are available, and currently have the disease • For lethal diseases patients in survival cohort are the ones who are fortunate to have survived, and so are available for observation • For remitting diseases patients are those who are unfortunate enough to have persistent disease • Also known as ‘Available patient cohorts’

  21. Example… bias with ‘survival cohort’ • TRUE COHORT: assemble cohort (N=150) → measure outcome → improved: 75, not improved: 75 → true improvement = 50% • SURVIVAL COHORT: dropouts not observed (N=100; improved: 35, not improved: 65); assemble available patients, begin follow-up (N=50) → measure outcome → improved: 40, not improved: 10 → observed improvement = 80%
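The arithmetic in the survival-cohort slide can be checked with a short sketch (all counts taken from the slide):

```python
# Survival-cohort bias: the improvement rate observed among patients who are
# still available (80%) overstates the true rate in the full inception
# cohort (50%). Counts are those given on the slide.

true_cohort = {"improved": 75, "not_improved": 75}   # full cohort, N = 150
survivors   = {"improved": 40, "not_improved": 10}   # followed up, N = 50
dropouts    = {"improved": 35, "not_improved": 65}   # not observed, N = 100

def improvement_rate(group):
    """Proportion improved within a group."""
    return group["improved"] / (group["improved"] + group["not_improved"])

print(f"true improvement:     {improvement_rate(true_cohort):.0%}")  # 50%
print(f"observed improvement: {improvement_rate(survivors):.0%}")    # 80%
```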

  22. Selection bias due to ‘Loss to Follow-up’ • Also known as ‘Migration Bias’ • In nearly all large studies some members of the original cohort drop out of the study • If drop-outs occur randomly, such that characteristics of lost subjects in one group are on an average similar to those who remain in the group, no bias is introduced • But ordinarily the characteristics of the lost subjects are not the same

  23. Example of ‘lost to follow-up’ (exposure: irradiation; disease: cataract) • Full cohort: exposed 50/10,000 vs. unexposed 100/20,000 → true RR = (50/10,000) / (100/20,000) = 1 • After loss to follow-up: exposed 30/4,000 vs. unexposed 30/8,000 → observed RR = (30/4,000) / (30/8,000) = 2
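Using the figures from the irradiation–cataract slide, the relative risk before and after the differential loss can be computed directly:

```python
# Relative risk (RR) = incidence among exposed / incidence among unexposed.
# Figures from the irradiation-cataract example on the slide.

def relative_risk(cases_exp, n_exp, cases_unexp, n_unexp):
    """RR = incidence among exposed / incidence among unexposed."""
    return (cases_exp / n_exp) / (cases_unexp / n_unexp)

rr_true = relative_risk(50, 10_000, 100, 20_000)   # full cohort
rr_observed = relative_risk(30, 4_000, 30, 8_000)  # after differential loss

print(rr_true)      # 1.0 -> no true association
print(rr_observed)  # 2.0 -> spurious association created by the losses
```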

  24. Migration bias • A form of Selection Bias • Can occur when patients in one group leave their original group, dropping out of the study altogether or moving to one of the other groups under study (Fletcher) • If it occurs on a large scale, it can affect the validity of conclusions • Bias due to crossover is more often a problem in risk studies than in prognosis studies, because risk studies go on for many years

  25. Example of migration • Question: relationship between lifestyle and mortality • Subjects: 10,269 Harvard College alumni • Classified according to physical activity, smoking, weight, and BP in 1966 and 1977 • Mortality rates observed from 1977 to 1985

  26. Example of migration (contd.) • Problem: original classification of ‘lifestyle’ might change (migration b/w groups) • Solution: defined four categories • Men who maintained high risk lifestyles • Men who crossed over from low to high risk • Men who crossed over from high to low risk • Men who maintained low risk lifestyles

  27. Example of migration (contd.) • Result: after controlling for other risk factors • Those who maintained or adopted high risk characteristics had the highest mortality • Those who changed from high to low risk had lower mortality than the former • Those who never had any high risk behavior had the lowest mortality

  28. Healthy worker effect • A phenomenon observed initially in studies of occupational diseases: workers usually exhibit lower overall death rates than the general population, because the severely ill and chronically disabled are ordinarily excluded from employment. Death rates in the general population may be inappropriate for comparison if this effect is not taken into account. (Last)

  29. Example…. ‘healthy worker effect’ • Question: association b/w formaldehyde exposure and eye irritation • Subjects: factory workers exposed to formaldehyde • Bias: those who suffer most from eye irritation are likely to leave the job at their own request or on medical advice • Result: remaining workers are less affected; the observed association is diluted

  30. Measurement bias • Systematic error arising from inaccurate measurements (or classification) of subjects or study variables. (Last) • Occurs when individual measurements or classifications of disease or exposure are inaccurate (i.e. they do not measure correctly what they are supposed to measure) (Beaglehole) • If patients in one group stand a better chance of having their outcomes detected than those in another group. (Fletcher)

  31. Measurement / (Mis) classification • Exposure misclassification occurs when exposed subjects are incorrectly classified as unexposed, or vice versa • Disease misclassification occurs when diseased subjects are incorrectly classified as non-diseased, or vice versa (Norell)

  32. Causes of misclassification • 1. Measurement gap: gap b/w the measured and the true value of a variable – Observer / interviewer bias – Recall bias – Reporting bias • 2. Gap b/w the theoretical and empirical definition of exposure / disease

  33. Sources of misclassification: Theoretical definition → [gap b/w theoretical & empirical definitions] → Empirical definition → [measurement errors] → Measurement results

  34. Example… ‘gap b/w definitions’ • Theoretical definition – Exposure: passive smoking – inhalation of tobacco smoke from other people’s smoking; Disease: myocardial infarction – necrosis of the heart muscle tissue • Empirical definition – Exposure: passive smoking – time spent with smokers (having smokers as room-mates); Disease: myocardial infarction – certain diagnostic criteria (chest pain, enzyme levels, signs on ECG)

  35. Exposure misclassification – Non-differential • Misclassification does not differ between cases and non-cases • Generally leads to dilution of effect, i.e. bias towards RR=1 (no association)

  36. Example… Non-differential Exposure Misclassification (exposure: X-ray; disease: breast cancer) • Correct classification: RR = (40/10,000) / (80/40,000) = 2 • With non-differential misclassification: RR = (60/20,000) / (60/30,000) = 1.5 (diluted towards 1)
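The dilution towards the null can be verified with the numbers on the slide:

```python
# Non-differential exposure misclassification dilutes the relative risk
# towards 1 (no association). Figures from the X-ray / breast-cancer slide.

def relative_risk(cases_exp, n_exp, cases_unexp, n_unexp):
    """RR = incidence among exposed / incidence among unexposed."""
    return (cases_exp / n_exp) / (cases_unexp / n_unexp)

rr_true = relative_risk(40, 10_000, 80, 40_000)     # correct classification
rr_diluted = relative_risk(60, 20_000, 60, 30_000)  # non-differential errors

print(rr_true)               # 2.0
print(round(rr_diluted, 2))  # 1.5 -> biased towards the null (RR = 1)
```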

  37. Exposure misclassification - Differential • Misclassification differs between cases and non-cases • Introduces a bias towards RR = 0 (negative / protective association) or RR = ∞ (strong positive association)

  38. Example… Differential Exposure Misclassification (exposure: X-ray; disease: breast cancer) • Correct classification: RR = (40/10,000) / (80/40,000) = 2 • With differential misclassification (cases classified accurately, non-cases not): RR = (40/19,980) / (80/30,020) ≈ 0.75
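With the slide's figures, misclassification confined to the non-cases reverses the apparent direction of the association:

```python
# Differential exposure misclassification (errors differ between cases and
# non-cases) can bias the RR in either direction -- here a true RR of 2
# appears as an RR below 1. Figures from the X-ray / breast-cancer slide.

def relative_risk(cases_exp, n_exp, cases_unexp, n_unexp):
    """RR = incidence among exposed / incidence among unexposed."""
    return (cases_exp / n_exp) / (cases_unexp / n_unexp)

rr_true = relative_risk(40, 10_000, 80, 40_000)      # correct classification
rr_observed = relative_risk(40, 19_980, 80, 30_020)  # cases accurate, non-cases not

print(rr_true)                # 2.0
print(round(rr_observed, 2))  # 0.75 -> spurious 'protective' association
```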

  39. Implications of Differential exposure misclassification • An improvement in accuracy of exposure information (i.e. no misclassification among those who had breast cancer), actually reduced accuracy of results • Non-differential misclassification is ‘better’ than differential misclassification • So, epidemiologists are more concerned with comparability of information than with improving accuracy of information

  40. Causes of Differential Exposure Misclassification • Recall Bias: systematic error due to differences in accuracy or completeness of recall to memory of past events or experience. E.g., patients suffering from MI are more likely to recall and report ‘lack of exercise’ in the past than controls

  41. Causes of Differential Exposure Misclassification • Measurement bias: e.g., analysis of Hb by different methods (cyanmethemoglobin and Sahli’s) in cases and controls; e.g., biochemical analysis of the two groups by two different laboratories, which give consistently different results

  42. Causes of Differential Exposure Misclassification • Interviewer / observer bias: systematic error due to observer variation (failure of the observer to measure or identify a phenomenon correctly), e.g., in patients with thrombo-embolism, interviewers may look for a h/o OCP use more aggressively

  43. Measurement bias in treatment effects • Hawthorne effect: effect (usually positive / beneficial) of being under study upon the persons being studied; their knowledge of being studied influences their behavior • Placebo effect: effect (usually, but not necessarily, beneficial) attributable to the expectation that the regimen will have an effect, i.e., the effect is due to the power of suggestion

  44. Total effects of treatment are the sum of spontaneous improvement, non-specific responses, and the effects of specific treatments. [Figure: total improvement partitioned into these component effects.]

  45. Confounding • A situation in which the effects of two processes are not separated. The distortion of the apparent effect of an exposure on risk brought about by the association with other factors that can influence the outcome • A relationship b/w the effects of two or more causal factors as observed in a set of data such that it is not logically possible to separate the contribution that any single causal factor has made to an effect (Last)

  46. Confounding When another exposure exists in the study population (besides the one being studied) and is associated both with the disease and the exposure being studied. If this extraneous factor – itself a determinant of or risk factor for the health outcome – is unequally distributed b/w the exposure subgroups, it can lead to confounding (Beaglehole)

  47. Confounder … must be • Risk factor among the unexposed (itself a determinant of disease) • Associated with the exposure under study • Unequally distributed among the exposed and the unexposed groups

  48. Examples … confounding • Exposure: SMOKING → Outcome: LUNG CANCER • Confounder: AGE (as age advances, the chance of lung cancer increases; confounding arises if the average ages of the smoking and non-smoking groups are very different)

  49. Examples … confounding • Exposure: COFFEE DRINKING → Outcome: HEART DISEASE • Confounder: SMOKING (smoking increases the risk of heart disease; coffee drinkers are more likely to smoke)

  50. Examples … confounding • Exposure: ALCOHOL INTAKE → Outcome: MYOCARDIAL INFARCTION • Confounder: SEX (men are more likely to consume alcohol than women; men are more at risk for MI)
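A minimal sketch of how stratification exposes confounding, applied to the coffee / heart disease / smoking example above. All counts below are hypothetical, invented purely for illustration:

```python
# Stratification exposes confounding: within each smoking stratum coffee has
# no effect (RR = 1), but the crude analysis shows a spurious RR > 1.
# All counts are hypothetical, invented for illustration only.

def relative_risk(cases_exp, n_exp, cases_unexp, n_unexp):
    """RR = incidence among exposed / incidence among unexposed."""
    return (cases_exp / n_exp) / (cases_unexp / n_unexp)

# (cases, N) for coffee drinkers / non-drinkers within each smoking stratum;
# coffee drinkers are mostly smokers, and smokers have twice the disease risk.
strata = {
    "smokers":     {"coffee": (40, 400), "no_coffee": (10, 100)},
    "non_smokers": {"coffee": (5, 100),  "no_coffee": (20, 400)},
}

for name, s in strata.items():
    rr = relative_risk(*s["coffee"], *s["no_coffee"])
    print(name, round(rr, 2))  # RR = 1.0 in both strata

# Crude (unstratified) analysis pools the strata and shows a spurious effect
crude = relative_risk(40 + 5, 400 + 100, 10 + 20, 100 + 400)
print("crude", round(crude, 2))  # 1.5 -> apparent effect due to smoking alone
```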
