
Presentation Transcript


  1. Introduction to Research Methods in the Internet Era. Assessing Validity of Association: Bias. Thomas Songer, PhD

  2. Learning Objectives: 1. Identify the possible alternative explanations for statistical associations: --- Chance --- Bias --- Confounding 2. Distinguish between the major types of bias in epidemiologic studies.

  3. Research Process: Research question → Hypothesis → Identify research design → Data collection → Presentation of data → Data analysis → Interpretation of data (Polgar, Thomas)

  4. Epidemiologic Reasoning. Assess validity of association: • Is there a true relationship between the exposure and disease? • Does the observed association really exist? • Is the association valid? • Are there alternative explanations for the association? • Chance (Random Error) • Bias (Systematic Error) • Confounding

  5. Evaluating Associations. A “valid” statistical association implies “internal validity” in the study. Internal Validity: the results of an observation are correct for the particular group being studied. What about “external validity”? Do the results of the study apply (“generalize”) to people who were not in the study (e.g. the target population)?

  6. Evaluating Associations • Internal Validity -- strength of the measurement tools, assessment methods of exposure and outcome variables in the study, and control for study effects • External Validity -- strength of the study sample with regard to generalizability

  7. Threats to validity in research studies • Random error • Sample size • Systematic error • Selection bias • Measurement bias • Loss to follow-up • Hawthorne Effect • Confounding • Regression to the mean

  8. Evaluating Associations. Note: DO NOT compromise internal validity in pursuit of generalization. * An invalid result cannot be generalized. * Thus, internal validity should never be compromised in an attempt to achieve generalizability.

  9. Evaluating Associations. Note: Keep in mind that even if chance, bias, and confounding have been sufficiently ruled out (or taken into account), it does not necessarily mean that the valid association observed is causal. The observed association may simply be a coincidence (e.g. in the last 10 years, incidence rates for prostate cancer have increased, as have sales of plasma TV screens).
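To see how easily two unrelated trends produce a striking correlation, here is a minimal Python sketch (not from the original slides; the series, slopes, and noise levels are all invented for illustration):

```python
# Hypothetical sketch: two unrelated quantities that both trend upward
# over time correlate strongly by coincidence.
import numpy as np

rng = np.random.default_rng(0)
years = np.arange(2000, 2010)

prostate_cancer_rate = 100 + 3 * (years - 2000) + rng.normal(0, 2, years.size)
plasma_tv_sales = 50 + 8 * (years - 2000) + rng.normal(0, 5, years.size)

r = np.corrcoef(prostate_cancer_rate, plasma_tv_sales)[0, 1]
print(f"Pearson r = {r:.2f}")  # close to 1.0, yet there is no causal link
```

The high correlation here is driven entirely by the shared time trend; it says nothing about causation.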

  10. How do we know that the associations observed in epidemiologic studies are real?

  11. Evaluating Associations. Evaluating the validity of an association: in any epidemiologic study, there are at least 3 alternative explanations for the observed results: 1. CHANCE (random error) 2. BIAS (systematic error) 3. CONFOUNDING. These explanations are not mutually exclusive -- more than one can be present in the same study.
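A minimal simulation of the chance explanation (a hypothetical sketch, not from the lecture; the sample sizes are arbitrary, and 3.84 is the usual chi-square critical value for df = 1 at alpha = 0.05): even when exposure and disease are truly unrelated, roughly 5% of studies still turn up a "significant" association.

```python
# Sketch: count how often pure chance produces a "significant" 2x2 association.
import numpy as np

rng = np.random.default_rng(1)
n_studies, n_subjects, significant = 2_000, 400, 0

for _ in range(n_studies):
    exposed = rng.random(n_subjects) < 0.5    # exposure assigned at random
    diseased = rng.random(n_subjects) < 0.5   # disease independent of exposure
    a = np.sum(exposed & diseased)            # exposed cases
    b = np.sum(exposed & ~diseased)           # exposed non-cases
    c = np.sum(~exposed & diseased)           # unexposed cases
    d = np.sum(~exposed & ~diseased)          # unexposed non-cases
    n = a + b + c + d
    chi2 = n * (a * d - b * c) ** 2 / ((a + b) * (c + d) * (a + c) * (b + d))
    if chi2 > 3.84:                           # df = 1 critical value, alpha = 0.05
        significant += 1

print(f"'Significant' by chance alone: {significant / n_studies:.1%}")  # ~5%
```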

  12. Bias or Systematic Error • Systematic, non-random deviation of results from the truth [Figure: diagrams contrasting high systematic error with low systematic error]
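The distinction can be made concrete with a small hypothetical sketch (the true value, the 1.5-unit offset, and the noise levels are invented): random error scatters estimates around the truth and averages away as data accumulate, while systematic error shifts every estimate in the same direction and persists at any sample size.

```python
# Sketch: random error vs. systematic error around a known true value.
import numpy as np

rng = np.random.default_rng(2)
true_value = 10.0

# High random error, no systematic error: noisy but centered on the truth.
noisy_unbiased = true_value + rng.normal(0.0, 2.0, 10_000)
# High systematic error, low random error: precise but consistently off.
precise_biased = true_value + 1.5 + rng.normal(0.0, 0.1, 10_000)

print(f"unbiased: mean={noisy_unbiased.mean():.2f}  sd={noisy_unbiased.std():.2f}")
print(f"biased:   mean={precise_biased.mean():.2f}  sd={precise_biased.std():.2f}")
# Averaging cannot remove the 1.5-unit offset; only better design can.
```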

  13. Bias • Potential biases must be considered and addressed in all epidemiologic studies • We often assume that exposed and unexposed groups are comparable • This is not necessarily true

  14. Systematic Error (Bias) • BIAS: Systematic error in the design, conduct, or analysis of a study that results in a mistaken estimate of an exposure/disease relationship • SELECTION BIAS • INFORMATION BIAS • * Recall Bias • * Interviewer Bias • * Reporting Bias • * Surveillance Bias

  15. Selection Bias • A distortion in a measure of disease frequency or association resulting from the manner in which subjects are selected for the study • Result of deficiencies in study design • E.g., in a case-control study, exposure status may influence selection of subjects to a different extent in cases and controls (self-selection bias)

  16. Bias SELECTION BIAS: Any systematic error that arises in the process of identifying the two study groups to be compared • Results in the study groups being non-comparable, unless some type of statistical adjustment can be made

  17. Selection Bias EXAMPLE: Case-Control Study. Outcome: Hemorrhagic stroke. Exposure: Appetite suppressant products that contain Phenylpropanolamine (PPA). Cases: Persons who experienced a stroke. Controls: Persons in the community without stroke. Bias: Control subjects were recruited by random-digit dialing from 9:00 AM to 5:00 PM. This resulted in over-representation of unemployed persons, who may not represent the study base in terms of use of appetite suppressant products.
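A hypothetical simulation of this daytime-dialing problem (the prevalences and "reach" probabilities below are invented for illustration): when the chance of being recruited as a control depends on exposure, the observed odds ratio is distorted even though the true odds ratio is 1.

```python
# Sketch: selection bias from exposure-dependent control recruitment.
import numpy as np

rng = np.random.default_rng(3)
n = 200_000
exposed = rng.random(n) < 0.10   # 10% use the appetite suppressant
stroke = rng.random(n) < 0.01    # disease independent of exposure (true OR = 1)

case_exp = exposed[stroke]       # all cases are enrolled

# Suppose daytime dialing reaches exposed non-cases only half as often.
nondiseased_exp = exposed[~stroke]
reach_prob = np.where(nondiseased_exp, 0.10, 0.20)
ctrl_exp = nondiseased_exp[rng.random(nondiseased_exp.size) < reach_prob]

a, b = case_exp.sum(), (~case_exp).sum()   # exposed / unexposed cases
c, d = ctrl_exp.sum(), (~ctrl_exp).sum()   # exposed / unexposed controls
print(f"Observed OR = {(a * d) / (b * c):.2f}")  # around 2, not the true 1.0
```

The distortion comes entirely from the selection step; no true exposure-disease association was simulated.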

  18. Selection Bias EXAMPLE: Non-Response • If refusal or non-response is related to exposure, the estimate of effect may be biased. For example, if controls are selected by use of a household survey, non-response may be related to demographic and lifestyle factors associated with employment. • Responders often differ systematically from persons who do not respond.

  19. Berkson’s Bias • A form of selection bias that affects hospital-based epidemiology studies. • People in hospital are likely to suffer from multiple diseases and engage in unhealthy behaviours (e.g. smoking) • As a result, they are atypical of the population in the community

  20. Healthy Worker Effect • A form of selection bias that affects epidemiology studies of workers. • Ill and disabled people are likely to be unemployed. The employed (workers) are healthier than other segments of the population. • As a result, they are atypical of the population in the community

  21. Information Bias Definition: Systematic differences in the way in which data on exposure and outcome are obtained from the various study groups. Some Types/Sources of Information Bias: • Bias in abstracting records • Bias in interviewing • Bias from surrogate interviews • Surveillance bias • Reporting and recall bias

  22. Information Bias • Results from systematic differences in the way data on exposure or outcome are obtained • May result from measurement defects, or from questionnaires or interviews that do not measure what they claim to • Examples of information bias • Recall bias: self-reported information may be inaccurate due to low levels of recall

  23. Recall Bias DEFINITION: Study group participants systematically differ in the way data on exposure or outcome are recalled • Particularly problematic in case-control studies • Individuals who have experienced a disease or adverse health outcome may tend to think about possible “causes” of the outcome. This can lead to differential recall

  24. Recall Bias - Example. Outcome: Cleft palate. Exposure: Systemic infection during pregnancy. Cases: Mothers giving birth to children with cleft palate. Controls: Mothers giving birth to children free of cleft palate. Bias: Mothers who have given birth to a child with cleft palate may recall colds and other infections experienced during pregnancy more thoroughly
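A hypothetical sketch of this example (the 20% infection prevalence and the 95%/70% recall rates are invented): true exposure is equally common in case and control mothers, yet differential recall alone produces an odds ratio above 1.

```python
# Sketch: differential recall inflates the odds ratio when the true OR is 1.
import numpy as np

rng = np.random.default_rng(4)
n_cases = n_controls = 1_000
true_prev = 0.20   # infection prevalence, identical in both groups

true_exp_cases = rng.random(n_cases) < true_prev
true_exp_ctrls = rng.random(n_controls) < true_prev

# Case mothers recall 95% of real infections; control mothers only 70%.
reported_cases = true_exp_cases & (rng.random(n_cases) < 0.95)
reported_ctrls = true_exp_ctrls & (rng.random(n_controls) < 0.70)

a, b = reported_cases.sum(), n_cases - reported_cases.sum()
c, d = reported_ctrls.sum(), n_controls - reported_ctrls.sum()
print(f"Observed OR = {(a * d) / (b * c):.2f}")  # > 1 despite a true OR of 1
```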

  25. Interviewer Bias DEFINITION: Systematic difference in the soliciting, recording, or interpretation of information from study participants • Can affect every type of epidemiologic study • May occur when interviewers are not “blinded” to exposure or outcome status of participants.

  26. Interviewer Bias • Interviewer’s knowledge of subjects’ disease status may result in differential probing of exposure history • Similarly, interviewer’s knowledge of subjects’ exposure history may result in differential probing and recording of the outcome under examination • Placebo control is one method used to maintain observer blindness in randomized trials.

  27. Reporting Bias DEFINITION: Selective suppression or revealing of information, such as past history of sexually transmitted disease. • Often occurs because subjects are reluctant to report an exposure due to attitudes, beliefs, and perceptions • “Wish bias” may occur among subjects who have developed a disease and seek to show that the disease “is not their fault.”

  28. Surveillance Bias • If a population is monitored over a period of time, disease ascertainment may be better in the monitored population than in the general population (“surveillance bias”). • May lead to a biased estimate of the exposure/disease relationship.

  29. Misclassification Bias DEFINITION: Erroneous classification of the exposure or disease status of an individual into a category to which it should not be assigned. Misclassification of the exposure or outcome. Examples: --- Cases incorrectly classified as controls --- Controls incorrectly classified as cases --- Exposed incorrectly classified as non-exposed --- Non-exposed incorrectly classified as exposed
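As a hypothetical illustration (the prevalences and the 80% sensitivity / 90% specificity are invented), non-differential exposure misclassification, where the same error rates apply to cases and controls alike, pulls the observed odds ratio toward the null:

```python
# Sketch: non-differential exposure misclassification biases the OR toward 1.
import numpy as np

rng = np.random.default_rng(5)
n = 50_000
exp_cases = rng.random(n) < 0.40     # true exposure prevalence in cases
exp_ctrls = rng.random(n) < 0.143    # chosen so the true OR is about 4

def misclassify(truth, sens=0.80, spec=0.90):
    # Exposed detected with probability sens; unexposed misread with 1 - spec.
    u = rng.random(truth.size)
    return np.where(truth, u < sens, u < 1 - spec)

def odds_ratio(case_exp, ctrl_exp):
    return (case_exp.sum() * (~ctrl_exp).sum()) / ((~case_exp).sum() * ctrl_exp.sum())

print(f"True OR     = {odds_ratio(exp_cases, exp_ctrls):.2f}")   # ~4
print(f"Observed OR = {odds_ratio(misclassify(exp_cases), misclassify(exp_ctrls)):.2f}")  # ~2.5
```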

  30. Control of Bias. Bias can only be prevented and controlled during the design and conduct of a study • Choice of a study population • Methods of data collection • Sources of case ascertainment and risk factor information (Sever)

  31. Good Study Design Protects Against All Forms of Error
