
Presentation Transcript


1. HR: Validity and Reliability. Threats to validity / Sources of error in design and methods. Tips, tools, and rules to understand validity and reliability issues. Dr Otis L Stanley

2. Valid versus Reliable. A shooting example: consider a shooter who is practicing.
• If, over multiple shots, he is all over the target, his shooting is neither valid (accurate) nor reliable.
• If he has a tight pattern but his shots land to the lower left of center, he is shooting reliably but not validly (accurately). Note: cases 1 and 2 represent different problems (the shooter or gun versus the sights).
• If his pattern is tight and on center, his shooting is both valid and reliable; he is ready to go to the field.
A numeric sketch of the analogy follows.
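As a rough illustration (not from the original slides), the analogy maps onto bias and spread: validity corresponds to low bias (the group is centered on the bullseye) and reliability to low spread (the group is tight). The shot coordinates below are invented:

```python
# A rough numeric sketch of the shooter analogy (coordinates invented):
# validity ~ low bias (group centered on the target), reliability ~ low
# spread (tight grouping).
import math

def bias_and_spread(shots, target=(0.0, 0.0)):
    """Return (bias, spread): distance from the group's center to the
    target, and mean distance of the shots from their own center."""
    n = len(shots)
    center = (sum(x for x, _ in shots) / n, sum(y for _, y in shots) / n)
    bias = math.dist(center, target)
    spread = sum(math.dist(s, center) for s in shots) / n
    return bias, spread

# Tight pattern to the lower left of center: reliable but not valid.
shots = [(-2.1, -1.9), (-2.0, -2.1), (-1.9, -2.0), (-2.0, -1.9)]
bias, spread = bias_and_spread(shots)
print(f"bias = {bias:.2f} (validity problem), spread = {spread:.2f} (reliable)")
```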

3. Validity and Reliability
• The concepts of valid and reliable have the same meaning across the various research disciplines, and for qualitative and quantitative studies, but approaches, measures, and terms are sometimes different.
• Consistency in approach and appropriateness of conclusions often start with the question asked and the ways it can be answered: answers to a value question may not be easily coded or quantitatively analyzed; an opinion question may have no objective right answer yet be quantifiable; a factual question may not be readily or exactly measured.
• As used in health research, validity means measuring what we intend to measure, and reliability means doing it consistently. Appropriate inferences and conclusions depend on valid and reliable results.

4. When Validity, When Reliability?
• Objective measures, questions, or tests are typically examined for validity concerns: is it a good measure, question, or test for what is being measured, questioned, or tested?
• Subjective measures, questions, or tests are typically examined for reliability concerns: is the measure stable, repeatable, consistent, and dependable over time or wording?
• But note: an objective measure may also be unreliable, and a subjective measure can be invalid.

5. Another View: Classical Measurement Theory

  6. Considered from Measurement Theory

7. Validity: getting the right answer.

8. Conceptions of Validity
• In most health research, validity is usually considered as "threats to validity"; bias, confounding, and interaction are how such threats are classified.
• Validity can also be considered as external (generalizability) or internal (to the study).
• Social research often classifies validity as:
  • Theory-related validity: face validity, content validity, construct validity
  • Criterion-related validity: concurrent validity, predictive validity
• Much of health research is less theory bound, but understanding these types of validity can improve the quality of survey items/indicators.

9. Theory-related Validity
• Face validity: does the measuring instrument ask the intended question or test for the correct indicator? A panel of experts may judge whether the measure does.
• Content validity (observable): does the measure appropriately and thoroughly measure what is intended? For example, does a PPD or Tine skin test accurately test for tuberculosis?
• Construct validity (unobservable): do the assumed measures for an indicator actually measure the construct? No single objective measure fits the concept/construct, so does the tool actually measure the concept of interest? Factor analysis was developed to see whether component items fit together for the concept (e.g., stress, health).

10. Criterion-related Validity
• Concurrent validity: measure two variables at the same point in time and correlate them, to demonstrate that measure 1 is measuring the same thing as measure 2. Example: GRE score and GPA.
• Predictive validity: measure two variables, one now and one in the future, and correlate them, to demonstrate that measure 1 predicts measure 2. Example: serum cholesterol and later heart disease.
A correlation sketch follows.
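A minimal sketch of the correlation step these checks describe, using scipy.stats.pearsonr; the GRE and GPA values are invented for illustration:

```python
# Correlate a new measure against an established criterion measured at the
# same time (concurrent validity). Values below are invented.
from scipy.stats import pearsonr

gre = [310, 325, 300, 335, 315, 290, 320]  # measure 1 (new measure)
gpa = [3.2, 3.7, 3.0, 3.9, 3.4, 2.8, 3.5]  # measure 2 (criterion)

r, p = pearsonr(gre, gpa)
print(f"r = {r:.2f}, p = {p:.3f}")  # a strong r supports concurrent validity
```

For predictive validity the code is the same; only the timing differs, with measure 2 collected later (e.g., heart disease outcomes years after the cholesterol measurement).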

11. Other Considerations of Validity
• Design validity: does the research design allow the investigator to honestly answer the research question? (Threats to internal and external validity.)
• Test validity: does the test (or instrument) measure what it is supposed to measure? (For content validity, a sensitivity and specificity question.)

12. Threats to Validity
Internal validity:
• Bias potential: selection bias (sampling and group assignment); information bias (differential ascertainment/misclassification).
• Covariates as confounders: additional variables that confuse the primary relationship between the major study variables.
• Interaction / effect modification: synergy or antagonism.
External validity:
• Generalizability: does the study situation reflect that of the target population? (Sampling bias.)
• Almost always an issue of a representative sample.
• Homogeneous subjects can increase precision but can decrease external validity.

13. Measures of Test Validity: Sensitivity, Specificity, and Predictive Values. When a gold standard exists, using the standard 2x2 table (a = true positives, b = false positives, c = false negatives, d = true negatives):
• Sensitivity: the proportion of those with disease that a positive test captures, a/(a+c).
• Specificity: the proportion of those without disease that a negative test captures, d/(b+d).
• Positive predictive value: the proportion with disease in the positive-test group, a/(a+b).
• Negative predictive value: the proportion with no disease in the negative-test group, d/(c+d).
A computation sketch follows.
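A minimal computation sketch using the a, b, c, d cells defined above; the counts are hypothetical:

```python
# Compute the four test-validity measures from 2x2 table counts
# (a = test+/disease+, b = test+/disease-, c = test-/disease+,
#  d = test-/disease-). Counts below are invented.
def test_validity(a, b, c, d):
    return {
        "sensitivity": a / (a + c),  # disease present and test positive
        "specificity": d / (b + d),  # disease absent and test negative
        "PPV": a / (a + b),          # test positive and disease present
        "NPV": d / (c + d),          # test negative and disease absent
    }

for name, value in test_validity(a=90, b=40, c=10, d=860).items():
    print(f"{name}: {value:.2f}")
```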

14. A Few Validity Check Tips
• External validity: use representative, random samples.
• Internal validity:
  • Selection bias: treat selection and group assignment alike.
  • Information bias: "blind" researchers; keep ascertainment alike across groups.
  • Confounding: in observational studies, capture information on potential confounders/covariates for control in the analysis, or match subjects, or stratify into groups by confounder (see the stratification sketch after this list). Confounders are not eliminated, just controlled.
  • Interaction: consider synergy and antagonism between variables.
• Design type: purposeful (qualitative) designs are prone to various biases; case-control studies are prone to selection and information (recall) biases; clinical trials (experiments) are prone to external validity issues. Bias can occur in all designs.
• Content and construct validity: how good is my instrument, tool, or measure?
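A minimal sketch of the stratification tip with invented counts, using the Mantel-Haenszel pooled risk ratio (a standard adjustment method, not named on the slide) to control a confounder that distorts the crude risk ratio:

```python
# Invented counts: within each stratum the exposed and unexposed risks are
# equal (stratum RR = 1.0), but exposure is concentrated in the low-risk
# stratum, so the crude RR is misleadingly below 1.
strata = [
    # (exposed cases, exposed total, unexposed cases, unexposed total)
    (8, 200, 2, 50),     # low-risk stratum, mostly exposed
    (30, 100, 60, 200),  # high-risk stratum, mostly unexposed
]

risk_exposed = sum(a for a, n1, c, n0 in strata) / sum(n1 for a, n1, c, n0 in strata)
risk_unexposed = sum(c for a, n1, c, n0 in strata) / sum(n0 for a, n1, c, n0 in strata)
print(f"crude RR: {risk_exposed / risk_unexposed:.2f}")  # ~0.51, confounded

# Mantel-Haenszel risk ratio pooled across confounder strata.
num = sum(a * n0 / (n1 + n0) for a, n1, c, n0 in strata)
den = sum(c * n1 / (n1 + n0) for a, n1, c, n0 in strata)
print(f"adjusted (MH) RR: {num / den:.2f}")  # 1.00 after stratification
```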

15. Reliability: getting consistent results.

16. Reliability
• Reliability considers the repeatability of a thing (a measure or a question).
• Early electric thermometers were not reliable: lots of "slop" (random error) in the readings. Modern digital thermometers are very reliable (precise), but they must be calibrated (a validity issue: systematic error).
• Two questions: "Do you favor protected sex?" versus "Do you use a rubber (i.e., condom) every time you have sex (i.e., intercourse)?" One will get more reliable answers, but the accuracy of the responses remains unknown unless you can validate them.

17. Instrument Reliability. Can you trust the data? Consider:
• Stability: change over time
• Consistency: agreement among items
• Rater reliability: rater agreement

18. Reliability Checking Methods
• Test-retest reliability (for stability): Pearson product-moment correlations.
• Cronbach's alpha (for consistency): taken at one point in time; measures inter-item correlations, or agreement.
• Rater reliability (rater agreement):
  • Inter-rater reliability: Cohen's kappa
  • Intra-rater reliability: Scott's pi
A sketch of two of these checks follows.
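A minimal sketch of two of these checks with invented data: Cronbach's alpha computed from its standard formula, and Cohen's kappa via scikit-learn's cohen_kappa_score (the library choice is an assumption, not from the slides):

```python
# Invented data: a 5-respondent x 4-item response matrix, and two raters'
# categorical judgments on the same seven subjects.
import numpy as np
from sklearn.metrics import cohen_kappa_score

def cronbach_alpha(items):
    """items: rows = respondents, columns = scale items.
    alpha = k/(k-1) * (1 - sum of item variances / variance of totals)."""
    items = np.asarray(items, dtype=float)
    k = items.shape[1]
    item_vars = items.var(axis=0, ddof=1).sum()
    total_var = items.sum(axis=1).var(ddof=1)
    return (k / (k - 1)) * (1 - item_vars / total_var)

responses = np.array([[4, 5, 4, 4],
                      [2, 2, 3, 2],
                      [5, 4, 5, 5],
                      [3, 3, 2, 3],
                      [4, 4, 4, 5]])
print(f"Cronbach's alpha: {cronbach_alpha(responses):.2f}")

rater1 = [1, 0, 1, 1, 0, 1, 0]
rater2 = [1, 0, 1, 0, 0, 1, 1]
print(f"Cohen's kappa: {cohen_kappa_score(rater1, rater2):.2f}")
```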

19. Reliability in Research
• Reliability should be checked before a tool (like a questionnaire) is fielded for a major study. This is often done for opinion surveys.
• Since an opinion or subjective value is being asked, accuracy (validity) is not in question.
• An objective question should still be reliable, but we must also consider its validity (accuracy). A very reliable but inaccurate response (like a mis-calibrated thermometer) provides consistently inaccurate information (i.e., bias).

20. Some Additional SC Lectures on the Topic
• Validity and Reliability - Wangsuphachart
• Potential Errors - Shawky
• Variation: role of error, bias and confounding - Bhopal
