Designing Case Forms with Validity in Mind

Presentation Transcript


  1. Designing Case Forms with Validity in Mind Louis A. Morris, Ph.D. Barnett International December 3, 2002

  2.–3. “Experience never errs; it is only your judgment that errs in promising itself results [that] are not caused by your experiments” (Leonardo da Vinci)

  4. What is Bias? • Systematic variation from the truth • The opposite of fair; not random variation • Selection Bias • effect of the treatment is confounded with pre-existing differences in the treated and control groups • Confirmation Bias • one tends to notice what confirms one's beliefs and to ignore or undervalue what contradicts them • Demand Characteristics, Observer Bias, Social Desirability Effects, Response Artifacts, Volunteering Effects, Evaluation Apprehension, Hawthorne Effects, Sensitization Effects, … • Question-Asking Bias

  5. Objectives • When do we need to be concerned about question-asking bias? • Why is it important to understand how people interpret: • Words, Questions, Questionnaires? • What are the psychometric properties of questionnaire scales that determine measurement acceptance? • Validity, Reliability, Sensitivity, Responsiveness, and Minimally Important Difference

  6. Potential For Bias Impacts • Drug Approvals & Labeling • Judgmental Responses (Symptoms, Mood/Emotion, Subjective Evaluations) • Patient-Reported Outcomes • Quality of Life, Satisfaction, Productivity, Drug-Specific Outcomes (e.g., bothered by facial hair, ease of use of inhaler) • Advertising & Promotional Claims • Patient/Consumer Studies • DTC promotion: drug benefits, not just clinical effects • OTC Drug Switch Approvals • Comprehension • Actual Use and Effects • Questions – Answers – Information

  7. Outcomes Assessment: Sources and Examples (table) • Clinician-Reported – e.g., global impressions, observation & tests of function • Physiological – e.g., FEV1, HbA1c, tumor size • Caregiver-Reported – e.g., dependency, functional status • Patient-Reported – e.g., functional status, symptoms, HRQL, satisfaction, well-being, treatment adherence, global impression • Evaluation criteria: perceptions, linkages

  8. “Outcomes” claims classification (diagram) • ADL, QALYs, Cost, PROs, Bother, Discomfort, HRQL, Symptoms, Satisfaction, Productivity (Meyer & Burke, 1999)

  9. Can Biased Questions Affect Drug Approval? Adequate and Well-Controlled Studies (21 CFR 314.126) • clear statement of the objectives • the proposed or actual methods of analysis • valid comparison with a control • protocol for the study and report of results should describe the study design precisely • subjects … adequate assurance that they have the disease or condition being studied • method of assigning patients to treatment and control groups minimizes bias and is intended to assure comparability of the groups

  10. Adequate and Well-Controlled Studies (2) • Adequate measures are taken to minimize bias on the part of the subjects, observers, and analysts of the data • methods of assessment of subjects' response are well-defined and reliable • report of the study should describe the results and the analytic methods

  11. Sources of Question-Asking Bias • Individual Words • Word Interpretation • Individual Question • Question Phrasing: Leading, Framing • Question-Asking “Environment” • Placement in Questionnaire, Broader Atmosphere • Series of Questions • Scale Validity • Other Psychometric Qualities

  12. Remember the comic strip with Dagwood? Name: • his wife • his boss • his pet

  13. Question Interpretation • I got in late last night…. • Are you tired? Yes No • Last night I was reading about how chemotherapy causes anemia which has profound physical effects on cancer patients…. • Are you tired? Yes No

  14. Context Dependency • How tall is the Statue of Liberty? • How important is Freedom? • Some concepts are vivid and invariant • Context-independent • Some concepts vary in interpretation based on context • Context-dependent

  15. What provides Context? • Respondents look for “cues” • The question itself • How fast were the cars going when they bumped? collided? crashed? smashed? • Previous Questions • Presumed Intent • Question-asking as a Linguistic Conversation • This is an experiment (Demand Characteristics)

  16. Classify each of these as a vegetable or an animal: Carrot Beagle Daisy

  17. Bias Also Due To: • Memory • Recency Effects (Daisy Question) • Judgment – Decision Making • Decision Heuristics (framing, anchoring and adjustment, etc.) • Would you take a drug where 15% of the people have a serious side effect? • Would you take a drug where 85% of the people do not have a serious side effect?

  18. What is wrong with this Question? • Given all the painful heartburn you suffered before this drug was available, how well did it work compared to the other treatments available? • Focuses on painful heartburn and suffering (leading) • Multipart – what is the question? (cognitive responses) • Even more of a problem for interviewer-based and telephone surveys • Does not specify what other treatments (stimulus under evaluation) • Assumes it worked (social desirability effects) • Should postulate the negative (how well did it work or not work)

  19. Types of Questions/Data (1) What type of question do we ask? It depends on the type of data desired. • Nominal or Categorical • Are you tired? • Ordinal • Rate how tired you feel • Not at all, A little bit, Somewhat, Extremely • Four-point Likert scale

  20. Types of Questions/Data (2) • Interval/Ratio • How tired do you feel (select one): • I have no physical sensations of tiredness • My eyes are closing a little bit • My head is slumping • My whole body is limp • Thurstone Scale • Are some scale forms less biased than other forms?
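
A minimal sketch in Python (hypothetical items; the Thurstone scale values are illustrative, not from a real calibration study) of how the question type constrains what the resulting data can support statistically:

```python
# Nominal/categorical: codes are labels; counts and proportions are
# meaningful, means are not.
tired_yes_no = ["yes", "no", "yes", "yes"]
print("proportion tired:", tired_yes_no.count("yes") / len(tired_yes_no))

# Ordinal (four-point Likert): order is meaningful, but spacing between
# options is not guaranteed, so medians are safer than means.
LIKERT = {"Not at all": 0, "A little bit": 1, "Somewhat": 2, "Extremely": 3}
responses = ["A little bit", "Somewhat", "Not at all", "Extremely"]
scores = sorted(LIKERT[r] for r in responses)
print("median tiredness:", scores[len(scores) // 2])

# Interval-like (Thurstone scaling): each statement carries a calibrated
# scale value, so averaging across respondents is more defensible.
THURSTONE = {"no sensations": 0.0, "eyes closing": 1.6,
             "head slumping": 3.1, "body limp": 4.5}
picked = ["eyes closing", "head slumping"]
print("mean scale value:", sum(THURSTONE[p] for p in picked) / len(picked))
```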

  21. Different Forms, Different Biases • How tired do you feel (select one): • I am not at all tired • My eyes are closing a little bit • I cannot concentrate • I want to get out of this lecture • Vague, multiple interpretations • Physical Issues and Cognitive Issues • Confuses Tiredness with Boredom

  22. Designing Questionnaires: The Funnel Design • If initial questions influence others: • Recall – Open-Ended relies on memory • Unaided: What did the leaflet say or suggest about the drug? • Aided: What did the leaflet say or suggest were the side effects of the drug? • Recognition: Was sleepiness mentioned or not mentioned as a side effect of the drug? • Demographics at end • Invariant interpretation

  23. Question-Asking as an Artifact • Do we randomize or vary the order of questions? • Yes, Comprehension Test – Side Effect Knowledge • Avoid order bias, seek robust results, test knowledge • No, Clinical Trial – Assess Tiredness with Scale • Want invariant interpretation of terms, want “bias” to be the same across administrations • Three stages of an artifact (McGuire): • Ignorance, Coping, Exploitation
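
Where order bias is the concern (the comprehension-test case above), one common tactic is a per-respondent random order. A minimal sketch, with hypothetical items and a seed-by-ID scheme assumed so each respondent's order is reproducible:

```python
import random

questions = [
    "Was sleepiness mentioned or not mentioned as a side effect?",
    "Was nausea mentioned or not mentioned as a side effect?",
    "Was dizziness mentioned or not mentioned as a side effect?",
]

def order_for(respondent_id: int) -> list[str]:
    """Return a reproducible random question order for one respondent."""
    rng = random.Random(respondent_id)  # seeding by ID keeps the order auditable
    shuffled = questions[:]
    rng.shuffle(shuffled)
    return shuffled

print(order_for(42))
```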

  24. Scale Psychometrics • Validity • Reliability • Sensitivity • Responsiveness • Minimally Important Difference

  25. Scale - Tiredness • Measures a complex concept • Fair sampling of items across two dimensions (diagram): amount of tiredness, and cognitive vs. physical aspects

  26. How do we know we are measuring what we want to measure? • Validity - a process, not a characteristic - understanding what is measured • Face Validity - examine items • Content Validity - coverage • Construct Validity - any theory? • Concurrent Validity - positive correlation with an established measure • Discriminative Validity - low or negative correlation with measures of different constructs

  27. FACT-Fatigue Subscale • I feel fatigued – face validity • I feel weak • I feel listless (washed out) • I feel tired • I have trouble starting things because I am tired (cognitive) • I have trouble finishing things because I am tired • I have energy – the opposite; negates yea-saying and halo effects • I need sleep during the day - outcome • I am too tired to eat • I need help doing my usual activities (social) • [Plus others] • Correlates with red blood cell levels - anemia
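
A minimal scoring sketch (illustrative item keys and a 0-4 response range, not the licensed FACIT scoring manual) showing how a reverse-worded item like "I have energy" is flipped onto the same direction as the rest of the subscale:

```python
# True = reverse-keyed: a high response means LESS of the scored construct.
ITEMS = {"fatigued": False, "weak": False, "tired": False, "have_energy": True}
MAX_RESPONSE = 4  # 0-4 response options, as on many FACIT-style instruments

def fatigue_score(responses: dict[str, int]) -> int:
    """Sum item scores, reflecting reverse-keyed items first."""
    total = 0
    for item, reverse in ITEMS.items():
        r = responses[item]
        total += (MAX_RESPONSE - r) if reverse else r
    return total

print(fatigue_score({"fatigued": 3, "weak": 2, "tired": 3, "have_energy": 1}))  # 11
```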

  28. How do we know we are measuring a concept consistently? • Reliability • Within the same scale • inter-item • Over time • test-retest
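
Inter-item consistency is usually summarized with Cronbach's alpha. A minimal sketch on hypothetical data (one row per respondent, one column per item):

```python
import numpy as np

def cronbach_alpha(item_scores: np.ndarray) -> float:
    """alpha = k/(k-1) * (1 - sum of item variances / variance of total)."""
    k = item_scores.shape[1]
    item_vars = item_scores.var(axis=0, ddof=1)
    total_var = item_scores.sum(axis=1).var(ddof=1)
    return (k / (k - 1)) * (1 - item_vars.sum() / total_var)

# Hypothetical: 5 respondents answering 4 tiredness items on a 0-4 scale.
scores = np.array([[3, 3, 2, 3],
                   [1, 0, 1, 1],
                   [4, 4, 3, 4],
                   [2, 2, 2, 1],
                   [0, 1, 0, 0]])
print(f"alpha = {cronbach_alpha(scores):.2f}")
```

Test-retest reliability, by contrast, is simply the correlation of total scores across two administrations.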

  29. How do we know that our measures can pick up differences that actually exist? • Sensitivity • Type of scale items • I am not tired at all • I am so tired I must go to sleep right now • I am having trouble concentrating • I am feeling a little bit sleepy • A scale built from only 2 items (the extremes) misses intermediate differences
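
A small simulation (assumed normal latent tiredness and illustrative item cutpoints) of why items bunched at the extremes blunt sensitivity: the same true group difference shows up much smaller when only extreme items carve up the range:

```python
import numpy as np

rng = np.random.default_rng(0)
control = rng.normal(0.0, 1.0, 1000)   # latent tiredness, untreated group
treated = rng.normal(0.5, 1.0, 1000)   # modestly more tired group

def observed_effect(cutpoints):
    """Cohen's d of the ordinal scores produced by the given item cutpoints."""
    a = np.digitize(control, cutpoints).astype(float)
    b = np.digitize(treated, cutpoints).astype(float)
    pooled_sd = np.sqrt((a.var(ddof=1) + b.var(ddof=1)) / 2)
    return (b.mean() - a.mean()) / pooled_sd

extremes_only = [-2.0, 2.0]      # two items at the ends of the range
graded = [-1.5, -0.5, 0.5, 1.5]  # items spread across the range
print(f"extreme items: d = {observed_effect(extremes_only):.2f}")
print(f"graded items:  d = {observed_effect(graded):.2f}")
```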

  30. How do we know that our measure corresponds to changes in the variable in question? • Responsiveness • Correlate changes in a direct measure of clinical outcome (red blood cell count) with changes on the scale (tiredness).
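
A minimal sketch with hypothetical data: responsiveness is assessed on within-patient changes, not on cross-sectional scores:

```python
import numpy as np

# Per-patient change from baseline (hypothetical values).
rbc_change = np.array([1.2, 0.3, -0.5, 0.8, 0.0, -1.1])       # RBC, g/dL
scale_change = np.array([-6.0, -2.0, 3.0, -4.0, 1.0, 7.0])    # tiredness score

r = np.corrcoef(rbc_change, scale_change)[0, 1]
print(f"change-score correlation: r = {r:.2f}")  # negative: more RBC, less tiredness
```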

  31. How do we know that an observed effect is clinically meaningful? • Minimally Important Difference • Smallest scale difference judged to be meaningful (e.g., where a change in therapy would be warranted) • Effect Size
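
A minimal sketch (hypothetical scale scores and an assumed MID of 3 points) of the final judgment: is the observed difference large enough to matter clinically, not just statistically?

```python
import numpy as np

placebo = np.array([22, 25, 19, 24, 21, 23], dtype=float)
active = np.array([17, 20, 16, 19, 18, 21], dtype=float)  # lower = less tired

diff = placebo.mean() - active.mean()
pooled_sd = np.sqrt((placebo.var(ddof=1) + active.var(ddof=1)) / 2)
print(f"difference = {diff:.1f} points, effect size d = {diff / pooled_sd:.2f}")

MID = 3.0  # assumed minimally important difference from anchor-based studies
print("clinically meaningful" if diff >= MID else "below the MID")
```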

  32. Developing a Scale? Boredom Scale • Conceptual - what is boredom? • Other scales? – literature, experts • Boredom effects, validation methods • Operational - Item Generation/Reduction/Validation • modular, adaptation from general scales • focus groups • initial question design (scale measure type) • ratings: How important is each of these items? • Factor Analysis - how many dimensions? • Validation studies - psychometrics • Clinical Impact (substantiation studies)
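
For the "how many dimensions?" step, one quick first pass on simulated data (the Kaiser eigenvalue-greater-than-1 rule is a heuristic, not a substitute for a full factor analysis):

```python
import numpy as np

rng = np.random.default_rng(1)
latent = rng.normal(size=(300, 2))             # two true boredom dimensions
loadings = rng.uniform(0.5, 1.0, size=(2, 8))  # 8 candidate items
responses = latent @ loadings + rng.normal(scale=0.7, size=(300, 8))

# Eigenvalues of the item correlation matrix, largest first.
eigenvalues = np.linalg.eigvalsh(np.corrcoef(responses, rowvar=False))[::-1]
print("eigenvalues:", np.round(eigenvalues, 2))
print("factors with eigenvalue > 1:", int((eigenvalues > 1).sum()))
```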

  33. Selecting a Scale • Practical Aspects - will people fill it out? • Number, complexity of items • “Involvement” with scale • Bias • Leading questions, social desirability, yea-saying, etc. • Validity in my population? Pilot test

  34. Conclusions • Science is a process • Use observation to obtain the truth • Question-Asking Bias – respondents not responding to question content • Validation – a process of understanding, not of determining the truth • Bias as an Artifact • Cannot rule it out; known/unknown • Seek to understand and ask a fair set of questions • There are always tradeoffs • Understand measurement goals to determine the best approach
