Evaluating Health Communications Programs
Luann D'Ambrosio, MEd
Associate Director, NWCPHP
Clinical Instructor, Health Services, UW
Measuring the Effects of Programs
NYC soda ban would lead customers to consume more sugary drinks, study suggests
http://www.cbsnews.com/8301-204_162-57579172/nyc-soda-ban-would-lead-customers-to-consume-more-sugary-drinks-study-suggests/
Interesting Example of…
• Potential for unintended intervention effects
• Importance of study design
Simulation study with 100 undergraduates:
• Exposed to different menu choices (classic, bundle, small only)
• Asked what they "would" purchase
• No mention of bans/restrictions on soda size
Wilson BM et al. PLOS One 2013;8(4):e61081
Learning Objectives
• Understand the basic components of program evaluation.
• Understand the study designs and methods useful in program evaluation.
• Understand the concepts of measurement validity and reliability.
What is Program Evaluation?
"a process that attempts to determine as systematically and objectively as possible the relevance, effectiveness, and impact of activities"
A Dictionary of Epidemiology, 2008
[Diagram] Decision-making sits at the intersection of:
• Best available research evidence
• Resources, including practitioner expertise
• Population characteristics, needs, values, and preferences
all within the environment and organizational context.
Evaluation Question(s)
"What treatment for what population delivered by whom under what conditions for what outcome is most effective, and how did it come about?"
Paul G, as cited in Glanz et al. (eds.), Health Behavior and Health Education (p. 493). 2008: Jossey-Bass; San Francisco, CA.
The Big Questions…
Why evaluate health communications programs? Why not?
When should you start thinking about evaluation?
Do You Have…
• A research/evaluation person on staff?
• Time and other resources?
• Staff to assist?
• Partners with these resources?
From Mattessich, 2003
Standards
• Utility
• Accuracy
• Propriety
• Feasibility
Engaging Stakeholders
• Buy-in: the value of evaluation activities; the procedures you will use
• Permission: data collection & judgment
Do not ask forgiveness later!
Potential Stakeholders
Who are the stakeholders for your programs?
What are some of their key questions/outcomes?
Health Communications Programs can be Evaluated If…
There are clear, measurable intended effects:
• Program delivery (process)
• Short-term outcomes (impact)
• Long-term outcomes (outcome)
Health Promotion Programs can be Evaluated If…
There are specific indicators of program success:
• Program delivery/participation/uptake
• Health behavior change
• Health change
Develop SMART Objectives
• Specific: Concrete, detailed, and well defined, so that you know where you are going and what to expect when you arrive
• Measurable: Numbers and quantities provide means of measurement and comparison
• Achievable: Feasible and easy to put into action
• Realistic: Considers constraints such as resources, personnel, cost, and time frame
• Time-Bound: A time frame helps to set boundaries around the objective
Who Defines Program Success?
Many definitions are possible for any one program, and evaluators need to know all of them! Often you must prioritize the questions to be answered.
Program Evaluation Designs: Evidence Quality
Define the Population
Who?
• Community
• Organization
• Individuals
• Mix
How many?
• The number you want to serve
• The number you can reach with your resources
• The number needed for reliable results (see the sketch below)
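A minimal sketch of how "the number needed for reliable results" might be estimated for a two-group comparison. The effect size, alpha, and power below are illustrative assumptions, not figures from this presentation.

```python
# Sample size per group for detecting a standardized effect size d,
# using the standard normal-approximation formula:
#   n per group = 2 * (z_{1-alpha/2} + z_{1-beta})^2 / d^2
import math
from statistics import NormalDist

def n_per_group(effect_size: float, alpha: float = 0.05, power: float = 0.80) -> int:
    z_alpha = NormalDist().inv_cdf(1 - alpha / 2)  # two-sided test
    z_beta = NormalDist().inv_cdf(power)
    return math.ceil(2 * (z_alpha + z_beta) ** 2 / effect_size ** 2)

# Illustrative assumption: a small-to-medium effect (Cohen's d = 0.3)
print(n_per_group(0.3))  # -> 175 participants per group
```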
Evaluation Phases/Types
• Needs/resources assessment
• Formative research
• Process evaluation
• Impact evaluation
• Outcome evaluation
Needs/Resources Assessment
Phase: Program design or selection
Critical Questions: What are the health needs of the community? What resources are available to meet the needs?
Example Strategies: Key informant interviews; secondary analyses
Formative Research
Phase: Program design, and on an as-needed basis
Critical Questions: What form should the intervention take? How can the intervention be improved?
Example Strategies: Key informant interviews; focus groups
Evaluation Types
Process: Program (delivery, reach, awareness, satisfaction)
Impact: Behavior/cognition (knowledge gain, attitude change, behavior change, skill development)
Outcome: Health (mortality, morbidity, disability, quality of life)
Adapted from Green et al., 1980
Process Evaluation
Phase: Continuous
Critical Questions: Is the intervention being implemented as planned? Are targets aware and participating?
Example Strategies: Observe & document intervention activities; survey the target audience
Key Process/Fidelity Measures
Target population: receipt; fit (relevance, satisfaction)
• Common methods: tracking participation; surveys
Interventionists: training; delivery
• Common methods: observation; tracking forms
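A sketch of how tracking-form data might be rolled up into simple process measures (reach, dose delivered, satisfaction). The record layout and numbers are hypothetical.

```python
# Summarize participation-tracking records into process measures.
records = [
    {"attended": True,  "sessions": 4, "satisfaction": 5},
    {"attended": True,  "sessions": 2, "satisfaction": 3},
    {"attended": False, "sessions": 0, "satisfaction": None},
]
target_size = 50  # assumed size of the intended audience

participants = [r for r in records if r["attended"]]
reach = len(participants) / target_size
mean_dose = sum(r["sessions"] for r in participants) / len(participants)
mean_satisfaction = sum(r["satisfaction"] for r in participants) / len(participants)

print(f"Reach: {reach:.0%}; mean sessions: {mean_dose:.1f}; "
      f"mean satisfaction: {mean_satisfaction:.1f}")
```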
Impact Evaluation
Phase: Transition point and/or end of intervention
Critical Questions: Were program goals met? Did health behaviors change? Were there any unintended effects?
Example Strategies: Track policy/practice changes; record review; surveys; records of adverse events
Research Design Issues
Attributing causality vs. practicality:
• Randomized controlled trials
• Quasi-experimental designs
• Non-experimental designs
Example Randomized Control Trial Design
[Flow diagram] Population is screened for eligibility (ineligible people are excluded); eligible people participate or decline; participants are randomized to an intervention group or a no-intervention group; outcome(s) are measured in each group.
Randomized Controlled Trials
The gold standard: maximizes internal validity (does this intervention work?)
Increasing controversy: challenges in applied settings; limited external validity (would this intervention work anyplace else?)
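A minimal sketch of an RCT's core logic: random assignment, then a test of the group difference. The outcome data are simulated for illustration; in a real trial they would come from follow-up measurement.

```python
# Randomize participants, then run a simple permutation test of the
# difference in mean outcomes between control and intervention.
import random

random.seed(0)
people = list(range(40))
random.shuffle(people)            # the randomization step
treated = set(people[:20])

# Simulated outcome (e.g., sugary drinks per week); treatment shifts it down.
outcome = {p: random.gauss(10, 2) - (1.5 if p in treated else 0) for p in people}

def diff(assignment):
    """Mean(control) minus mean(intervention) for a given treated set."""
    t = [outcome[p] for p in assignment]
    c = [outcome[p] for p in outcome if p not in assignment]
    return sum(c) / len(c) - sum(t) / len(t)

observed = diff(treated)
extreme = sum(diff(set(random.sample(people, 20))) >= observed
              for _ in range(5000))
print(f"Observed difference: {observed:.2f}; one-sided p ~ {extreme / 5000:.3f}")
```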
Non-experimental Design Example: Pre-test / Post-test, No Control Group
Employees & visitors at JHMC, measured before and after a new smoke-free policy. Measures at each time point: cigarettes smoked per day; cigarette remnant counts per day; nicotine concentrations.
Stillman et al. JAMA 1990;264:1565-1569
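A sketch of the pre-test / post-test comparison this design supports. The counts are invented placeholders, not data from the Stillman study.

```python
# Before-after comparison with no control group (hypothetical counts).
before = [34, 41, 29, 38, 36]  # daily cigarette remnant counts, pre-policy
after = [12, 15, 9, 14, 11]    # daily counts, post-policy

mean_before = sum(before) / len(before)
mean_after = sum(after) / len(after)
print(f"Mean remnants/day: {mean_before:.1f} -> {mean_after:.1f} "
      f"({mean_after - mean_before:+.1f})")
# Caveat (next slide): without a control group, threats to internal
# validity such as history can also explain the change.
```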
Non-experimental Designs
Most common: cross-sectional; before-after
Relatively few practical challenges, but "least suitable"
Threats to internal validity: selection, history, maturation
Quasi-experimental Designs
• Before-after with comparison group
• Addresses some threats to internal validity
• A compromise between practicality and external validity (see the sketch below)
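A sketch of the before-after-with-comparison-group logic, summarized as a difference-in-differences estimate. All values are invented placeholders.

```python
# Difference-in-differences (DiD) from group means (hypothetical data).
means = {
    ("intervention", "before"): 12.0,
    ("intervention", "after"): 9.0,
    ("comparison", "before"): 11.5,
    ("comparison", "after"): 11.0,
}

change_intervention = means[("intervention", "after")] - means[("intervention", "before")]
change_comparison = means[("comparison", "after")] - means[("comparison", "before")]
did = change_intervention - change_comparison  # removes the shared secular trend

print(f"Intervention change: {change_intervention:+.1f}")
print(f"Comparison change: {change_comparison:+.1f}")
print(f"DiD estimate: {did:+.1f}")
```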
Think about your project/program:
• Which design would you like to use?
• Which design is most feasible for you to use?
Outcome Evaluation
Phase: End of intervention, and/or follow-up
Critical Questions: Did the target population's health or related conditions improve?
Example Measures: Mortality; morbidity; health care costs
Measurement Issues
Qualitative & quantitative approaches: rich data from a few vs. simpler data from many. What's the key question?
Self-report or not? Is the behavior socially desirable? Are other approaches feasible?
• Observation
• Records
• Structure and policies
Measurement Issues
Reliability: Does the measure reflect the true score, or error?
Validity: Does it measure what you think it does?
All valid measures are reliable, but not all reliable measures are valid!
What is validity? Is the measure or design measuring exactly what is intended?
What is reliability? Is the measurement consistent?
[Diagram] Consistency: measuring the same object or phenomenon on Date #1 (2/20/2009), Date #2 (2/21/2009), and Date #3 (2/22/2009) should yield the same result, and Observer 1 and Observer 2 should produce the same measurement.
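A sketch of the two reliability checks the diagram suggests: test-retest consistency across dates and inter-rater agreement across observers. All scores are invented.

```python
# Test-retest and inter-rater reliability (hypothetical scores).
from statistics import correlation  # Python 3.10+

# Test-retest: same instrument, same people, two dates.
day1 = [4, 7, 5, 8, 6, 3]
day2 = [5, 7, 4, 8, 6, 4]
print(f"Test-retest r = {correlation(day1, day2):.2f}")

# Inter-rater: two observers rate the same events; Cohen's kappa
# corrects raw agreement for agreement expected by chance.
rater1 = ["yes", "yes", "no", "no", "yes", "no"]
rater2 = ["yes", "no", "no", "no", "yes", "no"]
n = len(rater1)
observed = sum(a == b for a, b in zip(rater1, rater2)) / n
expected = sum((rater1.count(c) / n) * (rater2.count(c) / n) for c in {"yes", "no"})
kappa = (observed - expected) / (1 - expected)
print(f"Percent agreement = {observed:.0%}; Cohen's kappa = {kappa:.2f}")
```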
Validity & Reliability Possibilities
[Diagram of four target patterns]
• Reliable, but not valid
• Low validity, low reliability
• Not reliable, not valid
• Both reliable and valid
Experiment-Resources.com
Finding Measures
• The literature (or contacting researchers) may show you accepted methods and measures.
• Check out existing tools like BRFSS, but beware of changing modes.
• Evaluation instruments often need community vetting.
• Participatory methods may prevent use of existing instruments/questions.
Think about your project/program: describe 2 ways you could (or will) measure your main outcome.