ROLE OF EVALUATION IN POLICY DEVELOPMENT AND IMPLEMENTATION

PRESENTATION BY ARDEN HANDLER, DrPH

April 10, 2002

RELATIONSHIP BETWEEN EVALUATION AND POLICY
  • Multiple program evaluations can lead to the development of policy
  • Programs are often thought of as expressions of policy, so when we do program evaluation we may in fact be evaluating a policy (e.g., Head Start = day care policy for low-income children)
RELATIONSHIP BETWEEN EVALUATION AND POLICY
  • Population-based programs (e.g., Medicaid) are often thought of as a policy; when we are evaluating population-based programs we are usually using program evaluation methods to examine a policy
PROGRAM EVALUATION VERSUS POLICY ANALYSIS
  • Program evaluation uses research designs with explicit designation of comparison groups to determine effectiveness
PROGRAM EVALUATION VERSUS POLICY ANALYSIS
  • Policy analysis uses a variety of different frameworks to answer one or more questions about a policy:
    • HISTORICAL FRAMEWORK
    • VALUATIVE FRAMEWORK
    • FEASIBILITY FRAMEWORK
PROGRAM EVALUATION VERSUS POLICY ANALYSIS
  • Policy analysis often relies on policy/program evaluations
  • The tools of program evaluation can be used to evaluate the effectiveness of policies;
    • However, this is not policy analysis
Purposes of Evaluation/ Evaluation Questions
  • Produce information in order to enhance management decision-making
  • Improve program operations
  • Maximize benefits to clients: to what extent and how well was the policy/program implemented?
Purposes of Evaluation/ Evaluation Questions
  • Assess systematically the impact of programs/policies on the problems they are designed to ameliorate
    • How well did the program/policy work?
    • Was the program worth its costs?
    • What is the impact of the program/policy on the community?
Two Main Types Of Evaluation
  • Process or formative
  • Outcome or summative
Process or Formative Evaluation
  • Did the program/policy meet its process objectives?
  • Was the program/policy implemented as planned?
  • What were the type and volume of services provided?
  • Who was served among the population at risk?
Why Do We Do Process Evaluation?
  • Process evaluation describes the policy/program and the general environment in which it operates, including:
      • Which services are being delivered
      • Who delivers the services
      • Who are the persons served
      • The costs involved
Why Do We Do Process Evaluation?
  • Process evaluation as program monitoring
    • Charts progress towards achievement of objectives
      • Systematically compares data generated by the program with targets set by the program in its objectives
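A minimal sketch of what this monitoring comparison can look like in practice. The objectives and counts below are hypothetical, invented purely for illustration:

```python
# Hypothetical monitoring sketch: compare program-generated counts
# against the targets stated in the program's process objectives.
targets = {"prenatal visits": 1200, "home visits": 400, "clients enrolled": 350}
actuals = {"prenatal visits": 1050, "home visits": 410, "clients enrolled": 290}

for objective, target in targets.items():
    achieved = actuals.get(objective, 0)
    pct = 100 * achieved / target
    print(f"{objective}: {achieved}/{target} ({pct:.0f}% of target)")
```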
Why Do We Do Process Evaluation?
  • Process evaluation
    • Provides feedback to the administrator regarding the program
    • Allows others to replicate the program if program looks attractive
    • Provides info to the outcome evaluation about program implementation and helps explain findings
Outcome or Summative Evaluation
  • Did the program/policy meet its outcome objectives/goals?
  • Did the program/policy make a difference?
Outcome or Summative Evaluation
  • What change occurred in the population participating in or affected by the program/policy?
  • What are the intended and unintended consequences of this program/policy?
    • Requires a comparison group to judge success
Outcome or Summative Evaluation
  • What impact did the program/policy have on the target community?
    • Requires information about coverage
Why Do We Do Outcome Evaluation?
  • We want to know if what we are doing works better than nothing at all
  • We want to know if something new that we are doing works better than what we usually do
Why Do We Do Outcome Evaluation?
  • Which of two or more programs/policies works better?
  • We want to know if we are doing what we are doing efficiently
What Kind of Outcomes Should We Focus on?
  • Outcomes which can clearly be attributed to the program/policy
  • Outcomes which are sensitive to change and intervention
  • Outcomes which are realistic; can the outcomes be achieved in the time frame of the evaluation?
Efficiency Analysis
  • Once outcomes have been selected and measured, an extension of outcome evaluation is efficiency analysis:
    • cost-efficiency
    • cost-effectiveness
    • cost-benefit
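As a rough illustration of how these three differ, here is a small sketch; all of the costs, counts, and assumed savings are hypothetical and not drawn from the presentation:

```python
# Hypothetical efficiency-analysis sketch (all numbers invented).
program_cost = 500_000              # total program cost in dollars
clients_served = 2_000              # cost-efficiency: cost per unit of service
lbw_births_averted = 25             # cost-effectiveness: cost per outcome achieved
savings_per_averted_birth = 60_000  # assumed downstream savings, for cost-benefit

cost_per_client = program_cost / clients_served
cost_per_outcome = program_cost / lbw_births_averted
net_benefit = lbw_births_averted * savings_per_averted_birth - program_cost

print(f"Cost-efficiency:    ${cost_per_client:,.0f} per client served")
print(f"Cost-effectiveness: ${cost_per_outcome:,.0f} per LBW birth averted")
print(f"Cost-benefit:       net benefit of ${net_benefit:,.0f}")
```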
Evaluation Success

Whether an evaluation will demonstrate a positive impact of a policy or program depends on other phases of the planning process as well as on adequate evaluation design and data collection

Evaluation Success

Whether an evaluation will show a positive effect of a policy/program depends on:

  • Adequate Program Theory
  • Adequate Program Implementation
  • Adequate Program Evaluation
Program Theory
  • What is a program’s theory?
    • Plausible model of how program/policy works
    • Demonstrates cause and effect relationships
    • Shows links between a program’s/policy’s inputs, processes and outcomes
Means-Ends Hierarchy (M.Q. Patton)
  • Program theory links the program means to the program ends
  • Theory of Action
Means-Ends Hierarchy (M.Q. Patton)

Constructing a causal chain of events forces us to make explicit the assumptions of the program/policy

  • What series of activities must take place before we can expect that any impact will result?
Theory Failure
  • Evaluations may fail to find a positive impact if program/policy theory is incorrect
    • The program/policy is not causally linked with the hypothesized outcomes (sometimes because the true cause of the problem was not identified)
Theory Failure
  • Evaluation may fail to find a positive impact if program/policy theory is not sufficiently detailed to allow for the development of a program plan adequate to activate the causal chain from intervention to outcomes
Theory Failure
  • Evaluation may fail if program/policy was not targeted for an appropriate population (theory about who will benefit is incorrect)

These three issues are usually under the control of those designing the program/policy

Other Reasons Why Evaluations May Demonstrate No Policy/Program Effect

Program/policy failure

Program/policy Failure
  • Program/policy goals and objectives were not fully specified during the planning process
Other Program Reasons
  • Program/policy was not fully delivered
  • Program/policy delivery did not adhere to the specified protocol
Other Program Reasons
  • Delivery of treatment deteriorated during program implementation
  • Program/policy resources were inadequate (may explain above)
Other Program Reasons
  • Program/policy delivered under prior experimental conditions was not representative of the treatment that can be delivered in practice
    • e.g., translation from a university to a "real" setting, or from a pilot to a full state program
Non-Program Reasons Why Evaluations May Demonstrate No Program Effect
  • Evaluation Design And Plan
      • Are design and methods used appropriate for the questions being asked?
      • Is design free of bias?
      • Is measurement reliable and valid?
Conducting an Outcome Evaluation

How do we choose the appropriate evaluation design to assess system, service, program or policy effectiveness?

Tools for Assessing Effectiveness
  • Multiple paradigms exist for examining system, service, program, and policy effectiveness
  • Each has unique rhetoric and analytic tools which ultimately provide the same answers
Tools for Assessing Effectiveness
  • Epidemiology
    • e.g., Are cases less likely to have had exposure to the program than controls?
  • Health Services Research
    • e.g., Does differential utilization of services by enrollees and non-enrollees lead to differential outcomes?
Tools for Assessing Effectiveness
  • Evaluation Research
    • e.g., Are outcomes for individuals in the program (intervention) different from those in the comparison or control group?
Mix and Match

The evaluator uses a mix and match of methods/paradigms

Depending on:
1. Whether program/service/policy and/or system change covers:
    • entire target population in state/city/county
    • entire target population in several counties/community areas
Depending on:

2. Whether program/service/policy and/or system change includes an evaluation component at initiation

3. Whether adequate resources are available for evaluation

4. Whether it is ethical/possible to manipulate exposure to the intervention

Outcome Evaluation Strategies
  • Questions to consider:

Is service/program/policy and/or intervention population based or individually based?

Outcome Evaluation Strategies
  • Population Based (e.g., Title V, Title X, Medicaid)
      • from the point of view of evaluation, these programs/policies can be considered “universal” since all individuals of a certain eligibility status are entitled to receive services
Outcome Evaluation Strategies
  • Population Based: issues
    • With coverage aimed at entire population, who is the comparison or the unexposed group?
    • What are the differences between eligibles served and not served which may affect outcomes? Between eligibles and ineligibles?
Outcome Evaluation Strategies
  • Population Based: issues
    • How do we determine the extent of program exposure or coverage? (need population-based denominators and quality program data)
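For example, coverage is simply the number of participants served over the eligible population, which is why a population-based denominator is required. A tiny sketch with invented counts:

```python
# Hypothetical coverage calculation: program data supply the numerator,
# a population-based data source (e.g., census or vital records counts)
# supplies the denominator. Both counts below are assumed for illustration.
eligible_population = 18_500   # eligible individuals in the jurisdiction
participants_served = 12_950   # unduplicated clients from program records

coverage = participants_served / eligible_population
print(f"Estimated program coverage: {coverage:.1%}")
```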
Outcome Evaluation Strategies
  • Population Based: issues
    • Which measures to use?
      • Measures are typically derived from population data sets, e.g., Medicaid claims data, surveillance, vital records, census data
Outcome Evaluation Strategies
  • Individually Based
    • e.g., AIDS/sex education program in two schools; smoking cessation program in two county clinics
  • Traditional evaluation strategies can be more readily used; designs are more straightforward
Outcome Evaluation Strategies
  • Questions to Consider:
    • Is the evaluation Prospective or Retrospective?
        • Retrospective design limits options for measurement and for selection of comparison groups
        • Prospective design requires evaluation resources committed up front
Outcome Evaluation Strategies
  • Questions to consider
    • Which design to choose?
      • Experimental, quasi-experimental, case-control, retrospective cohort?
    • What biases are inherent in one design versus another?
    • What are the trade-offs and costs?
Outcome Evaluation Strategies
  • Questions to consider
    • Measurement
      • What will be measured?
      • Who will be measured?
      • How?
Evaluation Designs
  • Experimental Designs
    • Clinical Trials in Epidemiology
  • Quasi-Experimental Designs
  • Observational Epidemiologic Designs
Evaluation Designs
  • Experimental
    • Use of Random Assignment to obtain treatment/intervention and comparison groups
      • Allows the evaluator to select samples that are comparable within limits of sampling error
Evaluation Designs
  • Experimental
    • Use of Random Assignment
      • Potential confounders are theoretically equivalent between groups; however, with small numbers and wide variability on certain variables, sometimes non-equivalence occurs
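A minimal sketch of random assignment with a simple balance check on one measured covariate. The data are simulated; with only 20 participants per arm, the group means can differ by chance even under randomization:

```python
# Sketch: randomly assign a small sample and check covariate balance.
import random

random.seed(1)
participants = [{"id": i, "age": random.randint(18, 45)} for i in range(40)]
random.shuffle(participants)
treatment, control = participants[:20], participants[20:]

def mean_age(group):
    return sum(p["age"] for p in group) / len(group)

# With small numbers, some imbalance on age is possible despite randomization.
print(f"Mean age, treatment: {mean_age(treatment):.1f}")
print(f"Mean age, control:   {mean_age(control):.1f}")
```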
Evaluation Designs
  • Experimental
    • Use of Random Assignment
      • Does not guarantee that initial comparability between groups will be maintained; need to test if attrition from groups is differential
Evaluation Designs
  • Experimental
    • Use of Random Assignment
      • Pretest-Posttest Control Group Design (Workhorse)
      • Posttest-Only Control Group Design
      • Solomon Four Group Design
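One common way to summarize results from the pretest-posttest control group design listed above is a difference-in-differences style comparison of each group's pre-to-post change. The outcome means below are hypothetical, for illustration only:

```python
# Hypothetical pretest-posttest control group summary (numbers invented).
# Outcome: mean knowledge score before and after the intervention.
intervention = {"pre": 52.0, "post": 68.0}
control      = {"pre": 51.0, "post": 55.0}

change_intervention = intervention["post"] - intervention["pre"]
change_control = control["post"] - control["pre"]
difference_in_differences = change_intervention - change_control

print(f"Change in intervention group:  {change_intervention:+.1f}")
print(f"Change in control group:       {change_control:+.1f}")
print(f"Program effect estimate (DiD): {difference_in_differences:+.1f}")
```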
Evaluation Designs
  • Experimental
    • Random Assignment Is Not Easily Achieved In The Real World
Evaluation Designs
  • How To Overcome Objections To Random Assignment:
    • Two new programs strategy
    • Delayed program strategy
    • Borderline strategy
    • When a lottery is already expected for selection
    • When demand outstrips supply; use of waiting lists
Evaluation Designs
  • Alternatives To Random Assignment
    • Non-Experimental Designs
Evaluation Designs
  • One Group Posttest Only Design (one shot case study)
  • One Group Pretest Posttest Only Design
  • Posttest Only Design with Non-equivalent groups (static group comparison)
Evaluation Designs
  • Alternatives To Random Assignment
      • Quasi-Experimental
      • Using non-equivalent comparison groups (but exposure is “manipulated”)
        • Note: Often constructed after the fact/ manipulation of exposure is retrospective
Evaluation Designs
  • Quasi-Experimental
      • Nonequivalent control group design with pretest and posttest (plus modifications on this design)
      • Time Series
      • Multiple Time Series
      • Institutional Cycle Design
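For the time-series designs listed above, one common analytic sketch is segmented (interrupted time series) regression: fit a pre-program level and trend, then test for a shift in level and trend after the program starts. The monthly data below are simulated, not from the presentation:

```python
# Sketch: interrupted time series via segmented regression (simulated data).
import numpy as np

np.random.seed(0)
months = np.arange(24)
policy_start = 12                       # policy takes effect at month 12
post = (months >= policy_start).astype(float)
# Simulated outcome: baseline trend plus a drop of 8 units after the policy.
outcome = 100 + 0.5 * months - 8 * post + np.random.normal(0, 2, size=24)

# Design matrix: intercept, baseline trend, post-policy level change,
# and change in trend after the policy.
X = np.column_stack([
    np.ones_like(months, dtype=float),
    months,
    post,
    post * (months - policy_start),
])
coef, *_ = np.linalg.lstsq(X, outcome, rcond=None)
print(f"Estimated level change at policy start: {coef[2]:.1f}")
print(f"Estimated change in trend after policy: {coef[3]:.2f}")
```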
Evaluation Designs

Observational Epidemiologic Designs:

Historically, these designs were reserved exclusively for examining risk factors (observed exposures) and outcomes

Evaluation Designs
  • Epidemiologic Designs
    • Case-Control
      • Sampling from outcome (disease)
    • Cohort
      • Sampling from exposure
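As a reminder of how the two designs summarize association differently, here is a sketch computing an odds ratio (the case-control summary) and a risk ratio (the cohort summary) from the same hypothetical 2x2 table of program exposure and outcome; all counts are invented:

```python
# Hypothetical 2x2 table (counts invented):
#                     outcome present   outcome absent
# exposed (program)        a = 30           b = 170
# unexposed                c = 60           d = 140
a, b, c, d = 30, 170, 60, 140

odds_ratio = (a * d) / (b * c)             # case-control style summary
risk_exposed = a / (a + b)
risk_unexposed = c / (c + d)
risk_ratio = risk_exposed / risk_unexposed  # cohort style summary

print(f"Odds ratio: {odds_ratio:.2f}")
print(f"Risk ratio: {risk_ratio:.2f}")
```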
Evaluation Designs
  • Epidemiologic Designs
    • Cross-Sectional
      • Sampling from the entire population (same point in time)
    • Ecologic
      • Exposure and outcome cannot be linked at the individual level
Evaluation Designs
  • When using epidemiologic designs for program evaluation, need to reconsider what distinguishes these designs from experimental and quasi-experimental designs
Evaluation Designs

Manipulation of Exposure

  • When using observational epidemiologic designs for program evaluation, we de facto accept that there has been manipulation of exposure (the program/policy)
Evaluation Designs

Epidemiologic studies involve a measure of exposure at one point in time and a measure of outcome at one point in time

Evaluation Designs

Experimental and quasi-experimental studies can potentially include three measures:

  • a measure of exposure to the intervention at one point in time
  • a measure of the outcome at two points in time (pretest and posttest)
Evaluation Designs
  • Experimental/quasi-experimental designs ask:
    • Does the intervention change the outcome of the individuals being studied?
Evaluation Designs
  • Epidemiologic designs ask:
    • Does the intervention have an impact on whether or not an outcome occurs?
Evaluation Designs

Example #1

  • WIC food purchasing workshop (outcome is food purchase choices) versus WIC breastfeeding workshop for first-time pregnant women (outcome is breastfeeding in this group of women)
Evaluation Designs

Example #2

  • Impact of a teenage pregnancy prevention program on knowledge, attitudes and behavior versus on pregnancy rates
Evaluation Designs

When studying a health status outcome, we can use community data or data from another group of individuals as a type of baseline

Evaluation Designs
  • e.g., can compare the LBW rates in the population pre- and post-introduction of case management, or can compare the LBW rates in two communities
Evaluation Designs

Summary

Epidemiologic designs are used when baseline/pretest measures are not theoretically possible

Evaluation Designs

However, methods to adjust for possible differences between groups on factors other than the outcome make epidemiologic observational designs very robust

Evaluation Designs

Summary

  • Most designs in the real world are hybrids
  • The major goal is to construct a design which is free of bias
Comparison Group Selection
  • Individual Based Program
    • Individuals who drop out of treatment or program
    • Individuals getting standard program (usual care)
Comparison Group Selection
  • Individual Based Program
    • Individuals attending classes in another school, attending a clinic or other program in another health department or community agency
    • Individuals on waiting lists for the program
Comparison Group Selection

Population Based Program/Policy

  • More difficult to find a comparison group if the entire eligible population is covered
  • If possible, find those who are eligible but did not access the program, although selection bias is likely to be operative
Comparison Group Selection

Population Based Program/Policy

  • Can examine outcomes for the population pre and post program/policy implementation; at both time points can compare to another population
    • e.g., the uninsured, those on Medicaid
Comparison Group Selection

Population Based Program/Policy

  • Compare two risk ratios
  • Can use a priori hypotheses about how these should change after the intervention
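A sketch of this pre/post comparison of risk ratios for a population-based program, using hypothetical low-birth-weight counts for the program population and a comparison population (for example, another payer group, as suggested in the slides above); every number is invented:

```python
# Hypothetical pre/post comparison of risk ratios (all counts invented).
def risk(events, population):
    return events / population

# Low-birth-weight births per live births, program vs. comparison population.
pre  = {"program": risk(900, 10_000), "comparison": risk(700, 10_000)}
post = {"program": risk(780, 10_000), "comparison": risk(690, 10_000)}

rr_pre = pre["program"] / pre["comparison"]
rr_post = post["program"] / post["comparison"]

# An a priori hypothesis might be that the program-to-comparison risk ratio
# moves toward 1.0 after the program/policy is implemented.
print(f"Risk ratio before implementation: {rr_pre:.2f}")
print(f"Risk ratio after implementation:  {rr_post:.2f}")
```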
Measurement Failure
  • Flawed evaluation design
  • Poor validity and reliability of measurement
  • Biased/unreliable data collection procedures (including sampling)
Summary- Evaluation Designs

Evaluations most often benefit from a multi-method and multi-source data approach

Who Uses Program Evaluation?
  • Program Managers/Administrators
  • Program Planners
  • Program Funders (Private, Public)
  • Legislators and Other Policy-makers
To What End?
  • To develop new programs/policies
  • To refine programs/policies
  • To terminate programs/policies
  • To conduct policy analysis
Summary

However, the ability to translate evaluation findings into good programs and policy depends not only on quality data but also on political will
