ROLE OF EVALUATION IN POLICY DEVELOPMENT AND IMPLEMENTATION

PRESENTATION BY ARDEN HANDLER, DrPH

April 10, 2002


RELATIONSHIP BETWEEN EVALUATION AND POLICY

  • Multiple program evaluations can lead to the development of policy

  • Programs are often thought of as the expressions of policy, so when we do program evaluation we may in fact be evaluating a policy (e.g., Head Start = day-care policy for low-income children)


RELATIONSHIP BETWEEN EVALUATION AND POLICY

  • Population-based programs (e.g., Medicaid) are often thought of as policies; when we evaluate population-based programs, we are usually using program evaluation methods to examine a policy


PROGRAM EVALUATION VERSUS POLICY ANALYSIS

  • Program evaluation uses research designs with explicit designation of comparison groups to determine effectiveness


PROGRAM EVALUATION VERSUS POLICY ANALYSIS

  • Policy analysis uses a variety of different frameworks to answer one or more questions about a policy:

    • HISTORICAL FRAMEWORK

    • VALUATIVE FRAMEWORK

    • FEASIBILITY FRAMEWORK


PROGRAM EVALUATION VERSUS POLICY ANALYSIS

  • Policy analysis often relies on policy/program evaluations

  • The tools of program evaluation can be used to evaluate the effectiveness of policies;

    • However, this is not policy analysis


Purposes of Evaluation/ Evaluation Questions

  • Produce information in order to enhance management decision-making

  • Improve program operations

  • Maximize benefits to clients: to what extent and how well was the policy/program implemented?


Purposes of Evaluation/ Evaluation Questions

  • Assess systematically the impact of programs/policies on the problems they are designed to ameliorate

    • How well did the program/policy work?

    • Was the program worth its costs?

    • What is the impact of the program/policy on the community?


Two Main Types Of Evaluation

  • Process or formative

  • Outcome or summative


Process or Formative Evaluation

  • Did the program/policy meet its process objectives?

  • Was the program/policy implemented as planned?

  • What were the type and volume of services provided?

  • Who was served among the population at risk?


Why Do We Do Process Evaluation?

  • Process evaluation describes the policy/program and the general environment in which it operates, including:

    • Which services are being delivered

    • Who delivers the services

    • Who are the persons served

    • The costs involved


Why Do We Do Process Evaluation?

  • Process evaluation as program monitoring

    • Charts progress towards achievement of objectives

      • Systematically compares data generated by the program with targets set by the program in its objectives
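
As a rough illustration (all objective names and counts below are hypothetical, not from the presentation), program monitoring boils down to comparing the counts a program generates against the targets stated in its objectives:

```python
# Hypothetical monitoring sketch: compare counts generated by the program
# with the targets stated in its objectives. All names and figures are invented.
targets = {"prenatal_visits": 1200, "home_visits": 400, "referrals": 150}
actuals = {"prenatal_visits": 1050, "home_visits": 410, "referrals": 90}

for objective, target in targets.items():
    achieved = actuals.get(objective, 0)
    print(f"{objective}: {achieved}/{target} ({100 * achieved / target:.0f}% of target)")
```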


Why Do We Do Process Evaluation?

  • Process evaluation

    • Provides feedback to the administrator regarding the program

    • Allows others to replicate the program if it looks attractive

    • Provides information to the outcome evaluation about program implementation and helps explain findings


Outcome or Summative Evaluation

  • Did the program/policy meet its outcome objectives/goals?

  • Did the program/policy make a difference?


Outcome or Summative Evaluation

  • What change occurred in the population participating in or affected by the program/policy?

  • What are the intended and unintended consequences of this program/policy?

    • Requires a comparison group to judge success


Outcome or Summative Evaluation

  • What impact did the program/policy have on the target community?

    • Requires information about coverage


Why Do We Do Outcome Evaluation?

  • We want to know if what we are doing works better than nothing at all

  • We want to know if what we are doing that is new works better than what we usually do


Why Do We Do Outcome Evaluation?

  • Which of two or more programs/policies works better?

  • We want to know if we are doing what we are doing efficiently


What Kind of Outcomes Should We Focus on?

  • Outcomes which can clearly be attributed to the program/policy

  • Outcomes which are sensitive to change and intervention

  • Outcomes which are realistic; can the outcomes be achieved in the time frame of the evaluation?


Efficiency Analysis

  • Once outcomes have been selected and measured, an extension of outcome evaluation is efficiency analysis:

    • cost-efficiency

    • cost-effectiveness

    • cost-benefit
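
As a rough illustration of how the three ratios differ (all costs and outcome counts below are hypothetical), each uses the same program cost but a different numerator or denominator:

```python
# Hypothetical efficiency-analysis sketch; dollar amounts and outcome
# counts are invented for illustration only.
program_cost = 250_000.0        # total cost of delivering the program
clients_served = 1_000          # output (cost-efficiency denominator)
smokers_who_quit = 120          # outcome (cost-effectiveness denominator)
benefit_dollars = 600_000.0     # monetized benefits (cost-benefit numerator)

cost_per_client = program_cost / clients_served       # cost-efficiency
cost_per_quit = program_cost / smokers_who_quit       # cost-effectiveness
benefit_cost_ratio = benefit_dollars / program_cost   # cost-benefit

print(f"Cost per client served:   ${cost_per_client:,.2f}")
print(f"Cost per smoker who quit: ${cost_per_quit:,.2f}")
print(f"Benefit-cost ratio:       {benefit_cost_ratio:.2f}")
```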


Evaluation Success

Whether an evaluation will demonstrate a positive impact of a policy or program depends on other phases of the planning process as well as on an adequate evaluation design and data collection


Evaluation Success

Whether an evaluation will show a positive effect of a policy/program depends on:

  • Adequate Program Theory

  • Adequate Program Implementation

  • Adequate Program Evaluation



Program Theory

  • What is a program’s theory?

    • Plausible model of how program/policy works

    • Demonstrates cause and effect relationships

    • Shows links between a program’s/policy’s inputs, processes and outcomes


Means-Ends Hierarchy (M. Q. Patton)

  • Program theory links the program means to the program ends

  • Theory of Action


Means-Ends Hierarchy (M. Q. Patton)

Constructing a causal chain of events forces us to make explicit the assumptions of the program/policy

  • What series of activities must take place before we can expect that any impact will result?


Theory Failure

  • Evaluations may fail to find a positive impact if program/policy theory is incorrect

    • The program/policy is not causally linked with the hypothesized outcomes (sometimes because the true cause of the problem was not identified)


Theory Failure

  • Evaluation may fail to find a positive impact if program/policy theory is not sufficiently detailed to allow for the development of a program plan adequate to activate the causal chain from intervention to outcomes


Theory Failure

  • Evaluation may fail if program/policy was not targeted for an appropriate population (theory about who will benefit is incorrect)

    These three issues are usually under the control of those designing the program/policy


Other Reasons Why Evaluations May Demonstrate No Policy/Program Effect

Program/policy failure


Program/Policy Failure

  • Program/policy goals and objectives were not fully specified during the planning process


Other Program Reasons

  • Program/policy was not fully delivered

  • Program/policy delivery did not adhere to the specified protocol


Other Program Reasons

  • Delivery of treatment deteriorated during program implementation

  • Program/policy resources were inadequate (may explain above)


Other Program Reasons

  • Program/policy delivered under prior experimental conditions was not representative of the treatment that can be delivered in practice

    • e.g., Translation from university to "real" setting or from pilot to full state program


Non-Program Reasons Why Evaluations May Demonstrate No Program Effect

  • Evaluation Design And Plan

    • Are design and methods used appropriate for the questions being asked?

    • Is design free of bias?

    • Is measurement reliable and valid?


Conducting an Outcome Evaluation

How do we choose the appropriate evaluation design to assess system, service, program or policy effectiveness?


Tools for Assessing Effectiveness

  • Multiple paradigms exist for examining system, service, program, and policy effectiveness

  • Each has its own rhetoric and analytic tools, which ultimately provide the same answers


Tools for Assessing Effectiveness

  • Epidemiology

    • e.g., Are cases less likely to have had exposure to the program than controls?

  • Health Services Research

    • e.g., Does differential utilization of services by enrollees and non-enrollees lead to differential outcomes?


Tools for Assessing Effectiveness

  • Evaluation Research

    • e.g., Are outcomes for individuals in the program (intervention) different than those in the comparison or control group?


Mix and Match

The evaluator uses a mix and match of Methods/Paradigms


Depending on:

1. Whether program/service/policy and/or system change covers:

    • entire target population in state/city/county

    • entire target population in several counties/community areas


Depending on:

2. Whether program/service/policy and/or system change includes an evaluation component at initiation

3. Whether adequate resources are available for evaluation

4. Whether it is ethical/possible to manipulate exposure to the intervention


Outcome Evaluation Strategies

  • Questions to consider:

    Is the service/program/policy and/or intervention population-based or individually based?


Outcome Evaluation Strategies

  • Population Based --e.g., Title V, Title X, Medicaid

    • from the point of view of evaluation, these programs/policies can be considered “universal” since all individuals of a certain eligibility status are entitled to receive services


Outcome Evaluation Strategies

  • Population Based: issues

    • With coverage aimed at entire population, who is the comparison or the unexposed group?

    • What are the differences between eligibles served and not served which may affect outcomes? Between eligibles and ineligibles?


Outcome Evaluation Strategies

  • Population Based: issues

    • How do we determine the extent of program exposure or coverage? (need population-based denominators and quality program data)
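
A minimal sketch of a coverage estimate, assuming hypothetical program enrollment records for the numerator and a population-based source (e.g., census or vital records estimates) for the denominator:

```python
# Hypothetical coverage calculation: program data supply the numerator,
# a population-based source supplies the denominator. Numbers are invented.
eligible_population = 25_000      # e.g., from census/vital records estimates
participants_served = 16_500      # e.g., from program enrollment records

coverage = participants_served / eligible_population
print(f"Estimated program coverage: {coverage:.1%}")
```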


Outcome Evaluation Strategies

  • Population Based: issues

    • Which measures to use?

      • Measures are typically derived from population data sets, e.g., Medicaid claims data, surveillance data, vital records, census data


Outcome Evaluation Strategies

  • Individually Based

    • e.g., AIDS/sex education program in two schools; smoking cessation program in two county clinics

  • Traditional evaluation strategies can be more readily used; designs are more straightforward


Outcome Evaluation Strategies

  • Questions to Consider:

    • Is the evaluation Prospective or Retrospective?

      • Retrospective design limits options for measurement and for selection of comparison groups

      • Prospective design requires evaluation resources committed up front


Outcome Evaluation Strategies

  • Questions to consider

    • Which design to choose?

      • Experimental, quasi-experimental, case-control, retrospective cohort?

    • What biases are inherent in one design versus another?

    • What are the trade-offs and costs?


Outcome Evaluation Strategies

  • Questions to consider

    • Measurement

      • What will be measured?

      • Who will be measured?

      • How?


Evaluation Designs

  • Experimental Designs

    • Clinical Trials in Epidemiology

  • Quasi-Experimental Designs

  • Observational Epidemiologic Designs


Study Designs


Evaluation Designs

  • Experimental

    • Use of Random Assignment to obtain treatment/intervention and comparison groups

      • Allows the evaluator to select samples that are comparable within limits of sampling error


Evaluation Designs

  • Experimental

    • Use of Random Assignment

      • Potential confounders are theoretically equivalent between groups; however, with small numbers and wide variability on certain variables, sometimes non-equivalence occurs


Evaluation Designs

  • Experimental

    • Use of Random Assignment

      • Does not guarantee that initial comparability between groups will be maintained; need to test if attrition from groups is differential
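
One simple way to check for differential attrition, sketched with hypothetical counts (a formal analysis might add a chi-square test), is to compare loss to follow-up across the randomized groups:

```python
# Hypothetical attrition check: compare loss to follow-up between the
# randomized intervention and control groups. Counts are invented.
enrolled = {"intervention": 300, "control": 300}
completed = {"intervention": 255, "control": 282}

for group, n_enrolled in enrolled.items():
    lost = n_enrolled - completed[group]
    print(f"{group}: attrition {lost}/{n_enrolled} = {lost / n_enrolled:.1%}")
# Markedly different attrition rates signal that the initial comparability
# created by random assignment may no longer hold.
```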


Evaluation Designs

  • Experimental

    • Use of Random Assignment

      • Pretest-Posttest Control Group Design (Workhorse)

      • Posttest-Only Control Group Design

      • Solomon Four Group Design
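
Purely as an illustrative sketch with simulated data (not a real evaluation), the pretest-posttest control group design randomly assigns participants and then compares pre-to-post change between arms:

```python
# Simulated pretest-posttest control group design (the "workhorse").
# All data are randomly generated for illustration only.
import random

random.seed(0)
participants = list(range(200))
random.shuffle(participants)                      # random assignment
intervention, control = participants[:100], participants[100:]

def mean_change(group, assumed_effect):
    # Pretest score ~ N(50, 10); posttest adds an assumed program effect
    # (zero for the control arm) plus random noise.
    deltas = []
    for _ in group:
        pre = random.gauss(50, 10)
        post = pre + assumed_effect + random.gauss(0, 5)
        deltas.append(post - pre)
    return sum(deltas) / len(deltas)

print("Mean pre-to-post change, intervention:", round(mean_change(intervention, 4.0), 2))
print("Mean pre-to-post change, control:     ", round(mean_change(control, 0.0), 2))
```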


Evaluation Designs

  • Experimental

    • Random Assignment Is Not Easily Achieved In The Real World


Evaluation Designs

  • How To Overcome Objections To Random Assignment:

    • Two new programs strategy

    • Delayed program strategy

    • Borderline strategy

    • When a lottery is already expected for selection

    • When demand outstrips supply; use of waiting lists


Evaluation Designs

  • Alternatives To Random Assignment

    • Non-Experimental Designs


Evaluation Designs

  • One-Group Posttest-Only Design (one-shot case study)

  • One-Group Pretest-Posttest Design

  • Posttest-Only Design with Nonequivalent Groups (static-group comparison)


Evaluation Designs

  • Alternatives To Random Assignment

    • Quasi-Experimental

    • Using non-equivalent comparison groups (but exposure is “manipulated”)

      • Note: often constructed after the fact; manipulation of exposure is retrospective


Evaluation Designs

  • Quasi-Experimental

    • Nonequivalent control group design with pretest and posttest (plus modifications of this design)

    • Time Series

    • Multiple Time Series

    • Institutional Cycle Design


Evaluation Designs

Observational Epidemiologic Designs:

Historically, these designs were reserved exclusively for examining risk factors (observed exposures) and outcomes


Evaluation Designs

  • Epidemiologic Designs

    • Case-Control

      • Sampling from outcome (disease)

    • Cohort

      • Sampling from exposure
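
A sketch with a hypothetical 2x2 table shows how the two sampling schemes lead to different effect measures: sampling on exposure (cohort) supports a risk ratio, while sampling on outcome (case-control) supports an odds ratio:

```python
# Hypothetical 2x2 table: rows = program exposure, columns = outcome.
# All counts are invented for illustration.
exposed_bad, exposed_good = 30, 270
unexposed_bad, unexposed_good = 60, 240

# Cohort logic: sample by exposure, compare risks directly.
risk_exposed = exposed_bad / (exposed_bad + exposed_good)
risk_unexposed = unexposed_bad / (unexposed_bad + unexposed_good)
risk_ratio = risk_exposed / risk_unexposed

# Case-control logic: sample by outcome, so only the odds ratio is estimable.
odds_ratio = (exposed_bad * unexposed_good) / (unexposed_bad * exposed_good)

print(f"Risk ratio (cohort):       {risk_ratio:.2f}")
print(f"Odds ratio (case-control): {odds_ratio:.2f}")
```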


Evaluation Designs

  • Epidemiologic Designs

    • Cross-Sectional

      • Sampling from the entire population (same point in time)

    • Ecologic

      • Exposure and outcome cannot be linked at the individual level


Evaluation Designs

  • When using epidemiologic designs for program evaluation, need to reconsider what distinguishes these designs from experimental and quasi-experimental designs


Evaluation Designs

Manipulation of Exposure

  • When using observational epidemiologic designs for program evaluation, we de facto accept that there has been manipulation of exposure (the program/policy)


Evaluation Designs

Epidemiologic studies involve a measure of exposure at one point in time and a measure of outcome at one point in time


Evaluation Designs

Experimental and quasi-experimental studies can potentially include three measures:

  • a measure of exposure to the intervention at one point in time

  • measures of the outcome at two points in time (pretest and posttest)


Evaluation Designs

  • Experimental/quasi-experimental designs ask:

    • Does the intervention change the outcome of the individuals being studied?


Evaluation Designs

  • Epidemiologic designs ask:

    • Does the intervention have an impact on whether or not an outcome occurs?


Evaluation Designs

Example #1

  • WIC food purchasing workshop (outcome is food purchase choices) versus WIC breastfeeding workshop for first-time pregnant women (outcome is breastfeeding in this group of women)


Evaluation Designs

Example #2

  • Impact of a teenage pregnancy prevention program on knowledge, attitudes and behavior versus on pregnancy rates


Evaluation Designs

When studying a health status outcome, we can use community data or data from another group of individuals as a type of baseline


Evaluation Designs

  • e.g., we can compare the LBW (low birth weight) rates in the population pre and post introduction of case management, or compare the LBW rates in two communities
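
As a hypothetical sketch of that kind of comparison (all birth counts invented), LBW rates before and after case management in the intervention community can be set against the same periods in a comparison community:

```python
# Hypothetical low-birth-weight (LBW) comparison: intervention community
# pre/post case management versus a comparison community over the same period.
counts = {
    "intervention": {"pre": (95, 1000), "post": (78, 1040)},   # (LBW births, all births)
    "comparison":   {"pre": (90, 1000), "post": (88, 990)},
}

for community, periods in counts.items():
    for period, (lbw, births) in periods.items():
        rate = 1000 * lbw / births
        print(f"{community} {period}: LBW rate = {rate:.1f} per 1,000 births")
```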


Evaluation Designs

Summary

Epidemiologic designs are used when baseline/pretest measures are not theoretically possible


Evaluation Designs

However, methods to adjust for possible differences between groups on factors other than the outcome make epidemiologic observational designs very robust


Evaluation Designs

Summary

  • Most designs in the real world are hybrids

  • The major goal is to construct a design which is free of bias


Comparison Group Selection

  • Individual Based Program

    • Individuals who drop out of treatment or program

    • Individuals getting standard program (usual care)


Comparison Group Selection

  • Individual Based Program

    • Individuals attending classes in another school, attending a clinic or other program in another health department or community agency

    • Individuals on waiting lists for the program


Comparison Group Selection

Population Based Program/Policy

  • More difficult to find a comparison group if the entire eligible population is covered

  • If possible, find those who are eligible but did not access the program, although selection bias is likely to be operative


Comparison Group Selection

Population Based Program/Policy

  • Can examine outcomes for the population pre and post program/policy implementation; at both time points can compare to another population

    • e.g., the uninsured, those on Medicaid


Comparison Group Selection

Population Based Program/Policy

  • Compare two risk ratios

  • Can use a priori hypotheses about how these should change after the intervention
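
One way to operationalize this, sketched with hypothetical event counts, is to compute the risk ratio of the eligible population against a comparison population before and after implementation, and check whether the post-period ratio moves as hypothesized a priori:

```python
# Hypothetical comparison of two risk ratios: eligible (program) population
# versus a comparison population (e.g., the uninsured), pre and post policy.
def risk(events, population):
    return events / population

rr_pre = risk(120, 10_000) / risk(100, 10_000)    # before implementation
rr_post = risk(95, 10_000) / risk(102, 10_000)    # after implementation

print(f"Risk ratio pre:  {rr_pre:.2f}")
print(f"Risk ratio post: {rr_post:.2f}")
# A priori hypothesis: the post-implementation risk ratio should move toward
# (or below) 1.0 if the program/policy improves outcomes for eligibles.
```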


Measurement Failure

  • Flawed evaluation design

  • Poor validity and reliability of measurement

  • Biased/unreliable data collection procedures (including sampling)


Summary - Evaluation Designs

Evaluations most often benefit from a multi-method and multi-source data approach


Who Uses Program Evaluation?

  • Program Managers/Administrators

  • Program Planners

  • Program Funders (Private, Public)

  • Legislators and Other Policy-makers


To What End?

  • To develop new programs/policies

  • To refine programs/policies

  • To terminate programs/policies

  • To conduct policy analysis


Summary

However, the ability to translate evaluation findings into good programs and policies depends not only on quality data but also on political will