Program evaluation
Presentation Transcript

Evaluation Defined

  • Green and Kreuter (1999) broad definition:

    “comparison of an object of interest against a standard of acceptability”

  • Weiss (1998) more targeted:

    “systematic assessment of the operation and/or the outcomes of a program or a policy, compared to a set of explicit or implicit standards, as a means of contributing to the improvement of the program or policy”.


Program evaluation

  • Fournier (2005)

    “Evaluation is an applied inquiry process for collecting and synthesizing evidence that culminates in conclusions about the state of affairs, value, merit, worth, significance, or quality of a program, product, person, policy, proposal, or plan.”


Program evaluation

  • A tool for using science as a basis for:

    • Ensuring programs are rational and evidence-based

      • Needs assessed

      • Theory-driven

      • Research-based

    • Ensuring programs are outcome-oriented

      • Forces goals and objectives to be set at the outset

    • Ascertaining whether goals and objectives are being achieved

      • Performance measures established at outset


Program evaluation

  • A tool for using science as a basis for:

    • Informing program management about

      • Program processes – adjusted, improved

      • Program quality – effectiveness (see goals and objectives)

      • Program relevance

    • Decision-making and action

      • e.g. policy development based on program evaluations

    • Transparency and accountability

      • Funders, participants, and other stakeholders.


Program evaluation

  • Not done consistently in programs

  • Often not well-integrated into the day-to-day management of most programs


Program evaluation

From the Logic Model presentation:

The accountability era

  • What gets measured gets done

  • If you don’t measure results, you can’t tell success from failure

  • If you can’t see success, you can’t reward it

  • If you can’t reward success, you’re probably rewarding failure

  • If you can’t see success, you can’t learn from it

  • If you can’t recognize failure, you can’t correct it.

  • If you can demonstrate results, you can win public support.

    Reinventing Government, Osborne and Gaebler, 1992

    Source: University of Wisconsin-Extension, Cooperative Extension


Within an organization – evaluation...

  • Should be designed at the time of program planning

  • Should be a part of the ongoing service design and policy decisions

    • Evidence that actions conform to strategic directions, community needs, etc.

    • Evidence that money is spent wisely

  • Framework should include components that are consistent across programs

    • In addition to indicators and methods tailor-made for specific programs and contexts

  • Extent of evaluation

    • related to the original goals

    • related to complexity of the program


When not to evaluate (Patton, 1997)

  • There are no questions about the program

  • Program has no clear direction

  • Stakeholders can’t agree on program objectives

  • Insufficient funds to evaluate properly


Merit and Worth

  • Evaluation looks at the merit and worth of an evaluand (the project, program, or other entity being evaluated)

  • Merit is the absolute or relative quality of something, either intrinsically or in regard to a particular criterion

  • Worth is an outcome of an evaluation and refers to the evaluand’s value in a particular context. This is more extrinsic.

  • Worth and merit are not dependent on each other.


Merit and Worth

  • A medication review program has merit if it is proven to reduce a known risk factor for falls

    • It also has value/worth if it saves the health system money

  • An older driver safety program has merit if it is shown to increase confidence among drivers over 80 years of age

    • Its value is minimal if it results in more unsafe drivers on the road and increases risk and cost to the community at large.


Evaluation vs research

  • In evaluation, politics and science are inherently intertwined.

    • Evaluations are conducted on the merit and worth of programs in the public domain

      • which are themselves responses to prioritized needs that resulted in political decisions

    • Program evaluation is intertwined with political power and decision making about societal priorities and directions (Greene, 2000, p. 982).


Formative evaluation

Purpose: ensure a successful program. Includes:

  • Developmental Evaluation (pre-program)

    • Needs Assessment – match needs with appropriate focus and resources

    • Program Theory Evaluation / Evaluability Assessment – clarity on theory of action, measurability, against what criteria

      • Logic Model – ensures aims, processes and evaluations are linked logically

    • Community/organization readiness

    • Identification of intended users and their needs

    • etc


Surveillance, Planning and Evaluating for Policy and Action: PRECEDE-PROCEED Model*

[Diagram: the PRECEDE-PROCEED planning and evaluation model, linking the assessment phases to the program’s causal chain (predisposing, reinforcing and enabling factors; health education and policy/regulation/organization; behavior and environment; health; quality of life) and to inputs, processes, outputs, short-term impacts, longer-term health outcomes, and short- and long-term social impacts.]

  • Phase 1 – Social assessment

  • Phase 2 – Epidemiological assessment

  • Phase 3 – Behavioral & environmental assessment

  • Phase 4 – Educational & ecological assessment

  • Phase 5 – Administrative & policy assessment

  • Phase 6 – Implementation

  • Phase 7 – Process evaluation

  • Phase 8 – Impact evaluation

  • Phase 9 – Outcome evaluation

    *Green & Kreuter, Health Promotion Planning, 4th ed., 2005.


Formative evaluation

Purpose: ensure a successful program

  • Process Evaluation – all activities that evaluate the program once it is running

    • Program Monitoring

      • Implemented as designed, or analysing/understanding why not

      • Efficient operations

      • Meeting performance targets (Outputs in the logic model) – see the monitoring sketch below

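To make the monitoring idea concrete, here is a minimal sketch in Python, assuming a handful of made-up indicators and targets (workshops_delivered, participants_reached and referrals_made are hypothetical names, not from the slides):

# Illustrative only: hypothetical quarterly outputs compared against targets.
targets = {"workshops_delivered": 12, "participants_reached": 300, "referrals_made": 40}
actuals = {"workshops_delivered": 10, "participants_reached": 340, "referrals_made": 25}

for indicator, target in targets.items():
    attainment = actuals[indicator] / target
    status = "on track" if attainment >= 0.9 else "review"
    print(f"{indicator:22s} {actuals[indicator]:4d} / {target:4d}  ({attainment:.0%}, {status})")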

Summative evaluation

Purpose: determine program success in many different dimensions

Also called:

  • Effectiveness evaluation

  • Outcome/Impact evaluation

  • Examples

    • Policy evaluation

    • Replicability/exportability/transferability evaluation

    • Sustainability evaluation

    • Cost-effectiveness evaluation (see the sketch below)

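As a concrete illustration of cost-effectiveness evaluation, here is a minimal sketch using entirely hypothetical costs and fall counts for a medication review program like the one in the Merit and Worth example:

# Illustrative only: all figures are hypothetical.
program_cost, usual_care_cost = 120_000.0, 40_000.0   # annual cost per 1,000 clients
falls_program, falls_usual_care = 85, 140             # falls per 1,000 clients per year

incremental_cost = program_cost - usual_care_cost
falls_prevented = falls_usual_care - falls_program
icer = incremental_cost / falls_prevented             # incremental cost-effectiveness ratio
print(f"Incremental cost per fall prevented: ${icer:,.0f}")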

Evaluation Science

  • Social research methods

  • Match research methods to the particular evaluation questions

    • and specific situation

  • Quantitative data collection involves:

    • identifying which variables to measure

    • choosing or devising appropriate research instruments

      • reliable and valid (see the reliability sketch below)

    • administering the instruments in accordance with general methodological guidelines.

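One way to check the “reliable” part is internal consistency. The sketch below computes Cronbach’s alpha on simulated questionnaire items; the five-item scale and the data are made up for illustration:

# Illustrative only: Cronbach's alpha on simulated questionnaire items.
import numpy as np

def cronbach_alpha(items):
    """items: respondents x items array of scores."""
    items = np.asarray(items, dtype=float)
    k = items.shape[1]
    item_variances = items.var(axis=0, ddof=1).sum()
    total_variance = items.sum(axis=1).var(ddof=1)
    return (k / (k - 1)) * (1 - item_variances / total_variance)

rng = np.random.default_rng(1)
latent = rng.normal(0, 1, 200)                            # respondents' true attitude
items = latent[:, None] + rng.normal(0, 0.8, (200, 5))    # five correlated items
print(f"Cronbach's alpha: {cronbach_alpha(items):.2f}")   # high alpha expected here

An alpha of roughly 0.7 or higher is usually read as acceptable internal consistency; validity still has to be argued separately.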

Experimental Design in Evaluation

  • Randomized controlled trial (RCT)

    • Robust science, internal validity

    • Pre/post-test with equivalent group (see the analysis sketch after this slide)

      R O1 X O2

      R O1    O2

    • Post-test only with equivalent group

      R X O2

      R    O2

      Problems with natural settings:

      • Randomization

      • Ethics

      • Implementation not controlled (staff, situation)

      • Participant demands

      • Perceived inequity between groups

      • etc

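A minimal sketch of how the pre/post randomized design (R O1 X O2 / R O1 O2) might be analysed, using simulated data; the group size of 50, the score scale, and the assumed 5-point program effect are all invented for illustration:

# Illustrative only: simulated data for an R O1 X O2 / R O1 O2 design.
import numpy as np
from scipy import stats

rng = np.random.default_rng(42)
pre_program = rng.normal(50, 10, 50)                  # O1, program group
post_program = pre_program + rng.normal(5, 5, 50)     # O2, assumed ~5-point effect
pre_control = rng.normal(50, 10, 50)                  # O1, control group
post_control = pre_control + rng.normal(0, 5, 50)     # O2, no program effect

# With random assignment, comparing gain scores (O2 - O1) between the
# two groups gives an unbiased estimate of the program effect.
gain_program = post_program - pre_program
gain_control = post_control - pre_control
t_stat, p_value = stats.ttest_ind(gain_program, gain_control)
print(f"mean gain, program group: {gain_program.mean():.1f}")
print(f"mean gain, control group: {gain_control.mean():.1f}")
print(f"t = {t_stat:.2f}, p = {p_value:.4f}")

In practice an ANCOVA on the post-test with the pre-test as a covariate is often preferred, but the logic is the same: randomization makes the groups comparable at baseline, so the between-group difference can be attributed to the program.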

Experimental Design in Evaluation

  • Quasi-experimental design

    • Randomization not possible:

      • Ethics

      • Program underway

      • No reasonable control group

    • One-group post-test: X O2

      • Weakest design, so use for exploratory or descriptive purposes

      • Case study. Not for attribution.

    • One-group pretest-posttest: O1 X O2

      • Can measure change

      • Can’t attribute to program

    • Pre-post non-equivalent (non-random) groups – good, but must

      • Construct similar comparators by (propensity) matching individuals or groups (see the matching sketch after this slide)

        N O1 X O2

        N O1    O2

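A minimal sketch of propensity-score matching for constructing that comparison group. The covariates (age, baseline_score), the self-selection pattern, and the use of scikit-learn are all assumptions for illustration, not part of the slides:

# Illustrative only: 1-to-1 propensity-score matching on simulated data.
import numpy as np
import pandas as pd
from sklearn.linear_model import LogisticRegression
from sklearn.neighbors import NearestNeighbors

rng = np.random.default_rng(0)
n = 400
df = pd.DataFrame({
    "age": rng.normal(70, 8, n),
    "baseline_score": rng.normal(50, 10, n),
})
# Hypothetical self-selection: older people are more likely to enrol.
df["enrolled"] = rng.random(n) < 1 / (1 + np.exp(-(df["age"] - 70) / 8))

# Step 1: estimate each person's propensity to enrol from observed covariates.
X = df[["age", "baseline_score"]]
propensity_model = LogisticRegression(max_iter=1000).fit(X, df["enrolled"])
df["pscore"] = propensity_model.predict_proba(X)[:, 1]

# Step 2: match each enrolled participant to the nearest non-enrolled
# person on the propensity score (nearest neighbour, with replacement).
enrolled = df[df["enrolled"]]
comparison_pool = df[~df["enrolled"]]
nn = NearestNeighbors(n_neighbors=1).fit(comparison_pool[["pscore"]])
_, idx = nn.kneighbors(enrolled[["pscore"]])
matched_comparison = comparison_pool.iloc[idx.ravel()]

# The matched groups should now look similar on the measured covariates,
# so a pre-post comparison between them is more defensible.
print("mean age, enrolled:          ", round(enrolled["age"].mean(), 1))
print("mean age, matched comparison:", round(matched_comparison["age"].mean(), 1))

Matching can only balance the covariates that were measured; unlike randomization it cannot rule out unmeasured confounding, which is why attribution claims from this design stay weaker than from an RCT.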

Evaluation Methods (Clarke and Dawson, 1999)

  • Strict adherence to a method deemed to be ‘strong’ may result in the wrong problems becoming the focus of the evaluation

    • purely because the right problems are not amenable to analysis by the preferred method

  • Rarely only one method used

    • Require range to ensure depth and detail from which conclusions can be drawn


Experimental Design in Evaluation

  • Criticism of experimental design in evaluation

    • Program is a black box

      • ED measures causality (Positivist)

      • Does not capture the nature of causality (Realist)

    • Internal dynamics of program not observed

      • How does the program work?

        • Theory helps explain

      • What are the characteristics of those in the program?

        • Participants need to choose to make a program work

        • Right conditions are needed to make this possible

          Clarke and Dawson, 1999

    • What are unintended outcomes/effects of the program?


Naturalistic Inquiry - Qualitative design

  • Quantitative (ED) offers little insight into the social processes which actually account for the changes observed

  • Can use naturalistic methods to supplement quantitative techniques (mixed methods)

  • Can use fully naturalistic paradigm

    • Less common


Naturalistic Inquiry

  • Interpretive:

    • People mistake their own experiences for those of others. So….

    • Emphasis on understanding lived experiences of (intended) program recipients

  • Constructivist:

    • Knowledge is constructed (not discovered by detached scientific observation). So…

    • Program can only be understood within natural context

      • How being experienced by participants, staff, policy makers

      • Can’t construct evaluation design ahead of time – “don’t know what you don’t know”

      • Theory is constructed from (grounded in) data


Evaluation Data

  • Quantitative

  • Qualitative

  • Mixed

  • Primary

  • Secondary

  • One-off surveys, data pulls

  • Routine monitoring

  • Structured

  • Unstructured (open-ended)


Data Collection for Evaluation

  • Questionnaires

    • right targets

    • carefully constructed: capture the needed info, wording, length, appearance, etc

    • analysable (see the tabulation sketch after this slide)

  • Interviews (structured, semi-, un-)

    • Individuals

    • Focus groups

      • Useful at planning, formative and summative stages of program

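To illustrate the “analysable” point, here is a minimal sketch that tabulates hypothetical 5-point Likert responses to a single questionnaire item; the answers and labels are made up:

# Illustrative only: summarising made-up Likert responses to one item.
from collections import Counter

responses = [4, 5, 3, 4, 2, 5, 4, 3, 5, 4, 1, 4, 5, 3, 4]
labels = {1: "Strongly disagree", 2: "Disagree", 3: "Neutral",
          4: "Agree", 5: "Strongly agree"}

counts = Counter(responses)
total = len(responses)
for score in sorted(labels):
    n = counts.get(score, 0)
    print(f"{labels[score]:18s} {n:3d}  ({n / total:.0%})")
print(f"Mean score: {sum(responses) / total:.2f}")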

Data Collection for Evaluation

  • Observation

    • Systematic

      • Explicit procedures, therefore replicable

      • Collect primary qualitative data

    • Can provide new insights

      • drawing attention to actions and behaviour normally taken for granted by those involved in program activities and therefore not commented upon in interviews

    • Useful in circumstances where it may not be possible to conduct interviews


Data Collection for Evaluation

  • Documentary

    • Solicited e.g. journals/diaries

    • Unsolicited e.g. meeting minutes, emails, reports

    • Public e.g. organization’s reports, articles in newspaper/letters

    • Private e.g. emails, journals


Evaluation in Logic Models

  • Look at the Logic Model Template (next slide)

  • What types of evaluation do you see?

  • What methods are implied?

  • What data could be used?



Evaluation in Business Case

  • Look at handout: Ontario Ministry of Agriculture, Food and Rural Affairs (OMAFRA)

  • Where do you see evaluation?

  • What methods are implied?

  • What data could be used?