  1. PPA 502 – Program Evaluation Lecture 3c – Strategies for Impact Assessment

  2. Introduction • The ultimate purpose of a social program is to ameliorate some social problem or improve some social condition. If the program theory is sound and the program plan well implemented, those social benefits are expected to follow. Rarely are those benefits assured, however. Practical and conceptual shortcomings combined with the intractable nature of many social problems all too easily undermine the effectiveness of social programs.

  3. Introduction • A general principle applies: The more rigorous the research design, the more plausible the resulting estimate of intervention effects. • The design of impact evaluations faces two competing pressures: • Evaluations should be undertaken with sufficient rigor that relatively firm conclusions can be reached. • Practical considerations of time, money, cooperation, and protection of participants limit the design options and methodological procedures that can be employed.

  4. Introduction • Evaluators assess the effects of social programs by: • Comparing information about outcomes for participants and nonparticipants, • Making repeated measurements on participants before and after intervention, • Or other methods that attempt to achieve the equivalent of such comparisons. • The basic aim of impact assessment is to produce an estimate of the net effects of an intervention.

  5. Introduction • Impact assessment is relevant at many stages of a program’s life cycle. • Pilot demonstrations to estimate whether a proposed program will work. • Program design to test the most effective ways to develop and integrate the various program elements. • Program initiation to test the efficacy of the program at a limited number of sites. • Program modification to test the effects of the changes. • Program continuation to inform sunset reviews, funding renewal, or defense of the program.

  6. Key Concepts in Impact Assessment • The experimental model. • The optimal way to assess impact is a randomized field experiment. • Random assignment • Treatment and control groups • Net outcome assessment. • Prerequisites for assessing impact. • Program’s objectives must be well-articulated to make it possible to specify credible measures of the expected outcomes. • The interventions must be sufficiently well-implemented that there is no question that critical elements have been delivered to appropriate targets.
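
To make the randomized-experiment logic concrete, here is a minimal sketch (not from the slides) using simulated data; the group sizes, score scale, and effect size are all illustrative assumptions.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(42)

# Simulated outcome scores for a hypothetical program (all values illustrative).
n = 200
treated = rng.normal(loc=55, scale=10, size=n)   # randomly assigned treatment group
control = rng.normal(loc=50, scale=10, size=n)   # randomly assigned control group

# With random assignment, the difference in group means estimates the net effect
# of the intervention, free of preexisting differences between the groups.
net_effect = treated.mean() - control.mean()
t_stat, p_value = stats.ttest_ind(treated, control)
print(f"Estimated net effect: {net_effect:.2f} (p = {p_value:.4f})")
```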

  7. Key Concepts in Impact Assessment • Linking interventions to outcomes. • Establishing impact essentially amounts to establishing causality. • Most causal relationships in social science are expressed as probabilities. • Conditions that limit causality: • External conditions and causes. • Biased selection. • Other social programs with similar targets.

  8. Key Concepts in Impact Assessment • “Perfect” versus “good enough” impact assessments. • The intervention and target may not allow a perfect design. • Time and resource constraints. • The importance of the program often determines the rigor required. • Review the design options to determine the most appropriate one.

  9. Key Concepts in Impact Assessment • Gross versus net outcomes.
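
The gross/net distinction is commonly summarized with a decomposition along the following lines (a paraphrase of the standard textbook formulation, not a quote from the slides): gross outcome = effects of the intervention (the net outcome) + effects of extraneous confounding factors + design effects. The slides that follow take up the last two terms in turn.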

  10. Extraneous Confounding Factors • Uncontrolled selection. • Preexisting differences between treatment and control groups. • Self-selection. • Program location and access. • Deselection processes (attrition bias). • Endogenous change. • Secular drift. • Interfering events. • Maturational trends.
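
A small simulation (my own illustration, with made-up parameters) of why uncontrolled self-selection is a problem: when the same trait drives both enrollment and outcomes, a naive participant-versus-nonparticipant comparison badly overstates the program effect.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 10_000

# Hypothetical: "motivation" drives both self-selection into the program and outcomes.
motivation = rng.normal(size=n)
enrolled = rng.random(n) < 1.0 / (1.0 + np.exp(-2.0 * motivation))  # self-selection
true_effect = 2.0
outcome = 5.0 * motivation + true_effect * enrolled + rng.normal(size=n)

# The naive gross comparison confounds selection with the program effect.
naive_estimate = outcome[enrolled].mean() - outcome[~enrolled].mean()
print(f"True net effect: {true_effect:.1f}  Naive estimate: {naive_estimate:.1f}")
```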

  11. Design Effects • Stochastic effects. • Significance (Type I error). • Power (Type II error). • The key is finding the proper balance between the two. • Measurement reliability. • Does the measure produce the same results repeatedly? • Unreliability dilutes and obscures real differences. • Reproducibility should not fall below 75 or 80%.
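
One concrete way to work through the significance/power trade-off is a prospective power calculation; the sketch below (using statsmodels, with an assumed standardized effect size of 0.3) solves for the per-group sample size needed at conventional alpha and power levels.

```python
from statsmodels.stats.power import TTestIndPower

# Balance Type I error (alpha) against Type II error (1 - power) by choosing
# a sample size large enough to detect the smallest effect worth detecting.
analysis = TTestIndPower()
n_per_group = analysis.solve_power(effect_size=0.3, alpha=0.05, power=0.80)
print(f"Required sample size per group: {n_per_group:.0f}")
```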

  12. Design Effects • Measurement validity. • Does the instrument measure what it is intended to measure? • Criteria: • Consistency with usage. • Consistency with alternative measures. • Internal consistency. • Consequential predictability.
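
The internal-consistency criterion is often quantified with Cronbach's alpha (strictly a reliability statistic, but it operationalizes how well a scale's items hang together). A minimal sketch with made-up item scores:

```python
import numpy as np

def cronbach_alpha(items: np.ndarray) -> float:
    """Internal-consistency coefficient for a respondents-by-items score matrix."""
    k = items.shape[1]
    item_vars = items.var(axis=0, ddof=1)
    total_var = items.sum(axis=1).var(ddof=1)
    return (k / (k - 1)) * (1.0 - item_vars.sum() / total_var)

# Illustrative 5-respondent, 4-item attitude scale (fabricated for the example).
scores = np.array([[4, 5, 4, 5],
                   [2, 3, 2, 2],
                   [5, 5, 4, 5],
                   [1, 2, 1, 2],
                   [3, 3, 3, 4]])
print(f"Cronbach's alpha: {cronbach_alpha(scores):.2f}")
```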

  13. Design Effects • Choice of outcome measures. • A critical measurement problem in evaluations is selecting the best measures for assessing outcomes. • Conceptualization. • Reliability. • Feasibility. • Proxy measures. • The Hawthorne Effect. • Delivery affected by context.

  14. Design Effects • Missing information. • Missing information is generally not randomly distributed. • Often must be supplemented by alternative survey items, unobtrusive measures, or estimates.
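
A brief illustration (hypothetical data and column names) of one stopgap for missing outcome data: group-mean imputation with an explicit flag, so that imputed cases can be examined in a sensitivity analysis rather than silently treated as observed.

```python
import numpy as np
import pandas as pd

# Hypothetical follow-up survey with nonrandom missingness (e.g., dropouts score lower).
df = pd.DataFrame({
    "group": ["treat", "treat", "treat", "control", "control", "control"],
    "outcome": [62.0, np.nan, 58.0, 51.0, np.nan, 49.0],
})

# Flag imputed cases, then fill each gap with its group's mean as a crude estimate.
df["imputed"] = df["outcome"].isna()
df["outcome"] = df.groupby("group")["outcome"].transform(lambda s: s.fillna(s.mean()))
print(df)
```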

  15. Design Effects • Sample design effects. • Must select an unbiased sample of the universe of interest. • Select a relevant sensible universe. • Design a means of selecting an unbiased sample (random). • Implement sample design with fidelity. • Minimizing design effects. • Planning. • Pretesting. • Sampling.
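
A minimal sketch of the unbiased-sampling step: an equal-probability simple random sample drawn without replacement from a hypothetical sampling frame (the frame size and sample size are placeholders).

```python
import numpy as np

rng = np.random.default_rng(2024)

# Hypothetical sampling frame: 10,000 eligible household IDs.
frame = np.arange(10_000)

# Simple random sampling without replacement gives every unit the same
# probability of selection, which is what keeps the sample unbiased.
sample_ids = rng.choice(frame, size=500, replace=False)
print(sample_ids[:10])
```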

  16. Design Strategies for Isolating the Effects of Extraneous Factors • Randomized controls. • Regression-discontinuity controls. • Matched constructed controls. • Statistically equated controls. • Reflexive controls. • Repeated-measures reflexive controls. • Time-series reflexive controls. • Generic controls.
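
As one worked example from this list, the sketch below illustrates statistically equated controls: with a simulated nonrandomized comparison group, a regression adjustment for a measured confounder (here a hypothetical "need" variable) recovers an effect estimate close to the simulated truth. The usual caveat applies: adjustment can only equate groups on the confounders that were actually measured.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(7)
n = 1_000

# Simulated nonrandomized data: prior "need" drives both participation and outcomes.
need = rng.normal(size=n)
participant = (rng.random(n) < 1.0 / (1.0 + np.exp(-need))).astype(int)
outcome = 3.0 * participant - 4.0 * need + rng.normal(size=n)

df = pd.DataFrame({"outcome": outcome, "participant": participant, "need": need})

# Statistically equated controls: adjust for the measured confounder in a regression.
model = smf.ols("outcome ~ participant + need", data=df).fit()
print(model.params["participant"])  # should be close to the simulated effect of 3.0
```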

  17. Design Strategies for Isolating the Effects of Extraneous Factors • Full- versus partial-coverage programs. • Full coverage means the absence of a control group, so reflexive controls must be used.
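
A sketch of the reflexive-controls case under full coverage, using simulated before/after scores for the same participants; note that any observed change also absorbs the secular drift and maturation threats listed on slide 10.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(11)

# Simulated before/after scores for the same 120 participants (reflexive controls).
before = rng.normal(loc=50, scale=8, size=120)
after = before + rng.normal(loc=3, scale=5, size=120)

# Paired comparison of each participant with their own preintervention score.
t_stat, p_value = stats.ttest_rel(after, before)
print(f"Mean change: {np.mean(after - before):.2f} (p = {p_value:.4f})")
```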

  18. A Catalog of Impact Assessment Designs

  19. Judgmental Approaches to Impact Assessment • Connoisseurial impact assessments. • Administrator impact assessments. • Participants’ assessments. • The use of judgmental assessments: • Limited funds. • No preintervention measures. • Full-coverage, uniform programs.

  20. Inference Validity Issues in Impact Assessment • Reproducibility (can a researcher using the same design in the same setting achieve the same results?). • Power of design. • Fidelity of implementation. • Appropriateness of statistical models.

  21. Inference Validity Issues in Impact Assessment • Generalizability (the applicability of the findings to similar situations that were not studied). • Unbiased sample. • Faithful representation of the actual program. • Common settings. • Stress reproducibility first through several iterations, then focus on generalizability. • Pooling evaluations: meta-analysis.
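
For the pooling step, a common approach is a fixed-effect, inverse-variance weighted average of the per-study effect estimates; the numbers below are invented purely to show the arithmetic.

```python
import numpy as np

# Hypothetical standardized effect estimates and sampling variances from four evaluations.
effects = np.array([0.25, 0.40, 0.10, 0.32])
variances = np.array([0.02, 0.05, 0.01, 0.03])

# Fixed-effect meta-analysis: weight each study by the inverse of its variance.
weights = 1.0 / variances
pooled = np.sum(weights * effects) / np.sum(weights)
pooled_se = np.sqrt(1.0 / np.sum(weights))
print(f"Pooled effect: {pooled:.3f} (SE = {pooled_se:.3f})")
```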
