
What Works? Evaluating the Impact of Active Labor Market Policies


Presentation Transcript


  1. What Works? Evaluating the Impact of Active Labor Market Policies May 2010, Budapest, Hungary Joost de Laat (PhD), Economist, Human Development

  2. Outline Why Evidence-Based Decision Making? Active Labor Market Policies: Summary of Findings Where is the Evidence? The Challenge of Evaluating Program Impact Ex Ante and Ex Post Evaluation

  3. Why Evidence-Based Decision Making? • Limited resources to address needs • Multiple policy options to address needs • Rigorous evidence is often lacking to prioritize policy options and program elements

  4. Active Labor Market Policies: Getting the Unemployed into Jobs • Improve matching of workers and jobs: assist in job search • Improve quality of labor supply: business training, vocational training • Provide direct labor incentives: job creation schemes such as public works

  5. Active Labor Market Policies

  6. International Evidence on Effectiveness of ALMPs Active Labor Market Policy Evaluations: A Meta Analysis. By David Card, Jochen Kluve, and Andrea Weber (2009) Review of 97 studies from 1995 to 2007 The Effectiveness of European Active Labor Market Policy. By Jochen Kluve (2006) Review of 73 studies from 2002 to 2005

  7. Do ALMPs Help Unemployed Find Work? (Card et al. (2009), Kluve (2006)) • Subsidized public sector employment: relatively ineffective • Job search assistance (often least expensive): generally favorable, especially in the short run; combined with sanctions (e.g. UK “New Deal”), promising • Classroom and on-the-job training: not especially favorable in the short run; more positive impacts after 2 years

  8. Do ALMPs Help Unemployed Find Work? (Card et al. (2009), Kluve (2006)) • ALMPs targeted at youth: findings mixed

  9. The Impact Evaluation Challenge Impact is the difference in outcomes with and without the program for those beneficiaries who participate in the program. Problem: beneficiaries have only one existence; they either participate in the program or they do not.

  10. Impact Evaluation Challenge: is a before–after comparison OK? Income for a beneficiary increases from $1000 before skills training to $2000 after. Program impact = $1000 extra income?

  11. Impact Evaluation Challenge: a before–after comparison is often incorrect. Income for the same person, but without skills training, would have increased from $1000 to $1500 because of the improving economy. NO: program impact = $500, not $1000.
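To make the arithmetic on slides 10 and 11 concrete, here is a minimal Python sketch using the hypothetical figures from the slides; the variable names and the $1500 counterfactual are illustrative only.

```python
# Hypothetical figures from the slides: the naive before-after estimate
# attributes the whole income change to training, but the true impact is
# the change relative to the counterfactual (no training, improving economy).
income_before = 1000                   # income before skills training
income_after_with_training = 2000      # observed income after training
income_after_without_training = 1500   # counterfactual: economy improves anyway

naive_before_after = income_after_with_training - income_before             # $1000
true_impact = income_after_with_training - income_after_without_training    # $500

print(f"Naive before-after estimate: ${naive_before_after}")
print(f"True program impact:         ${true_impact}")
```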

  12. Impact Evaluation Challenge • Solution: a proper comparison group • Comparison outcomes must be identical to treatment group outcomes, if the treatment group did not participate in the program.

  13. Impact Evaluation Approaches • Ex ante: 1. Randomized evaluations 2. Double-difference (DD) methods • Ex post: 3. Propensity score matching (PSM) 4. Regression discontinuity (RD) design 5. Instrumental variable (IV) methods

  14. Random Assignment: income of the treatment group rises from $1000 before skills training to $2000 after; income of the comparison group rises to $1500. Program impact = $2000 − $1500 = $500.

  15. Randomized Assignment Ensures a Proper Comparison Group • Ensures the treatment and comparison groups are the same at the start of the program (background and outcomes) • Any differences that arise after the program must be due to the program and not to selection bias • “Gold” standard for evaluations; not always feasible
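A small simulation can illustrate why random assignment yields a valid comparison group. The sketch below is purely hypothetical: the outcome formula, the unobserved "motivation" variable, and the $500 true impact are assumptions chosen to mirror the slides' example, not real program data.

```python
import random

random.seed(0)

# Hypothetical simulation: applicants differ in unobserved motivation, but
# random assignment balances it across groups, so the difference in mean
# post-program incomes recovers the true $500 training impact on average.
n = 1000
applicants = [{"motivation": random.gauss(0.0, 1.0)} for _ in range(n)]
random.shuffle(applicants)
treatment, comparison = applicants[: n // 2], applicants[n // 2 :]

def post_income(person, trained):
    # Stylized outcome: $1000 baseline + $500 from the improving economy
    # + a motivation effect + a true training impact of $500.
    return 1000 + 500 + 200 * person["motivation"] + (500 if trained else 0)

avg_treated = sum(post_income(p, True) for p in treatment) / len(treatment)
avg_comparison = sum(post_income(p, False) for p in comparison) / len(comparison)
print(f"Estimated impact: ${avg_treated - avg_comparison:.0f}")  # close to $500
```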

  16. Examples of Randomized ALMP Evaluations • Improve matching of workers and jobs: counseling the unemployed in France • Improve quality of labor supply: vocationally focused training for disadvantaged youth in the USA (Job Corps) • Provide direct labor demand / supply incentives: Canadian Self-Sufficiency Project

  17. Challenges to Randomized Designs • Cost • Ethical concerns: withholding a potentially beneficial program may be unethical • Ethical concerns must be balanced against: • programs cannot reach all beneficiaries (and randomization may be the fairest allocation) • knowing the program impact may have large potential benefits for society …

  18. Societal Benefits • Rigorous findings lead to scale-up: • Various US ALMP programs – funding by US Congress contingent on positive IE findings • Opportunidades (PROGRESA) – Mexico • Primary school deworming – Kenya • Balsakhi remedial education – India

  19. Ongoing (Randomized) Impact Evaluations: From MIT Poverty Action Lab Website (2009)

  20. World Bank’s Development Impact Evaluation Initiative (DIME) 12 Impact Evaluation Clusters: Conditional Cash Transfers, Early Childhood Development, Education Service Delivery, HIV/AIDS Treatment and Prevention, Local Development, Malaria Control, Pay-for-Performance in Health, Rural Roads, Rural Electrification, Urban Upgrading, ALMP and Youth Employment

  21. Other Evaluation Approaches • Ex ante: 1. Randomized evaluations 2. Double-difference (DD) methods • Ex post: 3. Propensity score matching (PSM) 4. Regression discontinuity (RD) design 5. Instrumental variable (IV) methods

  22. Non-Randomized Impact Evaluations (“Quasi-experimental methods”) • Comparison group constructed by the evaluator • Challenge: the evaluator can never be sure whether the behaviour of the comparison group mimics what the treatment group would have done without the program: selection bias

  23. Example: Suppose Only Very Motivated Underemployed Seek Extra Skills Training • Data on (very motivated) underemployed individuals who participated in skills training • Construct a comparison group from (less motivated) underemployed who did not participate in skills training • DD method: the evaluator compares the increase in average incomes between the two groups

  24. Double-Difference (DD) Method [chart: outcome trends for the treatment and comparison groups; with non-random assignment the estimated program impact carries a positive bias]
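The double-difference calculation behind this chart can be written out in a few lines. The numbers below are hypothetical, chosen to mimic slides 23–24, where the more motivated treatment group would have improved faster even without the program, so the DD estimate carries a positive bias.

```python
# Hypothetical group means (not real data).
# DD = (treatment after - treatment before) - (comparison after - comparison before)
treat_before, treat_after = 1000, 2100   # very motivated participants
comp_before,  comp_after  = 1000, 1500   # less motivated non-participants

dd_estimate = (treat_after - treat_before) - (comp_after - comp_before)
print(f"DD estimate of program impact: ${dd_estimate}")
# Prints $600: if the true impact is $500 in this stylized example, the extra
# $100 is positive bias from the treatment group's greater motivation (selection bias).
```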

  25. Non-experimental design • May provide an unbiased impact estimate • Relies on assumptions regarding the comparison group • These assumptions are usually impossible to verify • Bias is generally smaller when the evaluator has detailed background variables (covariates)

  26. Assessing the Validity of Non-Randomized Impact Evaluations • Verify that pre-program characteristics are the same between the treatment and comparison groups • Test the ‘impact’ of the program on an outcome variable that should not be affected by the program • Note: these checks will always hold in properly designed randomized evaluations
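As an illustration of the two checks above, here is a hedged sketch assuming the evaluation data sit in a pandas DataFrame with a treatment flag, a pre-program characteristic, and a "placebo" outcome the program should not affect; the column names and toy values are invented for illustration.

```python
import pandas as pd
from scipy import stats

# Toy data: treatment flag, a pre-program characteristic (age), and an
# outcome the program should not be able to affect (placebo_outcome).
df = pd.DataFrame({
    "treated":         [1, 1, 0, 0, 1, 0, 1, 0],
    "age":             [25, 31, 28, 30, 27, 29, 26, 32],
    "placebo_outcome": [3.1, 2.9, 3.0, 3.2, 2.8, 3.1, 3.0, 2.9],
})
treated = df[df["treated"] == 1]
comparison = df[df["treated"] == 0]

# 1. Balance check: pre-program characteristics should not differ significantly.
print(stats.ttest_ind(treated["age"], comparison["age"]))

# 2. Placebo check: no significant "impact" on an outcome the program cannot affect.
print(stats.ttest_ind(treated["placebo_outcome"], comparison["placebo_outcome"]))
```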

  27. Conclusion • Everything else equal, experimental designs are preferred; assess case by case • Most appropriate when: • a new program is in its pilot phase • a program is past the pilot phase but receives large amounts of resources and its impact is questioned • Non-experimental evaluations are often cheaper; interpretation of their results requires more scrutiny

  28. THANK YOU!

  29. Impact Evaluation Resources World Bank (2010), “Handbook on Impact Evaluation: Quantitative Methods and Practices” by Khandker et al. www.worldbank.org/sief www.worldbank.org/dime www.worldbank.org/impactevaluation www.worldbank.org/eca/impactevaluation (last site coming soon) http://ec.europa.eu/regional_policy/sources/docgener/evaluation/evaluation_en.htm www.povertyactionlab.org http://evidencebasedprograms.org/
