
Methodological Problems Faced in Evaluating Welfare-to-Work Programmes


Presentation Transcript


  1. Methodological Problems Faced in Evaluating Welfare-to-Work Programmes. Alex Bryson, Policy Studies Institute. Launch of the ESRC National Centre for Research Methods, St. Anne’s College, Oxford University, 20th June 2005

  2. The usual suspects: questions posed in W-to-W Evaluation
  • Does ‘it’ work?
    – Causal identification (a sketch follows below)
    – With or without experiments
    – With cross-sectional or longitudinal data
  • How are we going to make it run according to plan?
    – Operational assessment
  • How did people feel about it?
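To make the identification point concrete, here is a minimal sketch in Python. All figures are assumed for illustration and are not drawn from any evaluation discussed in the talk: random assignment recovers the programme impact directly, whereas a raw with/without comparison on observational data mixes the impact with selection.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 10_000

# Hypothetical world: unobserved 'ability' raises both programme entry and employment.
ability = rng.normal(size=n)
baseline = 0.4 + 0.1 * ability        # employment propensity without the programme
true_effect = 0.05                    # assumed programme impact (5 percentage points)

# (a) Experiment: random assignment breaks the link between ability and participation.
in_rct = rng.random(n) < 0.5
y_rct = baseline + true_effect * in_rct + rng.normal(0, 0.1, n)
print("experimental estimate:", round(y_rct[in_rct].mean() - y_rct[~in_rct].mean(), 3))

# (b) No experiment: higher-ability people are more likely to participate, so a raw
#     with/without comparison confounds the programme impact with selection on ability.
in_obs = rng.random(n) < 1 / (1 + np.exp(-ability))
y_obs = baseline + true_effect * in_obs + rng.normal(0, 0.1, n)
print("naive with/without estimate:", round(y_obs[in_obs].mean() - y_obs[~in_obs].mean(), 3))
```

The experimental estimate sits close to the assumed 5-point impact, while the naive comparison overstates it, which is the basic case for experiments or for non-experimental methods that model the selection process.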

  3. Harder Questions Not Always Posed
  • How much did it cost?
    – Net benefits, opportunity costs?
  • Who did it work for?
    – Heterogeneous treatment effects (see the sketch below)
  • Why did it (not) work?
  • Did it disadvantage others?
  • Under what conditions will we see the same results?
  • How is it likely to work in the future?
    – Policy transfer (from pilot to national roll-out; across areas; across providers; interacting with other policies)
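The “who did it work for?” question is easy to illustrate. Below is a hypothetical sketch, with an assumed subgroup marker and assumed impacts, showing how a pooled impact estimate can conceal strong heterogeneity across subgroups.

```python
import numpy as np

rng = np.random.default_rng(1)
n = 20_000

# Hypothetical experiment with one observed subgroup marker (e.g. recent work history).
prior_work = rng.random(n) < 0.5          # assumed subgroup indicator
treated = rng.random(n) < 0.5             # random assignment
# Assumed impacts: sizeable for those with recent work history, negligible otherwise.
effect = np.where(prior_work, 0.08, 0.01)
employed = (rng.random(n) < 0.45 + effect * treated).astype(float)

for mask, label in [(prior_work, "recent work history"), (~prior_work, "no recent work history")]:
    impact = employed[mask & treated].mean() - employed[mask & ~treated].mean()
    print(f"{label}: impact = {impact:.3f}")

# The pooled impact averages over the two groups and hides who it actually worked for.
print(f"pooled impact: {employed[treated].mean() - employed[~treated].mean():.3f}")
```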

  4. Potential for misleading results because...
  1. Wrong methodology for the policy issue at hand, e.g. estimating the effect of treatment on the treated (TT) instead of the average treatment effect (ATE) when looking to extend a programme nationally (see the sketch below).
  2. Correct methodology but implemented poorly – either on the ground (e.g. securing random assignment) or with poor data (e.g. early on in the EMA evaluation).
  3. Impacts shift in the longer run. Few studies address long-term impacts, but it is clear that they are often very different from shorter-term impacts – often reversing earlier results. E.g. GAIN: the work-first advantage fades relative to human capital investment, as you might expect.
  4. General equilibrium effects are a big deal – that is, the programme has big effects on non-participants, e.g. where helping programme participants proves to be at the expense of other disadvantaged groups beyond the programme. This depends on the size of the programme, how effective it is in benefiting participants, and the extent to which participants are capable of taking the jobs of non-participants.
  5. Blind empiricism: trusting the numbers too much. We need THEORY as well as numbers.
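The TT/ATE distinction in point 1 can be made tangible with a small simulation. Everything here is assumed for illustration: individual gains vary, and those who expect to gain most are more likely to volunteer for a pilot, so the pilot identifies TT rather than the ATE that matters for a national roll-out.

```python
import numpy as np

rng = np.random.default_rng(2)
n = 50_000

# Assumed individual gains from the programme: a minority gains a lot, most gain little.
gain = rng.gamma(shape=2.0, scale=0.03, size=n)

# In a voluntary pilot, those who expect to gain most are more likely to come forward.
volunteers = rng.random(n) < np.clip(gain / gain.mean() * 0.3, 0, 1)

tt = gain[volunteers].mean()    # what a pilot evaluation of participants recovers
ate = gain.mean()               # what matters if the programme is extended to everyone

print(f"TT  (pilot volunteers):          {tt:.3f}")
print(f"ATE (whole eligible population): {ate:.3f}")
# Extrapolating the pilot's TT to a mandatory national roll-out would overstate the impact.
```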

  5. The big issue...
  • Evaluations can never ‘prove’ that certain policies work or do not work, because pilots can NEVER give a once-and-for-all answer. Effects differ:
    – with programme ageing
    – with the size/composition of entry cohorts
    – with changes in the external environment, e.g. the business cycle
    – with interactions with other policies
  • Therefore always ask the big questions:
    – A priori, why do we think policies are going to have a particular impact?
    – What happens when similar policies evaluated at different points in time, or across regions/countries, produce different results?
    – How can we learn from these differences? How can we understand what generates them?

  6. Practical steps...
  1. Increasing evaluator knowledge of evaluation processes and mechanisms – particularly important in understanding which treatment parameter is appropriate for the policy at hand.
  2. Getting government and evaluators to understand what data and practical measures are needed in advance to secure the appropriate evaluation methodology.
  3. More triangulation with alternative techniques/methods addressing the same question:
    – laboratory experiments
    – qualitative data
    – purpose-built surveys alongside administrative data
  4. Importance of replication studies to validate initial findings, understand bias in estimates, and review impacts over time:
    – same data, same methods
    – same data, alternative methods
    – extensions to the data, same methods
    – extensions to the data, alternative methods
  5. More meta-analyses (see the sketch below).
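As a minimal illustration of point 5, here is a standard inverse-variance (fixed-effect) pooling of impact estimates. The estimates and standard errors are invented for the example; the same mechanics apply to pooling replications or studies across sites.

```python
import numpy as np

# Hypothetical impact estimates (percentage-point employment effects) and standard
# errors from several replications/extensions of the same evaluation (figures assumed).
estimates = np.array([0.042, 0.015, 0.060, 0.028, -0.005])
std_errs  = np.array([0.012, 0.020, 0.025, 0.010, 0.018])

# Inverse-variance (fixed-effect) pooling: more precise studies get more weight.
weights   = 1.0 / std_errs ** 2
pooled    = np.sum(weights * estimates) / np.sum(weights)
pooled_se = np.sqrt(1.0 / np.sum(weights))

# Cochran's Q: a crude check on whether the estimates disagree by more than
# sampling error alone would explain (i.e. genuine heterogeneity across studies).
q = np.sum(weights * (estimates - pooled) ** 2)

print(f"pooled impact: {pooled:.3f} (se {pooled_se:.3f}), Q = {q:.1f} on {len(estimates) - 1} df")
```

A large Q relative to its degrees of freedom is exactly the signal, flagged on the previous slide, that results differ across times, places or providers and that those differences need explaining rather than averaging away.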

  7. A Footnote: What Does ‘it worked’ Mean?
  1. What works economically competes with what works politically – clearly, signalling to the electorate (selling welfare) is key.
  2. Economic outcomes (poverty reduction, increasing employment and the quality of employment) are largely uncontested. But these ‘goods’ – which are both private and public goods – come at a cost.
  3. The issue here is: benefits to whom, and at what cost (a) to the Exchequer, (b) to others (substitution etc.), and (c) what are the opportunity costs of not spending the money on other potential interventions? (A rough arithmetic sketch follows.)
  4. Finally, it is inherently more difficult to get at distributional outcomes than at mean outcomes – a point worth noting for a government interested in the distribution of outcomes.
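To show the kind of back-of-the-envelope arithmetic point 3 implies, here is an illustrative Exchequer calculation. Every figure is assumed purely for the example and carries no claim about any actual programme.

```python
# Illustrative Exchequer arithmetic, all figures assumed for the sake of the example.
cost_per_participant = 1_500   # programme spend per participant (pounds)
employment_gain      = 0.05    # extra probability a participant is in work
fiscal_gain_per_job  = 6_000   # annual benefit savings plus tax receipts per job
substitution_rate    = 0.3     # share of those jobs assumed displaced from non-participants

# Net annual gain to the Exchequer per participant, after netting off substitution.
net_to_exchequer = (employment_gain * fiscal_gain_per_job * (1 - substitution_rate)
                    - cost_per_participant)
print(f"net annual gain to the Exchequer per participant: £{net_to_exchequer:,.0f}")
```

With these assumed numbers the programme does not pay for itself in year one, which is the point of the slide: a positive employment impact can still fail a cost test once substitution and opportunity costs are counted.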
