
Thinking about Impact Assessment - from an International Development Cooperation standpoint


Presentation Transcript


  1. Thinking about Impact Assessment - from an International Development Cooperation standpoint Professor Elliot Stern, Lancaster University UK Presentation to the ALNAP 24th Biannual Meeting, Berlin, December 2nd 2008

  2. Thinking about Impact Assessment The argument I want to make: • ‘Outcomes’, ‘results’, ‘effects’ and ‘impacts’ are important • But we need to think clearly about why this is so; what methods are appropriate; and in what circumstances

  3. Thinking about Impact Assessment Evaluators have always been interested in ‘outcomes’, ‘results’, ‘effects’ and ‘impacts’ • The balance between ‘summative’ and ‘formative’, ‘process’ and ‘outcome’ evaluations has long been argued about, and • There have been legitimate criticisms that too much attention is given to process and not enough to outcomes

  4. Thinking about Impact Assessment There are often weaknesses in evaluation: • The balance of evaluative effort can be skewed towards processes unconnected to outcomes • Methods adopted make little effort to disentangle what works from what is spurious – what is due to a particular intervention/initiative rather than to other causes • Evaluators have been known to be concerned only for beneficiaries – ignoring those who have missed out • Initial success is privileged over longer-term results

  5. Thinking about Impact Assessment In development cooperation the OECD/DAC definition has emphasised duration in defining impacts; impacts are: ‘long-term effects produced by a development intervention’ Not all have accepted this distinction even in this particular policy domain; thus the EU defines impact as: ‘A general term used to describe the effects of an intervention on society …’

  6. Thinking about Impact Assessment There has been a general upsurge of interest in ‘experimental’ and ‘scientific’ approaches, which has informed the discourse about impact across many domains. This has been linked to medical trials – the Cochrane Collaboration – and to similar moves in human services – the Campbell Collaboration – and reinforced by US legislation requiring evaluations to be ‘scientific’

  7. Thinking about Impact Assessment ‘Impact’ has in this context been given a narrower, methods-led meaning, to paraphrase Mohr: • A comparison of what happens with what would have happened had the intervention not been implemented From this perspective - the one advocated by Howard White at 3ie and the CGD - impact has become identified with attribution and the counterfactual, and with the experimental methodologies associated with that understanding of science & research (see the CGD report When Will We Ever Learn? Improving Lives through Impact Evaluation, 2006)
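
To make the counterfactual comparison concrete, here is a minimal sketch (illustrative only: the data, the target of 50 and the ~4-point shift are invented, not drawn from the presentation) contrasting ‘was it successful?’ with ‘did it make a difference?’:

```python
import random

random.seed(1)

# Invented outcomes with and without a hypothetical intervention.
control = [random.gauss(50, 10) for _ in range(500)]   # the 'counterfactual' group
treated = [random.gauss(54, 10) for _ in range(500)]   # intervention shifts the mean by ~4

# "Was it successful?" -- share of treated above an (invented) target of 50.
success_rate = sum(t > 50 for t in treated) / len(treated)

# "Did it make a difference that would not otherwise have happened?"
# -- the comparison with the counterfactual group.
estimated_effect = sum(treated) / len(treated) - sum(control) / len(control)

print(f"Treated above the target: {success_rate:.0%}")
print(f"Effect relative to the counterfactual: {estimated_effect:+.1f}")
```

The first number can look impressive even when the second is modest; the attribution question is answered only by the second.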

  8. Thinking about Impact Assessment This is not the first time this ‘model’ has been advocated - it recurs. It is not generally accepted in the evaluation community as the only or superior approach – but it is important. The battles that have gone on in the NONIE group and elsewhere have forced some acceptance of a ‘mix of methods’, fit for purpose and circumstance. But we would be wise to continue to distinguish between this and other approaches to impact

  9. Thinking about Impact Assessment Why does attribution matter? Mainly because we need to disentangle what makes a difference – ‘what works’ in the jargon – from changes that have nothing to do with our efforts. Not simply: was this initiative successful? But also: did this initiative/intervention make a difference that would not otherwise have happened? For example…

  10. Thinking about Impact Assessment [Figures 1–3: causal path diagrams linking factors A, B (B(1), B(2)), C and D]

  11. Thinking about Impact Assessment There are two complementary approaches to this problem: • Comparative methods, including before/after comparisons; quasi-experiments; and full (randomised) experiments • ‘Theory-based’ methods, including ‘Theories of Change’, causal modelling and ‘realist’ analyses We usually need a mix!
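
As a hedged sketch of why comparison groups matter among these methods (all numbers invented for illustration): a simple before/after estimate absorbs any background trend, while a difference-in-differences contrast with an untreated group nets it out.

```python
# Invented before/after mean outcomes for a treated group and a comparison group.
treated_before, treated_after = 40.0, 48.0
control_before, control_after = 40.0, 45.0   # background trend with no intervention

# Before/after alone credits the intervention with the trend as well.
before_after = treated_after - treated_before                                # +8.0

# Difference-in-differences subtracts the trend seen in the comparison group.
diff_in_diff = (treated_after - treated_before) - (control_after - control_before)  # +3.0

print(f"Before/after estimate: {before_after:+.1f}")
print(f"Difference-in-differences estimate: {diff_in_diff:+.1f}")
```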

  12. Thinking about Impact Assessment There are four sets of considerations I would weigh when constructing an approach to the evaluation of ‘impacts’: • The political agendas of the actors • Technical issues of what is possible • Arguments from the philosophy of science • Ethical considerations

  13. Thinking about Impact Assessment The narrow approach to impact does have political ‘drivers’, although they are very diverse and the alliances are sometimes strange. Advocates obviously want to ‘better meet social and economic needs’. But they may also want to: • Legitimate (or de-legitimate) institutional and policy goals • Simplify policies/find the silver bullet/reduce costs/risks • Reduce public expenditure – the ‘nothing works’ agenda, again… and, on a smaller scale • Occupational politics – or careerism

  14. Thinking about Impact Assessment Technical considerations are well-rehearsed. They include: • Problems constructing and maintaining control groups • The practicalities of random allocation (central control, administrative capacity, resources) • The risks of ‘contamination’ • The tendency to reductionism – a focus on limited ‘outcomes of interest’ • The statistical power of measures (sample size) • Trade-offs between internal validity and external validity – & hence our ability to generalise
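
One of these constraints, statistical power, can be sketched with the standard two-sample size approximation (a rough illustration only: the helper n_per_arm is defined here, not a library call, and the alpha/power defaults are conventional choices rather than figures from the talk):

```python
from math import ceil
from statistics import NormalDist

def n_per_arm(effect_size: float, alpha: float = 0.05, power: float = 0.80) -> int:
    """Approximate sample size per arm for a two-sided, two-sample test,
    given a standardised effect size (Cohen's d)."""
    z_alpha = NormalDist().inv_cdf(1 - alpha / 2)   # critical value for the test
    z_beta = NormalDist().inv_cdf(power)            # value for the desired power
    return ceil(2 * ((z_alpha + z_beta) / effect_size) ** 2)

# Smaller effects demand far larger trials -- one practical limit
# on experimental impact evaluation.
for d in (0.8, 0.5, 0.2):
    print(f"d = {d}: ~{n_per_arm(d)} participants per arm")
```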

  15. Thinking about Impact Assessment I would want to distinguish between practical risks and logistical difficulties on the one hand and fitness for purpose on the other: • Many objections to experimental & quasi-experimental methods – when they are appropriate – can be overcome with careful attention to procedures and protocols; but sometimes difficulties are rooted in the ‘object’ and its context – as well as in methods

  16. Thinking about Impact Assessment We can compare three ‘scenarios’ • S1: Standardized interventions in identical settings with common beneficiaries • S2: Standardized interventions in diverse settings, possibly with diverse beneficiaries • S3: Customized interventions in diverse settings with diverse beneficiaries

  17. Thinking about Impact Assessment These scenarios necessarily pull towards different methodologies: Scenario 1 is better adapted to experiments Scenario 2 is better adapted to quasi-experiments & comparisons (contingent and realist) and combinations of methods Scenario 3 is better adapted to case studies or narrative/qualitative approaches that build plausible theories

  18. Thinking about Impact Assessment Experiments do tend to favour single ‘inputs’ and ‘outcomes’ that can be delivered in discrete packages (not embedded); that have a relatively short implementation chain – in the sense both of time and of complexity/ease of implementation; and where the intervention is repeated often (large n) They are best for projects that deliver a known service to large numbers of recipients; are possible for relatively simple programmes; & unsuited to complex multi-measure strategies/policies.

  19. Thinking about Impact Assessment This is acknowledged even by protagonists of RCTs. For example Esther Duflo of the MIT Poverty Lab has noted: ‘…randomised evaluations are not suitable for all types of programmes. They are suitable for programmes that are targeted to individuals or communities, and where the objectives are well defined. For example, the efficacy of foreign aid disbursed as general budget support cannot be evaluated in this way.’

  20. Thinking about Impact Assessment There is however a danger that advocates of ‘narrower’ impact approaches will press to redefine policy measures so that they become ‘evaluable’ through their preferred methods. As Duflo went on to say: ‘It may be desirable, for efficiency or political reasons, to disburse some fraction of aid in this form [GBS], although it would be extremely costly to distribute all the foreign aid in the form of general budget support, precisely because it leaves no place for rigorous evaluation of projects.’ (Italics added)

  21. Thinking about Impact Assessment • In international development cooperation, advocates of ‘impact’ approaches also tend to favour sectoral, targeted programmes (sometimes called ‘vertical’ interventions) over policies that address wider issues of governance and institution-building – such as General Budget Support or the Paris Declaration – arguing that sectoral programmes are both likely to be more effective and often cheaper to deliver; an argument of timely relevance given the MDGs

  22. Thinking about Impact Assessment Philosophical objections to experiments (and randomisation in particular) go to the heart of hard-fought debates about causality in the social sciences. These are variously: • Epistemological – what we can know and how we come to know it • Ontological – the nature of what exists, here of causes themselves • Methodological – the possibilities of data collection and analysis To pick up on a few examples of these debates…

  23. Thinking about Impact Assessment • Newtonian science assumed that we can observe regularities or patterns of individual phenomena from the outside & this allows for consistent explanations – explanation can be derived empirically • Most contemporary understandings of causality are theory-based & assume we cannot directly observe causal mechanisms – we need to open up the ‘black box’ because a) causal mechanisms are often hidden and b) are often unstable – e.g. are context specific • Hence the difficulty of finding Humean general laws!

  24. Thinking about Impact Assessment If we follow this line of argument, it is unlikely we will ever be able to consistently demonstrate what works, even for relatively straightforward projects and programmes, across all contexts and circumstances – evidence remains a matter of probability and estimation, not certainty or truth

  25. Thinking about Impact Assessment Which is why there is a need for: • Multi-methods that can be triangulated • Theory based approaches – to understand mechanisms that cannot be fully observed • Distinguishing between causality & explanation • Recognising the limits of ‘proof’ and ‘certainty’ • Understanding and typologising contexts • Linking process evaluations with outcome/impacts so as to understand a) what is being implemented and b) what accounts for divergence/diversity

  26. Thinking about Impact Assessment The ethical difficulties that all applied research faces are also well documented: • Treating people as actors with agency and will rather than as passive objects • Denying an intervention to some, if it is needed, in order to achieve randomisation Although the latter objection can be unfounded – it would be consistent with counterfactual logic to offer alternatives in terms of ‘service’ rather than something/nothing, and the focus of experiments is often modes of delivery rather than the actual service

  27. Thinking about Impact Assessment To conclude: We do need to focus more on outcomes/effects/impacts Comparisons (including experiments) are important, as is model/theory building We need to accept that as initiatives become more complex and multi-measure, so certainty and predictability about what works will diminish We should be neither put off nor seduced by the promises of experimentalists – they offer many things, but in a limited set of circumstances – as the wise ones among them admit!

  28. Thinking about Impact Assessment
