
EVALUATING AND ASSESSING THE IMPACT OF GOVERNANCE PROGRAMMES


Presentation Transcript


  1. EVALUATING AND ASSESSING THE IMPACT OF GOVERNANCE PROGRAMMES: An ongoing challenge!

  2. Overview • Background • Definitions • General trends in Evaluation and IA debates • Governance M&E and Impact Assessment (IA) • Literature • BINGO practice • Conclusions and implications for CARE • Buzz group discussions and questions

  3. Definitions? • What is the difference between impact assessment and evaluation? • Is it about the nature of the change? • The scale of change? • The timing of the exercise? • Accountability reporting versus learning? • The difference between methodological approaches and data collection methods?

  4. General context and debates • Political pressure to demonstrate results and VFM • But confusion about whether/how impact should be defined and measured • Old tensions and power issues remain alive: • Between upward accountability to donors and downward accountability, learning and empowerment • Attribution versus contribution

  5. Debates becoming more nuanced • No longer simply about: • Quantitative versus qualitative • Subjective versus objective • Economists versus the rest • More incentive to go deeper: • What are the purposes of different evaluation and IA approaches? • What are their strengths and weaknesses? • What are the differences? • Which elements are compatible and which are not? Source: adapted from Chambers 2008 by Holland

  6. Randomised control trials (RCTs) • Good for attributing impact… but: • Not good for empowering or downward accountability • Broader ethical issues • Atheoretical: they look at isolated variables, so are not good at explaining how and why change happens • Not good for generalisation • Ignore spillover effects (see the sketch below)
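  To make the attribution point concrete, here is a minimal Python sketch using entirely simulated outcome scores (all numbers are hypothetical): random assignment lets a simple difference in means estimate the average effect, yet the estimate says nothing about how or why change happened.

    import random
    import statistics

    random.seed(42)

    # Simulated outcome scores for a hypothetical governance programme.
    control = [random.gauss(50, 10) for _ in range(500)]    # no programme
    treatment = [random.gauss(55, 10) for _ in range(500)]  # with programme

    # Random assignment means a simple difference in means estimates the
    # average treatment effect.
    ate = statistics.mean(treatment) - statistics.mean(control)

    # Rough standard error of the difference in means.
    se = (statistics.variance(treatment) / len(treatment)
          + statistics.variance(control) / len(control)) ** 0.5

    print(f"Estimated impact: {ate:.1f} points "
          f"(95% CI roughly {ate - 1.96 * se:.1f} to {ate + 1.96 * se:.1f})")

    # The number attributes change to the programme, but explains nothing
    # about mechanism, spillovers into the control group, or generalisability.

  The contrast is the point: the estimate is credible precisely because assignment was random, but it is a single isolated number, which is why RCTs struggle with the 'how and why' questions above.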

  7. Theory of change approach • More theoretical • Better for learning: testing assumptions and identifying/exploring how/why change happened, in an effort to identify ‘good practice’ • Some argue that, if combined with RCTs, it could enable both accountability and learning (Lucas and Longhurst 2010) • But may not be very effective in more complex systems

  8. Realistic evaluation approach • More concerned with learning than accountability • Less confident in a priori theories of change • More interested in identifying what change is happening/has happened, and why, in order to pinpoint and enhance positive impacts for particular groups as a basis for future programme theory (Lucas and Longhurst 2010)

  9. Developmental evaluation (informed by complexity/systems thinking) • Similar to realistic evaluation but with more emphasis on learning • A complexity lens offers additional value as it forces consideration of: • Which elements of a programme might suit a TOC approach • The extent to which it is possible to identify ‘best’ or ‘good practice’ • Systems rather than isolated variables • Unpredictable change

  10. Purpose and scope of traditional vs. complexity-oriented evaluations [comparison table not transcribed]. Source: Ramalingam 2008

  11. Implications of complexity thinking: what is planned vs. what actually happens • Evaluate from the perspective of multiple levels of interconnected systems, study feedback between the organisation and its environment, and look for emergent rather than planned change • Dynamics and nature of change: look for non-linearity, anticipate surprises and unexpected outcomes, analyse the system dynamics over time, look for changes in conditions that facilitate systemic change, and assess how well matched the programme is to the wider system • People, motivations and relationships: study patterns of incentives and interactions among agents, the (e)quality of relationships, and individuals and informal/shadow coalitions vs. the formal organisation, etc. Source: adapted from Ramalingam 2008

  12. Possible areas of difference • Values • Ideas about knowledge and evidence: positivist versus interpretive • Ideas about how change happens: systems/complexity thinkers versus reductive/Neo-Newtonian linear views • etc.

  13. New areas for ‘compatibility’? • Participatory statistics now shown to enable: • Standardisation and commensurability: in Malawi and Uganda, used to evaluate outcomes and assess impact with rigorous representative samples • Scale: measuring empowerment in Bangladesh • Aggregation: several examples of successful aggregation from focus groups (a simple aggregation sketch follows below) • Representativeness: the above use sampling techniques deemed rigorous and at least as reliable as traditional alternatives. Source: Dee Jupp, cited in Holland (forthcoming)
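  As a simple illustration of the aggregation point, the following Python sketch combines focus-group scorecard ratings on a shared 1-5 scale into a population-weighted district estimate. All scores and weights are hypothetical; the point is that a shared, well-specified scale makes group scores commensurable and therefore aggregable.

    # Each tuple: (focus-group score on a shared 1-5 scale, households represented).
    district_groups = [
        (3.5, 120),
        (4.0, 200),
        (2.5, 80),
        (3.0, 150),
    ]

    total_households = sum(weight for _, weight in district_groups)

    # Weight each group's score by the population it speaks for.
    district_score = (sum(score * weight for score, weight in district_groups)
                      / total_households)

    print(f"District-level service score: {district_score:.2f} / 5")

  Representative sampling of the groups themselves is what would make such an aggregate defensible as a district estimate, echoing the representativeness point above.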

  14. Coming soon… • http://www.bigpushforward.net/

  15. Governance M&E and IA literature • Mirrors debates in the general literature, e.g. attribution versus contribution • Much discussion related to international governance indicators (a possible source of secondary data)

  16. Governance M&E and IA literature • Raises serious questions about assumptions and theories of change • Do objective de jure policy decisions lead to de facto outcomes? • Does citizens’ voice lead to increased government responsiveness and accountability? • Does more transparency lead to enhanced accountability? • Do democratic outcomes lead to development outcomes?

  17. More nuanced debates • Political issues: • Is governance impact just about achieving the MDGs? • Is it also about political and civil rights and freedom? • Methodological: • How should governance impact/outcomes be defined? • Who should be responsible for changes in complex programme systems? • Attribution is impossible; most likely, association is as good as it gets • The subjective/objective distinction is false; perception-based data is very acceptable, but whose perceptions? • Participatory numbers challenge the quantitative/qualitative distinction • So why the lack of documented examples of participatory methods in use? • Need to use approaches that enhance learning and enable better understanding of governance change processes • Implications: pluralist approaches

  18. Pluralist approaches • TOC, e.g. ODI Jones 2010, for advocacy • Inductive approaches to producing quantitative data from case studies (Gaventa and Barrett 2010) • The IETA report by McGee and Gaventa 2010 outlines pros and cons of several approaches and methods. They are not mutually exclusive: many of the methods could be used within TOC, realistic or developmental approaches. They include: • Quantitative surveys or use of secondary data • Qualitative surveys • Outcome mapping • Most significant change • Critical stories of change • Participatory approaches • SenseMaker (quantifying data from storytelling)

  19. Numbers: a note of caution • “WGIs - the best-known and most widely cited indicators of the quality of governance - are highly attractive to elite groups yet almost useless, if not actively misleading, for lay decision-makers. For good reasons their legitimacy is likely to be highly contested” (Pollitt, 2009).

  20. BINGO trends: evaluation and IA a source of tension • Drivers: increased competition, a growing need for high-profile fundraising and advocacy work, and increased pressure to show results and impact • …all causing problems for M&E: poor learning and accountability, and a lack of professional norms and standards. Source: Ramalingam 2008

  21. BINGO trends: M&E and IA practice • Efforts to improve M&E practice using pluralist approaches • Move away from evaluation as reporting towards learning and changing attitudes and behaviour • Move away from attribution to contribution (e.g. Christian Aid’s ‘leverage’) • Focus on outcomes rather than impact • Interest in looking at change through a power lens • Reluctance to perform meaningless aggregation • Frustration that voice and accountability outcomes are relegated to output level

  22. BINGO trends: common challenges • Fuzzy boundaries: • the nature of governance domains of change • levels of change: outputs/outcomes/impacts etc. • Confusion can lead to subjective classifications • Even elements assumed ‘simple’ in Northern offices are challenging in practice • A host of problems arise because indicators are too vague or poorly specified

  23. Implications? • Emerging debates are relevant; the challenges are much broader than choosing indicators • Need to ensure the approaches chosen fit with values, understandings of ‘truth’ and ‘evidence’, and assumptions about how change happens • Could a complexity lens help? • Could QPM (quantitative participatory methods) prove to be VFM methods within a developmental approach? e.g.: • a ‘Measuring Empowerment Index’ for changes in citizen empowerment • community scorecards for measuring changes in state effectiveness

  24. Implications of Ubora meta-level indicators for CO evaluation & IA? • What are the conceptual links between definitions of outcomes, impacts, related indicators and baselines in time-bound, donor-funded projects and longer-term programmes? • How can approaches to evaluation in short-term projects help to build a convincing case for contribution to longer-term programme change? • What secondary data is available to indicate governance-related changes for your impact populations? Is it produced through approaches that are compatible with your values and understandings of change? • Do the indicators selected fit with the UNAIDS criteria for good indicators?

  25. Criteria for a good indicator: • Is it needed and meaningful? • Does it track significant change? • Has it been tested / will it work? • Is it feasible: are resources available to collect and analyse data? • Is it consistent with, and does it add meaning to, the overall indicator set? • If quantitative, is it defined in metrics consistent with the overall system? • Is it fully specified: • Clear purpose and rationale? • Qualitative as well as quantitative aspects well defined? • Clear data collection methods identified? • Frequency of data collection specified? • Disaggregated as appropriate? • Guidelines on interpretation included? (a sketch of a fully specified indicator record follows below) Source: adapted from UNAIDS 2008 in Holland and Thirkell
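  One way to operationalise ‘fully specified’ is to treat each indicator as a structured record whose fields must be filled in before it joins the overall set. The Python sketch below is purely illustrative: the record fields mirror the checklist above, and the example indicator, its wording and its parameters are hypothetical, not drawn from the source.

    from dataclasses import dataclass, field

    @dataclass
    class Indicator:
        name: str
        purpose: str                    # clear purpose and rationale
        collection_method: str          # how the data will be collected
        frequency: str                  # how often the data is collected
        disaggregation: list[str] = field(default_factory=list)  # e.g. by sex, age
        interpretation_notes: str = ""  # guidance on how to read changes

    # Hypothetical governance indicator, specified against the checklist.
    citizen_voice = Indicator(
        name="% of community members who report raising an issue at a public forum",
        purpose="Track whether citizen voice increases over the programme period",
        collection_method="Household survey with a representative sample per district",
        frequency="Annual",
        disaggregation=["sex", "age group"],
        interpretation_notes="Read alongside scorecard data on state responsiveness.",
    )

  Nothing in the record enforces quality by itself, but making every field mandatory turns the checklist from a reviewing aid into a specification that vague or poorly specified indicators cannot pass.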
