Explore the international development evaluation landscape, its challenges, and strategies for improving aid effectiveness and efficiency through quality assessments. Learn from the AusAID experience and global evaluation parameters. Discover how quality reporting, performance management, and partnership dynamics influence aid outcomes. Gain insights into evaluation utilisation and enhancing program effectiveness.
Quality in Evaluation: the international development experience
Sophie Davies, Manager, Evaluations Support, Program Effectiveness and Performance Division
Presentation Outline • Context • Evaluation at AusAID • Improving evaluation utility and quality
The Aid Context • Current aid program is $4.8 bn (0.35% of GNI) • 89% of Australia's aid goes through AusAID • Bipartisan commitment to an aid budget of 0.5% of GNI ($8 bn) by 2015 • Donor commitment to 0.7% of GNI has never been fully realised
Where does aid go? 2011–12 budget, top 5 countries • Indonesia ($558.1m) • Papua New Guinea ($482.3m) • Solomon Islands ($261.6m) • Afghanistan ($165.1m) • Vietnam ($137.9m)
Reaching the MDGs • MDGs: agreed targets to reduce poverty by 2015 • Adopted by 189 nations at the UN Millennium Summit in September 2000 • Australia restated its commitment in 2007
Domestic parameters – aid review • Independent review of aid effectiveness led by Sandy Hollway over the last 6 months • Objective: to examine the effectiveness and efficiency of the Australian aid program and make recommendations to improve its structure and delivery • Results are being considered by Government • Will be released towards the end of June
Global evaluation parameters: OECD-DAC • International standards for evaluation • DAC criteria used for quality reports & evaluation: • Relevance • Effectiveness • Efficiency • Impact • Sustainability • Plus AusAID criteria: gender equity, M&E, analysis/learning
Performance Management & Evaluation Policy • Self-assessment quality reporting balanced by independent evaluation for ‘monitored’ activities • Quality reporting occurs at: activity level (at entry and during implementation) and program level (annual program review) • Independent evaluation: at least once every 4 years (Independent Progress Report, IPR); at end of program, within its last 6 months (Independent Completion Report, ICR) • Policy reviewed every 2 years (most recently in 2010)
Overarching principles • Clear Objectives: for all aid interventions • Transparency: default position is that reports are publicly available • Contestability and Sound Evidence: performance reporting subject to contestability and based on sound evidence • Whole of Government and Other Partnerships: seek input from and consult with key partners • Aid Effectiveness: Paris Declaration principles and the Accra Agenda for Action • Efficiency: effort and resources invested proportional to the value and context of the program
Where Performance and Quality sits at AusAID • Programs: self-assessment; manage evaluations • P&Q network/managers: over 230 people, some with dedicated technical support roles • Quality & Performance Systems section: policy and guidance; support to programs in applying these • Office for Development Effectiveness: quality checks, Annual Review of Development Effectiveness (ARDE), 2–3 thematic/country-level evaluations per year
Purpose of the PMEP (Performance Management & Evaluation Policy) • Management: improvements to the future aid program; informs program and budget decisions • Learning: what works, when, where and how; helps to focus funding where it is most effective, efficient and relevant • Accountability: to the public, e.g. through the Annual Review of Development Effectiveness (ARDE); to partner governments, communities, Whole of Government and implementing partners
Improving quality: under ODE • 2006 meta-evaluation found poor evaluation quality • Changes made: revised PMEP/evaluation guidance based on DAC criteria; introduction of a technical review process; establishment of an M&E panel of experts • 2009: PMEP policy and guidance moved from ODE to Operations & Policy (now Program Effectiveness & Performance)
Four reviews of evaluation quality in 2011, each driven by a different purpose • Review of the technical review process: to improve evaluation processes • PMEP review: for policy reform • Meta-analysis of independent completion reports (ICRs): content review for the independent review of aid effectiveness • Meta-evaluation of education ICRs: for understanding across the education sector
Reviews identified underlying strengths • Good practice exists: internal annual quality reflections are well utilised to monitor and improve program management • Evaluation report quality has improved • Growing performance culture built around the Performance & Quality network • The M&E Panel is well utilised and has helped some programs to improve quality
But evaluation utilisation is poor • Reviews identified common issues: • Focus on outputs over outcomes and impact • Poor-quality reports; narrow, variable interpretation of criteria • Weak underlying data from M&E systems • Low compliance • Poor use of information; publication is lagging • Despite the different audiences for each review, the common message: evaluation is being driven by accountability, not by management and learning
What needs to change? • Judicious and strategic use of evaluations • Scope and depth match the evaluation purpose • Focus on results and development contribution, not just outputs • Management see benefit and utility in evaluation • Transparency is improved: broader public understanding; improved accountability to the public, partners and communities
Shifting the balance: how do we do this? • For greater management utility: link staff training with support; improve current guidance (scope vs purpose); make people accountable for the use of evaluation information • For greater learning: more succinct documents that allow for meta-analysis; good practice examples identified and shared • For better accountability: the independent aid review should provide direction and a framework for agency accountability