
MINISTERUL FINANŢELOR PUBLICE Autoritatea de Management pentru Cadrul de Sprijin Comunitar Unitatea Centrală de Evaluare. Evaluation Working Group – first training seminar for evaluation staff of 2007-2013 Romanian NSRF and Operational Programmes April 12, 2006


Presentation Transcript


  1. MINISTERUL FINANŢELOR PUBLICE Autoritatea de Management pentru Cadrul de Sprijin Comunitar Unitatea Centrală de Evaluare Evaluation Working Group – first training seminar for evaluation staff of 2007-2013 Romanian NSRF and Operational Programmes April 12, 2006 Niall McCann, TA Project on Programming, Monitoring and Evaluation David Hegarty, Irish Ministry of Finance

  2. Seminar 1 – Presentation structure – what is evaluation? • Part 1 – 9.45 – 12.30 • Different types of evaluation and evaluation systems • The evaluation cycle (briefly!) • Part 2 – 13.30 – 14.30 • The evaluation criteria – relevance, effectiveness, efficiency, impact, sustainability

  3. Part 1 – What is evaluation? • Different types of evaluation and evaluation systems – part A • Definitions of evaluation • The role of evaluation in PCM • Evaluation, monitoring and audit • Purpose of evaluation • Principles of evaluation • What happens as a result of evaluation?

  4. What is evaluation? • Different types of evaluation and evaluation systems – part B • Formative v summative evaluation • Impact evaluation/experimental design v theory-based evaluation • External v internal evaluation • Centralised v decentralised evaluation

  5. Defining Evaluation • Some definitions: • “A judgement of interventions according to the results, impacts and needs they aim to satisfy” (EU Commission) • “…the process of assessing the extent to which project, programme or policy objectives have been achieved and how economically and efficiently” (Economic and Financial Evaluation: Measurement, Meaning and Management, Michael Mulreany, IPA Ireland, 1999) • “A critical and detached look at the objectives and how they are being met” (UK Treasury) • Involves judgement on the basis of criteria • More comprehensive than monitoring • Applies to policies, programmes and projects

  6. The role of evaluation in PCM • The 5 stages of the project cycle… • 1. Programming • 2. Identification • 3. Formulation • 4. Implementation (including Monitoring and Reporting) • 5. Evaluation and Audit • EuropeAid Project Cycle Management Guidelines

  7. Logframe Matrix

  8. Link between evaluation criteria and the Logframe • Logframe provides framework for evaluation • Specifies what was to be achieved (results and purpose) • How these achievements were to be verified (indicators and means of verification) • What the key assumptions were.
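
A purely illustrative aid (not part of the original presentation): a minimal sketch of a logframe matrix represented as a data structure, showing the objective hierarchy rows and the indicator, verification and assumption columns that the bullets above refer to. The field names and example entries are assumptions based on the standard EuropeAid PCM layout, not on any specific Romanian OP.

```python
# Minimal sketch of a logframe matrix as a data structure (illustrative only).
from dataclasses import dataclass, field
from typing import List

@dataclass
class LogframeRow:
    level: str                          # objective-hierarchy level, e.g. "Purpose"
    intervention_logic: str             # what is to be achieved at this level
    indicators: List[str] = field(default_factory=list)             # objectively verifiable indicators
    means_of_verification: List[str] = field(default_factory=list)  # where the indicator data come from
    assumptions: List[str] = field(default_factory=list)            # external conditions assumed to hold

# Hypothetical entries for an imaginary SME support measure.
logframe = [
    LogframeRow("Overall objective", "Increased regional competitiveness",
                ["Regional GDP per capita"], ["National statistics"]),
    LogframeRow("Project purpose", "Assisted SMEs adopt new technologies",
                ["% of assisted SMEs investing in new equipment"], ["Beneficiary survey"],
                ["Stable macroeconomic conditions"]),
    LogframeRow("Results", "Grants delivered to SMEs",
                ["Number of grants paid"], ["Payment records"],
                ["Sufficient demand from SMEs"]),
    LogframeRow("Activities", "Launch calls, appraise and select applications",
                ["Calls launched on schedule"], ["Managing Authority records"]),
]

for row in logframe:
    print(f"{row.level}: {row.intervention_logic}")
```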

  9. Evaluation criteria and LF levels • The logframe objective hierarchy (problematic situation, means, activities, results, project purpose, overall objective) maps onto the evaluation criteria: • Relevance – links the problematic situation to the objective hierarchy as a whole • Efficiency – means/activities → results • Effectiveness – results → project purpose • Impact – project purpose → overall objective • Sustainability – continuation of benefits across all levels

  10. Evaluation, monitoring and audit • Evaluation • Assessment of the efficiency, effectiveness, impact, relevance and sustainability of an intervention • Monitoring • Ongoing analysis of project progress towards achieving planned results with the purpose of improving management decision making • Audit • Assessment of (i) the legality and regularity of project expenditure and income…(ii) whether project funds have been used…in accordance with sound financial management…and (iii)…for the purposes intended.

  11. Monitoring is…. • …an ongoing, continuous, systematic process • Some monitoring questions • How much has been spent? • What did we get for it? • Who benefited? • Are we on track? • Uses financial and performance indicator data
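
A purely illustrative aid: a minimal sketch of how the financial and performance indicator data mentioned above might be used to answer the monitoring questions (how much has been spent, what did we get for it, are we on track). The measure names, figures and the 90% tolerance are invented for illustration.

```python
# Minimal, illustrative monitoring check using financial and output indicators.
measures = [
    # (name, budget_eur, spend_to_date_eur, output_target, outputs_to_date)
    ("Training grants",    10_000_000, 6_500_000, 2_000, 1_400),
    ("SME investment aid", 25_000_000, 4_000_000,   500,    60),
]

for name, budget, spend, target, outputs in measures:
    absorption = spend / budget      # How much has been spent?
    achievement = outputs / target   # What did we get for it?
    on_track = achievement >= 0.9 * absorption   # Are we on track?
    status = "on track" if on_track else "review needed"
    print(f"{name}: {absorption:.0%} of budget spent, "
          f"{achievement:.0%} of output target reached -> {status}")
```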

  12. Audit… • …is always carried out by professionally qualified auditors • …aims to provide professional assurance • …is sometimes divided between a financial audit and a performance audit. • Performance audits are similar to evaluation but confine their analysis to efficiency, economy and effectiveness.

  13. Monitoring and Evaluation • Monitoring and evaluation are linked processes • But monitoring is ongoing • Evaluation is discrete • Evaluators use monitoring information • indicators • Monitoring acts as an early warning system • Highlights problems or areas that require evaluation

  14. Purpose of Evaluation • 3 main purposes • To address accountability concerns • To assist in the allocation of resources • To help improve programme management • Overarching purpose: “To learn through systematic enquiry how to better design, implement and deliver public programmes and policies” (EU Guide)

  15. 4 Principles of evaluation • (EuropeAid Project Cycle Management Guidelines) • Impartiality and independence – from programming and implementation • Credibility – use of skilled and independent experts • Participation of stakeholders – to ensure different perspectives are taken into account • Usefulness – of findings and recommendations

  16. Evaluation Characteristics • Evaluations should be: • Analytical • Systematic • Reliable • Issue-oriented • User-driven

  17. What happens as a result of evaluation? • Continuation of the programme as planned? • Minor reorientation of the programme (management issues, etc.)? • Major restructuring of the programme (change of beneficiaries, etc.)? • Stopping of the programme? • Change of future projects or programmes, taking into account the lessons learned? • Change of policies and subsequent programming?

  18. Different types of evaluation and evaluation systems • Formative v summative evaluation • Impact evaluation/experimental design v theory-based evaluation • External v internal evaluation • Centralised v decentralised evaluation

  19. Formative v summative evaluation • Summative evaluation • Accountability focus • What has been achieved? • What's the value of a programme? • Is this programme worthwhile? • Formative evaluation • Development or learning focus • How can we improve performance and delivery of programme? • Both relevant and useful to public sector

  20. Impact evaluation v theory based evaluation • Impact evaluation/experimental design • Primarily concerned with asking “how do we know if the programmes and projects we are evaluating are successful?” • More closely related to summative evaluation • Tries to “scientifically” address 2 problems: • 1. what is the project responsible for? • 2. will implementers only make positive data/views available? • ED compares an experimental group with a control group (see the sketch below)
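
A minimal, illustrative sketch (not from the presentation) of the experimental-design logic referred to above: the estimated impact is the difference between the average outcomes of the experimental (assisted) group and the control group. The outcome variable and all figures are invented.

```python
# Minimal experimental-design comparison: treatment group vs control group.
from statistics import mean

assisted_firms_growth = [0.12, 0.08, 0.15, 0.05, 0.10]   # experimental group
control_firms_growth  = [0.06, 0.04, 0.09, 0.03, 0.05]   # control group

estimated_impact = mean(assisted_firms_growth) - mean(control_firms_growth)
print(f"Estimated programme effect on firm growth: {estimated_impact:.1%}")
# Note: this answers "what works?" but not "why?" or "how?" -- one of the
# criticisms listed on the next slide.
```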

  21. Impact evaluation/ Experimental design • The problems with ED are: • 1. method-driven • 2. asks what works? But not why, or how? • 3. ignores side-effects • 4. elitist – ignores the views of stakeholders?

  22. Impact evaluation v theory based evaluation • Theory-based evaluation • TBE criticises the weight given to methodology • Programmes are complex and implemented in diverse environments • Therefore we need theories on which programmes will work where, and why • ED can be inconsistent: the same programme run in 5 cities may bring different results because contexts are ignored • Will a programme or project that is successful in one setting be similarly successful in an alternate, broadly similar setting? • More closely related to formative evaluation

  23. Theory-based evaluation • Main problem with TBE… • If programmes can work in a variety of different ways, how do we choose a hypothesis which can be tested in the first place?

  24. External v internal evaluation • Strengths of internal evaluation model • Helps to improve evaluation demand – better TOR, quality control • Better than external for some work • Indicators • Formative evaluation – how can the programme be improved? • Weaknesses • Will not have sectoral expertise for some work • May not be perceived as independent • Difficult to recruit and retain skilled staff

  25. External v internal evaluation • Strengths of external evaluation model • Flexibility • Wider range of expertise • Better at summative evaluation, i.e., is the programme worthwhile? • Weaknesses • Can be expensive • Doesn’t help develop internal capacity • Commercial pressures may limit independence

  26. Centralised v decentralised evaluation • Centralised model (Ireland, CSF 2000-2006) • Organised by MOF or central evaluation unit • Ensures focus on cross-OP issues • Consistent approach • Lower evaluation costs for MAs • Unit looks after technical work (TOR, indicators) • MAs can focus on core management tasks • OP evaluations more credible because external to MA • Decentralised model (Ireland, CSF 1994-1999) • Responsibility of each OP MA • Allows MAs to tailor evaluations specifically to their needs • Can call on MACSF ECU for technical expertise • Proposed model for Romania 2007-2013 is a mix of the centralised and decentralised approaches

  27. The Evaluation Cycle • Before (ex-ante evaluation) • Aim is to improve allocation of resources and programme design (next presentation) • During (interim evaluation) • External developments and their implications? • Is the programme meeting its objectives? • Can we improve programme management? • After (ex-post evaluation) • What has been achieved? • What difference did it make?

  28. Part 2 – Evaluation Criteria • What's the basis for evaluation judgements? • Common EU approach has 5 criteria • Relevance (including Rationale) • Effectiveness • Efficiency • Impact • Sustainability

  29. Relevance • Are socio-economic development programmes relevant to the needs of stakeholders? • Have circumstances changed since the start of the programme? • Do these changes render the programme irrelevant? • Why is public money delivering the programme? • Could the private sector not meet the needs of the stakeholders? • Where is the market failure?

  30. Effectiveness • Are the programmes achieving their objectives? • Are outputs being produced that correspond to the needs as expressed in the programme design?

  31. Efficiency • Are the programmes providing value-for-money? • Could the outputs be produced cheaper? • Are unit costs too high? • Even though targets may be reached, are they being reached in a way that makes the programmes too costly to continue?
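
A minimal, illustrative sketch of the unit-cost reasoning behind these efficiency questions: unit cost = total expenditure / outputs delivered, compared against a benchmark. The figures and the benchmark are invented for illustration.

```python
# Minimal unit-cost check of the kind an efficiency analysis might make.
expenditure_eur = 3_600_000     # total spend on a hypothetical training measure
participants_trained = 2_400    # outputs delivered

unit_cost = expenditure_eur / participants_trained
benchmark_eur = 1_200           # e.g. cost per trainee in comparable schemes

print(f"Cost per trainee: EUR {unit_cost:,.0f} (benchmark EUR {benchmark_eur:,.0f})")
if unit_cost > benchmark_eur:
    print("Unit costs above benchmark: could the outputs be produced more cheaply?")
```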

  32. Impact • What changes as a result of the programmes? • Are there benefits from the programmes beyond their immediate outputs?

  33. Sustainability • Will the effects of the programmes, or indeed the programmes themselves, have a life beyond their implementation date? • Will alternative sources of funding be found?

  34. Summary/Concluding Points • Relationship between monitoring and evaluation • Purpose of evaluation …. • Need to have an evaluation framework • Evaluation questions or criteria • Focus varies over policy/programme cycle
