MONITORING & EVALUATING PROJECTS ACROSS MULTIPLE COUNTRIES
Selvaggio, MP; Mangxaba, JW; Tsigoida, M
Khulisa Management Services, P.O. Box 923, Parklands 2121, South Africa
26 - 7th Avenue, Parktown North, Johannesburg, South Africa 2193
Tel: +27 11 447-6464  Fax: +27 11 447-6468  Web: www.khulisa.com

Accurately Measuring Progress

Introduction
Bilateral and multi-lateral donors often fund a single project effort in multiple countries. Such projects have similar goals and objectives which relate to the overall multi-country programme, but each country's sub-project is generally encouraged to contextualise its implementation to the local situation. However, a common M&E framework works best to "tell the story" of the overall project effort. Efforts to contextualise the M&E framework to each country's situation can lead to a wide variety of problems with aggregation of data and reporting.

Khulisa's Experience with Multi-Country Monitoring & Evaluation
Khulisa has been involved in M&E activities for numerous projects implemented in multiple countries, including:
• RECLISA – Reduction of Exploitative Child Labour in Southern Africa. This project funds work in Botswana, Lesotho, Namibia, South Africa, and Swaziland. Khulisa's role is to establish the project monitoring system for reporting on core indicators, as well as country-specific indicators at output level. Khulisa also verifies the data reported from all countries, and conducts evaluations on the source data to ensure that it is accurate and valid.
• APPLE – AIDS Prevention, Positive Living, and Empowerment Project. This project was implemented in Malawi and Mozambique. Khulisa's role was to monitor project Logframe indicators at activity, output, and outcome levels.
• Research and a thematic study on Governance, Management and Accountability at Secondary School Level in Africa. Three countries (South Africa, Zambia, and Senegal) were chosen as case studies with the aim of identifying the best and most promising practices for Governance, Management and Accountability in these countries.
• Assessment of SADC Higher Learning Institutions to be Centres of Specialization in Education Policy Development. Over a period of two years, three centres were to be established and 60 ministry staff members from across the SADC region were to be trained in Education Policy Development. Khulisa's role was to assess the applying SADC institutions through administering questionnaires and site visits.

Issues, Concerns, and Lessons Learned

EVALUATIONS and RESEARCH:
• The criteria and/or parameters of source data at government institutions (health and education) can differ from country to country, and this can cause difficulties in data collection. For example, the criteria for enrolment in ART programmes, or the definitions of child labour, are not the same in each country.
• Some countries require ethics clearance for research and evaluation, including project-specific surveys. It is important to understand when this is required so that adequate planning can be undertaken.
• To the extent possible, the same data collection tools (or the same items in the tools) should be used for all countries and sites, so that data can easily be aggregated at higher levels to tell the story of the overall project.

MONITORING SYSTEMS:
• Governments' own health information systems and education information systems need to be utilised and built upon for programme monitoring. The project(s) should not attempt to replace the government's data collection system.
• M&E systems, Logframes, and their indicators must be defined broadly enough to allow OUTPUT and OUTCOME measures to be readily aggregated.
• Indicator definitions must be specific, but at the same time applicable to all governments and country situations.
• Multi-country programmes must have common indicators that are used in all countries so that aggregation can occur to "tell the story of the overall project". These common indicators must be clearly defined and vetted with implementation partners in all countries to ensure that they are measurable in every country.
• Sometimes indicator data are combined with other data that differ slightly in definition. When this is required, the unit of measure must be the same. For example, the detailed definition of "No. of children-at-risk enrolled" can differ from country to country -- one country might emphasize educational enrolment, and another might emphasize PSS services -- but the unit of measure (i.e. 'child enrolled') is the same in every country.
• M&E systems, Logframes, and indicators must be revisited regularly throughout the implementation of the project to correct any deficiencies in the framework itself or in the definitions of common indicators.

DATA FLOW / TECHNOLOGY ISSUES IN THE MONITORING SYSTEM:
Generally, we have found it best to design the data management system as a combination of paper-based and computer-based steps:
• For collection of data at source, manual (or paper-based) record keeping can be used at sites. Generally, it is not realistic to expect sites to have computer infrastructure or capacity.
• Monthly summary forms of project outputs can be compiled at each site and sent to the project office as a paper form or electronically. Aggregation of the monthly summary data from each site should be done electronically (through Access or Excel spreadsheets); a minimal sketch of this step follows this list.
• Even if monthly reports are not required by the donor, best practice is for each country project office to receive monthly summary reports from sites. These can then be aggregated into quarterly or semi-annual reports to the donor.
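To make the electronic aggregation step concrete, here is a minimal sketch assuming a hypothetical layout in which each site submits a monthly CSV summary form containing an indicator code and a count. The file naming, column names, and indicator codes are illustrative assumptions, not part of the original monitoring system.

```python
# Minimal sketch: aggregate monthly site summary forms into country totals.
# Assumption (illustrative only): each site submits a CSV named like
# "site_<name>_2007-03.csv" with the columns "indicator" and "count".
import csv
import glob
from collections import defaultdict

def aggregate_monthly_summaries(pattern="site_*_2007-03.csv"):
    """Sum the counts reported by every site, per indicator."""
    totals = defaultdict(int)
    for path in glob.glob(pattern):
        with open(path, newline="") as f:
            for row in csv.DictReader(f):
                # Counts (not percentages) can be summed directly across sites.
                totals[row["indicator"]] += int(row["count"])
    return dict(totals)

if __name__ == "__main__":
    # e.g. {"children_at_risk_enrolled": 412, "teachers_trained": 38}
    print(aggregate_monthly_summaries())
```

Because the unit of measure is a simple count, the same step can be repeated unchanged to roll site totals up to country level and country totals up to the multi-country programme.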
ANALYSIS:
In analysing data from such projects, the main goal is to aggregate outcome- and output-level data to "tell the story" of the overall multi-country project. In aggregating indicator data, avoid using indicators expressed as percentages, which are prone to errors and miscalculations; rather, keep indicators as counts so that they can be more readily aggregated across countries. When percentage values must be used, the aggregated multi-country result can only be computed from the raw numerator and denominator values of each country programme, as the sketch below illustrates.
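The percentage pitfall is easiest to see with numbers, so here is a minimal sketch contrasting the naive average of country percentages with the correct pooled calculation; the country figures are invented purely for illustration.

```python
# Minimal sketch: aggregating a percentage indicator across countries.
# The numerator/denominator figures are invented for illustration only.
countries = {
    "Country A": {"numerator": 300, "denominator": 1000},  # 30.0%
    "Country B": {"numerator":  90, "denominator":  100},  # 90.0%
}

# WRONG: averaging country percentages ignores their very different denominators.
naive = sum(c["numerator"] / c["denominator"] for c in countries.values()) / len(countries)

# RIGHT: pool the raw numerators and denominators, then divide once.
pooled = (sum(c["numerator"] for c in countries.values())
          / sum(c["denominator"] for c in countries.values()))

print(f"Average of country percentages: {naive:.1%}")   # 60.0% -- misleading
print(f"Pooled multi-country result:    {pooled:.1%}")  # 35.5% -- correct
```

This is why the monitoring system should capture and report the raw counts from every country: any overall percentage can then be computed once, at the point of multi-country analysis.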
