
The National Evaluation Platform Approach



Presentation Transcript


  1. The National Evaluation Platform Approach Robert E Black MD, MPH Institute for International Programs Bloomberg School of Public Health Johns Hopkins University

  2. Outline • Why a new approach is needed • National Evaluation Platforms (NEPs): The basics • Country example: Malawi • Practicalities and costs

  3. Most current evaluations of large-scale programs aim to use designs like this [Diagram: Program vs. no program → coverage vs. no coverage → impact vs. no impact]

  4. But reality is much more complex [Diagram: the program, other nutrition and health programs, other health programs, routine health services, interventions in other sectors, and general socioeconomic and other contextual factors all feed into coverage and impact]

  5. Mozambique: How to evaluate the impact of USAID-supported programs? Traditional approach: intervention versus comparison areas. Source: Hilde De Graeve, Bert Schreuder.

  6. Mozambique • Simultaneous implementation of multiple programs • Separate, uncoordinated, inefficient evaluations (if any) • Inability to compare different programs due to differences in methodological approaches and indicators. Source: Hilde De Graeve, Bert Schreuder.

  7. New evaluation designs are needed • Large-scale programs • Evaluators do not control timetable or strength of implementation • Multiple simultaneous programs with overlapping interventions and aims • Contextual factors that cannot be anticipated • Need for country capacity and local evidence to guide programming Lancet, 2007 Bulletin of WHO, 2009 • Sources: Victora CG, Bryce JB, Black RE. Learning from new initiatives in maternal and child health. Lancet 2007; 370 (9593): 1113-4. • Victora CG, Black RE, Bryce J. Evaluating child survival programs. Bull World Health Organ 2009; 87: 83.

  8. Lancet, 2011 National Evaluation Platforms: The Basics

  9. Builds on a common evaluation framework, adapted at country level • Common principles (with IHP+, Countdown, etc.) • Standard indicators • Broad acceptance

  10. Evaluation databases with districts as the units • District-level databases covering the entire country • Containing standard information on: • Inputs (partners, programs, budget allocations, infrastructure) • Processes/outputs (DHMT plans, ongoing training, supervision, campaigns, community participation, financing schemes such as conditional cash transfers) • Outcomes (availability of commodities, quality of care measures, human resources, coverage) • Impact (mortality, nutritional status) • Contextual factors (demographics, poverty, migration) Permits national-level evaluations of multiple simultaneous programs
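To make the structure of such a district-level database concrete, here is a minimal sketch in Python of what one row per district might hold; every field name is a hypothetical stand-in, grouped by the input/process/outcome/impact/context categories on the slide, not a specification of any actual NEP database.

# Minimal sketch of a district-level record for an evaluation database.
# Field names are hypothetical; an actual NEP database would adapt them
# to the country's own indicators and data sources.
from dataclasses import dataclass, field

@dataclass
class DistrictRecord:
    district: str
    year: int
    # Inputs
    partners: list = field(default_factory=list)    # implementing partners
    budget_allocation: float = 0.0                  # e.g. USD per capita
    # Processes / outputs
    chws_trained: int = 0                           # CHWs trained in CCM
    supervision_visits: int = 0
    # Outcomes
    commodity_availability: float = 0.0             # share of facilities stocked
    coverage_care_seeking: float = 0.0              # share of sick children taken for care
    # Impact
    under5_mortality: float = 0.0                   # deaths per 1,000 live births
    stunting_prevalence: float = 0.0                # share of children under 5
    # Contextual factors
    population: int = 0
    poverty_rate: float = 0.0

# Example row (illustrative values only)
example = DistrictRecord(district="District A", year=2011, chws_trained=120,
                         coverage_care_seeking=0.62, under5_mortality=92.0)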

  11. A single database with districts as the rows [Diagram: core data points from the health sector (HMIS, DHS, nutrition surveillance system, national stocks database) and from other sectors (women's education, rainfall patterns) flow into the district database, with quality checking and feedback to each source]
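A hypothetical sketch of how such a single district-by-district table could be assembled from separate source extracts (here using pandas; the file and column names are invented for illustration only):

# Sketch: merge district-level extracts from several sources into one table,
# one row per district. File and column names are hypothetical.
import pandas as pd

hmis = pd.read_csv("hmis_district_summary.csv")        # routine health data
dhs = pd.read_csv("dhs_district_estimates.csv")        # survey coverage/mortality
nutrition = pd.read_csv("nutrition_surveillance.csv")  # nutritional status
context = pd.read_csv("other_sectors.csv")             # education, rainfall, etc.

platform = (
    hmis.merge(dhs, on="district", how="outer")
        .merge(nutrition, on="district", how="outer")
        .merge(context, on="district", how="outer")
)

# Simple quality check whose results could be fed back to the source systems:
missing = platform.isna().sum()
print(missing[missing > 0])  # columns with gaps, by source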

  12. Types of comparisons supported by the platform approach • Areas with or without a given program • Traditional before-and-after analysis with a comparison group • Dose response analyses • Regression analyses of outcome variables according to dose of implementation • Stepped wedge analyses • In case program is implemented sequentially
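As one illustration of the first two comparison types, a before-and-after analysis with a comparison group can be run as a difference-in-differences regression on the district table; this is a minimal sketch with hypothetical column names, not the platform's prescribed analysis.

# Sketch of a before-and-after comparison with a comparison group
# (difference-in-differences) on district-level data. Column names are
# hypothetical: program = 1 if the district has the program, period = 0 at
# baseline / 1 at endline, coverage = the outcome of interest.
import pandas as pd
import statsmodels.formula.api as smf

df = pd.read_csv("district_panel.csv")  # hypothetical district-by-period table

did = smf.ols("coverage ~ program * period", data=df).fit()
print(did.params["program:period"])  # coverage change attributable to the program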

  13. Evaluation platform Interim (formative) data analyses • Are programs being deployed where need is greatest? • Correlate baseline characteristics (mortality, coverage, SES, health systems strength, etc) with implementation strength • Allows assessment of placement bias • Is implementation strong enough to have an impact? • Document implementation strength and run simulations for likely impact (e.g., LiST) • How to best increase coverage? • Correlate implementation strength/approaches with achieved coverage (measured in midline surveys) • How can programs be improved? • Disseminate preliminary findings with feedback to government and partners • (All analyses at district level)
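The placement-bias check described above can be as simple as correlating baseline need with subsequent implementation strength across districts; a sketch with hypothetical column names follows.

# Sketch: assess placement bias by correlating baseline characteristics with
# implementation strength across districts. Column names are hypothetical.
import pandas as pd
from scipy.stats import spearmanr

df = pd.read_csv("district_baseline.csv")

# A positive correlation suggests programs are being placed where baseline
# mortality (i.e. need) is highest; a negative one suggests the opposite bias.
rho, p = spearmanr(df["baseline_under5_mortality"], df["implementation_strength"])
print(f"Spearman rho = {rho:.2f} (p = {p:.3f})")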

  14. Evaluation platform Summative data analyses • Did programs increase coverage? • Comparison of areas with and without each program over time • Dose-response time-series analyses correlating strength of program implementation to achieved coverage • Was coverage associated with impact? • Dose-response time-series analyses of coverage and impact indicators • Simulation models (e.g. LiST) to corroborate results • Did programs have an impact on mortality and nutritional status? • Comparison of areas with and without each program over time • Dose-response time-series analyses correlating strength of program implementation with impact measures

  15. The platform approach can contribute to all types of designs • Having baseline information on all districts allows researchers to measure and control placement bias • In real life one cannot predict which districts will have strong implementation and which ones will not • In intervention/comparison designs, it is important to document that comparison districts are free of the intervention • Collecting information on several outcomes allows assessment of side-effects of the program on other health indicators

  16. Country Example: CCM (community case management) in Malawi

  17. • Simultaneous implementation of multiple programs • Separate, uncoordinated, inefficient evaluations (if any) • Inability to compare different programs due to differences in methodological approaches and indicators

  18. Malawi CCM scale-up limits use of an intervention-comparison design • CCM supported in all districts beginning in 2009… • …and implemented in Hard-to-Reach Areas (March 2011) [Map: Proportion of Hard-to-Reach Areas with ≥1 Functional Village Clinic, March 2011]

  19. Malawi adaptation of National Evaluation Platform approach • National Evaluation Platform design using dose-response analysis, with: • Dose = program implementation strength • Response = increases in coverage; decreases in mortality • Evaluation Question: Are increases in coverage and reductions in mortality greater in districts with stronger MNCH program implementation?

  20. Platform design overview

  21. National Evaluation Platform: Progress in Malawi - 1 • Continued district level documentation in 16 districts • Pilot of cellphone interviews for community-level documentation • Stakeholder meeting in April 2011 • Full endorsement by the MOH • Partners urged to coordinate around developing a common approach for assessment of CCM and non-CCM program implementation strength • Need to allow sufficient implementation time to increase likelihood of impact • MOH addressed letter to donors requesting support for platform • Partners’ meetings in September and December 2011 to agree on plans for measuring implementation strength

  22. National Evaluation Platform: Progress in Malawi - 2 • All partners (SCF, PSI, WHO, UNICEF) actively monitoring CCM implementation in their districts • Funding secured for 16 of 28 districts; additional funding for remaining districts seems probable • Discussions under way about broadening platform to cover nutrition programs • Other countries expressing interest! Mozambique, Bangladesh, Burkina Faso, …

  23. Analysis Plan • "Dose": CCM implementation strength (per 1,000 population): CHWs; CHWs trained in CCM; CHWs supervised; CHWs with essential commodities available; financial inputs • "Response": change in treatment rates for childhood illnesses; change in under-five mortality (U5M)
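One way this dose-response analysis might look in code, with hypothetical district-level variables standing in for the dose (CHW density) and response (change in treatment rates) measures listed above; this is a sketch under those assumptions, not the evaluation's actual analysis script.

# Sketch of the Malawi-style dose-response analysis; variable names are
# hypothetical stand-ins for the dose and response measures on this slide.
import pandas as pd
import statsmodels.formula.api as smf

df = pd.read_csv("malawi_districts.csv")
# Hypothetical columns:
#   chw_per_1000     - CHWs trained in CCM per 1,000 population (dose)
#   change_tx_rate   - change in treatment rate for childhood illness (response)
#   baseline_u5m     - baseline under-five mortality (contextual adjustment)

model = smf.ols("change_tx_rate ~ chw_per_1000 + baseline_u5m", data=df).fit()
print(model.params["chw_per_1000"])           # estimated change per additional CHW/1,000
print(model.conf_int().loc["chw_per_1000"])   # 95% confidence interval for that slope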

  24. Contextual Factors

  25. Advantageous context for NEP • strong network of MNCH partners implementing CCM • administrative structure decentralized to districts • SWAp II in development now • district-level databases (2006 MICS, 2010 DHS, Malawi Socio-Economic Database (MASEDA)) • DHS includes approx. 1,000 households in each district

  26. Practicalities and Limitations

  27. Sample sizes must be calculated on a country-by-country basis • Statistical power (likelihood of detecting an effect) will depend on: • Number of districts in country (fixed; e.g. 28 in Malawi) • How strongly the program is implemented, and by how much implementation affects coverage and mortality • How much implementation varies from district to district • Baseline coverage levels • Presence of other programs throughout the districts • How many households are included in surveys in each district • May require oversampling
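The power question can also be explored by simulation; the sketch below asks how often a dose-response effect of a given size would be detected with 28 districts. All numbers are illustrative assumptions, not Malawi estimates.

# Sketch: simulate statistical power for a district-level dose-response
# analysis. All numbers are illustrative assumptions, not country estimates.
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(1)
n_districts = 28         # e.g. Malawi
true_effect = 0.04       # assumed coverage gain per unit of implementation strength
noise_sd = 0.08          # assumed between-district variation in coverage change

detections = 0
n_sim = 2000
for _ in range(n_sim):
    dose = rng.uniform(0, 2, n_districts)            # implementation strength
    response = true_effect * dose + rng.normal(0, noise_sd, n_districts)
    fit = sm.OLS(response, sm.add_constant(dose)).fit()
    if fit.pvalues[1] < 0.05:                        # is the slope significant?
        detections += 1

print(f"Estimated power: {detections / n_sim:.2f}")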

  28. Practical arrangements • Platform should be led by credible independent partner (e.g. University or Statistical Office) • Supported by an external academic group if necessary • Steering committee with MOH and other relevant government units (Finance, Planning), Statistical Office, international and bilateral organizations, NGOs, etc.

  29. Main costs of the platform approach • Building and maintaining database with secondary information already collected by others • Requires database manager and statistician/epidemiologist for supervision • May require reanalysis of existing surveys, censuses, etc. • Keeping track of implementation of different programs at district level • Requires hiring local informants, training them and supervising their work • Adding special assessments (costs, quality of care, etc.) • May require substantial investments in facility or CHW surveys • Oversampling household surveys • May require substantial investments • But this will not be required in all countries

  30. Summary: Evaluation platform • Advantages: adapted to the current reality of multiple simultaneous programs/interventions; identification of selection biases; promotes country ownership and donor coordination; evaluation as a continuous process; flexible design allows for changes in implementation • Limitations: observational design (but no other alternative is possible); high cost, particularly due to the large size of surveys (but cheaper than standalone surveys); requires transparency and collaboration by multiple programs and agencies

  31. Thank you
