
Impact Evaluation in the Real World






Presentation Transcript


  1. Impact Evaluation in the Real World: One non-experimental design for evaluating behavioral HIV prevention campaigns

  2. Implementation realities • BCC (behaviour change communication) program: • Has already started • Builds on the previous campaign (not the first one addressing behaviour) • Is being rolled out in communities that have other HIV prevention interventions • There are endogenous ‘interventions’ (e.g. conversations on the way to school, or in the waiting line at the clinic) • Diffusion is a good thing • Cannot (and does not want to) control implementation

  3. Difference in Differences Example

  4. Change over time within each group:
     • Participants: 66.37 – 62.90 = 3.47
     • Non-participants: 57.50 – 46.37 = 11.13
     • Effect = 3.47 – 11.13 = –7.66

  5. The same effect, computed from the gap between groups in each period:
     • Before: 62.90 – 46.37 = 16.53
     • After: 66.37 – 57.50 = 8.87
     • Effect = 8.87 – 16.53 = –7.66

  6. Counterfactual assumption: without the intervention, participants’ and non-participants’ pregnancy rates would follow the same trend

  7. [Chart: counterfactual trend for participants under the parallel-trends assumption; values shown: 74.0 and 16.5]

  8. [Chart: observed vs. counterfactual outcome for participants; values shown: counterfactual 74.0 and effect –7.6]
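
To make the arithmetic above concrete, the short Python sketch below reproduces the two equivalent ways of computing the difference-in-differences estimate (by group and by period) and the implied counterfactual of roughly 74.0 under the parallel-trends assumption. The variable names are illustrative; the four cell values are the group means taken from the slides.

```python
# Difference-in-differences arithmetic for the example above.
# The four cell values are the group means from the slides; names are illustrative.

participants_before, participants_after = 62.90, 66.37
nonparticipants_before, nonparticipants_after = 46.37, 57.50

# By group (slide 4): change over time within each group
change_p = participants_after - participants_before             # 3.47
change_np = nonparticipants_after - nonparticipants_before      # 11.13
effect_by_group = change_p - change_np                          # -7.66

# By period (slide 5): gap between groups, before vs. after
gap_before = participants_before - nonparticipants_before       # 16.53
gap_after = participants_after - nonparticipants_after          # 8.87
effect_by_period = gap_after - gap_before                       # -7.66

# Counterfactual for participants under the parallel-trends assumption:
# they would have followed the non-participants' trend (slides 7-8)
counterfactual = participants_before + change_np                 # ~74.0
effect_vs_counterfactual = participants_after - counterfactual   # ~-7.66

print(round(effect_by_group, 2), round(effect_by_period, 2),
      round(effect_vs_counterfactual, 2))
```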

  9. Matching Example

  10. Implementation realities • BCC program: • Has already started • Builds on the previous campaign (not the first one addressing behaviour) • Is being rolled out in communities that have other HIV prevention interventions • There are endogenous ‘interventions’ (e.g. conversations on the way to school, or in the waiting line at the clinic) • Diffusion is a good thing • Cannot (and does not want to) control implementation

  11. What do we need to know? • Can a specific set of communication messages manipulate a specific set of sexual behaviors? • What magnitude of behaviour change will give what magnitude of changes in incidence?

  12. Approach decided on • NON-intervention approach: • We are NOT trying to prove that one campaign works, BUT we are trying to see whether a specific set of messages works, irrespective of the method of delivery or transmission of the messages • Observational approach • We are not trying to force one intervention to work; we are not focusing on the implementation of any one intervention

  13. So what WILL we do? • Non-experimental design • Researcher does not manipulate the independent variable (message exposure) • No control group in the community; create the control group statistically through matching • Collection of exposure, behavioural and biological data from a random sample of individuals and their sexual partners • Develop a measure of intensity of exposure (‘doses’ of exposure) • Determine the probability of having a specific dose of exposure • Match individuals with similar covariates but different doses of exposure • Compare biological and behavioural outcomes
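
The sketch below illustrates one bullet from the slide above, determining the probability of having a specific dose of exposure, on synthetic data. The covariate names, the data-generating step, and the three dose levels (none/low/high) are all invented for illustration, and a multinomial logistic regression is just one possible way, not necessarily the authors’, of estimating Pr(dose | covariates).

```python
# A hedged sketch: probability of a specific 'dose' of message exposure given covariates.
import numpy as np
import pandas as pd
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 1000
covariates = pd.DataFrame({
    "age": rng.integers(15, 50, n),            # invented covariates
    "urban": rng.integers(0, 2, n),
    "years_schooling": rng.integers(0, 13, n),
})

# Synthetic dose of exposure: 0 = none, 1 = low, 2 = high, loosely related to covariates
latent = 0.4 * covariates["urban"] + 0.1 * covariates["years_schooling"] + rng.normal(0, 1, n)
dose = pd.cut(latent, bins=3, labels=[0, 1, 2]).astype(int)

# Multinomial logistic regression: Pr(dose = d | covariates)
model = LogisticRegression(max_iter=1000).fit(covariates, dose)
dose_probs = model.predict_proba(covariates)   # one column per dose level
print(dose_probs[:5].round(3))
```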

  14. So what WILL we do? • Survey to measure demographic covariates (or use population survey data) • Measure type and intensity of exposure to messages • Different doses of exposure to MCP (multiple concurrent partnerships) campaign messages among the population • Detailed measurement of method of exposure to messages during surveys: direct channels (# times heard messages on radio…) AND indirect channels (conversations with friends, relatives, etc.; shown to be important in accounting for HIV declines in Uganda) • Construct a message exposure scale (low vs. high, or more detailed) using statistical techniques (e.g., principal components analysis) • Every individual has a single score for message exposure
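
A minimal sketch of the exposure-scale construction described above: several invented direct- and indirect-channel exposure items are standardised and reduced to a single score per respondent using the first principal component, which can then be dichotomised into low vs. high exposure. The item names and the median split are illustrative assumptions, not taken from the deck.

```python
# Build a single message-exposure score per respondent from several survey items.
import numpy as np
import pandas as pd
from sklearn.decomposition import PCA
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(1)
n = 500
exposure_items = pd.DataFrame({
    "times_heard_radio": rng.poisson(3, n),       # direct channel (invented item)
    "times_saw_poster": rng.poisson(2, n),        # direct channel (invented item)
    "talks_with_friends": rng.poisson(1, n),      # indirect channel (invented item)
    "talks_with_relatives": rng.poisson(1, n),    # indirect channel (invented item)
})

# Standardise the items and take the first principal component as the exposure score
scaled = StandardScaler().fit_transform(exposure_items)
score = PCA(n_components=1).fit_transform(scaled).ravel()   # one score per person

# One option: dichotomise into 'low' vs 'high' exposure at the median
high_exposure = (score > np.median(score)).astype(int)
print(score[:5].round(2), high_exposure[:5])
```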

  15. So what WILL we do? • Survey to measure exposure, behavioural outcomes, couple and social network norms, and HIV incidence amongst a random selection of individuals • Nested sub-study to trace the partners of those who reported one or more sexual partners, and collect the same data from them • Parallel measurement of ‘social norms’ – hearsay ethnography or other methods

  16. So what WILL we do? • Analyses • Use covariates to calculate an individual’s propensity (a scalar summary of all covariates) to receive a specific ‘dose of treatment’ (message exposure scale) • Match pairs of participants (index cases and their sexual partners) with similar propensity scores and different doses of treatment (control and treatment groups) • Now we can calculate impact (behavioural and biological outcomes) by comparing the means of outcomes across participants and their matched pairs • Modeling • Has the density of the sexual network changed over time, and to what extent has it changed? • How ‘much’ behaviour change is needed, over what period of time, and in how many individuals, to bring about what levels of reductions in new infections? • What are the individual and combined effects of MC (male circumcision), ART (antiretroviral therapy), increased condom use, and MCP reductions on the number of new infections? • What is the ideal ‘mix’ of interventions to implement?
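
The sketch below walks through the matching analysis described above on synthetic data: estimate each individual’s propensity to be in the ‘high exposure’ group from covariates, match high- to low-exposure individuals with similar propensity scores, and compare mean outcomes across the matched pairs. The covariates, outcome, and data-generating process are invented, and 1:1 nearest-neighbour matching with replacement is one common implementation rather than necessarily the authors’ choice.

```python
# Propensity-score matching on synthetic data: estimate, match, compare outcomes.
import numpy as np
import pandas as pd
from sklearn.linear_model import LogisticRegression
from sklearn.neighbors import NearestNeighbors

rng = np.random.default_rng(2)
n = 2000
X = pd.DataFrame({
    "age": rng.integers(15, 50, n),            # invented covariates
    "urban": rng.integers(0, 2, n),
    "years_schooling": rng.integers(0, 13, n),
})

# Synthetic 'treatment': high (1) vs low (0) message exposure, related to covariates
p_true = 1 / (1 + np.exp(-(-2.0 + 0.05 * X["age"] + 0.8 * X["urban"])))
high = rng.binomial(1, p_true)
# Synthetic binary behavioural outcome with a modest built-in effect of exposure
outcome = rng.binomial(1, np.clip(0.3 + 0.10 * high + 0.02 * X["urban"], 0, 1))

# 1. Propensity score: Pr(high exposure | X)
ps = LogisticRegression(max_iter=1000).fit(X, high).predict_proba(X)[:, 1]

# 2. 1:1 nearest-neighbour matching on the propensity score, with replacement
treated = np.where(high == 1)[0]
control = np.where(high == 0)[0]
nn = NearestNeighbors(n_neighbors=1).fit(ps[control].reshape(-1, 1))
_, idx = nn.kneighbors(ps[treated].reshape(-1, 1))
matched_control = control[idx.ravel()]

# 3. Effect estimate: difference in mean outcomes across matched pairs
effect = outcome[treated].mean() - outcome[matched_control].mean()
print(f"Estimated effect of high exposure: {effect:.3f}")
```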

  17. [Figure: densities of exposure-propensity scores for the ‘low exposure’ and ‘high exposure’ groups, plotted against the probability of exposure given X (low to high)]
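
The figure above is essentially an overlap (common support) check on the propensity scores. The sketch below shows one way to produce such a plot, assuming scores have already been estimated for each group; the Beta-distributed scores here are placeholders for the real estimates.

```python
# Overlap check: compare propensity-score distributions for the two exposure groups.
import numpy as np
import matplotlib.pyplot as plt

rng = np.random.default_rng(3)
ps_low = rng.beta(2, 5, 1000)    # placeholder scores, 'low exposure' group
ps_high = rng.beta(5, 2, 1000)   # placeholder scores, 'high exposure' group

plt.hist(ps_low, bins=30, density=True, alpha=0.5, label="Low exposure")
plt.hist(ps_high, bins=30, density=True, alpha=0.5, label="High exposure")
plt.xlabel("Probability of exposure given X")
plt.ylabel("Density of scores")
plt.legend()
plt.title("Overlap of propensity-score distributions (common support)")
plt.show()
```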

  18. What we will know • Can a specific set of communication messages (delivered in different ways) manipulate a specific set of sexual behaviors? • What magnitude of behaviour change will give what magnitude of changes in incidence?
