
Mattea Stein Quasi Experimental Methods I


Presentation Transcript


  1. Mattea Stein: Quasi Experimental Methods I

  2. What we know so far
  Aim: We want to isolate the causal effect of our interventions on our outcomes of interest.
  • Use rigorous evaluation methods to answer our operational questions
  • Randomizing the assignment to treatment is the “gold standard” methodology (simple, precise, cheap)
  • What if we really, really (really??) cannot use it?
  >> Where it makes sense, resort to non-experimental methods

  3. Non-experimental methods
  • Can we find a plausible counterfactual?
  • A natural experiment?
  • Every non-experimental method is associated with a set of assumptions
  • The stronger the assumptions, the more doubtful our measure of the causal effect
  • Question our assumptions
  • Reality check: resort to common sense!

  4. Example: Matching Grants Program
  • Principal objective: increase firm productivity and sales
  • Intervention: matching grants distribution (non-random assignment)
  • Target group: SMEs with 1-10 employees
  • Main result indicator: sales

  5. Illustration: Matching Grants – Randomization
  [Chart: (+) impact of the program vs. (+) impact of external factors]

  6. Illustration: Matching Grants – Difference-in-Differences
  [Chart: “before” difference between participants and non-participants; “after” difference between participants and non-participants]
  >> What is the impact of our intervention?

  7. Difference-in-Differences: Identification Strategy (1)
  Counterfactual: two formulations that say the same thing
  1. Non-participants’ sales after the intervention, accounting for the “before” difference between participants and non-participants (the initial gap between the groups)
  2. Participants’ sales before the intervention, accounting for the before/after change for non-participants (the influence of external factors)
  • Formulations 1 and 2 are equivalent
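Written out (notation added here for illustration, with Y denoting mean sales for participants P and non-participants NP), the two formulations are the same double difference:

```latex
\text{DiD}
  = \big(Y^{P}_{\text{after}} - Y^{NP}_{\text{after}}\big)
  - \big(Y^{P}_{\text{before}} - Y^{NP}_{\text{before}}\big)
  = \big(Y^{P}_{\text{after}} - Y^{P}_{\text{before}}\big)
  - \big(Y^{NP}_{\text{after}} - Y^{NP}_{\text{before}}\big)
```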

  8. Data – Example

  9. “Before” difference: P07 - NP07 = 1.0. “After” difference: P08 - NP08 = 1.4. Impact = 1.4 - 1.0 = 0.4.
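A minimal Python sketch of this double difference; the group-mean levels below are hypothetical, and only the differences (1.0 “before”, 1.4 “after”) come from the slide:

```python
# Minimal sketch of the double difference from the example above.
# The group-mean levels are hypothetical; only the differences
# ("before" = 1.0, "after" = 1.4) come from the slide.
sales = {
    ("participants", 2007): 2.0,       # P07 (hypothetical level)
    ("non_participants", 2007): 1.0,   # NP07
    ("participants", 2008): 3.8,       # P08
    ("non_participants", 2008): 2.4,   # NP08
}

before_diff = sales[("participants", 2007)] - sales[("non_participants", 2007)]  # 1.0
after_diff = sales[("participants", 2008)] - sales[("non_participants", 2008)]   # 1.4
impact = after_diff - before_diff                                                # 0.4
print(f"Before-diff = {before_diff:.1f}, After-diff = {after_diff:.1f}, DiD impact = {impact:.1f}")
```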

  10. Difference-in-Differences: Identification Strategy (2)
  Underlying assumption: without the intervention, sales for participants and non-participants would have followed the same trend.
  >> Graphic intuition coming…

  11. [Graph: same-trend counterfactual. “Before” difference: P07 - NP07 = 1.0; “After” difference: P08 - NP08 = 1.4; Impact = 0.4]

  12. [Graph: when the same-trend assumption fails, the estimate is biased. Estimated impact = 0.4; true impact = -0.3]

  13. Summary
  • The assumption of same trend is very strong
  • The 2 groups were, in 2007, producing at very different levels
  • Question the underlying assumption of same trend!
  • When possible, test the assumption of same trend with data from previous years

  14. Questioning the assumption of same trend: use pre-program data
  >> Reject the counterfactual assumption of same trend!

  15. Questioning the assumption of same trend: use pre-program data
  >> Seems reasonable to accept the counterfactual assumption of same trend
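One way to question the assumption is a placebo difference-in-differences on two pre-program years: if the “impact” estimated before the program started is far from zero, the same-trend assumption looks doubtful. A minimal sketch, assuming hypothetical pre-program group means and column names:

```python
import pandas as pd

# Placebo difference-in-differences on pre-program years (sketch).
# Years, column names, and group means are illustrative assumptions.
pre = pd.DataFrame({
    "year":        [2005, 2005, 2006, 2006],
    "participant": [1, 0, 1, 0],
    "sales":       [1.8, 1.0, 1.9, 1.1],  # group-mean sales in two pre-program years
})

means = pre.groupby(["year", "participant"])["sales"].mean().unstack("participant")
gap_2005 = means.loc[2005, 1] - means.loc[2005, 0]
gap_2006 = means.loc[2006, 1] - means.loc[2006, 0]
placebo_did = gap_2006 - gap_2005  # close to zero if the groups share a common trend

print(f"Placebo DiD on pre-program years: {placebo_did:.2f}")
# An estimate far from zero is evidence against the same-trend assumption.
```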

  16. Caveats (1)
  • Assuming the same trend is often problematic
  • There may be no data to test the assumption
  • Even if trends were similar the previous year…
  • Were they always similar (or did we get lucky)?
  • More importantly, will they remain similar?
  • Example: another project intervenes in our non-participant firms…

  17. Caveats (2)
  • What to do? >> Be descriptive!
  • Check similarity in observable characteristics
  • If the groups are not similar along observables, chances are their trends will differ in unpredictable ways
  >> Still, we cannot check what we cannot see… and unobservable characteristics (ability, motivation, patience, etc.) might matter more than observable ones
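Being descriptive can be as simple as a balance table comparing group means of the observables. A minimal sketch; the variables (employees, age_of_firm, baseline_sales) and values are hypothetical:

```python
import pandas as pd

# Descriptive "balance check": compare observable characteristics of
# participants and non-participants. Variable names and values are hypothetical.
firms = pd.DataFrame({
    "participant":    [1, 1, 1, 0, 0, 0],
    "employees":      [4, 6, 9, 3, 5, 10],
    "age_of_firm":    [5, 8, 12, 4, 9, 15],
    "baseline_sales": [1.2, 2.0, 3.1, 1.0, 1.8, 3.4],
})

balance = firms.groupby("participant").mean().T  # rows: variables, columns: 0 / 1
balance["difference"] = balance[1] - balance[0]
print(balance)  # large gaps suggest the trends may also differ in unpredictable ways
```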

  18. Matching Method + Difference-in-Differences (1)
  Match participants with non-participants on the basis of observable characteristics.
  Counterfactual:
  • Matched comparison group
  • Each program participant is paired with one or more similar non-participant(s) based on observable characteristics
  >> On average, matched participants and non-participants share the same observable characteristics (by construction)
  • Estimate the effect of our intervention using difference-in-differences

  19. Matching Method (2)
  Underlying counterfactual assumptions:
  • After matching, there are no differences between participants and non-participants in terms of unobservable characteristics
  AND/OR
  • Unobservable characteristics affect neither the assignment to treatment nor the outcomes of interest

  20. How do we do it?
  • Design a control group by establishing close matches in terms of observable characteristics
  • Carefully select the variables along which to match participants to their control group
  • So that we only retain:
  • Treatment group: participants that could find a match
  • Comparison group: non-participants similar enough to the participants
  >> We trim out a portion of our treatment group!
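A minimal sketch of this matching-and-trimming step, assuming one-to-one nearest-neighbour matching with replacement on two standardised observables and a hypothetical caliper:

```python
import numpy as np

# One-to-one nearest-neighbour matching on observables, with a caliper that
# trims participants who have no close-enough match. The observables
# (employees, baseline sales) and the caliper are illustrative assumptions.
rng = np.random.default_rng(0)
participants = rng.normal(loc=[6.0, 2.0], scale=[2.0, 0.5], size=(20, 2))
non_participants = rng.normal(loc=[5.0, 1.8], scale=[2.0, 0.5], size=(40, 2))

scale = non_participants.std(axis=0)  # standardise so no variable dominates the distance
caliper = 0.5                         # maximum allowed (standardised) distance

matched_pairs, trimmed = [], []
for i, p in enumerate(participants):
    dists = np.linalg.norm((non_participants - p) / scale, axis=1)
    j = int(dists.argmin())
    if dists[j] <= caliper:
        matched_pairs.append((i, j))  # keep: participant i matched to non-participant j
    else:
        trimmed.append(i)             # trim: no sufficiently similar non-participant

print(f"Matched {len(matched_pairs)} participants, trimmed {len(trimmed)}")
# The impact is then estimated with difference-in-differences on the matched sample.
```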

  21. Implications
  • In most cases, we cannot match everyone
  • Need to understand who is left out
  • Example: [Chart of score vs. wealth showing matched individuals, the portion of the treatment group trimmed out, participants, and non-participants]

  22. Conclusion (1)
  • Advantage of the matching method: it does not require randomization

  23. Conclusion (2)
  Disadvantages:
  • The underlying counterfactual assumption is not plausible in all contexts and is hard to test: use common sense, be descriptive
  • Requires very high quality data: we need to control for all factors that influence program placement and the outcome of interest
  • Requires a significantly large sample size to generate the comparison group
  • Cannot always match everyone…

  24. Summary
  • Randomized controlled trials require minimal assumptions and produce intuitive estimates (sample means!)
  • Non-experimental methods require assumptions that must be carefully tested
  • More data-intensive
  • Not always testable
  • Get creative:
  • Mix and match types of methods!
  • Address relevant questions with relevant techniques

  25. Thank you
  Financial support from the Bank Netherlands Partnership Program (BNPP), Bovespa, CVM, Gender Action Plan (GAP), Belgium & Luxemburg Poverty Reduction Partnerships (BPRP/LPRP), Knowledge for Change Program (KCP), Russia Financial Literacy and Education Trust Fund (RTF), and the Trust Fund for Environmentally & Socially Sustainable Development (TFESSD) is gratefully acknowledged.
