
The logic of Counterfactual Impact Evaluation


Presentation Transcript


  1. The logic of Counterfactual Impact Evaluation

  2. To understand counterfactuals, it is necessary to understand impacts

  3. Impacts differ in one fundamental way from outputs and results: outputs and results are observable quantities

  4. Can we observe an impact? No, we can't

  5. Just as output indicators measure outputs and result indicators measure results, impact indicators measure impacts? Sorry, they don't

  6. Almost everything about programmes can be observed (at least in principle): outputs (beneficiaries served, activities carried out, training courses offered, km of roads built, sewage systems cleaned) and outcomes/results (income levels, inequality, well-being of the population, pollution, congestion, inflation, unemployment, birth rate)

  7. What is needed for M&E of outputs and results are BITs (baselines, indicators, and targets)

  8. Unlike outputs and results, to define, detect, understand, and measure impacts one needs to deal with causality

  9. “Causality is in the mind” J.J. Heckman

  10. Why this focus on causality? Because, unless we can attribute changes (or differences) to policies, we do not know whether the intervention "works", "for whom" it works, and even less "why" it works (or does not). Causal questions represent a bigger challenge than non-causal questions (descriptive, normative, exploratory)

  11. The social science community defines impact/effect as "the difference between a situation observed after a stimulus has been applied and the situation that would have occurred without such stimulus"
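Anticipating the potential-outcomes language of slide 24, one standard way to formalise this definition (our notation, not the deck's) is, for unit $i$:

$$\Delta_i \;=\; Y_i(1) - Y_i(0)$$

where $Y_i(1)$ is the outcome after the stimulus and $Y_i(0)$ the outcome without it; since each unit is either exposed or not, only one of the two potential outcomes is ever observed.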

  12. A very intuitive example of the role of causality in producing credible evidence for policy decisions

  13. Does playing chess have an impact on math learning?

  14. Policy-relevant question: should we make chess part of the regular curriculum in elementary schools, to improve mathematics achievement? What kind of evidence do we need to make this decision in an informed way? We can think of three types of evidence, from the most naive to the most credible

  15. 1. The naive evidence: pre-post difference
  • Take a sample of pupils in fourth grade
  • Measure their achievement in math at the beginning of the year
  • Teach them to play chess during the year
  • Test them again at the end of the year

  16. Results for the pre-post difference
  Pupils at the beginning of the year: average score = 40 points
  The same pupils at the end of the year: average score = 52 points
  Difference = 12 points = +30%
  Question: what are the implications for making chess compulsory in schools? Have we proven anything?
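The arithmetic of this first, naive estimator, as a minimal Python sketch (numbers taken from the slide):

```python
# Naive estimator 1: pre-post difference on the same pupils.
pre, post = 40, 52
diff = post - pre                   # 12 points
print(diff, f"+{diff / pre:.0%}")   # 12 +30%
```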

  17. Can we attribute the increase in test scores to playing chess? OBVIOUSLY NOT
  The data tell us only that the effect is between zero and 12 points
  • There is no doubt that many more factors are at play
  • So we must dismiss the 12-point increase as unable to tell us anything about impact

  18. The pre-post great temptation
  • Pre-post comparisons have a great advantage: they seem kind of obvious (the "pop" definition of impact coincides with the pre-post difference)
  • Particularly when the intervention is big and the theory suggests that the outcomes should be affected
  • This is not the case here, but in general we should be careful about making causal inferences based on pre-post comparisons

  19. The risky alternative: with-without difference
  Impact = difference between treated and not treated?
  Compare math test scores for kids who have learned chess by themselves and kids who have not

  20. Not really
  Average score of pupils who already play chess on their own (25% of the total) = 66 points
  Average score of pupils who DO NOT play chess on their own (75% of the total) = 45 points
  Difference = 21 points = +47%
  This difference is OBJECTIVE, but what does it mean, really? Does it have any implication for policy?
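The same arithmetic for this second naive estimator (again a sketch, numbers from the slide):

```python
# Naive estimator 2: with-without difference across self-selected groups.
players, non_players = 66, 45
diff = players - non_players                # 21 points
print(diff, f"+{diff / non_players:.0%}")   # 21 +47%
```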

  21. This evidence tells us almost nothing about making chess compulsory for all students
  The data tell us only that the effect of playing chess is between zero and 21 points
  Why? The observed difference could be entirely due to differences in mathematical ability that existed between the two groups before any chess was played

  22. 66 – 45: real effect or the fruit of sorting?
  [Diagram: innate math ability has a direct influence on math test scores and also drives the selection process into playing chess]
  Ignoring math ability could severely bias the results, if we intend to interpret them as a causal effect

  23. Counterfeit Counterfactual
  Both the raw difference between self-selected participants and non-participants and the raw change between pre and post are caricatures of the counterfactual logic
  In the case of raw differences, the problem is selection bias (predetermined differences)
  In the case of raw changes, the problem is maturation bias (a.k.a. natural dynamics)

  24. The modern way to understand causality is to think in terms of POTENTIAL OUTCOMES
  Let us imagine we know both the score that kids would get if they played chess and the score they would get if they did not

  25. Let's say there are three levels of ability
  Kids in the top quartile (top 25%) learn to play chess on their own
  Kids in the two middle quartiles learn if they are taught in school
  Kids in the bottom quartile (bottom 25%) never learn to play chess

  26. High math ability (25%): play chess by themselves
  Mid math ability (50%): do not play chess unless taught in school
  Low math ability (25%): never learn to play

  27. Impact = gain from playing chess
  Potential outcomes:
  Ability             If they play chess   If they do NOT play chess   Impact
  High math ability           66                      56                 10
  Mid math ability            54                      48                  6
  Low math ability            40                      40                  0
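The table as code: a sketch that stores the slide's assumed potential outcomes and derives each group's impact by definition (these impacts are definitional quantities, never directly observable):

```python
# Potential outcomes per ability group: score if playing vs. not playing.
potential = {
    "high": {"play": 66, "no_play": 56},
    "mid":  {"play": 54, "no_play": 48},
    "low":  {"play": 40, "no_play": 40},
}
# Impact = difference between the two potential outcomes of the same group.
for ability, y in potential.items():
    print(ability, y["play"] - y["no_play"])   # 10, 6, 0
```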

  28. Observed outcomes
  For those who play chess (high math ability): 66
  For those who do not play chess: mid math ability 48, low math ability 40, mid/low combined 45
  The difference of 21 points is NOT an IMPACT, it is just an OBSERVED difference

  29. The problem: we do not observe the counterfactual(s)
  • For the treated, the counterfactual is 56, but we do not see it
  • The true impact is 10, but we do not see it
  • Still we cannot use 45, which is the observed outcome of the untreated
  We can think of decomposing the 66 – 45 difference as the sum of the true impact on the treated and the effect of sorting

  30. Decomposing the observed difference
  High math ability: 66 if they play chess vs. 56 if they do not → 66 – 56 = 10 = impact for players
  Low/mid math ability (observed non-players): 45 → 56 – 45 = 11 = preexisting differences
  Observed difference: 66 – 45 = 21
  21 = 10 + 11

  31. 21 = 10 + 11
  Observed difference = Impact + Preexisting differences (selection bias)
  The heart of impact evaluation is getting rid of selection bias, by using experiments or some non-experimental method
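In potential-outcomes notation (our formalisation; D = 1 marks the self-selected players), the slide's equation reads:

$$\underbrace{E[Y(1)\mid D=1]-E[Y(0)\mid D=0]}_{\text{observed difference}\;=\;21}\;=\;\underbrace{E[Y(1)-Y(0)\mid D=1]}_{\text{impact on the treated}\;=\;10}\;+\;\underbrace{E[Y(0)\mid D=1]-E[Y(0)\mid D=0]}_{\text{selection bias}\;=\;11}$$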

  32. Experimental evidence to the rescue
  Schools get a free instructor to teach chess to one class, if they agree to select that class at random among their fourth-grade classes
  Now we have the following situation

  33. Results of the randomized experiment
  Pupils in the selected classes: average score of randomized chess players = 60 points
  Pupils in the excluded classes: average score of NON chess players = 52 points
  Difference = 8 points
  Question: what does this difference tell us?

  34. The results tell us that teaching chess truly improves math performance (by 8 points, about 15%). Thus we are able to isolate the effect of chess from other factors (but some problems remain)
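Why randomization works can be seen in a small simulation, a sketch built on the deck's assumed potential outcomes (slides 26–27), not on real data: because assignment is independent of ability, treated and control classes end up with roughly the same ability mix.

```python
import random

# Assumed potential outcomes per ability group: (score if taught chess,
# score if not), from slide 27.
POTENTIAL = {"high": (66, 56), "mid": (54, 48), "low": (40, 40)}

random.seed(42)
# Population composition from slide 26: 25% high, 50% mid, 25% low.
pupils = ["high"] * 250 + ["mid"] * 500 + ["low"] * 250
random.shuffle(pupils)                      # random assignment
treated, control = pupils[:500], pupils[500:]

treated_mean = sum(POTENTIAL[a][0] for a in treated) / len(treated)
control_mean = sum(POTENTIAL[a][1] for a in control) / len(control)

# Close to the population ATE (about 6), nowhere near the self-selection
# difference of 21. The deck's experiment found 8 because it ran only in
# volunteer schools, a higher-ability population (slide 38).
print(treated_mean - control_mean)
```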

  35. Average Treatment Effect (ATE)
  Ability        Composition   If they play chess   If they do NOT play chess   Impact
  High ability       25%               66                      56                 10
  Mid ability        50%               54                      48                  6
  Low ability        25%               40                      40                  0
  Averages          100%               54                      48
  Impact = ATE = 54 – 48 = 6
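Written out, the ATE is just the composition-weighted average of the group impacts; the exact value is 5.5, which the slides round to 6:

$$\mathrm{ATE} \;=\; 0.25\times 10 \;+\; 0.50\times 6 \;+\; 0.25\times 0 \;=\; 5.5 \;\approx\; 6$$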

  36. [Diagram: after randomization, playing chess no longer depends on math ability; ability still directly influences math test scores]
  Note that the experiment does not solve all the cognitive problems related to policy design: for example, it does not by itself identify impact heterogeneity ("for whom it works")

  37. The ATE is the average effect if every member of the population is treated
  Generally there is more policy interest in the Average Treatment effect on the Treated (ATT)
  ATT = 10 in the chess example, while ATE = 6 (we ran an experiment and got an impact of 8. Can you think why this happens?)

  38. Schools that volunteered vs. schools that did not
  Ability        Schools that volunteered   Schools that did NOT volunteer   True impact
  High ability           50%                             –                       10
  Mid ability            50%                            50%                       6
  Low ability             –                             50%                       0
  EXPERIMENTAL mean of 66 and 54 = 60; CONTROL mean of 56 and 48 = 52
  Impact = 60 – 52 = 8
  Internal validity, but little external validity
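The three headline numbers of the chess example follow from the group impacts and the three population mixes; a minimal sketch (assumed impacts from slide 27):

```python
# Per-group impacts of playing chess (high, mid, low math ability).
impact = {"high": 10, "mid": 6, "low": 0}

# ATT: the self-selected players are the high-ability quartile.
att = impact["high"]                                              # 10

# ATE: whole population, weights 25% / 50% / 25% (slide 26).
ate = 0.25 * impact["high"] + 0.5 * impact["mid"] + 0.25 * impact["low"]

# Experimental estimate: volunteer schools only, 50% high / 50% mid.
experiment = 0.5 * impact["high"] + 0.5 * impact["mid"]           # 8.0

print(att, ate, experiment)   # 10 5.5 8.0 (the deck rounds the ATE to 6)
```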

  39. Lessons learned
  Impacts are differences, but not all differences are impacts
  Differences (and changes) have many causes, but we do not need to understand all of them
  We are especially interested in one cause, the policy, and we would like to eliminate all the confounding causes of the difference (or change)
  Internal vs. external validity

  40. An example of a real ERDF policy: grants to small enterprises to invest in R&D

  41. To design an impact evaluation, one needs to answer three important questions:
  1. Impact of what?
  2. Impact for whom?
  3. Impact on what?

  42. R&D EXPENDITURES AMONG THE FIRMS RECEIVING GRANTS
  Is 10,000 the true average impact of the grant?

  43. The fundamental challenge to this assumption is the well-known fact that things change over time by "natural dynamics". How do we disentangle the change due to the policy from the myriad changes that would have occurred anyway?

  44. IS 15,000 THE TRUE IMPACT OF THE POLICY?

  45. WITH-WITHOUT (IDENTIFYING ASSUMPTION: NO PRE-INTERVENTION DIFFERENCES)

  46. DECOMPOSITION OF WITH-WITHOUT DIFFERENCES

  47. DECOMPOSITION OF WITH-WITHOUT DIFFERENCES

  48. We cannot use experiments with firms, for obvious (?) political reasons. The good news is that there are lots of non-experimental counterfactual methods
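One widely used non-experimental method, named here only as an illustration (the deck stops before discussing specific methods), is difference-in-differences: it combines the pre-post and with-without comparisons so that common time trends and time-invariant pre-existing differences both cancel. A minimal sketch with hypothetical R&D figures:

```python
# Difference-in-differences sketch (hypothetical figures, not from the deck).
granted   = {"pre": 100_000, "post": 125_000}   # firms receiving grants
ungranted = {"pre":  90_000, "post": 100_000}   # comparison firms

change_granted   = granted["post"] - granted["pre"]       # 25,000
change_ungranted = ungranted["post"] - ungranted["pre"]   # 10,000

# Subtracting each group's own pre-period level removes pre-existing
# differences; subtracting the comparison group's change removes the
# common "natural dynamics". What remains is the DiD impact estimate.
did = change_granted - change_ungranted                   # 15,000
print(did)
```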
