
Natural Experiments and Firms



  1. Natural Experiments and Firms Rocco Macchiavello CAGE Summer School July 14th, 2010 E-mail: r.macchiavello@warwick.ac.uk

  2. 1. Introduction: review of difference-in-differences » basic idea » further issues - further remarks on: » event studies, falsification tests, examples » regression discontinuity 2. Examples from the literature: - a micro ‘experiment’ [regulation and credit] - another micro ‘experiment’ [entry and spillovers] Goal: give you an informal (i.e., non-technical), practice-oriented list of boxes to check when you are planning to use a natural experiment in a particular setting ...

  3. Let me jump straight into the problem, and leave for the end of the class a discussion of methods in the study of firms in LDCs. Suppose we want to evaluate the effect of a program. I use the word program in a very broad way: - a policy, - giving money, redeeming debts, giving a subsidy, - the entry of a competitor, etc ... - a shock (of intrinsic interest, or one that you use to identify a response). Let us assume that the program was not randomly offered to (let alone taken up by) a subset of the observational units. In other words: you do not have an experiment (and, when thinking about larger firms, you typically won’t!)

  4. Suppose you only have data post treatment, i.e., after the program has been implemented. Then just give up !! Even if you have a “control” group that has not received the program, you won’t be able to show it is a good control group, since you can’t show the two groups looked very similar before the program. So, at the very minimum, you want to use pre- and post-treatment data. Suppose you just have pre- and post-treatment data, but no control group (i.e., all the units you observe were affected by the program). Then, again, just give up !! There are too many other things that might have changed between before and after, and you can’t show the effects are due to the program.

  5. So, at the very, very minimum, you need: 1. data from before and after, 2. a control group. This is the idea behind difference-in-differences (DiD).

  6. [Figure: outcome of the treatment group, Y0T before and Y1T after] The problem is that between the two periods many other things might have changed.

  7. [Figure: outcomes of the treatment (Y0T, Y1T) and control (Y0C, Y1C) groups] Solution: find a control group that is unaffected by the program but otherwise behaves exactly the same ... Wait a minute: how do I know it behaves the same?

  8. [Figure: treatment and control outcomes over time] How do you make sure that the control is a valid group? Usually check for equality in pre-existing trends !! Not levels !!

  9. [Figure: treatment and control outcomes over time] This is the difference-in-differences estimator.
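
As a purely numerical illustration of the estimator in the figure (all numbers below are made up, not from the slides), here is a minimal sketch in Python:

```python
# Hypothetical group means (made-up numbers, for illustration only).
y0_T, y1_T = 10.0, 15.0   # treatment group: before, after
y0_C, y1_C = 9.0, 12.0    # control group: before, after

# Change over time within each group ...
change_T = y1_T - y0_T    # 5.0
change_C = y1_C - y0_C    # 3.0

# ... and the difference of those changes is the DiD estimate.
did = change_T - change_C
print(did)                # 2.0
```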

  10.–19. You can do this in a regression format:

  Y = β₁ + β₂ · Treat + β₃ · Post + β₄ · (Treat × Post) + ε

  where Treat = 1 for treatment-group units and Post = 1 for post-treatment periods. Reading off the coefficients:
  - β₁ : pre-treatment outcome in the C group
  - β₁ + β₂ : pre-treatment outcome in the T group
  - β₁ + β₃ : post-treatment outcome in the C group
  - β₁ + β₂ + β₃ + β₄ : post-treatment outcome in the T group
  so β₄ is the difference-in-differences estimator.
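
A minimal sketch of this regression in Python (statsmodels is used purely for illustration; the file name and the columns y, treat, post are assumptions, not from the slides):

```python
import pandas as pd
import statsmodels.formula.api as smf

# Assumed panel layout: one row per unit and period, with columns
#   y     : outcome
#   treat : 1 for treatment-group units, 0 for controls
#   post  : 1 for post-treatment periods, 0 otherwise
df = pd.read_csv("panel.csv")

# "treat * post" expands to treat + post + treat:post, i.e. the equation
# above; the coefficient on treat:post is the DiD estimate (β₄).
did_ols = smf.ols("y ~ treat * post", data=df).fit()
print(did_ols.params["treat:post"])
```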

  20.–21. Easy to calculate standard errors: in principle you can do this by OLS. In practice, you have to take into account several issues: - autocorrelation over time, - (potentially) spatial correlation across units. Two ways of dealing with this: - if you have a very strong prior, you can explicitly model the standard errors (Conley (2007)); - if you do not, then cluster, i.e., allow arbitrary correlation patterns within clusters (see, e.g., Bertrand et al. (QJE 2004), Cameron, Gelbach, Miller (2009)). What is the cost of clustering then? Loss of efficiency!
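
A hedged sketch of the clustered version, continuing the illustrative panel above and assuming a cluster identifier column (here called cluster_id):

```python
import pandas as pd
import statsmodels.formula.api as smf

df = pd.read_csv("panel.csv")

# Clustering allows arbitrary correlation of the errors within each cluster
# (e.g., a region or state), at the cost of some efficiency.
did_clustered = smf.ols("y ~ treat * post", data=df).fit(
    cov_type="cluster", cov_kwds={"groups": df["cluster_id"]}
)
print(did_clustered.summary())
```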

  22. Easy to include multiple periods: - you want multiple pre-periods to validate identification, - you want multiple post-periods to assess short-term vs. long-term effects. Control for other variables (e.g., trends): - if the identification strategy is valid, this should only affect the residual variance, i.e., the standard errors, not the coefficient. Study treatments with different intensities: - the treatment variable can be, e.g., the amount of a loan, a subsidy, or a tax rate.
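
One hedged way to implement the multi-period version (the columns unit_id, period, event_time are assumptions; never-treated units are coded at the reference value, a common shortcut): unit and period fixed effects plus leads and lags of the treatment, so that pre-trends and short- vs. long-run effects can be read off the coefficients.

```python
import pandas as pd
import statsmodels.formula.api as smf

df = pd.read_csv("panel.csv")
# event_time: period relative to treatment (..., -2, -1, 0, 1, 2, ...);
# here never-treated units are coded at the reference value -1.

event_study = smf.ols(
    "y ~ C(unit_id) + C(period) + C(event_time, Treatment(reference=-1))",
    data=df,
).fit(cov_type="cluster", cov_kwds={"groups": df["unit_id"]})

# Coefficients on negative event times should be close to zero (pre-trends);
# those on 0, 1, 2, ... trace out short- vs. long-run effects.
print(event_study.params.filter(like="event_time"))
```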

  23. Remark: the idea that interaction terms can be used to identify channels is broader. An example: Rajan and Zingales (1998). Suppose you want to know whether financial development leads to growth. Hard problem if you just use cross-country data. Intuition: if FD → growth, it should do so relatively more in industries that require a lot of external finance. Interact FD in the country with an exogenous component of the demand for finance in the industry (e.g., a proxy for a technological characteristic). Which further falsification tests? - is it really FD? - is it really demand for finance? The same logic can be applied in the context of DiD to get a triple difference (DiDiD).
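
A minimal sketch of that interaction regression (file and column names are illustrative assumptions): country and industry fixed effects absorb the main effects, and the coefficient of interest is on the interaction.

```python
import pandas as pd
import statsmodels.formula.api as smf

df = pd.read_csv("industry_country.csv")
# growth      : growth of an industry in a country
# fin_dev     : country-level financial development
# ext_fin_dep : industry-level dependence on external finance

rz = smf.ols(
    "growth ~ fin_dev:ext_fin_dep + C(country) + C(industry)",
    data=df,
).fit(cov_type="cluster", cov_kwds={"groups": df["country"]})
print(rz.params["fin_dev:ext_fin_dep"])
```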

  24. So far we have referred to a generic “program”. Depending on what you are trying to “evaluate”, think about some of the following: - endogenous selection (e.g., a program was offered but not everyone eligible took it up – intention to treat) => this can lead to bias; - anticipation effects (e.g., a major policy change was approved, but after a lengthy discussion, ...) => this can lead to bias * this is problematic because you’d be tempted to learn from unanticipated shocks, but then it might be hard to extrapolate; - short-term vs. long-term effects (if you have many periods, always check graphically / non-parametrically); - harvesting effects (e.g., the effect of heat waves on the elderly); - spillover effects (e.g., industry equilibrium); - heterogeneous effects (not a problem per se, but the parameter you are identifying might not be the relevant one).

  25. This is a cheap slide. Often, soft sceptics of your work (including yourself!) can be convinced by a placebo test. What is the idea of a placebo test? The idea is to show that some outcome variable that should not have changed did not change. Stated in this form, it is not very clever. Consider examples: - Regulations that apply only to some sectors, infrastructure only to some regions, etc. - Margins that could not adjust (e.g., product characteristics that are fixed in the short run) - Equality of trends before the treatment
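
Two hedged ways to implement this in the same regression framework (names are illustrative): run the DiD on an outcome that should not respond, or assign a fake treatment date within the pre-treatment window.

```python
import pandas as pd
import statsmodels.formula.api as smf

df = pd.read_csv("panel.csv")

# Placebo 1: an outcome that should not have changed.
placebo_outcome = smf.ols("unaffected_outcome ~ treat * post", data=df).fit()

# Placebo 2: a fake treatment date inside the pre-treatment window only.
pre = df[df["post"] == 0].copy()
pre["fake_post"] = (pre["period"] >= pre["period"].median()).astype(int)
placebo_timing = smf.ols("y ~ treat * fake_post", data=pre).fit()

# Both "effects" should be indistinguishable from zero.
print(placebo_outcome.params["treat:post"],
      placebo_timing.params["treat:fake_post"])
```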

  26. Event studies. The idea is very similar to a DiD, though it is implemented differently across literatures. In Finance, for instance, people look at the effects of news or events on stock market returns. Two issues: - you need to construct a comparable portfolio, i.e., a control group; - the effect should appear “immediately”, so you need to show results around a “window”. This is clearly related to the evaluation of (short-run) shocks using synthetic control groups (Abadie et al. (various)). Fisman (AER 2004), Guidolin and La Ferrara (AER 2007) are good examples.
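
A rough sketch of such an event study (the file name, column names, dates, and window lengths are all illustrative assumptions): estimate a market model on a pre-event window, then cumulate abnormal returns in a short window around the event.

```python
import pandas as pd
import statsmodels.formula.api as smf

rets = pd.read_csv("returns.csv", parse_dates=["date"])
# assumed columns: date, firm_ret (firm or portfolio return), mkt_ret (market return)
event_date = pd.Timestamp("2008-01-01")   # illustrative event date

est_window = rets[rets["date"] < event_date - pd.Timedelta(days=30)]
evt_window = rets[(rets["date"] >= event_date - pd.Timedelta(days=5))
                  & (rets["date"] <= event_date + pd.Timedelta(days=5))]

# Market model ("comparable portfolio") estimated before the event ...
mkt_model = smf.ols("firm_ret ~ mkt_ret", data=est_window).fit()

# ... abnormal returns are actual minus predicted returns around the event;
# their sum is the cumulative abnormal return (CAR).
abnormal = evt_window["firm_ret"] - mkt_model.predict(evt_window)
print("CAR:", abnormal.sum())
```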

  27. Guns N’ Roses: the effects of ethnic violence on the Kenya flower industry. A paper of mine, written with two co-authors. I present a few graphs / tables simply to illustrate the checks I have suggested. Basically, the paper looks at the effect of a shock (ethnic violence) on firms in Kenya. Divide the country into two: a set of treated firms and a set of control firms. A short-run shock to the supply function: data on before, during, and after. So, what do we need?

  28. FIRMS DIFFER ACROSS REGIONS [table in the original slide]

  29. VIOLENCE EFFECTS – NO CATCH-UP [figures in the original slide: Panel A, expanding window; Panel B, rolling window]

  30. 1. validate the identification assumption - done: but trends, not levels! - can you do “over-identification” tests? 2. rule out harvesting effects / catch-up 3. no anticipation: survey + detailed knowledge of the production process / institutional setting in the industry 4. spillover effects?

  31. An alternative (though the basic idea is very similar) way to go is through a regression discontinuity. RD comes in two styles: sharp: treatment status is a deterministic and discontinuous function of one covariate fuzzy: probability of treatment is discontinuous at a certain point of one covariate => use discontinuity as an IV

  32. In its simplest form you just need to add an indicator variable at the discontinuity.

  33. But what if the relation between the outcome and the running variable is non-linear? You can still do RD by fitting a polynomial in X (and allowing the polynomial to differ on the two sides of the discontinuity).
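
A minimal sketch of both versions (illustrative file and column names; x is the running variable, c the cutoff): the simplest indicator-at-the-cutoff regression, and a quadratic allowed to differ on each side.

```python
import pandas as pd
import statsmodels.formula.api as smf

df = pd.read_csv("rd_data.csv")
c = 0.0                              # cutoff of the running variable x
df["xc"] = df["x"] - c               # re-centre the running variable
df["above"] = (df["xc"] >= 0).astype(int)

# Simplest form: a linear trend plus a jump (indicator) at the cutoff.
rd_linear = smf.ols("y ~ xc + above", data=df).fit()

# Non-linear trend: quadratic in x, allowed to differ on the two sides;
# the coefficient on `above` is still the estimated jump at the cutoff.
rd_poly = smf.ols("y ~ above * (xc + I(xc**2))", data=df).fit()

print(rd_linear.params["above"], rd_poly.params["above"])
```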

  34. But pay attention! It is easy to get a spurious “discontinuity” out of a misspecified functional form. To avoid this, just look at data in the neighbourhood of the discontinuity.

  35. ... but the ultimate challenge to a successful RD design is sorting. That is, you can’t do RD if firms sort around the discontinuity. Can you think of an example? Yes! Many countries have regulations of the form “pay taxes / costs if you employ more than Z employees”. Can you look around Z to understand the effects of the regulation? No! Firms sort! How do we know it? Well, typically the distribution of firm size is discontinuous at that point. Urquiola & Verhoogen (AER 2009) is a very nice paper on this.
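
A quick hedged check for this kind of sorting (the file name, column name, and threshold are illustrative): tabulate the distribution of firm size in narrow bins around the threshold and look for bunching just below it; a formal version is a density (McCrary-type) test, not shown here.

```python
import pandas as pd

df = pd.read_csv("firms.csv")    # illustrative: one row per firm
Z = 50                           # illustrative regulatory threshold (employees)

# A spike just below Z (and a hole just above) is the signature of firms
# sorting around the cutoff, which invalidates the RD design.
near = df[(df["employees"] >= Z - 10) & (df["employees"] <= Z + 10)]
print(near["employees"].value_counts().sort_index())
```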

  36. Banerjee and Duflo (2008) => use changes in regulation to identify credit constraints. Greenstone, Hornbeck and Moretti => use the entry of a large plant to identify spillovers. NOTE: - both are DiD papers (not RD)

  37. Are firms credit constrained? Knowing the answer to this question is important for many reasons (e.g., understanding aggregate differences in TFP, giving policy recommendations, ...). When is a firm credit constrained? If MPK > r (the interest rate paid on the marginal unit borrowed). So, to answer the question we just need an estimate of the MPK. This is easier said than done. In fact, it is an extremely difficult problem, not least because inputs (e.g., capital) are endogenously chosen by firms that know more than the econometrician about their productivity => the error term is correlated with the explanatory variables. One solution is to hand out money randomly. De Mel, McKenzie & Woodruff (2008) randomly allocated capital (about $200) to microenterprises and find returns of about 5% per month.

  38. Another sector where it is relatively easy to infer the importance of credit constraints is agriculture (use random variation in weather). Great. But still, it is important to know whether larger firms are credit constrained – if anything because the bulk of capital is invested there (but also for other reasons). Banerjee and Duflo (2008) look at this issue exploiting a natural experiment. The experiment is given by changes in the law directing (subsidized) credit to the priority sector in India. The priority sector is defined w.r.t. the capital invested in the firm.

  39.–40. This is an interesting paper on many different levels: 1. clever design to tackle an otherwise difficult problem, 2. simple theory that does 2 things: - derive the test for credit constraints (intuitive) - provide some guidance on how to interpret results 3. both reduced form and IV results: nice example of when a natural experiment can be used to identify a structural parameter 4. effectively they have 2 experiments (expansion and contraction in the rule) which allows them to test the validity of the identification assumption

  41. All banks (public and private) are required to lend at least 40% of their net credit to the “priority sector”, which includes agriculture, agricultural processing, the transport industry, and small scale industry (SSI). If banks do not satisfy the priority sector target, they are required to lend money to specific government agencies at very low rates of interest (i.e., the policy is binding). Changes in the definition of the SSI sector: January 1998: the limit on investment in plant and machinery was raised from Rs. 6.5 million to Rs. 30 million; January 2000: it was lowered to Rs. 10 million. The first reform should lead to an increase in lending to the larger firms newly included, possibly at the expense of the smaller firms, and vice versa for the second change.

  42. Focus on the demand for subsidized bank credit. Consider a firm with limited access to cheap bank credit that can also borrow from the market at a higher rate. How does increased access to cheap bank credit affect the market borrowing, revenues, and profits of the firm? R = f(k) rupees of revenue after a suitable period, where k is working capital and f(k) is increasing and concave. Definition: a firm is credit constrained if there is no interest rate such that the amount that the firm wants to borrow at that rate is equal to the amount that all the lenders taken together are willing to lend at that rate. Bank rate r, market rate i, with r < i (bank credit is the cheap source).

  43. The policy means that, at the same rate, the bank now offers more credit. If firms accept the additional credit then they are credit rationed by the bank. But this does not imply they are credit constrained: they might not have wanted to borrow more at the market interest rate. Since r < i, a non-constrained firm has MPK = i and, therefore, all new cheap credit from the bank goes to pay down other debt. No increase in invested capital, production, sales, etc. The firm's profits increase because of the subsidy. Output could increase only if the priority sector credit fully substitutes for market borrowing. Under credit constraints, instead, the firm's output increases and the firm still borrows from the market. Note: the logic of the test fails if the firm cannot pay down its market debt and/or the choice is not at the margin (i.e., the subsidy allows the firm to survive => this can be checked)
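
A minimal sketch of the comparative statics behind this test, in the notation above (bank rate r, market rate i, r < i; B is subsidized bank credit); this is a stylized restatement of the argument, not the paper's full model:

```latex
% Unconstrained firm: capital is set where the marginal product equals the
% marginal (market) cost of funds, so extra bank credit merely replaces
% market debt (as long as some market debt is outstanding) and profits rise
% by the interest saving:
f'(k^{*}) = i \;\Rightarrow\;
\frac{dk^{*}}{dB} = 0, \quad
\frac{d(\text{market debt})}{dB} = -1, \quad
\frac{d\pi}{dB} = i - r > 0.

% Credit-constrained firm: the marginal product exceeds the market rate,
% so every extra rupee of bank credit is invested and output rises, while
% the firm keeps borrowing from the market:
f'(k) > i \;\Rightarrow\; \frac{dk}{dB} = 1, \quad \text{output and sales increase.}
```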

  44. Describe the specification [triple difference]. Note: quite nicely, because of the two experiments, the identification strategy can be tested. If the results were purely driven by differential trends across groups, the trends would have to have been increasing for one group and then decreasing for a subset of that group. Further restrictions on the estimated parameters are imposed by the fact that an increase and a decrease in capital should mirror each other (under some additional assumptions?)

  45. Bank lending and firm revenues went up (down) for the newly targeted (newly excluded) firms in the year of the reform, relative to firms that were already included (remained included). No evidence of substitution of bank credit for borrowing from the market, and no evidence that revenue growth was driven by firms that had fully substituted bank credit for market borrowing. Overidentification test: is the effect of credit the same in the two cases? The authors also use these data to estimate parameters of the production function: the evidence is consistent with IRC. Further results on the allocation of credit [not discussed here]

  46. Production of traded goods is geographically concentrated, sometimes in locations (e.g., London) where costs are extremely high. Why? Agglomeration externalities could be one answer: e.g., input and labour market thickness advantages, direct productivity spillovers, etc. The issue is very important from an industrial policy point of view. • There are two primary approaches to testing for spillovers: 1. tests for an unequal geographic distribution of firms: are firms spread unevenly? Are co-agglomeration rates higher between industries that are economically similar? This approach does not provide a direct measure of spillovers. 2. is a firm’s TFP higher when similar firms are located nearby?

  47. The challenge for both approaches is that firms base their location decisions on where their profits will be highest, and this could be due to spillovers, natural advantages, or other cost shifters. A causal estimate of the magnitude of spillovers requires a solution to this identification problem. GHM (2008) propose one. Take a very large firm, e.g., Toyota, deciding where to locate a huge new factory. To do that, Toyota chooses among a list of potential sites (e.g., counties). Typically, it will start with a very long list. Eventually, the list boils down to 2: the winning county and the runner-up. Sure – the winner is not randomly chosen. But the runner-up should give a much better control group than the average county. cf. Synthetic Control Methods [A. Abadie 2003/07]

  48. Example: how did BMW pick the location for one of its plants? A worldwide competition considering 250 potential sites => announced in 1991 that the list had been narrowed to 20 U.S. candidates => 6 months later BMW announced two finalists (Greenville-Spartanburg, South Carolina, and Omaha, Nebraska) => in 1992, BMW announced that Greenville-Spartanburg had won. BMW received a package of incentives worth approximately $115 million funded by the state and local governments. Why did BMW choose Greenville-Spartanburg? 1. BMW’s expected future costs of production in GS: according to BMW, the characteristics that favoured GS were: low union density; a supply of qualified workers; numerous global firms in the area, including 58 German companies; high quality transportation infrastructure, including air, rail, highway, and port; access to key local services. 2. the subsidy [Not random at all!] => this could be a big issue, no?

  49. METHODOLOGICAL NOTE: information comes from a journal (Site Selection). General point here: there is often a lot (really a lot!) of information in business directories, specialized trade journals, etc. Often this information is in non-anonymous format (i.e., with names) and – though hard to collect – can potentially be matched with administrative records. THIS IS SOMETHING THAT DEVELOPMENT ECONOMISTS INTERESTED IN FIRMS AND INDUSTRIAL DEVELOPMENT OUGHT TO EXPLOIT MORE !
