
Lecture 11: Events and Performance Evaluations


Presentation Transcript


  1. Lecture 11: Events and Performance Evaluations • The following topics will be covered: • Event studies in general • Short-run abnormal returns • Long-run performance • A review of materials on hypothesis testing

  2. Event Study in General • Defining an event • Estimating normal returns • Constant-mean-return model • Market model • Mean-adjusted-return model • Economic model • Computing abnormal returns • See pages 150-156, Chapter 4, CLM • WRDS Eventus can handle event studies • http://web.mit.edu/doncram/www/eventstudy.html
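A minimal sketch of the market-model calculation described above, assuming aligned daily return arrays for one security and the market index (the function and variable names are illustrative, not from the lecture or Eventus):

import numpy as np

def market_model_abnormal_returns(r_stock, r_market, est_len, evt_start, evt_end):
    # Fit alpha and beta by OLS over the estimation window [0, est_len).
    X = np.column_stack([np.ones(est_len), r_market[:est_len]])
    alpha, beta = np.linalg.lstsq(X, r_stock[:est_len], rcond=None)[0]
    # Abnormal return AR_t = R_t - (alpha + beta * R_m,t) over the event window.
    window = slice(evt_start, evt_end + 1)
    ar = r_stock[window] - (alpha + beta * r_market[window])
    return ar, ar.cumsum()    # abnormal returns and their cumulative sum (CAR)

The constant-mean-return and market-adjusted-return models drop the regression step, replacing (alpha + beta * R_m,t) with the estimation-window mean return or the market return, respectively.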

  3. Test Specification • The size of a test – the probability of a type I error • Type I error – the procedure may lead to rejection of the null hypothesis when it is true • In event-study terms, this means "discovering" an event when there is actually no event • The power of a test – the probability that it will lead to rejection of a false null hypothesis • Type II error – the procedure may fail to reject the null hypothesis when it is false • In event-study terms, this means failing to discover an event when there actually is one • A test whose empirical size matches its nominal size is referred to as a well-specified test • A test with low power has a high type II error rate

  4. Brown and Warner, 1980 • Examine the specification of various models used in event studies with monthly data • Generating hypothetical "events": • Construct 250 samples, each containing 50 securities, selected at random and with replacement from CRSP data • Abnormal performance is artificially added to a given sample • Adding 0% means no event (no abnormal return is added) • Alternative methods (with and without risk adjustment): • Mean-adjusted returns • Market-adjusted returns • Market model residuals • Fama-MacBeth residuals (two-step estimation procedure) • Control portfolio
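The simulation logic can be summarized in a short sketch; the return-generating process, volatility, and the injected abnormal return below are illustrative assumptions in the spirit of the paper, not its exact design:

import numpy as np

rng = np.random.default_rng(0)
n_samples, n_sec, injected_ar = 250, 50, 0.0   # 0.0 = "no event"; try 0.01, 0.05, ...
rejections = 0
for _ in range(n_samples):
    # Mean-adjusted abnormal returns for 50 randomly drawn securities in the event month.
    ar = rng.normal(0.0, 0.05, n_sec) + injected_ar
    t_stat = ar.mean() / (ar.std(ddof=1) / np.sqrt(n_sec))
    rejections += abs(t_stat) > 1.96            # 5% two-sided test
print("empirical rejection rate:", rejections / n_samples)

With no injected abnormal return, the rejection rate estimates the size of the test; with a positive injection, it estimates the power.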

  5. Brown and Warner, 1980 (Cont’d) • t-test, sign test, and Wilcoxon signed rank test are applied • The effect of clustering • If the model disturbances (i.e., abnormal returns) are correlated, clustering increases the variance of the performance measures and lowers the power of the test • Event-date clustering; security-risk clustering • Inducing clustering in event dates of sample securities – (a) "events" are in the same event month; (b) high- or low-beta stocks • Test procedures: (1) assuming no clustering; (2) taking the dependence into account • General findings • The VW index often causes false rejections without increasing the power of the tests • The market model performs well • Risk adjustment does not add much

  6. Brown and Warner, 1985 • Examine the effectiveness of various event study methodologies for daily returns • Issues with daily returns • Non-normality (fat tails) • Non-synchronous trading biases beta estimates from daily data (Scholes and Williams, 1977; Dimson, 1979) • Shares traded infrequently have downward-biased beta estimates • Shares traded frequently have upward-biased beta estimates • Daily returns are serially and cross-sectionally correlated • Specify a variance-autocovariance matrix as on page 29 • Event-induced variance • Boehmer, Masumeci and Poulsen (1991)
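One standard response to the non-synchronous-trading bias is the Dimson (1979) aggregated-coefficients beta, sketched below assuming aligned daily return arrays (the helper name is hypothetical):

import numpy as np

def dimson_beta(r_stock, r_market, lags=1):
    # Regress the stock return on lagged, contemporaneous, and leading market
    # returns, then sum the market slopes to form the adjusted beta.
    T = len(r_stock)
    cols = [np.ones(T - 2 * lags)]
    for k in range(-lags, lags + 1):
        cols.append(r_market[lags + k : T - lags + k])
    X = np.column_stack(cols)
    coefs = np.linalg.lstsq(X, r_stock[lags : T - lags], rcond=None)[0]
    return coefs[1:].sum()    # drop the intercept, sum the market slopes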

  7. Brown and Warner, 1985 (Cont’d) • A simulation procedure similar to Brown and Warner (1980) is applied • General findings: • Standard methodologies are well specified • Recognizing autocorrelation in daily excess returns and changes in variance is advantageous • Tests ignoring cross-sectional dependence are advantageous

  8. Inferences with Clustering • Event-date clustering • Industry clustering • Portfolio approach • The abnormal returns are aggregated into a portfolio dated in event time • Multivariate regression with dummy variables for the event date • See Schipper and Thompson (1983, 1985) • Not much advantage
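A minimal sketch of the portfolio approach, assuming a T x N matrix of abnormal returns already aligned in event time (the layout and function name are assumptions):

import numpy as np

def portfolio_car_test(ar_matrix):
    # ar_matrix: T event periods x N firms of abnormal returns (assumed layout).
    port_ar = ar_matrix.mean(axis=1)      # equally weighted event-time portfolio AR
    t_stat = port_ar.mean() / (port_ar.std(ddof=1) / np.sqrt(len(port_ar)))
    return port_ar.sum(), t_stat          # portfolio CAR and its t-statistic

Because firms are collapsed into a single portfolio, contemporaneous correlation across securities is absorbed into the portfolio's time-series variance rather than being ignored.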

  9. Short-run vs. Long-run Event Studies • Methodological issues are less important for short-run event studies, but an appropriate test is critical to long-run analysis because any small error in methodology is amplified in long-run studies • Short-run studies do not involve much choice of benchmark; long-run performance analysis, however, depends heavily on this choice (Fama 1998) • Recall what we have in short-run event studies and what we use in long-run event studies • It is difficult for the condition "everything else being equal" to hold

  10. Main Issues • Calculation of abnormal returns - CAR – implicitly assumes investors follow a high-turnover strategy, holding stocks over monthly or even daily intervals - BHAR – assumes buy-and-hold investors - Which assumption is more appropriate? - Fama (1998) - Barber and Lyon (1997) • Evaluation of long-run performance (benchmarks) - Reference Portfolio Approach (e.g., EW market index, FF 3 factors) - Matching Portfolio Approach (Size, B/M) - Control Firm Approach

  11. Calculating Long-run Returns
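As a reference point, the standard definitions of the two long-run return measures (not necessarily the slide's exact notation), with R_{B,t} denoting the benchmark return (reference portfolio, matching portfolio, or control firm):

\[
\mathrm{CAR}_i(1,T)=\sum_{t=1}^{T}\left(R_{i,t}-R_{B,t}\right),\qquad
\mathrm{BHAR}_i(1,T)=\prod_{t=1}^{T}\left(1+R_{i,t}\right)-\prod_{t=1}^{T}\left(1+R_{B,t}\right)
\]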

  12. CAR vs. BHAR • CAR does not consider compounding • CAR uses an arithmetic rather than a geometric average • Barber and Lyon (1997) illustrate the difference - For BHAR close to zero (and below), CAR is higher - For large BHAR, CAR is much lower - Conceptually at least, BHAR looks better • BHAR can give false impressions of the speed of price adjustment to an event, because BHAR can grow with the return horizon even when there is no abnormal return after the first period (Mitchell and Stafford, 1997) • CARs pose fewer statistical problems than BHARs, so CAR should be used rather than BHAR (Fama, 1998)
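A tiny made-up example of the compounding wedge between the two measures:

import numpy as np

r_firm  = np.array([0.10, 0.10, 0.10])   # firm returns over 3 periods (made up)
r_bench = np.array([0.02, 0.02, 0.02])   # benchmark returns (made up)
car  = np.sum(r_firm - r_bench)                       # 0.24
bhar = np.prod(1 + r_firm) - np.prod(1 + r_bench)     # ~0.27
print(car, bhar)

Here BHAR exceeds CAR because CAR ignores compounding; for returns close to zero over short horizons the two measures are nearly identical.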

  13. Biases • New listing bias - Arises because firms that constitute the index (or reference portfolio) include new firms that begin trading subsequent to the event month • Rebalancing bias - Arises because compound returns of a reference portfolio, such as an equally weighted market index, are typically calculated assuming periodic (generally monthly) rebalancing, while the returns of sample firms are compounded without rebalancing • Skewness bias - Arises because long-run abnormal returns are positively skewed

  14. Empirical Power and Specification of Test Statistics • Abnormal returns calculated using reference portfolios yield test statistics that are misspecified (empirical rejection rates exceed theoretical rejection rates) • Matching sample firms to control firms of similar size and book-to-market ratios yields well-specified test statistics • Adapted from Barber and Lyon (1997)

  15. Effects of Small Firms, B/M and Leverage • Size and B/M effects: Brav, Geczy, and Gompers, JFE, 2000 • Fama (1998) – IPO underperformance is concentrated in "tiny" firms • Carefully match IPO firms to size and B/M benchmark groups • Adjust the Fama-French three factors by excluding firms going public in the last five years • They also find IPO underperformance co-varies with factor returns constructed from nonissuing firms (page 234, Table 7) • Leverage effect: Eckbo, Masulis and Norli, JFE, 2000 • Issuing an SEO lowers leverage, and thus risk • Appropriately controlling for risk would eliminate such underperformance

  16. Calendar-Time versus Event-Time Analysis • Intensively discussed in Fama (1998), Mitchell and Stafford (1999), and Brav, Geczy and Gompers (2000) • Refers to abnormal return calculation based on the reference portfolio approach • The event-time portfolio approach computes abnormal returns using alphas • The calendar-time portfolio approach calculates abnormal returns using (alphas + errors) • Reduces the effect of mispricing (pages 236-237)
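A hedged sketch of a calendar-time portfolio test, assuming monthly series port_exret (excess return of the portfolio of recent event firms) and the three Fama-French factors are already aligned (the series names are assumptions, not the lecture's data):

import numpy as np
import statsmodels.api as sm

def calendar_time_alpha(port_exret, mkt_rf, smb, hml):
    # Regress the event-firm portfolio's excess return on the three factors;
    # the intercept (alpha) measures average monthly abnormal performance.
    X = sm.add_constant(np.column_stack([mkt_rf, smb, hml]))
    res = sm.OLS(port_exret, X).fit()
    return res.params[0], res.tvalues[0]   # alpha and its t-statistic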

  17. Review: Hypothesis Testing • Used in CAPM tests • H0: c(θ)=0 • Likelihood Ratio Test • The intuition is that if the restriction (the null hypothesis) is valid, imposing it should not lead to a large reduction in the log-likelihood function • Let λ be the ratio of the restricted to the unrestricted maximized likelihood (see below) • Under the null, the large-sample distribution of -2lnλ is chi-squared, with degrees of freedom equal to the number of restrictions imposed
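In standard notation (e.g., Greene 2000), with the restricted and unrestricted maximum likelihood estimates written as θ̂_R and θ̂_U, the likelihood ratio statistic is:

\[
\lambda=\frac{L(\hat{\theta}_R)}{L(\hat{\theta}_U)},\qquad
-2\ln\lambda=-2\left[\ln L(\hat{\theta}_R)-\ln L(\hat{\theta}_U)\right]
\]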

  18. Wald Test • A shortcoming of the likelihood ratio test is that it requires estimation of both the restricted and unrestricted parameter vectors • Let θ be the vector of parameter estimates obtained without restrictions; we hypothesize a set of restrictions: c(θ)=q • The Wald statistic is given below • Under the null, W has a chi-squared distribution with degrees of freedom equal to the number of restrictions
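In standard notation, with C the Jacobian of the restrictions evaluated at the unrestricted estimate, the Wald statistic is:

\[
W=\left[c(\hat{\theta})-q\right]'\left[C\,\widehat{\operatorname{Var}}(\hat{\theta})\,C'\right]^{-1}\left[c(\hat{\theta})-q\right],\qquad
C=\frac{\partial c(\hat{\theta})}{\partial\hat{\theta}'}
\]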

  19. Lagrange Multiplier Test • The LM statistic is given below • Under the null, LM has a limiting chi-squared distribution with degrees of freedom equal to the number of restrictions • See pages 159-160, Greene 2000, for the example
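In standard notation, evaluating the score and the information matrix I(θ) at the restricted estimate θ̂_R, the Lagrange multiplier (score) statistic is:

\[
LM=\left(\frac{\partial\ln L(\hat{\theta}_R)}{\partial\hat{\theta}_R}\right)'\left[I(\hat{\theta}_R)\right]^{-1}\left(\frac{\partial\ln L(\hat{\theta}_R)}{\partial\hat{\theta}_R}\right)
\]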

  20. Exercises • 4.? CLM • Estimating by MOM and MLE using SAS will get the question done
