
MF-852 Financial Econometrics



  1. MF-852 Financial Econometrics Lecture 10 Serial Correlation and Heteroscedasticity Roy J. Epstein Fall 2003

  2. Topics • Serial correlation • What is it? • Effect on hypothesis tests • Testing and correcting for serial correlation • Heteroscedasticity • Ditto. • ARCH (or how to win a Nobel prize)

  3. Serial Correlation • The error terms in the regression should be independent, i.e., E(eᵢeⱼ) = 0 for all i ≠ j. • If this assumption fails, the errors are serially correlated. • Only a problem for time-series data.

  4. Serial Correlation — Possible Causes • Omitted variables. • Wrong functional form. • “Inertia” in economic data: the error term is composed of many small effects, each with a similar trend.

  5. Correlated Error Terms • Suppose E(eₜeₜ₋₁) ≠ 0. Implies neighboring observations are correlated, not independent. • 1st-order process. Most common form of serial correlation. • Suppose E(eₜeₜ₋₄) ≠ 0. • 4th-order process. Often occurs with quarterly data.
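To see what a 1st-order process looks like, one can simulate serially correlated errors and check the lag-1 correlation. A minimal Python sketch (not from the lecture; ρ = 0.7 and the sample size are illustrative values):

```python
import random

random.seed(0)

# Simulate a 1st-order serially correlated error: e_t = rho * e_{t-1} + u_t,
# where u_t is an independent mean-0 innovation.  rho = 0.7 is illustrative.
rho, n = 0.7, 20000
e, prev = [], 0.0
for _ in range(n):
    prev = rho * prev + random.gauss(0.0, 1.0)
    e.append(prev)

# Sample lag-1 autocorrelation between neighboring errors e_t and e_{t-1}
mean = sum(e) / n
cov1 = sum((e[t] - mean) * (e[t - 1] - mean) for t in range(1, n)) / (n - 1)
var = sum((x - mean) ** 2 for x in e) / (n - 1)
print(cov1 / var)  # close to rho = 0.7, i.e., E(e_t e_{t-1}) != 0
```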

  6. Graph of Residuals from a Regression

  7. Importance of Serial Correlation • Regression coefficients (the marginal effects) are unbiased. • BUT their standard errors are biased. • Bias generally understates the standard errors, so significance tests are biased against H0. • H0 is rejected too often.

  8. Bias in Standard Errors • Standard errors for the coefficients depend on the estimated variance of the error term, s²ₑ. • The regression program assumes independent errors with mean 0, so it calculates s²ₑ = Σêₜ² / (n − k), where the sum runs over all n residuals and k is the number of estimated coefficients.

  9. Why the Standard Errors are Biased • The calculation ignores the covariance terms when the errors are NOT independent. • The covariance between errors, when it exists, is usually positive. • So s²ₑ is understated and the standard errors are biased downward.
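A two-error example makes the direction of the bias concrete: with positive covariance, the variance of a sum of errors exceeds what the independence assumption gives. The numbers below are illustrative, not from the lecture:

```python
# Naive vs. correct variance of a sum of two errors, each with variance 1.
# The covariance value 0.5 is illustrative.
var_e = 1.0
cov = 0.5  # positive covariance between neighboring errors

naive = var_e + var_e               # what the program assumes (independence)
correct = var_e + var_e + 2 * cov   # includes the covariance term

print(naive, correct)  # 2.0 3.0 -- independence understates the true variance
```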

  10. Testing for Serial Correlation • The most common test is the Durbin-Watson statistic. • Only used for 1st-order serial correlation. • Calculated as DW = Σ(êₜ − êₜ₋₁)² / Σêₜ², where the numerator sums over t = 2, …, n and the denominator over t = 1, …, n.

  11. Durbin-Watson Stat. • When the covariance between neighboring observations is zero, DW should be close to 2. High positive covariance implies DW → 0. • H0 for no 1st-order serial correlation: DW = 2 • Look up critical values in table (RR, p. 592) • See sample regression in xls file.
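The DW statistic is easy to compute directly from a list of residuals. A short Python sketch (the residual series below are made-up illustrations, not from the lecture's xls file):

```python
# Durbin-Watson statistic from regression residuals:
# DW = sum_{t=2..n} (e_t - e_{t-1})^2 / sum_{t=1..n} e_t^2
def durbin_watson(resid):
    num = sum((resid[t] - resid[t - 1]) ** 2 for t in range(1, len(resid)))
    den = sum(r ** 2 for r in resid)
    return num / den

# Smooth, positively correlated residuals push DW toward 0
print(durbin_watson([1.0, 1.1, 1.2, 1.1, 1.0, 0.9]))
# Alternating (negatively correlated) residuals push DW toward 4
print(durbin_watson([1.0, -1.0, 1.0, -1.0, 1.0, -1.0]))
```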

  12. Model with Serial Correlation • Yₜ = β₀ + β₁Xₜ + eₜ • Suppose eₜ = ρeₜ₋₁ + uₜ, where uₜ is another error with mean 0 that is serially independent and uncorrelated with e or X. • −1 < ρ < 1 (or the process is explosive) • uₜ is called the innovation in e because it is the new component of e each period. • Serially correlated: E(eₜeₜ₋₁) = ρ·var(eₜ).

  13. How to find ρ • Estimate it as ρ̂ = 1 − DW/2. • We can do this in Excel. • Fancier procedures: Cochrane-Orcutt, Hildreth-Lu, and others. • A good regression program will calculate ρ automatically.

  14. Fixing Serial Correlation • Suppose ρ is known. Then “difference” the model: Yₜ − ρYₜ₋₁ = β₀(1−ρ) + β₁(Xₜ − ρXₜ₋₁) + (eₜ − ρeₜ₋₁) Or Yₜ − ρYₜ₋₁ = β₀(1−ρ) + β₁(Xₜ − ρXₜ₋₁) + uₜ • uₜ is a “well behaved” error. • The differenced model yields unbiased coefficients and unbiased standard errors. • See example.
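The differencing step amounts to building two transformed series and running OLS on them. A Python sketch, treating ρ as known (ρ = 0.5 and the data are illustrative; in practice ρ would be estimated, e.g. as 1 − DW/2):

```python
# Quasi-differencing with a known rho, to remove 1st-order serial correlation.
rho = 0.5
Y = [10.0, 12.0, 13.0, 15.0, 16.0]
X = [1.0, 2.0, 3.0, 4.0, 5.0]

# The first observation is lost because Y_1 has no predecessor.
Y_star = [Y[t] - rho * Y[t - 1] for t in range(1, len(Y))]
X_star = [X[t] - rho * X[t - 1] for t in range(1, len(X))]

# Now regress Y_star on X_star by OLS: the intercept estimates b0*(1 - rho),
# the slope estimates b1, and the error u_t is serially independent.
print(Y_star)  # [7.0, 7.0, 8.5, 8.5]
print(X_star)  # [1.5, 2.0, 2.5, 3.0]
```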

  15. Heteroscedasticity • Strange name! Greek for “different variances.” • Violation of last assumption about residual: same variance for each error term. • Can occur with any kind of data.

  16. Heteroscedasticity — Possible Causes • Wrong functional form. • Var(e) correlated with an included X variable on the right side of the regression. • E(var(e), X) ≠ 0, NOT E(e, X) ≠ 0

  17. Heteroscedasticity — Importance • Regression coefficients (the marginal effects) are unbiased. • BUT their standard errors are biased. • Direction of bias not usually known. • Confidence levels, p-values, t statistics not reliable.

  18. Model with Heteroscedasticity • Yₜ = β₀ + β₁Xₜ + eₜ • Suppose var(eₜ) = σ²Xₜ². • Var(e) is different for each observation.

  19. Fixing Heteroscedasticity — Weighted Least Squares • Observations with smaller error variance are “better.” Give them more weight when estimating the model. • Weighted Least Squares (WLS): multiply observations by weighting factors that equalize the variance. (1/Xₜ)Yₜ = (1/Xₜ)β₀ + (1/Xₜ)β₁Xₜ + (1/Xₜ)eₜ • Var((1/Xₜ)eₜ) = (1/Xₜ²)σ²Xₜ² = σ²

  20. Calculating WLS • Suppose the form of the heteroscedasticity is known, e.g., need to weight by 1/Xₜ. • You just need to create new variables. (1/Xₜ)Yₜ = β₀(1/Xₜ) + β₁ + uₜ, where uₜ = (1/Xₜ)eₜ. • The intercept in WLS is β₁; the slope on 1/X is β₀. • “Well behaved” error term, yields unbiased coefficients and unbiased standard errors.
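The WLS recipe above is just creating the weighted variables and running OLS on them. A Python sketch with made-up data constructed to satisfy Y = 0.5 + 2X exactly, so the weighted regression should recover slope β₀ = 0.5 and intercept β₁ = 2:

```python
# WLS for var(e_t) = sigma^2 * X_t^2: divide each observation by X_t, then OLS.
Y = [2.5, 4.5, 6.5, 8.5, 10.5]  # illustrative data on the line Y = 0.5 + 2*X
X = [1.0, 2.0, 3.0, 4.0, 5.0]

y_w = [y / x for y, x in zip(Y, X)]  # weighted dependent variable Y_t / X_t
x_w = [1.0 / x for x in X]           # weighted regressor 1 / X_t

# OLS on the transformed data: the intercept estimates b1, the slope estimates b0.
n = len(Y)
mx, my = sum(x_w) / n, sum(y_w) / n
slope = sum((a - mx) * (b - my) for a, b in zip(x_w, y_w))
slope /= sum((a - mx) ** 2 for a in x_w)
intercept = my - slope * mx

print(slope, intercept)  # close to 0.5 and 2.0
```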

  21. ARCH models • AutoRegressive Conditional Heteroscedasticity model • Regression model with serial correlation (“autoregressive”) AND heteroscedasticity. • Used to model volatility, i.e., variance, of returns.

  22. ARCH models • Sometimes you want to model volatility itself (e.g., it’s an input to an option pricing model). • Volatility can change over time, periods of high and low volatility. • ARCH describes this process.

  23. Formulation of ARCH model • Yₜ = β₀ + β₁Xₜ + eₜ • var(eₜ) = α₀ + α₁eₜ₋₁². • 1st-order ARCH process. • Can estimate the β's and α's and perform WLS.
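A 1st-order ARCH error can be simulated in a few lines to see that the conditional variance moves with the previous squared error. The α values below are illustrative, not from the lecture:

```python
import random

random.seed(1)

# Simulate a 1st-order ARCH error: var(e_t) = a0 + a1 * e_{t-1}^2.
a0, a1, n = 0.2, 0.5, 10000
e, prev = [], 0.0
for _ in range(n):
    sigma2 = a0 + a1 * prev ** 2          # conditional variance this period
    prev = (sigma2 ** 0.5) * random.gauss(0.0, 1.0)
    e.append(prev)

# The unconditional variance solves v = a0 + a1*v, i.e. v = a0/(1 - a1) = 0.4.
var = sum(x ** 2 for x in e) / n
print(var)  # close to 0.4
```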
