Econ 427 lecture 6 slides



  1. Econ 427 lecture 6 slides Selecting forecasting models—alternative criteria

  2. Forecast Model Selection • What are we trying to do? • Find the model with the best likely forecast performance • A practical approach: • Find the model with the smallest out-of-sample 1-step-ahead mean squared prediction error.
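To make the practical approach concrete, here is a minimal Python sketch (my illustration, not from the lecture) of estimating an AR(1) by OLS on an expanding window and scoring its out-of-sample 1-step-ahead forecasts; the simulated series and the choice of forecast origin are assumptions made for illustration.

    # Minimal sketch: out-of-sample 1-step-ahead MSPE with an expanding window.
    # The simulated series and the split point are illustrative assumptions.
    import numpy as np

    rng = np.random.default_rng(0)
    T = 200
    y = np.zeros(T)
    for t in range(1, T):
        y[t] = 0.6 * y[t - 1] + rng.normal()   # toy AR(1) data

    split = 150                                # first forecast origin
    errors = []
    for t in range(split, T - 1):
        # re-estimate y_s = b0 + b1*y_{s-1} by OLS using data through time t
        X = np.column_stack([np.ones(t), y[:t]])
        beta, *_ = np.linalg.lstsq(X, y[1:t + 1], rcond=None)
        y_hat = beta[0] + beta[1] * y[t]       # 1-step-ahead forecast of y[t+1]
        errors.append(y[t + 1] - y_hat)

    mspe = np.mean(np.square(errors))
    print(f"out-of-sample 1-step-ahead MSPE: {mspe:.4f}")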

  3. EViews output • Of course, the problem is that often we only have in-sample data.

  4. Mean Squared Error • One approach would be to pick the model that minimizes in-sample mean squared error: MSE = Σ e_t² / T, where the e_t are the fitted residuals • This is the same as minimizing the sum of squared residuals, right? (Yes: T is the same for every candidate model, so the two rankings coincide.)

  5. R-squared • Also the same as maximizing R² (R-squared): R² = 1 − Σ e_t² / Σ (y_t − ȳ)² • What is the problem with these approaches?

  6. Critique of MSE as a Model Selection Tool • In-sample overfitting and data mining • Technically, in-sample MSE is a (downward) biased estimator of the out-of-sample 1-step-ahead prediction error variance • The bias worsens as you add variables • How to “fix” this problem? • Penalize for the degrees of freedom used up in estimation (the number of included variables)
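A small simulation (my illustration, not the lecture's) makes the overfitting point: regress a series on its one true predictor plus increasing numbers of pure-noise regressors, and in-sample MSE falls mechanically even though the extra variables carry no forecasting information. The data-generating process below is an assumption.

    # Sketch: in-sample MSE falls mechanically as irrelevant regressors are
    # added. The data-generating process is an illustrative assumption.
    import numpy as np

    rng = np.random.default_rng(1)
    T = 100
    x1 = rng.normal(size=T)
    y = 1.0 + 0.5 * x1 + rng.normal(size=T)    # true model uses only x1
    junk = rng.normal(size=(T, 10))            # regressors unrelated to y

    for k_junk in range(0, 11, 2):
        X = np.column_stack([np.ones(T), x1, junk[:, :k_junk]])
        beta, *_ = np.linalg.lstsq(X, y, rcond=None)
        resid = y - X @ beta
        print(f"{2 + k_junk:2d} params -> in-sample MSE {np.mean(resid**2):.4f}")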

  7. Some alternatives that do this • Adjusted R-squared: adjusted R² = 1 − [Σ e_t² / (T − k)] / [Σ (y_t − ȳ)² / (T − 1)], where k is the number of estimated parameters; equivalently, adjusted R² = 1 − (1 − R²)(T − 1)/(T − k)

  8. Some alternatives that do this • Akaike Information Criterion: AIC = e^(2k/T) · Σ e_t² / T • Schwarz Information Criterion: SIC = T^(k/T) · Σ e_t² / T
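A short helper (my sketch; the function and variable names are my own) that computes these penalized criteria from a fitted model's residuals, using the sum-of-squares forms above:

    # Sketch: adjusted R-squared, AIC, and SIC from OLS residuals, using the
    # penalized sum-of-squares forms on the slides.
    # T = sample size, k = number of estimated parameters.
    import numpy as np

    def selection_criteria(y, resid, k):
        T = len(y)
        ssr = np.sum(resid ** 2)
        sst = np.sum((y - y.mean()) ** 2)
        r2_adj = 1 - (ssr / (T - k)) / (sst / (T - 1))
        aic = np.exp(2 * k / T) * ssr / T
        sic = T ** (k / T) * ssr / T
        return r2_adj, aic, sic

Among candidate models, you would pick the one with the highest adjusted R², or the lowest AIC or SIC.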

  9. How do these penalize degrees of freedom? • Relative to plain MSE, adjusted R² penalizes through the factor T/(T − k) (≈ e^(k/T)), AIC through e^(2k/T), and SIC through T^(k/T) • Since T^(k/T) > e^(2k/T) whenever ln T > 2 (i.e., T ≥ 8), SIC imposes the harshest degrees-of-freedom penalty and adjusted R² the mildest

  10. Evaluating Model Selection Criteria • Consistency • When the true model is one of the ones evaluated, the probability of selecting that one approaches 1 as sample size becomes large. • When the true model is NOT one of the ones evaluated, the probability of selecting the best approximation among candidate models approaches 1 as sample size becomes large. • Are any of these consistent?
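One way to see what consistency means in practice is a small Monte Carlo (entirely my illustration; the AR(1) truth, the candidate orders, and the replication counts are assumptions): generate data from a known AR(1), let AIC and SIC each pick an AR order from 0 to 4, and track how often each picks the true order as T grows.

    # Sketch: Monte Carlo on criterion-based AR order selection.
    # True model is AR(1) with coefficient 0.6; settings are illustrative.
    import numpy as np

    rng = np.random.default_rng(2)

    def ar_order_picks(y, max_p=4):
        T_eff = len(y) - max_p                 # common estimation sample
        scores = {}
        for p in range(max_p + 1):
            # lag matrix: column j holds y_{t-j} for t = max_p .. len(y)-1
            X = np.column_stack([np.ones(T_eff)] +
                                [y[max_p - j:len(y) - j] for j in range(1, p + 1)])
            target = y[max_p:]
            beta, *_ = np.linalg.lstsq(X, target, rcond=None)
            ssr = np.sum((target - X @ beta) ** 2)
            k = p + 1                          # intercept + p lag coefficients
            scores[p] = (np.exp(2 * k / T_eff) * ssr / T_eff,   # AIC
                         T_eff ** (k / T_eff) * ssr / T_eff)    # SIC
        aic_pick = min(scores, key=lambda p: scores[p][0])
        sic_pick = min(scores, key=lambda p: scores[p][1])
        return aic_pick, sic_pick

    for T in (50, 200, 1000):
        hits = np.zeros(2)
        for _ in range(200):
            y = np.zeros(T + 50)
            for t in range(1, T + 50):
                y[t] = 0.6 * y[t - 1] + rng.normal()
            y = y[50:]                         # drop burn-in
            a, s = ar_order_picks(y)
            hits += [a == 1, s == 1]
        print(f"T={T}: P(AIC picks AR(1))={hits[0]/200:.2f}, "
              f"P(SIC picks AR(1))={hits[1]/200:.2f}")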

  11. Evaluating Model Selection Criteria • Asymptotic efficiency • As the sample size becomes large, the criterion chooses a sequence of models whose 1-step-ahead forecast error variances approach that of the true model at least as fast as under any other selection criterion • Do any of our candidate criteria meet that? • What to do in practice?
