Presentation Transcript


  1. Combining forecast densities from VARs with uncertain instabilities
  Anne Sofie Jore (Norges Bank), James Mitchell (NIESR, London), Shaun Vahey (Norges Bank, Melbourne Business School and RBNZ)
  CIRANO Workshop: 5-6 Oct. 2007

  2. Forecast Combination
  • Clark and McCracken (2007) find that simple averages of VAR forecasts improve RMSE
  • Averaging circumvents unknown structural breaks or "uncertain instabilities"
  • This finding is reassuring for decision makers with quadratic loss functions
  • For more general but unknown loss functions, the effectiveness of simple averages for VARs has not been studied
  • We generalize CM to study the forecast densities produced by averaging VAR densities

  3. The bottom line
  • We find a substantial difference between the accuracy of the combined forecast densities before and after the US Great Moderation
  • In the evaluation period 1970-84, simple combinations perform reasonably well
  • But over 1985-2005, equal-weight density combination performs poorly; in contrast, CM showed that RMSEs from simple averages performed well
  • If greater weight is given to models that allow for the shifts in volatilities associated with the Great Moderation, predictive density accuracy improves considerably

  4. Illustration

  5. Density forecast combination
  • The rest of the paper provides more formal evidence that evidence-based weights deliver better density forecasts than equal weights
  • To facilitate comparison with CM we use the same real-time data and estimate the same VAR models in output, prices and the short-term interest rate
  • The models include both full-sample and rolling window VARs, first-differenced VARs, de-trended VARs and AR benchmarks (a rolling-window sketch follows this slide)
  • First differencing, de-trending and rolling window models may offer some protection against structural breaks
  • Combining models with varying degrees of adaptability to structural breaks can help; see Pesaran and Timmermann (2007)
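The following is a minimal sketch of one such "expert": a rolling-window VAR in output growth, inflation and the short-term interest rate, estimated with statsmodels. The window length, lag order and the synthetic data frame `df` are illustrative assumptions, not the paper's specification.

```python
# Sketch of a rolling-window VAR expert: fit on the most recent `window`
# observations and return the ingredients of a Gaussian predictive density.
import numpy as np
import pandas as pd
from statsmodels.tsa.api import VAR

def rolling_var_forecast(df: pd.DataFrame, window: int = 40, lags: int = 2):
    """Fit a VAR on the last `window` observations and return the
    one-step-ahead point forecast and the residual covariance."""
    sample = df.iloc[-window:]                      # rolling estimation window
    res = VAR(sample).fit(lags)                     # VAR(lags) by OLS
    point = res.forecast(sample.values[-lags:], steps=1)[0]
    return point, res.sigma_u                       # mean vector, covariance

# Usage with synthetic data standing in for a real-time vintage
rng = np.random.default_rng(0)
df = pd.DataFrame(rng.normal(size=(120, 3)),
                  columns=["gdp_growth", "inflation", "interest_rate"])
mean, cov = rolling_var_forecast(df)
```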

  6. Decision Maker's Expert Combination
  • Following Morris (1974, 1977) and Winkler (1981) we adopt an "expert" combination approach
  • We define i = 1, ..., N experts, where each expert produces one of the N VARs (and univariate benchmarks)
  • A decision maker (DM) combines the densities from two or more experts based on the fit of the density forecasts over the decision maker's evaluation period (DMEP)
  • The experts do not interact prior to combination by the DM
  • The DM learns about the predictive densities by combining prior information with the evidence presented by the experts in the DMEP using Bayes rule

  7. Linear opinion pool
  • The DM constructs the combined densities by a linear opinion pool method (see Mitchell & Hall, 2005):
    $p(y_{\tau}) = \sum_{i=1}^{N} w_i \, g(y_{\tau} \mid I_{i,\tau})$
    where $g(y_{\tau} \mid I_{i,\tau})$ are the forecast densities of expert i of variable $y_{\tau}$ using information set $I_{i,\tau}$, which given publication delays includes information dated earlier than $\tau$
  • The $w_i$ are a set of non-negative weights that sum to unity
  • The linear opinion pool delivers a more flexible distribution than each of the individual densities and accounts for model uncertainty (see the sketch below)
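As a concrete illustration of the opinion pool, the sketch below mixes two expert densities with fixed weights. The Gaussian form of the expert densities and all numbers are assumptions made purely for illustration.

```python
# Sketch of the linear opinion pool: the combined predictive density is a
# weighted mixture of the experts' densities.
import numpy as np
from scipy import stats

def opinion_pool_pdf(y, means, stds, weights):
    """Evaluate p(y) = sum_i w_i * g_i(y) for Gaussian expert densities."""
    weights = np.asarray(weights, dtype=float)
    assert np.all(weights >= 0) and np.isclose(weights.sum(), 1.0)
    return sum(w * stats.norm.pdf(y, m, s)
               for w, m, s in zip(weights, means, stds))

# Two experts with different volatility regimes; equal weights for illustration
grid = np.linspace(-5, 10, 301)
pooled = opinion_pool_pdf(grid, means=[2.5, 3.0], stds=[0.8, 2.5],
                          weights=[0.5, 0.5])
```

Note that even with Gaussian experts, the pooled density is a mixture and so can be fat-tailed or multi-modal, which is the flexibility the slide refers to.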

  8. Methods of choosing $w_i$
  • Equal weights
  • The DM ignores the information in the data (the evaluation period) and simply assigns an equal prior weight to each expert in all periods: $w_i = 1/N$
  • We consider each of the pairwise combinations considered by CM (2007), and a combination taken across all experts
  • De-trending (exponential smoothing) was found to work particularly well, as it accounts for non-stationarities

  9. Methods of choosing $w_i$ (cont.)
  • Evidence-based weights
  • The DM combines the information yielded from the fit of the experts' forecast densities with prior information
  • We assume the DM has non-informative priors over the experts
  • We use the log score (logS) to measure the fit of the experts' densities through the DMEP (a sketch of the weight calculation follows this slide)
  • These predictive weights minimize the Kullback-Leibler Information Criterion (KLIC) 'distance' between the combined density forecast and the true but unknown density of the variable being forecast
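A minimal sketch of how evidence-based weights could be computed: with non-informative priors, each expert's weight is proportional to the exponentiated sum of its log scores over the DMEP. The toy log-score matrix is an illustrative assumption.

```python
# Sketch of evidence-based (log-score) weights.
import numpy as np

def log_score_weights(log_scores):
    """log_scores: array (T, N) of log predictive densities evaluated at the
    realizations, for T evaluation-period observations and N experts.
    Returns weights proportional to exp(sum of log scores), normalized."""
    total = np.asarray(log_scores).sum(axis=0)
    total -= total.max()                  # guard against numerical overflow
    w = np.exp(total)
    return w / w.sum()

# Toy example: the first expert fits the evaluation period better on average
ls = np.array([[-1.2, -1.5],
               [-0.9, -1.4],
               [-1.1, -1.3]])
print(log_score_weights(ls))              # most weight on the first expert
```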

  10. Evaluation of forecast density combinations
  • For the last observation forecast, the DM's (post-realization) weights across the N models capture the fit of each expert's density
  • We also follow CM (2007) and evaluate the forecast densities over two evaluation periods: 1970Q1-1984Q4 and 1985Q1-2005Q4
  • Density forecasts are evaluated by testing whether the probability integral transforms of the forecast density with respect to the realization of the variable are uniform
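A minimal sketch of the PIT-based evaluation step follows. The Kolmogorov-Smirnov test used here is just one possible uniformity test and is an assumption for illustration, not necessarily the test applied in the paper.

```python
# Sketch: compute probability integral transforms (PITs) of the realizations
# under the combined forecast densities and test them for uniformity.
import numpy as np
from scipy import stats

def pit_uniformity(realizations, forecast_cdfs):
    """forecast_cdfs: one callable per forecast giving the CDF of that
    period's combined predictive density. Returns the PITs and a KS test
    against the uniform distribution on [0, 1]."""
    pits = np.array([cdf(y) for y, cdf in zip(realizations, forecast_cdfs)])
    return pits, stats.kstest(pits, "uniform")

# Toy usage: Gaussian predictive densities with the correct mean and spread
rng = np.random.default_rng(1)
y = rng.normal(2.0, 1.0, size=60)
cdfs = [lambda v, m=2.0, s=1.0: stats.norm.cdf(v, m, s)] * 60
pits, ks = pit_uniformity(y, cdfs)
```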

  11. The real-time US data
  • We use the same real-time data as CM (2007) and, like them, run (stationary) VARs in output, inflation and the interest rate
  • We ignore any uncertainty associated with the trend
  • The raw data are from the Federal Reserve Bank of Philadelphia's Real-Time Data Set for Macroeconomists (Croushore & Stark, 2001)
  • All experts produce forecasts using the sequence of data vintages starting in 1965Q4 and ending in 2005Q4, with the raw data dating back to 1955
  • To evaluate the forecasts the DM uses the second estimates as "final" data
  • Hence DM combination using evidence-based weights introduces a two-quarter lag; like CM, we examine robustness to this definition

  12. GDP growth h=0Q: 1970-84

  13. GDP growth h=0Q: 1985-2005

  14. The DM’s post realization weights

  15. Summarizing the results
  • Whereas CM (2007) show that the RMSEs from simple averages can beat those from benchmark univariate models, we show that neither approach produces accurate forecast densities over the period spanning the Great Moderation
  • Evidence-based (EB) weights provide more accurate density forecasts by favoring models that allow for the shifts in volatilities associated with the Great Moderation
  • EB weights work by essentially delivering a flexible non-linear model

  16. Conclusion
  • We have revisited the real-time application of CM (2007) but generalized it to study density forecasts of GDP growth, inflation and the interest rate
  • Future work:
  • Add in DSGE models
  • Add in models with breaks
  • Explore the possibility of getting improved VAR forecasts using first-release data (Corradi, Fernandez and Swanson, 2007), and work out predictive weights accounting for model uncertainty
