
Lecture 23: More autoregression




  1. Lecture 23:More autoregression April 14, 2014

  2. Question Which of the following is the best answer:
  • The Durbin-Watson statistic should not be relied upon to evaluate regression models that use lagged values of the response (Y) as explanatory variables.
  • With a high enough order, a polynomial trend model will obtain a large R2 for any time series.
  • We can only forecast an AR(1) model for one period, because to forecast Yt+2 we would need to know Yt+1, which we do not know.
  • None of the above.
  • More than one of the above.

  3. Question
  • The Durbin-Watson statistic should not be relied upon to evaluate regression models that use lagged values of the response (Y) as explanatory variables.
  • True: you can use it to approximate the first-order autocorrelation, but it wouldn’t make sense to use it in an autoregressive model.
  • With a high enough order, a polynomial trend model will obtain a large R2 for any time series.
  • True: a great example of obtaining a really high R2 that will probably make really poor forecasts. Don’t just look at R2.
  • We can only forecast an AR(1) model for one period because to forecast Yt+2 we would need to know Yt+1, which we do not know.
  • False: you can substitute your forecasted value of Yt+1 in to forecast Yt+2. Note this is really the reason forecast uncertainty blows up as you forecast further into the future.
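The substitution trick in the last bullet is easy to see in code. A minimal sketch of iterated multi-step forecasting with an AR(1) model (the coefficients b0 and b1 are made-up illustration values, not estimates from any dataset in the lecture):

```python
def ar1_forecast(y_last, b0, b1, steps):
    """Forecast `steps` periods ahead by feeding each forecast back in."""
    forecasts = []
    y = y_last
    for _ in range(steps):
        y = b0 + b1 * y      # Y-hat_{t+1} = b0 + b1 * Y_t
        forecasts.append(y)  # the forecast becomes the next "observation"
    return forecasts

# Two periods ahead from Y_t = 10 with b0 = 1, b1 = 0.5:
# Y-hat_{t+1} = 1 + 0.5*10 = 6, then Y-hat_{t+2} = 1 + 0.5*6 = 4
print(ar1_forecast(10.0, 1.0, 0.5, 2))  # [6.0, 4.0]
```

Because each step reuses a forecast rather than an observed value, errors compound, which is why the forecast intervals widen with the horizon.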

  4. Administrative • Quiz 5 • How was it? • Exam 3 a week from today • Will cover all forecasting methods through Wednesday

  5. Exam 2: Results • Median ≈ Mean ≈ 70 • SD ≈ 13.7

  6. Last time • Higher order AR models • First (and higher) order differences • Transformation of the data • Hence you’ll need to transform it back to make it usable for a forecast.

  7. Question Using the cpi.xls dataset, forecast CPI in December 2009 using an AR(1) model of the change in CPI (ΔCPI) • 217.31 • 217.58 • 0.42 • 0.46
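The mechanics of this question, forecasting with an AR(1) fit to first differences and then transforming back to the level, can be sketched as follows. The series and the resulting coefficients here are synthetic; this is not the cpi.xls data from the slide:

```python
import numpy as np

# Made-up level series standing in for CPI.
y = np.array([100.0, 100.5, 101.1, 101.4, 102.0, 102.3])
dy = np.diff(y)  # first differences, Delta Y_t

# Fit Delta Y_t = b0 + b1 * Delta Y_{t-1} by ordinary least squares.
X = np.column_stack([np.ones(len(dy) - 1), dy[:-1]])
b0, b1 = np.linalg.lstsq(X, dy[1:], rcond=None)[0]

# One-step-ahead forecast of the difference, then undifference:
dy_hat = b0 + b1 * dy[-1]
y_hat = y[-1] + dy_hat  # add the forecasted change back to the last level
print(round(float(y_hat), 3))
```

The key step is the last line: the model forecasts the change, so you must add it to the most recent observed level to get a usable forecast of Y itself.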

  8. Forecasting methods Many types, but the two we’re addressing in the course: • Moving Average (MA) methods • Regression (AR) methods • It’s NOT that MA methods are forecasting and Regression methods are regression. They’re two ways to forecast. • If the course were a little longer, we’d cover ARMA and ARIMA methods – the combination of MA and AR methods.

  9. Stationarity • Time series regression models assume the process is stationary. • “Strong” stationarity: the entire joint distribution of the stochastic process is unchanged by shifts in time. • “Weak” stationarity: means/variances/covariances don’t change over time.
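The two definitions were shown as formulas on the slide; a standard way to write them out (not verbatim from the slide) is:

```latex
\textbf{Strong stationarity:} for any times $t_1,\dots,t_k$ and any shift $h$,
\[
  (Y_{t_1},\dots,Y_{t_k}) \overset{d}{=} (Y_{t_1+h},\dots,Y_{t_k+h}).
\]
\textbf{Weak stationarity:} for all $t$ and all lags $s$,
\[
  \mathbb{E}[Y_t]=\mu, \qquad \operatorname{Var}(Y_t)=\sigma^2, \qquad
  \operatorname{Cov}(Y_t, Y_{t-s}) = \gamma(s),
\]
where $\gamma(s)$ depends only on the lag $s$, not on $t$.
```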

  10. Stationarity Slight problem: most things that we’ll want to forecast aren’t stationary… Two main types of non-stationarity:
  • Trend
  • Deterministic: a nonrandom function of time. E.g., inflation increases by 0.1% per quarter.
  • Stochastic: varies over time. E.g., inflation may exhibit substantial increases over time followed by a decrease, … then an increase….
  • Breaks
  • When the process changes/shifts at a point in time (often due to changes in policy, bubbles, etc.).
  • We don’t have time to deal with this one…

  11. Stationarity How do we deal with it? Often we’ll try to transform the data or model to make it stationary in two main ways:
  • Trend-stationary: de-trend (and/or de-seasonalize) the data by removing the trend component (fitting a trend line and subtracting it out prior to fitting a model) or by including time as an independent variable.
  • Difference-stationary: transform the data by looking at the period-to-period differences (first differences).
  • How do you tell whether you should de-trend or difference? We won’t do much of it; one way is the unit-root test.
  • Both are common, but many things that you’ll be interested in (stock prices, etc.) are often difference-stationary – so differences become very useful.
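The two transformations above can be sketched on a made-up series with a linear trend (the series and its parameters are purely illustrative):

```python
import numpy as np

# Simulate a series with a deterministic linear trend plus noise.
t = np.arange(20)
rng = np.random.default_rng(0)
y = 5.0 + 0.3 * t + rng.normal(0, 0.1, size=t.size)

# (1) Trend-stationary approach: fit a linear trend, subtract it out.
slope, intercept = np.polyfit(t, y, 1)
detrended = y - (intercept + slope * t)

# (2) Difference-stationary approach: take first differences.
differenced = np.diff(y)

# After either transform the systematic trend is gone:
print(detrended.mean())    # ~0 (OLS residuals sum to zero)
print(differenced.mean())  # ~0.3, the per-period trend increment
```

Note the interpretations differ: detrending leaves deviations around the trend line, while differencing converts the trend into a roughly constant mean change per period.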

  12. AR(p) models Recall the AR(p) regression model: Yt = β0 + β1Yt−1 + β2Yt−2 + … + βpYt−p + εt. Note that this still uses only the response data.
  • The data are sometimes called indistinguishable because the only thing we know about them is the temporal order of the Y values.
  • Most data that we might be interested in are distinguishable.
  • We know the rate of economic growth at time t (and t−1, etc.), but we also know the inflation rate at time t (and t−1, etc.).
  • We have additional predictors that we could use to make a forecast of Yt.

  13. ADL models Most of the time we have a good theoretical reason to believe something other than Yt−1 might influence the value of Yt. ADL = Autoregressive Distributed Lag.
  • Basically it’s the multiple regression version of AR models: e.g., Yt = β0 + β1Yt−1 + … + βpYt−p + δ1Xt−1 + … + δqXt−q + εt.
  • Now we can include other explanatory variables (X1, X2, etc.).

  14. ADL models Suppose you sell Land Cruisers (LCs) and want to determine the relationship between the number of inquiries received and the number of LCs sold in a given week.
  • Assume you record the number of inquiries you receive every week over the past year.
  • There’s probably some lag between the inquiry and whether someone buys the LC: for a large purchase, a buyer might inquire and then take more than a week to decide.
  • Hence we need to add a lag of the inquiry variable to a model of estimated LC sales.

  15. ADL models
  • Hence the ADL allows lags in all the variables.
  • Autoregressive: it allows lags of the response variable.
  • Distributed Lag: it distributes the effect of the explanatory variables across multiple lags.
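An ADL(1,1) version of the Land Cruiser example can be sketched with simulated data; the variable names and the true coefficients (0.5 on lagged sales, 0.2 on lagged inquiries) are made up for illustration:

```python
import numpy as np

# Simulate: sales_t = 1.0 + 0.5*sales_{t-1} + 0.2*inquiries_{t-1} + noise
rng = np.random.default_rng(42)
n = 200
inquiries = rng.poisson(10, size=n).astype(float)  # X: weekly inquiries
sales = np.zeros(n)                                # Y: weekly sales
for t in range(1, n):
    sales[t] = (1.0 + 0.5 * sales[t - 1]
                + 0.2 * inquiries[t - 1] + rng.normal(0, 0.5))

# Design matrix: intercept, lagged Y, lagged X (drop the first observation).
X = np.column_stack([np.ones(n - 1), sales[:-1], inquiries[:-1]])
b0, b1, d1 = np.linalg.lstsq(X, sales[1:], rcond=None)[0]
print(round(b1, 2), round(d1, 2))  # estimates should be near 0.5 and 0.2
```

Structurally this is just multiple regression: lagging the columns before stacking them is the only step that distinguishes it from an ordinary MRM fit.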

  16. ADL Assumptions
  • Essentially they’re the same as for the MRM, except that the ADL requires stationarity in Y and all Xs (and all their lags).
  • It also essentially requires that autocorrelation goes to zero at sufficiently large lags.
  • You can still fit an ADL if you violate these assumptions, just as you can fit an MRM even if you violate the MRM’s assumptions.
  • But realize you’re violating the assumptions and interpret the results with caution.
