
Non-Seasonal Box-Jenkins Models



  1. Non-Seasonal Box-Jenkins Models

  2. Four-step iterative procedure
• Model Identification
• Parameter Estimation
• Diagnostic Checking
• Forecasting
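To make the four steps concrete, here is a minimal end-to-end sketch using Python's statsmodels; the simulated series y and the ARIMA(1,1,0) order are illustrative assumptions, not part of the original slides.

```python
# Minimal sketch of the four Box-Jenkins steps (illustrative data and order).
import numpy as np
from statsmodels.tsa.arima.model import ARIMA
from statsmodels.tsa.stattools import acf, pacf
from statsmodels.stats.diagnostic import acorr_ljungbox

rng = np.random.default_rng(0)
y = rng.standard_normal(200).cumsum()        # example non-stationary series

# Step 1: identification -- inspect SAC/SPAC of the differenced series
print(acf(np.diff(y), nlags=5))
print(pacf(np.diff(y), nlags=5))
# Step 2: estimation -- fit a tentative model by maximum likelihood
fit = ARIMA(y, order=(1, 1, 0)).fit()
# Step 3: diagnostic checking -- residuals should look like white noise
print(acorr_ljungbox(fit.resid, lags=[10]))
# Step 4: forecasting
print(fit.forecast(steps=5))
```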

  3. Step One: Model Identification

  4. Model Identification
• Stationarity
• Theoretical Autocorrelation Function (TAC)
• Theoretical Partial Autocorrelation Function (TPAC)
• Sample Partial Autocorrelation Function (SPAC)
• Sample Autocorrelation Function (SAC)

  5. Stationarity (I) • A sequence of jointly dependent random variables is called a time series

  6. Stationarity (II) • A process {y_t} is (weakly) stationary if its mean and variance are constant over time and the covariance between observations depends only on the lag between them:
E(y_t) = μ, Var(y_t) = σ_y^2, Cov(y_t, y_{t-s}) = γ_s for all t.

  7. Stationarity (III) • Example: the white noise series {e_t}, where the e_t are iid N(0, σ_e^2). Note that
E(e_t) = 0, Var(e_t) = σ_e^2, Cov(e_t, e_s) = 0 for t ≠ s,
so a white noise series is stationary.

  8. Stationarity (IV) • Three basic Box-Jenkins models for a stationary time series {y_t}:
(1) Autoregressive model of order p, AR(p):
y_t = δ + φ_1 y_{t-1} + φ_2 y_{t-2} + … + φ_p y_{t-p} + e_t,
i.e., y_t depends on its p previous values.
(2) Moving average model of order q, MA(q):
y_t = δ + e_t − θ_1 e_{t-1} − θ_2 e_{t-2} − … − θ_q e_{t-q},
i.e., y_t depends on q previous random error terms.

  9. Stationarity (V) • Three basic Box-Jenkins models for a stationary time series {y_t}:
(3) Autoregressive-moving average model of order p and q, ARMA(p,q):
y_t = δ + φ_1 y_{t-1} + … + φ_p y_{t-p} + e_t − θ_1 e_{t-1} − … − θ_q e_{t-q},
i.e., y_t depends on its p previous values and q previous random error terms.
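All three models can be simulated with statsmodels' ArmaProcess; the parameter values below are illustrative. Note the lag-polynomial sign convention: ar=[1, -0.7] encodes y_t = 0.7 y_{t-1} + e_t, and ma=[1, -0.4] encodes y_t = e_t − 0.4 e_{t-1}.

```python
# Simulating the three basic Box-Jenkins models (illustrative parameters).
from statsmodels.tsa.arima_process import ArmaProcess

ar1  = ArmaProcess(ar=[1, -0.7], ma=[1])        # AR(1) with phi_1 = 0.7
ma1  = ArmaProcess(ar=[1], ma=[1, -0.4])        # MA(1) with theta_1 = 0.4
arma = ArmaProcess(ar=[1, -0.7], ma=[1, -0.4])  # ARMA(1,1)

y = arma.generate_sample(nsample=500)           # one simulated realization
print(arma.isstationary, arma.isinvertible)     # True True for these values
```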

  10. AR(1) (I) • Simple AR(1) process without drift:
y_t = φ_1 y_{t-1} + e_t.
Repeated substitution gives y_t = e_t + φ_1 e_{t-1} + φ_1^2 e_{t-2} + … = Σ_{i=0}^∞ φ_1^i e_{t-i}.

  11. AR(1) (II) • Now,
E(y_t) = 0, Var(y_t) = σ_e^2 / (1 − φ_1^2), Cov(y_t, y_{t-s}) = φ_1^s σ_e^2 / (1 − φ_1^2).
• Var(y_t) and Cov(y_t, y_{t-s}) are finite if and only if |φ_1| < 1, which is the stationarity requirement for an AR(1) process.
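The stationary variance formula can be checked by simulation; this is a sketch with illustrative values φ_1 = 0.7 and σ_e = 1.

```python
# Checking Var(y_t) = sigma_e^2 / (1 - phi_1^2) by simulation.
import numpy as np

rng = np.random.default_rng(1)
phi1, n = 0.7, 200_000
e = rng.standard_normal(n)
y = np.empty(n)
y[0] = e[0]
for t in range(1, n):
    y[t] = phi1 * y[t - 1] + e[t]

print(y[1000:].var())            # sample variance, burn-in discarded
print(1.0 / (1.0 - phi1**2))     # theoretical value, about 1.961
```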

  12. AR(1) (IV) • Special case: φ_1 = 1, i.e.
y_t = y_{t-1} + e_t,
which is a “random walk” process. Now, repeated substitution gives y_t = y_0 + Σ_{i=1}^t e_i. Thus,
Var(y_t) = t σ_e^2
grows without bound as t increases, so a random walk is non-stationary.

  13. AR(1) (V) • Consider the first difference
w_t = y_t − y_{t-1} = e_t,
which is stationary white noise even though y_t itself is not; y_t is a homogeneous non-stationary series. The number of times that the original series must be differenced before a stationary series results is called the order of integration.
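A short illustrative check: differencing a simulated random walk once yields a series that behaves like white noise, so its order of integration is d = 1.

```python
# One difference of a random walk is stationary (illustrative simulation).
import numpy as np

rng = np.random.default_rng(2)
y = rng.standard_normal(1000).cumsum()   # random walk: y_t = y_{t-1} + e_t
w = np.diff(y)                           # w_t = y_t - y_{t-1} = e_t
print(y.var(), w.var())                  # y's variance is large; w's is near 1
```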

  14. Theoretical Autocorrelation Function (TAC) (I) • Autoregressive (AR) processes. Consider an AR(1) process without drift:
y_t = φ_1 y_{t-1} + e_t.
Recall that γ_0 = Var(y_t) = σ_e^2 / (1 − φ_1^2) and γ_k = Cov(y_t, y_{t-k}) = φ_1^k σ_e^2 / (1 − φ_1^2).

  15. Theoretical Autocorrelation Function (TAC) (II) The autocorrelation function at lag k is
ρ_k = γ_k / γ_0 = φ_1^k, k = 1, 2, …
So for a stationary AR(1) process (|φ_1| < 1), the TAC dies down gradually as k increases.
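The closed form ρ_k = φ_1^k can be compared against statsmodels' theoretical ACF; φ_1 = 0.7 below is an illustrative value.

```python
# TAC of a stationary AR(1): rho_k = phi_1**k dies down geometrically.
import numpy as np
from statsmodels.tsa.arima_process import ArmaProcess

phi1 = 0.7
proc = ArmaProcess(ar=[1, -phi1], ma=[1])
print(proc.acf(lags=6))          # [1, 0.7, 0.49, 0.343, ...]
print(phi1 ** np.arange(6))      # identical closed form
```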

  16. Theoretical Autocorrelation Function (TAC) (III) Consider an AR(2) process without drift:
y_t = φ_1 y_{t-1} + φ_2 y_{t-2} + e_t.
The TAC satisfies
ρ_1 = φ_1 / (1 − φ_2), ρ_k = φ_1 ρ_{k-1} + φ_2 ρ_{k-2} for k ≥ 2.

  17. Theoretical Autocorrelation Function (TAC) (IV) Then the TAC dies down according to a mixture of damped exponentials and/or damped sine waves. • In general, the TAC of a stationary AR process dies down gradually as k increases.

  18. Theoretical Autocorrelation Function (TAC) (V) • Moving average (MA) processes. Consider an MA(1) process without drift:
y_t = e_t − θ_1 e_{t-1}.
Recall that γ_0 = (1 + θ_1^2) σ_e^2, γ_1 = −θ_1 σ_e^2, and γ_k = 0 for k ≥ 2.

  19. Theoretical Autocorrelation Function (TAC) (VI) Therefore the TAC of the MA(1) process is
ρ_1 = −θ_1 / (1 + θ_1^2), ρ_k = 0 for k ≥ 2.
The TAC of the MA(1) process “cuts off” after lag k = 1.

  20. Theoretical Autocorrelation Function (TAC) (VII) Consider an MA(2) process:
y_t = e_t − θ_1 e_{t-1} − θ_2 e_{t-2}.
Its TAC is
ρ_1 = (−θ_1 + θ_1 θ_2) / (1 + θ_1^2 + θ_2^2), ρ_2 = −θ_2 / (1 + θ_1^2 + θ_2^2), ρ_k = 0 for k ≥ 3,
so the TAC of an MA(2) process cuts off after 2 lags.
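The cut-off is easy to see numerically; θ_1 = 0.4 for the MA(1) and (θ_1, θ_2) = (0.5, 0.3) for the MA(2) are illustrative values.

```python
# TAC of MA(q) cuts off after lag q.
from statsmodels.tsa.arima_process import ArmaProcess

theta1 = 0.4
ma1 = ArmaProcess(ar=[1], ma=[1, -theta1])
print(ma1.acf(lags=5))               # [1, -0.3448, 0, 0, 0]
print(-theta1 / (1 + theta1**2))     # closed form for rho_1

ma2 = ArmaProcess(ar=[1], ma=[1, -0.5, -0.3])
print(ma2.acf(lags=6))               # exactly zero beyond lag 2
```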

  21. Theoretical Partial Autocorrelation Function (TPAC) (I) • Autoregressive processes. By the definition of the PAC, the kth PAC ρ_kk is the coefficient φ_k in the regression of y_t on its k previous values. For an AR(1) process this gives
ρ_11 = ρ_1 = φ_1 (at lag 1, PAC = AC) and ρ_kk = 0 for k > 1.
The TPAC of an AR(1) process “cuts off” after lag 1.

  22. Theoretical Partial Autocorrelation Function (TPAC) (II) • Moving average processes. An invertible MA(1) process (|θ_1| < 1) can be rewritten as
y_t = −θ_1 y_{t-1} − θ_1^2 y_{t-2} − θ_1^3 y_{t-3} − … + e_t,
which is a stationary AR process of infinite order. Thus, the partial autocorrelation decays towards zero as the lag increases.
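The mirror-image behavior of the TPAC can be verified with statsmodels; the parameter values are illustrative.

```python
# TPAC: cuts off for AR(1), dies down for MA(1).
from statsmodels.tsa.arima_process import ArmaProcess

ar1 = ArmaProcess(ar=[1, -0.7], ma=[1])
ma1 = ArmaProcess(ar=[1], ma=[1, -0.4])
print(ar1.pacf(lags=5))   # spike at lag 1, then zeros: cuts off
print(ma1.pacf(lags=5))   # decays gradually toward zero: dies down
```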

  23. Summary of the Behaviors of TAC and TPAC (I) Behaviors of TAC and TPAC for general non-seasonal models:
Model | TAC | TPAC
AR(p) | Dies down | Cuts off after lag p
MA(q) | Cuts off after lag q | Dies down
ARMA(p,q) | Dies down | Dies down

  24.–26. Summary of the Behaviors of TAC and TPAC (II–IV) Behaviors of TAC and TPAC for specific non-seasonal models:
Model | TAC | TPAC
AR(1) | Dies down in a damped exponential fashion | Cuts off after lag 1
AR(2) | Dies down (damped exponentials and/or sine waves) | Cuts off after lag 2
MA(1) | Cuts off after lag 1 | Dies down in a damped exponential fashion
MA(2) | Cuts off after lag 2 | Dies down (damped exponentials and/or sine waves)
ARMA(1,1) | Dies down | Dies down

  27. Sample Autocorrelation Function (SAC) (I) • For the working series z_b, z_{b+1}, …, z_n, the sample autocorrelation at lag k is
r_k = Σ_{t=b}^{n−k} (z_t − z̄)(z_{t+k} − z̄) / Σ_{t=b}^{n} (z_t − z̄)^2,
where z̄ = Σ_{t=b}^{n} z_t / (n − b + 1).

  28. Sample Autocorrelation Function (SAC) (II) • r_k measures the linear relationship between time series observations separated by a lag of k time units.
• The standard error of r_k is
s_{r_k} = sqrt[(1 + 2 Σ_{j=1}^{k−1} r_j^2) / (n − b + 1)].
• The t_{r_k} statistic is
t_{r_k} = r_k / s_{r_k}.

  29. Sample Autocorrelation Function (SAC) (III) • Behaviors of the SAC:
(1) The SAC can cut off. A spike at lag k exists in the SAC if r_k is statistically large. If |t_{r_k}| > 2 (roughly the 5% significance level), then r_k is considered to be statistically large. The SAC cuts off after lag k if there are no spikes at lags greater than k in the SAC.

  30. Sample Autocorrelation Function (SAC) (IV)
(2) The SAC dies down if this function does not cut off but rather decreases in a “steady fashion”. The SAC can die down in
(i) a damped exponential fashion,
(ii) a damped sine-wave fashion, or
(iii) a fashion dominated by either one of, or a combination of, (i) and (ii).
The SAC can die down fairly quickly or extremely slowly.
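The following sketch computes r_k, s_{r_k}, and t_{r_k} directly from the formulas on slides 27–28 and flags spikes using the |t_{r_k}| > 2 rule; the series z and the helper name sac are illustrative.

```python
# Computing the SAC, its standard error, and the t statistic (a sketch).
import numpy as np

def sac(z, max_lag):
    zbar, n = z.mean(), len(z)            # n plays the role of n - b + 1
    denom = ((z - zbar) ** 2).sum()
    r = np.array([((z[:n - k] - zbar) * (z[k:] - zbar)).sum() / denom
                  for k in range(1, max_lag + 1)])
    # s_{r_k} = sqrt((1 + 2 * sum_{j<k} r_j^2) / n)
    se = np.sqrt((1 + 2 * np.cumsum(np.r_[0.0, r[:-1] ** 2])) / n)
    return r, r / se                      # r_k and t_{r_k}

rng = np.random.default_rng(3)
z = rng.standard_normal(300)
r, t = sac(z, 10)
print(np.flatnonzero(np.abs(t) > 2) + 1)  # lags with statistically large spikes
```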

  31. Sample Autocorrelation Function (SAC) (V) • The time series values z_b, z_{b+1}, …, z_n should be considered stationary if the SAC of the time series values either cuts off fairly quickly or dies down fairly quickly. • However, if the SAC of the time series values z_b, z_{b+1}, …, z_n dies down extremely slowly, then the time series values should be considered non-stationary.

  32. Sample Partial Autocorrelation Function (SPAC) (I) • The sample partial autocorrelation at lag k is
r_kk = r_1 if k = 1,
r_kk = (r_k − Σ_{j=1}^{k−1} r_{k−1,j} r_{k−j}) / (1 − Σ_{j=1}^{k−1} r_{k−1,j} r_j) if k ≥ 2,
where r_{kj} = r_{k−1,j} − r_kk r_{k−1,k−j} for j = 1, 2, …, k−1.

  33. Sample Partial Autocorrelation Function (SPAC) (II) • r_kk may intuitively be thought of as the sample autocorrelation of time series observations separated by a lag of k time units with the effects of the intervening observations eliminated.
• The standard error of r_kk is
s_{r_kk} = 1 / sqrt(n − b + 1).
• The t_{r_kk} statistic is
t_{r_kk} = r_kk / s_{r_kk}.
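A sketch of the recursion from slide 32 (the Durbin-Levinson scheme), assuming r holds the sample autocorrelations r_1, …, r_K, for instance from the sac() helper above; spac is a hypothetical helper name.

```python
# SPAC via the recursion on slide 32 (a sketch).
def spac(r):
    K = len(r)                    # r[k-1] = sample autocorrelation at lag k
    rkk = []
    prev = []                     # [r_{k-1,1}, ..., r_{k-1,k-1}]
    for k in range(1, K + 1):
        if k == 1:
            pkk = r[0]
        else:
            num = r[k - 1] - sum(prev[j] * r[k - 2 - j] for j in range(k - 1))
            den = 1.0 - sum(prev[j] * r[j] for j in range(k - 1))
            pkk = num / den
        # update r_{k,j} = r_{k-1,j} - r_kk * r_{k-1,k-j}, then append r_kk
        prev = [prev[j] - pkk * prev[k - 2 - j] for j in range(k - 1)] + [pkk]
        rkk.append(pkk)
    return rkk

# For AR(1)-like autocorrelations the SPAC cuts off after lag 1:
print(spac([0.7, 0.49, 0.343]))   # [0.7, 0.0, 0.0]
```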

  34. Sample Partial Autocorrelation Function (SPAC) (III) • The behaviors of the SPAC are similar to those of the SAC. The only difference is that r_kk is considered to be statistically large if |t_{r_kk}| > 2 for any lag k.

  35. Sample Partial Autocorrelation Function (SPAC) (IV) • The behaviors of the SAC and the SPAC of a time series help to tentatively identify a Box-Jenkins model. • Each Box-Jenkins model is characterized by its theoretical autocorrelation (TAC) function and its theoretical partial autocorrelation (TPAC) function.

  36. Step Two: Parameter Estimation

  37. Parameter Estimation • Given n observations y_1, y_2, …, y_n, the likelihood function L is defined to be the probability of obtaining the data actually observed. • For non-seasonal Box-Jenkins models, L will be a function of δ, the φ's, the θ's, and σ_e^2 given y_1, y_2, …, y_n. • The maximum likelihood estimators (m.l.e.) are those values of the parameters for which the data actually observed are most likely, that is, the values that maximize the likelihood function L.
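In practice the likelihood is maximized numerically; a minimal sketch with statsmodels follows, where the ARMA(1,1) order and the placeholder series y are illustrative assumptions.

```python
# Maximum likelihood estimation of a tentative ARMA(1,1) (a sketch).
import numpy as np
from statsmodels.tsa.arima.model import ARIMA

rng = np.random.default_rng(4)
y = rng.standard_normal(300)             # placeholder data

fit = ARIMA(y, order=(1, 0, 1)).fit()    # fitted by maximum likelihood
print(fit.params)                        # estimates of delta, phi_1, theta_1, sigma_e^2
print(fit.llf)                           # maximized log-likelihood ln(L)
```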

  38. Step Three: Diagnostic Checking

  39. Diagnostic Checking • Often it is not straightforward to determine a single model that most adequately represents the data generating process. The suggested tests include (1) residual analysis, (2) overfitting, (3) model selection criteria.

  40. Residual Analysis • If an ARMA(p,q) model is an adequate representation of the data generating process, then the residuals ê_t should be uncorrelated. • Use the Box-Pierce statistic
Q = n Σ_{k=1}^{K} r_k^2(ê)
• or the Ljung-Box-Pierce statistic
Q* = n (n + 2) Σ_{k=1}^{K} r_k^2(ê) / (n − k),
where r_k(ê) is the sample autocorrelation of the residuals at lag k; under an adequate model both are approximately χ^2 with K − p − q degrees of freedom.
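Continuing from the fit above, both statistics are available in statsmodels; large p-values are consistent with uncorrelated residuals. model_df=2 reflects the p + q = 2 parameters of the illustrative ARMA(1,1).

```python
# Ljung-Box-Pierce and Box-Pierce tests on the residuals of `fit` above.
from statsmodels.stats.diagnostic import acorr_ljungbox

print(acorr_ljungbox(fit.resid, lags=[6, 12], model_df=2, boxpierce=True))
```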

  41. Overfitting • If an ARMA(p,q) model is specified, then we could estimate an ARMA(p+1,q) or an ARMA(p,q+1) process. • Then we check the significance of the additional parameters (but be aware of multicollinearity problems).

  42. Model Selection Criteria • Akaike Information Criterion (AIC): AIC = −2 ln(L) + 2k • Schwarz Bayesian Criterion (SBC): SBC = −2 ln(L) + k ln(n), where L = likelihood function, k = number of parameters to be estimated, n = number of observations. • Ideally, the AIC and SBC will be as small as possible.
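A sketch of comparing candidate orders by these criteria, reusing the placeholder series y from above (statsmodels exposes SBC as bic):

```python
# Comparing candidate models by AIC and SBC/BIC (illustrative orders).
from statsmodels.tsa.arima.model import ARIMA

for order in [(1, 0, 0), (0, 0, 1), (1, 0, 1)]:
    res = ARIMA(y, order=order).fit()
    print(order, round(res.aic, 1), round(res.bic, 1))
```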

  43. Step Four: Forecasting

  44. Forecasting • Given the stationary working series z_b, z_{b+1}, …, z_t, we would like to forecast the value z_{t+l}.
ẑ_t(l) = l-step-ahead forecast of z_{t+l} made at time t,
e_t(l) = l-step-ahead forecast error = z_{t+l} − ẑ_t(l).
• The l-step-ahead forecast is derived as the minimum mean square error forecast and is given by ẑ_t(l) = E[z_{t+l} | z_t, z_{t-1}, …].

  45. Forecasting with the AR(1) model (I) • The AR(1) time series model is
z_t = δ + φ_1 z_{t-1} + e_t, where e_t ~ N(0, σ_e^2).
• 1-step-ahead point forecast:
ẑ_t(1) = E[z_{t+1} | z_t, z_{t-1}, …] = δ + φ_1 z_t.

  46. Forecasting with the AR(1) model (II) • Recall that e_{t+1} is independent of z_b, z_{b+1}, …, z_t and has a zero mean. Thus, ẑ_t(1) = δ + φ_1 z_t. • The forecast error is
e_t(1) = z_{t+1} − ẑ_t(1) = e_{t+1}.
• Then the variance of the forecast error is Var[e_t(1)] = σ_e^2.

  47. Forecasting with the AR(1) model (III) • 2-step-ahead point forecast:
ẑ_t(2) = δ + φ_1 ẑ_t(1).
• The forecast error is
e_t(2) = z_{t+2} − ẑ_t(2) = e_{t+2} + φ_1 e_{t+1}.
• The forecast error variance is Var[e_t(2)] = σ_e^2 (1 + φ_1^2).

  48. Forecasting with the AR(1) model (IV) • l-step-ahead point forecast:
ẑ_t(l) = δ + φ_1 ẑ_t(l − 1), l ≥ 2.
• The forecast error is
e_t(l) = Σ_{j=0}^{l−1} φ_1^j e_{t+l−j}.
• The forecast error variance is Var[e_t(l)] = σ_e^2 Σ_{j=0}^{l−1} φ_1^{2j}.
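A direct computation of the AR(1) forecast recursion and error variances; δ, φ_1, σ_e^2 and the last observation z_t are illustrative values.

```python
# l-step-ahead AR(1) forecasts and forecast error variances.
delta, phi1, sigma2 = 0.5, 0.7, 1.0
z_t = 2.0                                   # last observed value

zhat, fc, var = z_t, [], []
for l in range(1, 6):
    zhat = delta + phi1 * zhat              # zhat_t(l) = delta + phi1 * zhat_t(l-1)
    fc.append(zhat)
    var.append(sigma2 * sum(phi1 ** (2 * j) for j in range(l)))
print(fc)                                   # forecasts revert toward the mean
print(var)                                  # grows toward sigma2 / (1 - phi1^2)
```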

  49. Forecasting with the MA(1) model (I) • The MA(1) model is
z_t = δ + e_t − θ_1 e_{t-1}, where e_t ~ N(0, σ_e^2).
• l-step-ahead point forecast:
ẑ_t(1) = δ − θ_1 e_t, and ẑ_t(l) = δ for l ≥ 2.

  50. Forecasting with MA(1) model (II) • The forecast error is • The variance of forecast error is
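A short numerical illustration: only the 1-step-ahead MA(1) forecast uses the last residual; beyond lag 1 the forecast reverts to the mean δ. The values of δ, θ_1, σ_e^2 and the last residual e_t are illustrative.

```python
# MA(1) forecasts and forecast error variances (illustrative values).
delta, theta1, sigma2, e_t = 0.5, 0.4, 1.0, -0.8

fc  = [delta - theta1 * e_t] + [delta] * 4           # zhat_t(1), zhat_t(2), ...
var = [sigma2] + [sigma2 * (1 + theta1 ** 2)] * 4    # forecast error variances
print(fc)
print(var)
```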
