
Presentation Transcript


1. Algebra. For linear combinations U = ∑ a(t)Y(t) and V = ∑ b(t)Y(t) of a stationary series with mean cY:
E{U} = cY ∑ a(t)
cov{U, V} = ∑s ∑t a(s)b(t) cYY(s−t)
U is Gaussian if {Y(t)} is Gaussian.

2. Some useful stochastic models. Purely random / white noise: {Y(t)} i.i.d. (mean often assumed 0).
cYY(u) = cov{Y(t+u), Y(t)} = σY² if u = 0, = 0 if u ≠ 0
ρYY(u) = 1 if u = 0, = 0 if u ≠ 0
A building block.
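A quick numerical check of the white-noise autocovariance (a sketch in numpy; σY = 2, the seed, and the sample size are illustrative choices):

```python
import numpy as np

# White noise: c_YY(u) = sigma_Y^2 at u = 0 and 0 at every other lag,
# checked on simulated i.i.d. noise (sigma_Y = 2 is illustrative).
rng = np.random.default_rng(0)
Y = rng.normal(0.0, 2.0, size=100_000)

def autocov(y, u):
    """Biased sample autocovariance at lag u (normalized by T)."""
    y = y - y.mean()
    return np.dot(y[u:], y[:len(y) - u]) / len(y)

c0 = autocov(Y, 0)   # ≈ sigma_Y^2 = 4
c1 = autocov(Y, 1)   # ≈ 0
```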

3. Random walk. Y(t) = Y(t−1) + Z(t), Y(0) = 0, so Y(t) = ∑i=1..t Z(i).
E{Y(t)} = t μZ, var{Y(t)} = t σZ²
Not stationary, but the first difference ∆Y(t) = Y(t) − Y(t−1) = Z(t) is.
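The random walk and its first difference can be sketched in a few lines (μZ = 0.5, σZ = 1, and the seed are illustrative assumptions):

```python
import numpy as np

# Random walk as a cumulative sum of i.i.d. steps; differencing
# recovers the i.i.d. series exactly.
rng = np.random.default_rng(1)
T = 5000
Z = rng.normal(0.5, 1.0, size=T)
Y = np.cumsum(Z)          # Y(t) = Z(1) + ... + Z(t), so E{Y(T)} = T * mu_Z
dY = np.diff(Y)           # equals Z(2), ..., Z(T): stationary again
```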

4. Moving average, MA(q). Y(t) = β(0)Z(t) + β(1)Z(t−1) + … + β(q)Z(t−q).
If E{Z(t)} = 0, then E{Y(t)} = 0.
cYY(u) = σZ² ∑t=0..q−u β(t)β(t+u) for u = 0, 1, …, q; = 0 for u > q; and cYY(−u) = cYY(u). Stationary.
MA(1): ρYY(u) = 1 for u = 0; = β(1)/(1 + β(1)²) for u = ±1; = 0 otherwise.
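The MA(1) autocorrelation formula can be checked against a long simulation (β(0) = 1, β(1) = 0.6, and the sample size are illustrative):

```python
import numpy as np

# MA(1): rho(1) = beta(1)/(1 + beta(1)^2), and rho(u) = 0 for |u| > 1.
rng = np.random.default_rng(2)
b1 = 0.6
Z = rng.normal(size=200_000)
Y = Z[1:] + b1 * Z[:-1]                  # Y(t) = Z(t) + beta(1) Z(t-1)

def autocorr(y, u):
    """Sample autocorrelation at lag u."""
    y = y - y.mean()
    return np.dot(y[u:], y[:len(y) - u]) / np.dot(y, y)

theory = b1 / (1 + b1**2)                # 0.6 / 1.36
```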

5. Backward shift operator. Recall the translation operator T^u Y(t) = Y(t+u); similarly B^j Y(t) = Y(t−j).
Linear process: Y(t) = ∑j βj Z(t−j). Needs a convergence condition, e.g. ∑ |βj| < ∞ or ∑ βj² < ∞.

6. Autoregressive process, AR(p): Y(t) = α(1)Y(t−1) + … + α(p)Y(t−p) + Z(t).
First-order, AR(1): Y(t) = αY(t−1) + Z(t) (**), a Markov process.
Written as a linear process, Y(t) = ∑j≥0 α^j Z(t−j); invertible. The condition |α| < 1 is needed for convergence in probability / stationarity.
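A minimal AR(1) simulation under |α| < 1 (α = 0.7, unit innovation variance, and the seed are illustrative), checking the stationary variance σZ²/(1 − α²):

```python
import numpy as np

# AR(1): Y(t) = alpha * Y(t-1) + Z(t); with |alpha| < 1 the process is
# stationary, with variance sigma_Z^2 / (1 - alpha^2).
rng = np.random.default_rng(3)
alpha, T = 0.7, 200_000
Z = rng.normal(size=T)
Y = np.zeros(T)
for t in range(1, T):
    Y[t] = alpha * Y[t - 1] + Z[t]

var_theory = 1.0 / (1 - alpha**2)        # ≈ 1.96
```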

7. a.c.f. of AR(1). From (**) on the previous slide, ρYY(u) = α^|u|.
p.a.c.f. (using the normal or linear definitions): corr{Y(t), Y(t−m) | Y(t−1), …, Y(t−m+1)} = 0 for m > p when Y is AR(p).
Proof: via multiple regression.
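The multiple-regression view of the p.a.c.f. cutoff can be sketched directly: for an AR(1) (α = 0.7 is an illustrative value), regressing Y(t) on its first two lags should give a coefficient near α on lag 1 and near 0 on lag 2.

```python
import numpy as np

# p.a.c.f. cutoff for AR(1): the partial correlation at lag m = 2 > p = 1
# vanishes, seen as a near-zero regression coefficient on Y(t-2).
rng = np.random.default_rng(4)
alpha, T = 0.7, 200_000
Z = rng.normal(size=T)
Y = np.zeros(T)
for t in range(1, T):
    Y[t] = alpha * Y[t - 1] + Z[t]

X = np.column_stack([Y[1:-1], Y[:-2]])   # lags 1 and 2 of Y
coef, *_ = np.linalg.lstsq(X, Y[2:], rcond=None)
# coef[0] ≈ alpha, coef[1] ≈ 0
```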

8. In the general AR(p) case the analogous expressions hold; useful for prediction.

9. Yule–Walker equations for AR(p): ρYY(k) = α(1)ρYY(k−1) + … + α(p)ρYY(k−p), k = 1, 2, …
Obtained by correlating each side of the AR(p) equation with Y(t−k). Sometimes used for estimation.
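A sketch of Yule–Walker estimation for an AR(2), solving the k = 1, 2 equations for the coefficients from sample autocorrelations (true values α(1) = 0.5, α(2) = 0.3 are illustrative):

```python
import numpy as np

# Yule-Walker for AR(2): rho(k) = a1*rho(k-1) + a2*rho(k-2), k = 1, 2,
# solved as a 2x2 linear system in (a1, a2).
rng = np.random.default_rng(5)
a1, a2, T = 0.5, 0.3, 400_000
Z = rng.normal(size=T)
Y = np.zeros(T)
for t in range(2, T):
    Y[t] = a1 * Y[t - 1] + a2 * Y[t - 2] + Z[t]

def rho(y, u):
    """Sample autocorrelation at lag u."""
    y = y - y.mean()
    return np.dot(y[u:], y[:len(y) - u]) / np.dot(y, y)

R = np.array([[1.0, rho(Y, 1)], [rho(Y, 1), 1.0]])   # correlation matrix
r = np.array([rho(Y, 1), rho(Y, 2)])
a_hat = np.linalg.solve(R, r)                         # ≈ (a1, a2)
```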

10. ARMA(p,q): φ(B)Yt = θ(B)Zt, with φ a degree-p and θ a degree-q polynomial in the backward shift operator B.

11. ARIMA(p,d,q): ARMA(p,q) after differencing d times, where ∇Xt = Xt − Xt−1 and ∇²Xt = Xt − 2Xt−1 + Xt−2.
arima.mle() fits by maximum likelihood assuming Gaussian noise.
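The differencing step can be illustrated with np.diff on a toy quadratic trend (the series is an illustrative construction, not part of any model fit):

```python
import numpy as np

# Differencing: a quadratic trend needs d = 2 to be removed; the second
# difference of t^2 + 2t + 3 is the constant 2.
t = np.arange(10, dtype=float)
X = t**2 + 2.0 * t + 3.0
d1 = np.diff(X)        # X_t - X_{t-1} = 2t + 1: still trending
d2 = np.diff(X, n=2)   # X_t - 2X_{t-1} + X_{t-2} = 2: constant
```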

12. ARMAX: φ(B)Yt = β(B)Xt + θ(B)Zt, with exogenous series Xt; fit with arima.mle(…, xreg, …).
State space: st = Ft(st−1, zt), Yt = Ht(st, Zt); the state st could include X.

13. Next: i.i.d. → mixing stationary process. Mixing has a variety of definitions; e.g. in the normal case, ∑ |cYY(u)| < ∞ (see e.g. Cryer and Chan (2008)).
CLT: mY = Ȳ = ∑t=1..T Y(t)/T is asymptotically normal with
E{mY} = cY
var{mY} = T⁻² ∑s=1..T ∑t=1..T cYY(s−t) ≈ T⁻¹ ∑u cYY(u), = σY²/T if white noise.
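A Monte Carlo sketch of var{Ȳ} ≈ T⁻¹ ∑u cYY(u) for a dependent series; an AR(1) with α = 0.5 and unit innovations is an illustrative choice, for which ∑u cYY(u) = 1/(1 − α)² = 4, four times the i.i.d. value.

```python
import numpy as np

# Variance of the sample mean under dependence: for AR(1) with alpha = 0.5,
# var{Y-bar} ≈ (1/T) * sum_u c_YY(u) = 4/T, not sigma_Y^2 / T.
rng = np.random.default_rng(6)
alpha, T, reps = 0.5, 1000, 2000
Z = rng.normal(size=(reps, T))
Y = np.zeros((reps, T))
for t in range(1, T):                    # simulate all replications at once
    Y[:, t] = alpha * Y[:, t - 1] + Z[:, t]

means = Y.mean(axis=1)                   # one sample mean per replication
var_theory = 1.0 / (1 - alpha)**2 / T    # = 4/T
```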

14. OLS. Y(t) = α + βt + N(t).
b = β + ∑ (t − t̄)N(t) / ∑ (t − t̄)² = β + ∑ u(t)N(t), with u(t) = (t − t̄)/∑s (s − t̄)².
E{b} = β, var{b} = ∑s ∑t u(s)u(t) cNN(s−t).
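The double sum for var{b} can be evaluated exactly for a chosen noise model; here AR(1) noise with α = 0.6 and unit innovations is an illustrative assumption, and positive autocorrelation makes the i.i.d. variance formula too small.

```python
import numpy as np

# Exact variance of the OLS slope under correlated noise:
# var(b) = sum_s sum_t u_s u_t c_NN(s - t), with c_NN from an AR(1).
T, alpha = 200, 0.6
t = np.arange(T, dtype=float)
u = (t - t.mean()) / np.sum((t - t.mean())**2)   # weights u(t)

lags = np.subtract.outer(t, t)                    # matrix of s - t
C = alpha**np.abs(lags) / (1 - alpha**2)          # c_NN(s - t) for AR(1)
var_b = u @ C @ u                                 # correct variance
var_naive = C[0, 0] * np.sum(u**2)                # i.i.d. formula: too small here
```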

15. Cumulants. cum(Y1, Y2, …, Yk) extends the mean, variance, and covariance:
cum(Y) = E{Y}, cum(Y, Y) = var{Y}, cum(X, Y) = cov{X, Y}. DRB (1975).

16. Proof of the ordinary CLT. ST = Y(1) + … + Y(T).
cumk(ST) = T κk, by additivity and independence.
cumk(ST/√T) = T^(−k/2) cumk(ST) = O(T^(1−k/2)) → 0 for k > 2 as T → ∞.
Normal cumulants of order > 2 are 0, and the normal is determined by its moments, so (ST − Tμ)/√T tends in distribution to N(0, σ²).
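The cumulant scaling T^(1−k/2) κk can be seen numerically: for i.i.d. Exp(1) summands (an illustrative choice, with κ2 = 1 and κ4 = 6), the excess kurtosis of ST/√T should be about 6/T.

```python
import numpy as np

# cum_4(S_T/sqrt(T)) = T^(-1) * kappa_4 = 6/T for Exp(1) summands; the sample
# excess kurtosis of many replications of S_T/sqrt(T) should be near 6/T.
rng = np.random.default_rng(7)
T, N = 10, 400_000
S = rng.exponential(1.0, size=(N, T)).sum(axis=1) / np.sqrt(T)

x = S - S.mean()
m2 = np.mean(x**2)                       # ≈ kappa_2 = 1
kurt = np.mean(x**4) / m2**2 - 3.0       # ≈ 6/T = 0.6, shrinking as T grows
```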

17. Stationary series: cumulant functions.
cum{Y(t+u1), …, Y(t+uk−1), Y(t)} = ck(t+u1, …, t+uk−1, t) = ck(u1, …, uk−1), k = 2, 3, 4, …
Cumulant mixing: ∑u |ck(u1, …, uk−1)| < ∞, with u = (u1, …, uk−1).
