State Space Models


Let { xt:t T} and { yt:t T} denote two vector valued time series that satisfy the system of equations:

yt = Atxt+ vt (The observation equation)

xt = Btxt-1+ ut (The state equation)

The time series { yt:t T} is said to have state-space representation.

Note: {ut : t ∈ T} and {vt : t ∈ T} denote two vector-valued time series satisfying:

  • E(ut) = E(vt) = 0.
  • E(ut us′) = E(vt vs′) = 0 if t ≠ s.
  • E(ut ut′) = Σu and E(vt vt′) = Σv.
  • E(ut vs′) = E(vt us′) = 0 for all t and s.
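
To make the setup concrete, here is a minimal simulation sketch of such a system. All matrix values, the dimensions, the choice of constant At = A and Bt = B, and the Gaussian noise are illustrative assumptions, not part of the slides:

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative system: 2-dimensional state, 1-dimensional observation.
B = np.array([[0.9, 0.1],
              [0.0, 0.8]])   # state matrix (constant B_t = B assumed)
A = np.array([[1.0, 0.0]])   # observation matrix (constant A_t = A assumed)
Su = 0.1 * np.eye(2)         # E(u_t u_t') = S_u
Sv = 0.5 * np.eye(1)         # E(v_t v_t') = S_v

x = np.zeros(2)
xs, ys = [], []
for t in range(100):
    u = rng.multivariate_normal(np.zeros(2), Su)
    v = rng.multivariate_normal(np.zeros(1), Sv)
    x = B @ x + u            # state equation:       x_t = B x_{t-1} + u_t
    ys.append(A @ x + v)     # observation equation: y_t = A x_t + v_t
    xs.append(x)
```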
Example: One might be tracking an object with several radar stations. The process {xt : t ∈ T} gives the position of the object at time t. The process {yt : t ∈ T} denotes the observations made at time t by the several radar stations.

As in the Hidden Markov Model, we will be interested in determining the position of the object, {xt : t ∈ T}, from the observations, {yt : t ∈ T}, made by the several radar stations.

Example: Many of the models we have considered to date can be thought of as state-space models.

Autoregressive model of order p:

  xt = β1 xt-1 + β2 xt-2 + … + βp xt-p + ut

Define the state vector

  zt = (xt, xt-1, … , xt-p+1)′

Then

  yt = [1, 0, … , 0] zt + vt with vt = 0

Observation equation

and

       | β1  β2  …  βp-1  βp |        | ut |
       | 1   0   …  0     0  |        | 0  |
  zt = | 0   1   …  0     0  | zt-1 + | ⋮  |
       | ⋮   ⋮       ⋮    ⋮  |        | 0  |
       | 0   0   …  1     0  |

State equation
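
The companion-matrix construction above is mechanical, so a small helper makes it explicit. A sketch; the coefficient values are assumed for illustration:

```python
import numpy as np

def ar_companion(beta):
    """Companion (state transition) matrix B for an AR(p) model
    x_t = beta[0] x_{t-1} + ... + beta[p-1] x_{t-p} + u_t."""
    p = len(beta)
    B = np.zeros((p, p))
    B[0, :] = beta              # first row carries the AR coefficients
    B[1:, :-1] = np.eye(p - 1)  # sub-diagonal shifts the lagged values down
    return B

B = ar_companion([0.5, 0.3])    # illustrative AR(2) coefficients
A = np.array([[1.0, 0.0]])      # observation row [1, 0, ..., 0]: y_t = x_t
```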

Hidden Markov Model: Assume that there are m states, and that the observations Yt are discrete and take on n possible values.

Suppose that the m states are denoted by the unit vectors:

  e1 = (1, 0, … , 0)′, e2 = (0, 1, … , 0)′, … , em = (0, 0, … , 1)′

Let P = (pij) denote the m × m matrix of transition probabilities

  pij = P[xt = ej | xt-1 = ei]

and P′ denote its transpose.

Note

  E(xt | xt-1) = P′xt-1

Let

  ut = xt − E(xt | xt-1) = xt − P′xt-1

So that

  xt = P′xt-1 + ut

The State Equation

with

  E(ut | xt-1) = 0 and hence E(ut) = 0.

Also

  xt xt′ = diag(xt), so E(xt xt′ | xt-1) = diag(P′xt-1)

Hence

  Var(ut | xt-1) = diag(P′xt-1) − (P′xt-1)(P′xt-1)′

and

  Var(ut) = E[diag(P′xt-1) − (P′xt-1)(P′xt-1)′]

where diag(v) = the diagonal matrix with the components of the vector v along the diagonal.

Suppose the n possible values of the observations yt are denoted by the unit vectors f1, … , fn, and let Q = (qij) denote the m × n matrix of probabilities qij = P[yt = fj | xt = ei]. Since

  E(yt | xt = ei) = (qi1, … , qin)′

then

  E(yt | xt) = Q′xt

and we can define

  vt = yt − E(yt | xt) = yt − Q′xt

Thus

  E(vt | xt) = 0.

Then

  yt = Q′xt + vt

The Observation Equation

with

  E(vt | xt) = 0

and

  Var(vt | xt) = diag(Q′xt) − (Q′xt)(Q′xt)′

Hence with these definitions the state sequence of a Hidden Markov Model satisfies:

  xt = P′xt-1 + ut

The State Equation

with

  E(ut) = 0

and

  Var(ut | xt-1) = diag(P′xt-1) − (P′xt-1)(P′xt-1)′

The observation sequence satisfies:

  yt = Q′xt + vt

The Observation Equation

with

  E(vt | xt) = 0

and

  Var(vt | xt) = diag(Q′xt) − (Q′xt)(Q′xt)′
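
A quick simulation can confirm the state-equation moments above. The transition and emission probabilities below are assumed values for illustration:

```python
import numpy as np

rng = np.random.default_rng(1)

# Assumed transition and emission probabilities for a 2-state, 2-symbol HMM.
P = np.array([[0.9, 0.1],
              [0.2, 0.8]])     # P[i, j] = P(x_t = e_j | x_{t-1} = e_i)
Q = np.array([[0.7, 0.3],
              [0.1, 0.9]])     # Q[i, j] = P(y_t = f_j | x_t = e_i)

m = P.shape[0]
state = 0
residuals = []
for t in range(50_000):
    new_state = rng.choice(m, p=P[state])
    x_t = np.eye(m)[new_state]           # state as a unit vector
    u_t = x_t - P.T @ np.eye(m)[state]   # u_t = x_t - P' x_{t-1}
    residuals.append(u_t)
    state = new_state

print(np.mean(residuals, axis=0))        # approximately 0: E(u_t) = 0
```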

We are now interested in determining the state vector xt in terms of some or all of the observation vectors y1, y2, y3, … , yT.

We will consider finding the “best” linear predictor.

We can include a constant term if, in addition, one of the observations (y0, say) is the vector of 1's.

We will consider estimation of xt in terms of

  • y1, y2, y3, … , yt-1 (the prediction problem)
  • y1, y2, y3, … , yt (the filtering problem)
  • y1, y2, y3, … , yT (t < T, the smoothing problem)
For any vector x define:

  E*(x | y0, y1, … , ys) = (E*(x(1) | y0, … , ys), … , E*(x(k) | y0, … , ys))′

where

  E*(x(i) | y0, … , ys)

is the best linear predictor of x(i), the ith component of x, based on y0, y1, y2, … , ys.

The best linear predictor of x(i) is the linear function of y0, y1, y2, … , ys that minimizes

  E[x(i) − b0′y0 − b1′y1 − … − bs′ys]²

Remark: The best predictor is the unique vector of the form:

  x̂ = C0y0 + C1y1 + C2y2 + … + Csys

where C0, C1, C2, … , Cs are selected so that the prediction errors are uncorrelated with the observations:

  E[(x − x̂)yj′] = 0 for j = 0, 1, … , s.

Remark

Let u and v be two random vectors. Then

  û = E(u) + Cov(u, v)[Var(v)]⁻¹(v − E(v))

is the optimal linear predictor of u based on v, since it satisfies

  E(u − û) = 0 and Cov(u − û, v) = 0.
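
This remark can be checked numerically. A sketch, with an assumed joint Gaussian distribution for (u, v):

```python
import numpy as np

rng = np.random.default_rng(2)

# Draw (u, v) jointly Gaussian: first coordinate is u, last two are v.
mean = np.array([1.0, 2.0, 0.5])
cov = np.array([[2.0, 0.8, 0.3],
                [0.8, 1.0, 0.2],
                [0.3, 0.2, 1.5]])
z = rng.multivariate_normal(mean, cov, size=200_000)
u, v = z[:, :1], z[:, 1:]

# u_hat = E(u) + Cov(u, v) [Var(v)]^{-1} (v - E(v))
Cuv = cov[:1, 1:]
Vv = cov[1:, 1:]
u_hat = mean[0] + (v - mean[1:]) @ np.linalg.solve(Vv, Cuv.T)

resid = u.ravel() - u_hat.ravel()
print(np.mean(resid))                 # ~0 : E(u - u_hat) = 0
print(np.cov(resid, v[:, 0])[0, 1])   # ~0 : Cov(u - u_hat, v) = 0
```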

Recall: Let {xt : t ∈ T} and {yt : t ∈ T} denote two vector-valued time series that satisfy the system of equations:

yt = At xt + vt (The observation equation)

xt = Bt xt-1 + ut (The state equation)

The time series {yt : t ∈ T} is said to have a state-space representation.

Note: {ut : t ∈ T} and {vt : t ∈ T} denote two vector-valued time series satisfying:

  • E(ut) = E(vt) = 0.
  • E(ut us′) = E(vt vs′) = 0 if t ≠ s.
  • E(ut ut′) = Σu and E(vt vt′) = Σv.
  • E(ut vs′) = E(vt us′) = 0 for all t and s.
Kalman Filtering:

Let {xt : t ∈ T} and {yt : t ∈ T} denote two vector-valued time series that satisfy the system of equations:

yt = At xt + vt

xt = B xt-1 + ut

Let

  xt|s = E*(xt | y0, y1, … , ys)

and

  Σt|s = E[(xt − xt|s)(xt − xt|s)′]

Then

  xt|t-1 = B xt-1|t-1    (4)

  xt|t = xt|t-1 + Kt(yt − At xt|t-1)    (5)

where

  Kt = Σt|t-1 At′[At Σt|t-1 At′ + Σv]⁻¹    (2)

  Σt|t-1 = B Σt-1|t-1 B′ + Σu    (1)

  Σt|t = Σt|t-1 − Kt At Σt|t-1    (3)

One also assumes that the initial vector x0 has mean μ and covariance matrix Σ, and that x0 is uncorrelated with the noise series {ut} and {vt}.
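
These recursions translate directly into code. A minimal sketch, assuming constant A and B as above; the function and variable names are ours, and the comments reference the equation numbers used here:

```python
import numpy as np

def kalman_filter(ys, A, B, Su, Sv, mu, Sigma):
    """Forward Kalman recursion; returns the filtered means and covariances."""
    x_f, P_f = mu, Sigma                   # x_{0|0} = mu, Sigma_{0|0} = Sigma
    filtered = []
    for y in ys:
        x_p = B @ x_f                      # (4) x_{t|t-1} = B x_{t-1|t-1}
        P_p = B @ P_f @ B.T + Su           # (1) Sigma_{t|t-1}
        S = A @ P_p @ A.T + Sv             # innovation covariance Var(e_t)
        K = P_p @ A.T @ np.linalg.inv(S)   # (2) Kalman gain K_t
        x_f = x_p + K @ (y - A @ x_p)      # (5) x_{t|t}
        P_f = P_p - K @ A @ P_p            # (3) Sigma_{t|t}
        filtered.append((x_f, P_f))
    return filtered
```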

Proof:

Now

  xt = B xt-1 + ut

hence

  xt|t-1 = E*(xt | y0, … , yt-1) = B E*(xt-1 | y0, … , yt-1) + E*(ut | y0, … , yt-1) = B xt-1|t-1

proving (4)

Note

  E*(ut | y0, … , yt-1) = 0, since ut is uncorrelated with y0, … , yt-1.

Let

  dt = xt − xt|t-1

Let

  et = yt − At xt|t-1 = At dt + vt

Given y0, y1, y2, … , yt-1 the best linear predictor of dt using et is:

  d̂t = Cov(dt, et)[Var(et)]⁻¹et

Hence

  xt|t = xt|t-1 + d̂t = xt|t-1 + Kt(yt − At xt|t-1)    (5)

where

  Kt = Cov(dt, et)[Var(et)]⁻¹

and

  d̂t = Kt et

Now

  Var(et) = At Σt|t-1 At′ + Σv

Also

  Cov(dt, et) = E[dt(At dt + vt)′] = Σt|t-1 At′

hence

  Kt = Σt|t-1 At′[At Σt|t-1 At′ + Σv]⁻¹    (2)

Thus

  xt|t-1 = B xt-1|t-1    (4)

  xt|t = xt|t-1 + Kt(yt − At xt|t-1)    (5)

where

  Kt = Σt|t-1 At′[At Σt|t-1 At′ + Σv]⁻¹    (2)

Also

  Σt|t = Var(dt − d̂t) = Var(dt) − Cov(dt, et)[Var(et)]⁻¹Cov(et, dt)

Hence

  Σt|t = Σt|t-1 − Kt At Σt|t-1    (3)

The proof that

  Σt|t-1 = B Σt-1|t-1 B′ + Σu    (1)

will be left as an exercise.

Example:

Suppose we have an AR(2) time series

  xt = β1 xt-1 + β2 xt-2 + ut

What is observed is the time series

  yt = xt + vt

{ut | t ∈ T} and {vt | t ∈ T} are white-noise time series with standard deviations σu and σv.

The equation:

  xt = β1 xt-1 + β2 xt-2 + ut

can be written

  | xt   |   | β1  β2 | | xt-1 |   | ut |
  |      | = |        | |      | + |    |
  | xt-1 |   | 1   0  | | xt-2 |   | 0  |

with

  yt = [1, 0](xt, xt-1)′ + vt

Note:

  Σu = | σu²  0 |
       | 0    0 |    and    Σv = [σv²]
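
Putting the pieces together for this example: a sketch that simulates the AR(2)-plus-noise system and runs the kalman_filter sketch from above on it (the parameter values β1 = 0.6, β2 = 0.3, σu = 1, σv = 2 are assumed for illustration):

```python
import numpy as np

rng = np.random.default_rng(3)
b1, b2, su, sv = 0.6, 0.3, 1.0, 2.0        # assumed illustrative parameters

B = np.array([[b1, b2], [1.0, 0.0]])       # companion state matrix
A = np.array([[1.0, 0.0]])                 # y_t observes the first component
Su = np.array([[su**2, 0.0], [0.0, 0.0]])  # only x_t itself receives noise
Sv = np.array([[sv**2]])

# Simulate the AR(2) state and its noisy observations.
x = np.zeros(2)
ys = []
for t in range(200):
    x = B @ x + np.array([rng.normal(0.0, su), 0.0])
    ys.append(A @ x + rng.normal(0.0, sv))

# kalman_filter is the sketch defined earlier.
out = kalman_filter(ys, A, B, Su, Sv, mu=np.zeros(2), Sigma=np.eye(2))
x_filtered = [xf[0] for xf, _ in out]      # filtered estimates x_{t|t} of x_t
```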

Kalman Filtering (smoothing):

Now consider finding

  xt|T = E*(xt | y0, y1, … , yT) and Σt|T = E[(xt − xt|T)(xt − xt|T)′]

These can be found by successive backward recursions for t = T, T − 1, … , 2, 1

where

  Jt = Σt|t B′[Σt+1|t]⁻¹ (the smoothing gain)

The backward recursions

  1. Jt = Σt|t B′[Σt+1|t]⁻¹

  2. xt|T = xt|t + Jt(xt+1|T − xt+1|t)

  3. Σt|T = Σt|t + Jt(Σt+1|T − Σt+1|t)Jt′

In the example, xt|t, Σt|t, xt+1|t and Σt+1|t are calculated in the forward (filtering) recursion.
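
A minimal sketch of this backward pass, assuming the forward (filtering) pass has stored the filtered moments xt|t, Σt|t and the predicted moments xt+1|t, Σt+1|t in lists:

```python
import numpy as np

def kalman_smoother(x_f, P_f, x_p, P_p, B):
    """Backward smoothing recursion.
    x_f[t], P_f[t]: filtered moments x_{t|t}, Sigma_{t|t}
    x_p[t], P_p[t]: predicted moments x_{t+1|t}, Sigma_{t+1|t}."""
    T = len(x_f)
    x_s, P_s = [None] * T, [None] * T
    x_s[-1], P_s[-1] = x_f[-1], P_f[-1]    # x_{T|T} starts the recursion
    for t in range(T - 2, -1, -1):
        J = P_f[t] @ B.T @ np.linalg.inv(P_p[t])           # 1. gain J_t
        x_s[t] = x_f[t] + J @ (x_s[t + 1] - x_p[t])        # 2. x_{t|T}
        P_s[t] = P_f[t] + J @ (P_s[t + 1] - P_p[t]) @ J.T  # 3. Sigma_{t|T}
    return x_s, P_s
```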
