Time Series Analysis and Examples
Getting Started
The measurement (or observation) equation can be written

$$ y_t = b_t + h_t z_t + \epsilon_t $$

where $b_t$ is an $n_y \times 1$ vector, $h_t$ is an $n_y \times n_z$ matrix, the sequence of observation noise $\epsilon_t$ is independent, $z_t$ is an $n_z \times 1$ state vector, and $y_t$ is an $n_y \times 1$ observed vector.
The transition (or state) equation is denoted as a first-order Markov process of the state vector:

$$ z_{t+1} = a_t + f_t z_t + \eta_t $$

where $a_t$ is an $n_z \times 1$ vector, $f_t$ is an $n_z \times n_z$ transition matrix, and the sequence of transition noise $\eta_t$ is independent. This equation is often called a shifted transition equation because the state vector is shifted forward one time period.
The transition equation can also be denoted by using an alternative specification:

$$ z_t = a_t + f_t z_{t-1} + \eta_t $$

There is no real difference between the shifted transition equation and this alternative equation if the observation noise and transition equation noise are uncorrelated; that is, $E(\eta_t \epsilon_t') = 0$.
It is assumed that

$$ E(\eta_t \eta_s') = V_t \delta_{ts}, \quad E(\epsilon_t \epsilon_s') = R_t \delta_{ts}, \quad E(\eta_t \epsilon_s') = G_t \delta_{ts} $$

where

$$ \delta_{ts} = \begin{cases} 1 & \text{if } t = s \\ 0 & \text{if } t \neq s \end{cases} $$
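As a concrete illustration, the forward recursions for this SSM can be sketched in Python with NumPy. The function below is a minimal sketch, not the KALCVF implementation: it assumes time-invariant system matrices $b$, $h$, $a$, $f$ and uncorrelated noises ($G_t = 0$).

```python
import numpy as np

def kalman_filter(y, b, H, a, F, R, V, z0, P0):
    """Filtered estimates for the SSM
         y_t     = b + H z_t + eps_t,    Var(eps_t) = R
         z_{t+1} = a + F z_t + eta_t,    Var(eta_t) = V
       starting from the one-step prediction z0 = z_{0|-1}, P0 = P_{0|-1}.
       Sketch only: G_t = 0 and time-invariant matrices are assumed."""
    z_pred, P_pred = z0, P0                  # z_{t|t-1}, P_{t|t-1}
    filtered = []
    for t in range(len(y)):
        # innovation and its covariance
        e = y[t] - (b + H @ z_pred)
        S = H @ P_pred @ H.T + R
        K = P_pred @ H.T @ np.linalg.inv(S)  # Kalman gain
        # filtered estimate z_{t|t} and covariance P_{t|t}
        z_filt = z_pred + K @ e
        P_filt = P_pred - K @ H @ P_pred
        filtered.append((z_filt, P_filt))
        # shifted transition equation: one-step prediction z_{t+1|t}
        z_pred = a + F @ z_filt
        P_pred = F @ P_filt @ F.T + V
    return filtered
```

For time-varying systems, the same recursions apply with $b_t$, $h_t$, $a_t$, and $f_t$ indexed by $t$ inside the loop.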
De Jong (1991a) proposed a diffuse Kalman filter that can handle an arbitrarily large initial state covariance matrix. The diffuse initial state assumption is reasonable when there is parameter uncertainty or the SSM is nonstationary. The SSM of the diffuse Kalman filter is written

$$ y_t = X_t \beta + h_t z_t + \epsilon_t $$
$$ z_{t+1} = W_t \beta + f_t z_t + \eta_t $$
$$ z_0 = a + A\delta $$
$$ \beta = b + B\delta $$

where $\delta$ is a random variable with a mean of $\mu$ and a variance of $\sigma^2 \Sigma$. When $\sigma \rightarrow \infty$, the SSM is said to be diffuse.
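A common practical stand-in for the diffuse initialization is to run the ordinary filter with a very large initial covariance, $P_0 = \kappa I$. The snippet below is only a sketch of that big-$\kappa$ approximation, not De Jong's exact algorithm as implemented in KALDFF; it shows that as $\kappa$ grows, the first filtered estimate is dominated by the observation rather than the prior.

```python
import numpy as np

# Approximate a diffuse prior on a scalar state by P_0 = kappa * I
# for increasingly large kappa (illustrative values only).
H = np.array([[1.0]])      # observation matrix
R = np.array([[0.5]])      # observation noise variance
y0 = np.array([2.0])       # first observation

for kappa in (1e2, 1e6):
    P0 = kappa * np.eye(1)
    S = H @ P0 @ H.T + R               # innovation covariance
    K = P0 @ H.T @ np.linalg.inv(S)    # Kalman gain
    z1 = K @ y0                        # first filtered estimate (prior mean 0)
    # As kappa -> infinity, K -> H^{-1} and z1 -> y0:
    # the diffuse prior carries no information about z_0.
    print(kappa, z1[0])
```

The exact diffuse filter instead propagates the dependence on $\delta$ explicitly and so avoids the numerical ill-conditioning that a huge $\kappa$ can cause.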
The KALCVF call computes the one-step prediction $z_{t+1|t}$ and the filtered estimate $z_{t|t}$, together with their covariance matrices $P_{t+1|t}$ and $P_{t|t}$, using forward recursions. You can obtain the $k$-step prediction $z_{t+k|t}$ and its covariance matrix $P_{t+k|t}$ with the KALCVF call. The KALCVS call uses backward recursions to compute the smoothed estimate $z_{t|T}$ and its covariance matrix $P_{t|T}$ when there are $T$ observations in the complete data.

The KALDFF call produces the one-step prediction of the state and the unobserved random vector $\delta$, as well as their covariance matrices. The KALDFS call computes the smoothed estimate $z_{t|T}$ and its covariance matrix $P_{t|T}$.
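The combination of a forward filtering pass followed by backward smoothing recursions, as performed by KALCVF and KALCVS, can be sketched with the fixed-interval (Rauch-Tung-Striebel) smoother below. This is an illustration under simplifying assumptions ($b_t = a_t = 0$, time-invariant matrices, uncorrelated noises), not the SAS implementation.

```python
import numpy as np

def rts_smooth(y, H, F, R, V, z0, P0):
    """Forward Kalman pass, then the backward recursions that produce
       the smoothed estimates z_{t|T} and covariances P_{t|T}.
       Sketch only: b_t = a_t = 0 and time-invariant H, F, R, V."""
    n = len(y)
    zf, Pf, Pp = [], [], []
    z, P = z0, P0                            # z_{0|-1}, P_{0|-1}
    for t in range(n):                       # forward recursions
        Pp.append(P)                         # store P_{t|t-1}
        S = H @ P @ H.T + R                  # innovation covariance
        K = P @ H.T @ np.linalg.inv(S)       # Kalman gain
        z = z + K @ (y[t] - H @ z)           # filtered estimate z_{t|t}
        P = P - K @ H @ P                    # filtered covariance P_{t|t}
        zf.append(z); Pf.append(P)
        z, P = F @ z, F @ P @ F.T + V        # one-step prediction for t+1
    zs, Ps = zf[-1], Pf[-1]                  # at t = T, smoothed = filtered
    out = [(zs, Ps)]
    for t in range(n - 2, -1, -1):           # backward recursions
        J = Pf[t] @ F.T @ np.linalg.inv(Pp[t + 1])   # smoother gain
        zs = zf[t] + J @ (zs - F @ zf[t])
        Ps = Pf[t] + J @ (Ps - Pp[t + 1]) @ J.T
        out.append((zs, Ps))
    return out[::-1]                         # (z_{t|T}, P_{t|T}) for t = 0..T-1
```

Because the smoother conditions each estimate on all $T$ observations, the smoothed covariance $P_{t|T}$ is never larger than the filtered covariance $P_{t|t}$.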
Copyright © 2009 by SAS Institute Inc., Cary, NC, USA. All rights reserved.