Getting Started

The measurement (or observation) equation can be written

\[  \textbf{y}_ t = \textbf{b}_ t + \bH _ t \textbf{z}_ t + \epsilon _ t  \]

where $\textbf{b}_ t$ is an $N_ y \times 1$ vector, $\bH _ t$ is an $N_ y \times N_ z$ matrix, the sequence of observation noise $\epsilon _ t$ is independent, $\textbf{z}_ t$ is an $N_ z \times 1$ state vector, and $\textbf{y}_ t$ is an $N_ y \times 1$ observed vector.

The transition (or state) equation models the state vector as a first-order Markov process.

\[  \textbf{z}_{t+1} = \textbf{a}_ t + \bF _ t \textbf{z}_ t + \eta _ t  \]

where $\textbf{a}_ t$ is an $N_ z \times 1$ vector, $\bF _ t$ is an $N_ z \times N_ z$ transition matrix, and the sequence of transition noise $\eta _ t$ is independent. This equation is often called a shifted transition equation because the state vector is shifted forward one time period. The transition equation can also be written in the alternative form

\[  \textbf{z}_ t = \textbf{a}_ t + \bF _ t \textbf{z}_{t-1} + \eta _ t  \]

There is no real difference between the shifted transition equation and this alternative form when the observation noise and the transition noise are uncorrelated, that is, when $E(\eta _ t \epsilon ^{\prime }_ t) = 0$. It is assumed that

\begin{eqnarray*}
E(\eta _ t \eta ^{\prime }_ s) & = & \bV _ t \delta _{ts} \\
E(\epsilon _ t \epsilon ^{\prime }_ s) & = & \bR _ t \delta _{ts} \\
E(\eta _ t \epsilon ^{\prime }_ s) & = & \bG _ t \delta _{ts}
\end{eqnarray*}

where

\[  \delta _{ts} = \left\{  \begin{array}{ll} 1 &  \mbox{ if } t = s \\ 0 &  \mbox{ if } t \neq s \end{array} \right.  \]
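
For example, the univariate local level (random walk plus noise) model

\begin{eqnarray*}
y_ t & = & z_ t + \epsilon _ t \\
z_{t+1} & = & z_ t + \eta _ t
\end{eqnarray*}

is a special case of this SSM with $N_ y = N_ z = 1$, $\textbf{b}_ t = 0$, $\bH _ t = 1$, $\textbf{a}_ t = 0$, $\bF _ t = 1$, $\bV _ t = \sigma ^2_{\eta }$, $\bR _ t = \sigma ^2_{\epsilon }$, and $\bG _ t = 0$, where $\sigma ^2_{\eta }$ and $\sigma ^2_{\epsilon }$ are the transition and observation noise variances.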

de Jong (1991) proposed a diffuse Kalman filter that can handle an arbitrarily large initial state covariance matrix. The diffuse initial state assumption is reasonable when there is parameter uncertainty or when the SSM is nonstationary. The SSM of the diffuse Kalman filter is written

\begin{eqnarray*}
\textbf{y}_ t & = & \bX _ t \beta + \bH _ t \textbf{z}_ t + \epsilon _ t \\
\textbf{z}_{t+1} & = & \bW _ t \beta + \bF _ t \textbf{z}_ t + \eta _ t \\
\textbf{z}_0 & = & \textbf{a} + \bA \delta \\
\beta & = & \textbf{b} + \bB \delta
\end{eqnarray*}

where $\delta $ is a random vector with mean $\mu $ and covariance matrix $\sigma ^2\Sigma $. When $\Sigma \rightarrow \infty $, the SSM is said to be diffuse.
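
For example, setting $\bX _ t = 0$ and $\bW _ t = 0$ (no regression effects), $\textbf{a} = 0$, $\bB = 0$, and $\bA $ equal to the identity matrix reduces this specification to the original SSM with a completely diffuse initial state vector, $\textbf{z}_0 = \delta $.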

The KALCVF call computes the one-step prediction $\textbf{z}_{t+1|t}$ and the filtered estimate $\textbf{z}_{t|t}$, together with their covariance matrices $\bP _{t+1|t}$ and $\bP _{t|t}$, using forward recursions. It can also compute the $k$-step prediction $\textbf{z}_{t+k|t}$ and its covariance matrix $\bP _{t+k|t}$. The KALCVS call uses backward recursions to compute the smoothed estimate $\textbf{z}_{t|T}$ and its covariance matrix $\bP _{t|T}$ when there are $T$ observations in the complete data.
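
For reference, these estimates satisfy the standard Kalman filter recursions. Define the innovation $\textbf{e}_ t = \textbf{y}_ t - \textbf{b}_ t - \bH _ t \textbf{z}_{t|t-1}$, its covariance matrix $\bD _ t = \bH _ t \bP _{t|t-1} \bH ^{\prime }_ t + \bR _ t$, and the gain matrix $\bK _ t = (\bF _ t \bP _{t|t-1} \bH ^{\prime }_ t + \bG _ t) \bD _ t^{-1}$; these symbols are introduced here only to state the recursions compactly. The forward recursions are then

\begin{eqnarray*}
\textbf{z}_{t|t} & = & \textbf{z}_{t|t-1} + \bP _{t|t-1} \bH ^{\prime }_ t \bD _ t^{-1} \textbf{e}_ t \\
\bP _{t|t} & = & \bP _{t|t-1} - \bP _{t|t-1} \bH ^{\prime }_ t \bD _ t^{-1} \bH _ t \bP _{t|t-1} \\
\textbf{z}_{t+1|t} & = & \textbf{a}_ t + \bF _ t \textbf{z}_{t|t-1} + \bK _ t \textbf{e}_ t \\
\bP _{t+1|t} & = & \bF _ t \bP _{t|t-1} \bF ^{\prime }_ t + \bV _ t - \bK _ t \bD _ t \bK ^{\prime }_ t
\end{eqnarray*}

where the term $\bG _ t$ in the gain matrix accounts for possible correlation between the transition and observation noise. The $k$-step prediction $\textbf{z}_{t+k|t}$ and its covariance matrix are obtained by iterating the last two equations with the gain term omitted, because no new observations are available beyond time $t$.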

The KALDFF call produces the one-step prediction of the state vector and of the unobserved random vector $\delta $, as well as their covariance matrices. The KALDFS call computes the smoothed estimate $\textbf{z}_{t|T}$ and its covariance matrix $\bP _{t|T}$.
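
As a language-neutral illustration of the non-diffuse recursions sketched above (it is not a description of the IML subroutines or their argument lists), the following minimal NumPy code implements the forward filter and the backward smoother, assuming time-invariant system matrices, uncorrelated noises ($\bG _ t = 0$), and a proper initial state distribution; the function names and the toy data are hypothetical.

import numpy as np

def kalman_filter(y, a, F, b, H, V, R, z0, P0):
    """Forward recursions: one-step predictions z_{t|t-1}, P_{t|t-1} and
    filtered estimates z_{t|t}, P_{t|t}; (z0, P0) play the role of
    (z_{1|0}, P_{1|0})."""
    T, Nz = y.shape[0], F.shape[0]
    z_pred = np.zeros((T, Nz)); P_pred = np.zeros((T, Nz, Nz))
    z_filt = np.zeros((T, Nz)); P_filt = np.zeros((T, Nz, Nz))
    z, P = z0, P0
    for t in range(T):
        z_pred[t], P_pred[t] = z, P
        e = y[t] - b - H @ z                         # innovation e_t
        Dinv = np.linalg.inv(H @ P @ H.T + R)        # inverse innovation covariance
        z_filt[t] = z + P @ H.T @ Dinv @ e           # z_{t|t}
        P_filt[t] = P - P @ H.T @ Dinv @ H @ P       # P_{t|t}
        z = a + F @ z_filt[t]                        # z_{t+1|t}
        P = F @ P_filt[t] @ F.T + V                  # P_{t+1|t}
    return z_pred, P_pred, z_filt, P_filt

def kalman_smoother(y, F, b, H, R, z_pred, P_pred):
    """Backward recursions: smoothed estimates z_{t|T}, P_{t|T}."""
    T, Nz = z_pred.shape
    z_sm = np.zeros((T, Nz)); P_sm = np.zeros((T, Nz, Nz))
    r, N = np.zeros(Nz), np.zeros((Nz, Nz))
    for t in reversed(range(T)):
        e = y[t] - b - H @ z_pred[t]
        Dinv = np.linalg.inv(H @ P_pred[t] @ H.T + R)
        K = F @ P_pred[t] @ H.T @ Dinv               # one-step-ahead gain
        L = F - K @ H
        r = H.T @ Dinv @ e + L.T @ r                 # weighted future innovations
        N = H.T @ Dinv @ H + L.T @ N @ L
        z_sm[t] = z_pred[t] + P_pred[t] @ r          # z_{t|T}
        P_sm[t] = P_pred[t] - P_pred[t] @ N @ P_pred[t]  # P_{t|T}
    return z_sm, P_sm

# Toy usage for the local level example given earlier (unit noise variances,
# vague but proper prior on the initial level):
rng = np.random.default_rng(0)
level = np.cumsum(rng.normal(size=50))
y = (level + rng.normal(size=50)).reshape(-1, 1)
I1 = np.eye(1)
zp, Pp, zf, Pf = kalman_filter(y, np.zeros(1), I1, np.zeros(1), I1, I1, I1,
                               np.zeros(1), 10.0 * I1)
zs, Ps = kalman_smoother(y, I1, np.zeros(1), I1, I1, zp, Pp)

The backward pass follows the usual state-smoothing recursion, in which the vector r accumulates weighted future innovations, so the one-step predictions from the forward pass must be kept for the smoother.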