Time Series Analysis and Examples


Getting Started

The measurement (or observation) equation can be written as

\[ \mb{y}_t = \mb{b}_t + \bH_t \mb{z}_t + \epsilon_t \]

where $\mb{b}_t$ is an $N_y \times 1$ vector, $\bH_t$ is an $N_y \times N_z$ matrix, the sequence of observation noise $\epsilon_t$ is independent, $\mb{z}_t$ is an $N_z \times 1$ state vector, and $\mb{y}_t$ is an $N_y \times 1$ observed vector.

The transition (or state) equation is denoted as a first-order Markov process of the state vector,

\[ \mb{z}_{t+1} = \mb{a}_t + \bF_t \mb{z}_t + \eta_t \]

where $\mb{a}_t$ is an $N_z \times 1$ vector, $\bF_t$ is an $N_z \times N_z$ transition matrix, and the sequence of transition noise $\eta_t$ is independent. This equation is often called a shifted transition equation because the state vector is shifted forward one time period. The transition equation can also be written by using the alternative specification

\[ \mb{z}_t = \mb{a}_t + \bF_t \mb{z}_{t-1} + \eta_t \]

There is no real difference between the shifted transition equation and this alternative equation if the observation noise and transition equation noise are uncorrelated; that is, $E(\eta_t \epsilon^{\prime}_t) = 0$. It is assumed that

\begin{eqnarray*} E(\eta_t \eta^{\prime}_s) & = & \bV_t \delta_{ts} \\ E(\epsilon_t \epsilon^{\prime}_s) & = & \bR_t \delta_{ts} \\ E(\eta_t \epsilon^{\prime}_s) & = & \bG_t \delta_{ts} \end{eqnarray*}

where

\[ \delta_{ts} = \left\{ \begin{array}{ll} 1 & \mbox{ if } t = s \\ 0 & \mbox{ if } t \neq s \end{array} \right. \]
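To make the notation concrete, the following is a minimal NumPy sketch (an illustration added to this text, not part of the IML library) that simulates a time-invariant version of the measurement and transition equations above, assuming Gaussian noise and $\bG_t = 0$ (uncorrelated measurement and transition noise); simulate_ssm is a hypothetical helper name used only for this example.

```python
import numpy as np

def simulate_ssm(T, a, F, b, H, V, R, z0, seed=0):
    """Simulate y_t = b + H z_t + eps_t and z_{t+1} = a + F z_t + eta_t
    with eta_t ~ N(0, V), eps_t ~ N(0, R), assuming E(eta_t eps_t') = 0.
    Illustrative sketch only; the IML routines allow time-varying matrices."""
    rng = np.random.default_rng(seed)
    nz, ny = len(a), len(b)
    z = np.asarray(z0, dtype=float)
    ys, zs = [], []
    for _ in range(T):
        eps = rng.multivariate_normal(np.zeros(ny), R)
        y = b + H @ z + eps                      # measurement equation
        eta = rng.multivariate_normal(np.zeros(nz), V)
        z_next = a + F @ z + eta                 # shifted transition equation
        ys.append(y)
        zs.append(z)
        z = z_next
    return np.array(ys), np.array(zs)

# Example: a local level model (N_y = N_z = 1), values chosen arbitrarily
y, z = simulate_ssm(100, a=np.zeros(1), F=np.eye(1), b=np.zeros(1),
                    H=np.eye(1), V=0.5*np.eye(1), R=1.0*np.eye(1),
                    z0=np.zeros(1))
```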

De Jong (1991) proposed a diffuse Kalman filter that can handle an arbitrarily large initial state covariance matrix. The diffuse initial state assumption is reasonable when there is parameter uncertainty or the SSM is nonstationary. The SSM of the diffuse Kalman filter is written as

\begin{eqnarray*} \mb{y}_t & = & \bX_t \beta + \bH_t \mb{z}_t + \epsilon_t \\ \mb{z}_{t+1} & = & \bW_t \beta + \bF_t \mb{z}_t + \eta_t \\ \mb{z}_0 & = & \mb{a} + \bA \delta \\ \beta & = & \mb{b} + \bB \delta \end{eqnarray*}

where $\delta$ is a random variable with a mean of $\mu$ and a variance of $\sigma^2\Sigma$. When $\Sigma \rightarrow \infty$, the SSM is said to be diffuse.
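As an illustration (added here; it is not an example taken from De Jong 1991), a nonstationary local level model with a completely unknown initial level and no regression effects can be cast in this form with $\bX_t = 0$, $\bW_t = 0$, $\mb{a} = 0$, and $\bA = 1$:

\begin{eqnarray*} y_t & = & z_t + \epsilon_t \\ z_{t+1} & = & z_t + \eta_t \\ z_0 & = & 0 + 1 \cdot \delta \end{eqnarray*}

so that letting $\Sigma \rightarrow \infty$ expresses complete uncertainty about the initial level.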

The KALCVF call computes the one-step prediction $\mb{z}_{t+1|t}$ and the filtered estimate $\mb{z}_{t|t}$, together with their covariance matrices $\bP_{t+1|t}$ and $\bP_{t|t}$, by using forward recursions. You can also obtain the $k$-step prediction $\mb{z}_{t+k|t}$ and its covariance matrix $\bP_{t+k|t}$ from the KALCVF call. The KALCVS call uses backward recursions to compute the smoothed estimate $\mb{z}_{t|T}$ and its covariance matrix $\bP_{t|T}$ when there are $T$ observations in the complete data.
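As a rough illustration of these forward and backward recursions (a NumPy sketch under simplifying assumptions, not the IML routines themselves), the hypothetical helpers below compute $\mb{z}_{t|t}$, $\bP_{t|t}$, $\mb{z}_{t+1|t}$, and $\bP_{t+1|t}$ in a forward pass and then $\mb{z}_{t|T}$ and $\bP_{t|T}$ in a fixed-interval (Rauch-Tung-Striebel) backward pass. The sketch assumes time-invariant system matrices and $\bG_t = 0$; the IML calls handle the general case, and KALCVS may organize its backward recursion differently.

```python
import numpy as np

def kalman_forward(y, a, F, b, H, V, R, z0, P0):
    """Forward recursion sketch: filtered z_{t|t}, P_{t|t} and one-step
    predictions z_{t+1|t}, P_{t+1|t}, assuming G_t = 0 (uncorrelated noises).
    Not the IML KALCVF routine; an illustrative implementation only."""
    z_pred, P_pred = np.asarray(z0, float).copy(), np.asarray(P0, float).copy()
    filt, vfilt, pred, vpred = [], [], [], []
    for yt in y:
        # Filtering (measurement update): z_{t|t}, P_{t|t}
        S = H @ P_pred @ H.T + R                    # innovation covariance
        K = P_pred @ H.T @ np.linalg.inv(S)         # Kalman gain
        z_filt = z_pred + K @ (yt - b - H @ z_pred)
        P_filt = P_pred - K @ H @ P_pred
        # One-step prediction: z_{t+1|t}, P_{t+1|t}
        z_pred = a + F @ z_filt
        P_pred = F @ P_filt @ F.T + V
        filt.append(z_filt); vfilt.append(P_filt)
        pred.append(z_pred); vpred.append(P_pred)
    return np.array(filt), np.array(vfilt), np.array(pred), np.array(vpred)

def rts_smoother(filt, vfilt, pred, vpred, F):
    """Backward recursion sketch for z_{t|T}, P_{t|T} (fixed-interval
    smoothing), using z_{t|t}, P_{t|t} and z_{t+1|t}, P_{t+1|t} from the
    forward pass. Illustrative only; not the IML KALCVS routine."""
    T = len(filt)
    sm, vsm = [None] * T, [None] * T
    sm[-1], vsm[-1] = filt[-1], vfilt[-1]           # z_{T|T}, P_{T|T}
    for t in range(T - 2, -1, -1):
        J = vfilt[t] @ F.T @ np.linalg.inv(vpred[t])    # smoothing gain
        sm[t] = filt[t] + J @ (sm[t + 1] - pred[t])
        vsm[t] = vfilt[t] + J @ (vsm[t + 1] - vpred[t]) @ J.T
    return np.array(sm), np.array(vsm)
```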

The KALDFF call produces the one-step prediction of the state vector and of the unobserved random vector $\delta$, along with their covariance matrices. The KALDFS call computes the smoothed estimate $\mb{z}_{t|T}$ and its covariance matrix $\bP_{t|T}$.
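De Jong's exact diffuse recursions augment the filter with additional quantities that track $\delta$. A common rough approximation (shown below purely as an illustration, not as the KALDFF algorithm) is to run the ordinary forward filter from the earlier sketch with a very large initial state covariance:

```python
# "Big kappa" approximation to a diffuse initial state: start the ordinary
# filter sketch (kalman_forward above) with a huge initial covariance instead
# of carrying the diffuse component exactly as KALDFF does.
kappa = 1e7
nz = 1
z0 = np.zeros(nz)
P0 = kappa * np.eye(nz)
filt, vfilt, pred, vpred = kalman_forward(
    y,                      # simulated local level data from the first sketch
    a=np.zeros(nz), F=np.eye(nz), b=np.zeros(1),
    H=np.eye(1), V=0.5 * np.eye(nz), R=np.eye(1), z0=z0, P0=P0)
```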