The SSM Procedure (Experimental)

Likelihood, Filtering, and Smoothing

The Kalman filter and smoother (KFS) algorithm is the main computational tool for using SSMs for data analysis. This subsection briefly describes the basic quantities generated by this algorithm and their relationship to the output generated by the SSM procedure. For proper treatment of SSMs with a diffuse initial condition or when regression variables are present, a modified version of the traditional KFS, called the diffuse Kalman filter and smoother (DKFS), is needed. A good discussion of the different variants of the traditional and diffuse KFS can be found in Durbin and Koopman (2001). The DKFS implemented in the SSM procedure closely follows the treatment in de Jong and Chu-Chun-Lin (2003). Additional details can be found in these references.

The state space model equations (see the section State Space Model and Notation) imply that the combined response data vector $\mb {Y} = (\mb {Y}_{1}, \mb {Y}_{2}, \ldots , \mb {Y}_{n})$ has a Gaussian probability distribution. This probability distribution is proper if $d$, the dimension of the diffuse vector $\pmb {\delta }$ in the initial condition, is 0 and if $k$, the number of regression variables in the observation equation, is also 0 (the regression parameter $\pmb {\beta }$ is also treated as a diffuse vector). Otherwise, this probability distribution is improper. The KFS algorithm is a combination of two iterative phases: a forward pass through the data, called filtering, and a backward pass through the data, called smoothing, that uses the quantities generated during filtering. One advantage of using the SSM formulation to analyze time series data is its ability to handle missing values in the response variables. The KFS algorithm appropriately handles missing values in $\mb {Y}$. For additional information about how PROC SSM handles missing values, see the section Missing Values.

Filtering Pass

The filtering pass sequentially computes the quantities shown in Table 27.5 for $t = 1, 2, \ldots , n$ and $i = 1, 2, \ldots , q*p_{t}$.

Table 27.5: KFS: Filtering Phase

$\hat{y}_{t, i} = \mr {E}( y_{t, i} | y_{t, i-1}, \ldots , y_{t, 1}, \mb {Y}_{t-1}, \ldots , \mb {Y}_{1} )$: One-step-ahead prediction of the response values

$\nu _{t,i} = y_{t, i} - \hat{y}_{t, i}$: One-step-ahead prediction residuals

$F_{t, i} = \mr {Var}( y_{t, i} | y_{t, i-1}, \ldots , y_{t, 1}, \mb {Y}_{t-1}, \ldots , \mb {Y}_{1} )$: Variance of the one-step-ahead prediction

$\hat{\pmb {\alpha }}_{t, i} = \mr {E}( \pmb {\alpha }_{t} | y_{t, i-1}, \ldots , y_{t, 1}, \mb {Y}_{t-1}, \ldots , \mb {Y}_{1} )$: One-step-ahead prediction of the state vector

$\mb {P}_{t, i} = \mr {Cov}( \pmb {\alpha }_{t} | y_{t, i-1}, \ldots , y_{t, 1}, \mb {Y}_{t-1}, \ldots , \mb {Y}_{1} )$: Covariance of $\hat{\pmb {\alpha }}_{t, i}$

$\mb {b}_{t, i}$: $(d + k)$-dimensional vector

$\mb {S}_{t, i}$: $(d + k)$-dimensional symmetric matrix

$\binom { \hat{\pmb {\delta }}}{ \hat{\pmb {\beta }}}_{t,i} = \mb {S}_{t, i}^{-1}\mb {b}_{t, i}$: Estimate of $\pmb {\delta }$ and $\pmb {\beta }$ by using the data up to $(t,i)$

$\mb {S}_{t, i}^{-1}$: Covariance of $\binom { \hat{\pmb {\delta }}}{ \hat{\pmb {\beta }}}_{t,i}$


Here the notation $\mr {E}( y_{t, i} | y_{t, i-1}, \ldots , y_{t, 1}, \mb {Y}_{t-1}, \ldots , \mb {Y}_{1} )$ denotes the conditional expectation of $y_{t, i}$ given the history up to the index $(t, i-1)$: $(y_{t, i-1}, \ldots , y_{t, 1}, \mb {Y}_{t-1}, \ldots , \mb {Y}_{1})$. Similarly, $\mr {Var}( y_{t, i} | y_{t, i-1}, \ldots , y_{t, 1}, \mb {Y}_{t-1}, \ldots , \mb {Y}_{1} )$ denotes the corresponding conditional variance. The residual $\nu _{t,i} = y_{t, i} - \hat{y}_{t, i}$ is set to missing whenever $y_{t,i}$ is missing. Note that the $\hat{y}_{t, i}$ are one-step-ahead forecasts only when the model has a single response variable and the data form a time series; in all other cases it is more appropriate to call them one-measurement-ahead forecasts (because the next measurement might be at the same time point). Despite this, the $\hat{y}_{t, i}$ are called one-step-ahead predictions (and the $\nu _{t,i}$ are called one-step-ahead residuals) throughout this document. In the diffuse case, the conditional expectations must be appropriately interpreted. The vector $\mb {b}_{t, i}$ and the matrix $\mb {S}_{t, i}$ contain accumulated quantities that are needed for the estimation of $\pmb {\delta }$ and $\pmb {\beta }$. Of course, when $(d+k) = 0$ (the nondiffuse case), these quantities are not needed. In the diffuse case, because the matrix $\mb {S}_{t, i}$ is accumulated sequentially (starting at $t=1, i=1$), it might not become invertible until some $t = t_{*}, i = i_{*}$. The filtering process is called initialized after $t = t_{*}, i = i_{*}$. In some situations, this initialization might not happen even after the entire sample is processed—that is, the filtering process remains uninitialized. This can happen if the regression variables are collinear or if the data are not sufficient, for some other reason, to estimate the initial condition $\pmb {\delta }$.
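As an illustration of the filtering recursion, the following Python sketch performs one filtering step of the standard nondiffuse Kalman filter for a model that has a single response variable per time point. Here Z, T, Q, and H stand for the observation vector, the state transition matrix, the state disturbance covariance, and the observation error variance; these names, and the code itself, are illustrative rather than a description of the DKFS implementation in PROC SSM.

```python
import numpy as np

def filter_step(a, P, y, Z, H, T, Q):
    """One nondiffuse Kalman filter step for a single measurement y.
    a, P: one-step-ahead state prediction and its covariance at time t.
    Returns the prediction of y, the residual nu, its variance F, and
    the state prediction for time t+1. A missing y (NaN) is skipped."""
    yhat = (Z @ a)[0]                # one-step-ahead prediction of y
    F = (Z @ P @ Z.T)[0, 0] + H      # variance of the prediction
    if np.isnan(y):
        nu = np.nan                  # missing response: no update
    else:
        nu = y - yhat                # one-step-ahead residual
        K = (P @ Z.T) / F            # Kalman gain
        a = a + K[:, 0] * nu         # measurement update of the state
        P = P - K @ (Z @ P)          # and of its covariance
    a_next = T @ a                   # time update: predict the state
    P_next = T @ P @ T.T + Q         # at the next time point
    return yhat, nu, F, a_next, P_next
```

When a time point carries more than one measurement ($q*p_{t} > 1$), the measurement update is applied sequentially to each measurement before the time update; the diffuse algorithm additionally propagates $\mb {b}_{t, i}$ and $\mb {S}_{t, i}$.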

The filtering process is used for a variety of purposes. One important use of filtering is to compute the likelihood of the data. In the model-fitting phase, the unknown model parameters $\pmb {\theta }$ are estimated by maximum likelihood. This requires repeated evaluation of the likelihood at different trial values of $\pmb {\theta }$. After $\pmb {\theta }$ is estimated, it is treated as a known vector. The filtering process is used again with the fitted model in the forecasting phase, when the one-step-ahead forecasts and residuals based on the fitted model are provided. In addition, this filtering output is needed by the smoothing phase to produce the full-sample component estimates.

Likelihood Computation and Model Fitting Phase

In view of the Gaussian nature of the response vector, the likelihood of $ \mb {Y}$, $ \mb {L}( \mb {Y}, \pmb {\theta } )$, can be computed by using the prediction-error decomposition, which leads to the formula

\[  -2 \log \mb {L}( \mb {Y}, \pmb {\theta } ) = N_{0} \log 2 \pi + \sum _{t=1}^{n} \sum _{i=1}^{q*p_{t}} \left( \log F_{t, i} + \frac{\nu _{t,i}^{2}}{F_{t, i}} \right) - \log ( | \mb {S}_{n, p_{n}}^{-1} | ) - \mb {b}_{n, p_{n}}^{\prime } \mb {S}_{n, p_{n}}^{-1} \mb {b}_{n, p_{n}}  \]

where $N_{0} = (N - k - d)$, $| \mb {S}_{n, p_{n}}^{-1} |$ denotes the determinant of $\mb {S}_{n, p_{n}}^{-1}$, and $\mb {b}_{n, p_{n}}^{\prime }$ denotes the transpose of the column vector $\mb {b}_{n, p_{n}}$. In the preceding formula, the terms that are associated with missing response values $y_{t,i}$ are excluded, and $N$ denotes the total number of nonmissing response values in the sample. If $\mb {S}_{n, p_{n}}$ is not invertible, then a generalized inverse is used in place of $\mb {S}_{n, p_{n}}^{-1}$, and $| \mb {S}_{n, p_{n}}^{-1} |$ is computed based on the nonzero eigenvalues of $\mb {S}_{n, p_{n}}$. Moreover, in this case $N_{0} = N - \mr {Rank}(\mb {S}_{n, p_{n}})$. When $\mb {Y}$ has a proper distribution (that is, when $(d+k) = 0$), the terms that involve $\mb {S}_{n, p_{n}}$ and $\mb {b}_{n, p_{n}}$ are absent and the preceding likelihood is proper. Otherwise, it is called the diffuse likelihood or the restricted likelihood.
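The following Python sketch evaluates this expression from stored filtering output, under the conventions just described; the function and variable names are hypothetical, and a generalized inverse replaces $\mb {S}_{n, p_{n}}^{-1}$ when $\mb {S}_{n, p_{n}}$ is singular.

```python
import numpy as np

def minus2_loglik(nu, F, b=None, S=None):
    """-2 log L via the prediction-error decomposition. nu and F hold the
    one-step-ahead residuals and variances (NaN marks a missing response);
    b and S are the diffuse accumulators (omit them when d + k = 0)."""
    ok = ~np.isnan(nu)                 # exclude terms for missing responses
    N = int(ok.sum())
    pe = np.sum(np.log(F[ok]) + nu[ok] ** 2 / F[ok])
    if S is None:                      # proper (nondiffuse) likelihood
        return N * np.log(2 * np.pi) + pe
    Sinv = np.linalg.pinv(S)           # generalized inverse if S is singular
    eig = np.linalg.eigvalsh(S)
    logdetS = np.sum(np.log(eig[eig > 1e-10]))  # nonzero eigenvalues only;
    N0 = N - np.linalg.matrix_rank(S)           # note -log|S^{-1}| = +log|S|
    return N0 * np.log(2 * np.pi) + pe + logdetS - b @ Sinv @ b
```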

When the model specification contains any unknown parameters $\pmb {\theta }$, they are estimated by maximizing the preceding likelihood function. This is done by using a nonlinear optimization process that involves repeated evaluation of $\mb {L}( \mb {Y}, \pmb {\theta } )$ at different values of $\pmb {\theta }$. The maximum likelihood (ML) estimate of $\pmb {\theta }$ is denoted by $\hat{\pmb {\theta }}$. When the restricted likelihood is used for computing $\hat{\pmb {\theta }}$, the estimate is called the restricted maximum likelihood (REML) estimate. Approximate standard errors of $\hat{\pmb {\theta }}$ are computed by taking the square root of the diagonal elements of its (approximate) covariance matrix. This covariance is computed as $-\mb {H}^{-1}$, where $\mb {H}$ is the Hessian (the matrix of second-order partial derivatives) of $\log \mb {L}( \mb {Y}, \pmb {\theta } )$ evaluated at the optimum $\hat{\pmb {\theta }}$.
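In outline, this optimization can be sketched in Python as follows; neg_loglik is a hypothetical wrapper that runs the filtering pass at a trial $\pmb {\theta }$ and returns $-\log \mb {L}( \mb {Y}, \pmb {\theta } )$. PROC SSM's optimizer and Hessian computation differ in detail.

```python
import numpy as np
from scipy.optimize import minimize

def fit(neg_loglik, theta0):
    """Maximize log L by minimizing -log L, starting from theta0.
    neg_loglik is a hypothetical function that runs the filtering pass
    at theta and returns -log L(Y, theta)."""
    res = minimize(neg_loglik, theta0, method="BFGS")
    theta_hat = res.x
    # res.hess_inv approximates the inverse Hessian of -log L, which equals
    # -H^{-1} for H the Hessian of log L: the approximate covariance matrix
    se = np.sqrt(np.diag(res.hess_inv))
    return theta_hat, se
```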

Let $\mi {dim}(\theta )$ denote the dimension of the parameter vector $\pmb {\theta }$. After the parameter estimation is completed, a table called Likelihood Computation Summary is printed. It summarizes the likelihood calculations at $\hat{\pmb {\theta }}$, as shown in Table 27.6.

Table 27.6: Likelihood Computation Summary

Nonmissing response values used: $N$

Estimated parameters: $\mi {dim}(\theta )$

Initialized diffuse state elements: $\mr {Rank}(\mb {S}_{n, p_{n}})$

Normalized residual sum of squares: $\sum _{t=1}^{n} \sum _{i=1}^{q*p_{t}} \frac{\nu _{t,i}^{2}}{F_{t, i}} - \mb {b}_{n, p_{n}}^{\prime } \mb {S}_{n, p_{n}}^{-1} \mb {b}_{n, p_{n}}$

Full log likelihood: $\log \mb {L}( \mb {Y}, \hat{\pmb {\theta }} )$


In addition, the Likelihood Based Information Criteria table reports a variety of information-based criteria, which are functions of $-2 \log \mb {L}( \mb {Y}, \hat{\pmb {\theta }} )$, $N_{0}$, and $\mi {dim}(\theta )$. Table 27.7 summarizes the reported information criteria in smaller-is-better form:

Table 27.7: Information Criteria

AIC: $-2 \log \mb {L} + 2 \mi {dim}(\theta )$ (Akaike 1974)

AICC: $-2 \log \mb {L} + 2 \mi {dim}(\theta ) N_{0}/(N_{0} - \mi {dim}(\theta ) - 1)$ (Hurvich and Tsai 1989; Burnham and Anderson 1998)

HQIC: $-2 \log \mb {L} + 2 \mi {dim}(\theta ) \log \log (N_{0})$ (Hannan and Quinn 1979)

BIC: $-2 \log \mb {L} + \mi {dim}(\theta ) \log (N_{0})$ (Schwarz 1978)

CAIC: $-2 \log \mb {L} + \mi {dim}(\theta ) (\log (N_{0}) + 1)$ (Bozdogan 1987)
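For reference, the following Python sketch computes all five criteria exactly as tabulated above from $-2 \log \mb {L}$, $N_{0}$, and $\mi {dim}(\theta )$ (argument names are illustrative).

```python
import numpy as np

def info_criteria(m2ll, n0, p):
    """Smaller-is-better criteria from m2ll = -2 log L, effective sample
    size n0 = N - d - k, and parameter count p = dim(theta)."""
    return {
        "AIC":  m2ll + 2 * p,
        "AICC": m2ll + 2 * p * n0 / (n0 - p - 1),
        "HQIC": m2ll + 2 * p * np.log(np.log(n0)),
        "BIC":  m2ll + p * np.log(n0),
        "CAIC": m2ll + p * (np.log(n0) + 1),
    }
```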


Forecasting Phase

After the model-fitting phase, the filtering process is repeated with the fitted model to produce the model-based one-step-ahead response variable forecasts ($\hat{y}_{t, i}$), residuals ($\nu _{t,i}$), and their standard errors ($\sqrt {F_{t, i}}$). In addition, one-step-ahead forecasts of the components that are specified in the MODEL statements, and of any other user-defined linear combinations of $\pmb {\alpha }_{t}$, are also produced. These forecasts are set to missing as long as $t < t_{*}$ (that is, until the filtering process is initialized). If the filtering process remains uninitialized, then all the quantities that are related to the one-step-ahead forecasts (such as $\hat{y}_{t, i}$ and $\nu _{t,i}$) are reported as missing. When the fitted model is appropriate, the one-step-ahead residuals $\nu _{t,i}$ form a sequence of uncorrelated normal variates; this fact can be used for model diagnostics, for example as sketched below.
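A minimal Python sketch of such a diagnostic check follows: it standardizes the residuals and computes their first few sample autocorrelations, which should be close to zero when the fitted model is adequate. The function is illustrative and is not part of PROC SSM output.

```python
import numpy as np

def residual_diagnostics(nu, F, nlags=12):
    """Standardized one-step-ahead residuals and their first sample
    autocorrelations; under an adequate model the residuals behave like
    i.i.d. N(0, 1) draws, so the autocorrelations should be near zero."""
    e = nu / np.sqrt(F)
    e = e[~np.isnan(e)]                # drop entries for missing responses
    e0 = e - e.mean()
    denom = np.sum(e0 ** 2)
    acf = np.array([np.sum(e0[:-lag] * e0[lag:]) / denom
                    for lag in range(1, nlags + 1)])
    return e, acf
```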

Smoothing Phase

After the filtering phase of KFS produces the one-step-ahead predictions of the response variables and the underlying state vectors, the smoothing phase of KFS produces the full-sample versions of these quantities—that is, rather than using the history up to $(t, i-1)$, the entire sample $\mb {Y}$ is used. The smoothing phase of KFS is a backward algorithm, which begins at $t = n$ and $i = q * p_{n}$ and goes back toward $t=1$ and $i=1$. It produces the following quantities:

Table 27.8: KFS: Smoothing Phase

$\tilde{y}_{t, i} = \mr {E}( y_{t, i} | \mb {Y} )$: Interpolated response value

$\tilde{F}_{t, i} = \mr {Var}( y_{t, i} | \mb {Y} )$: Variance of the interpolated response value

$\tilde{\pmb {\alpha }}_{t} = \mr {E}( \pmb {\alpha }_{t} | \mb {Y} )$: Full-sample estimate of the state vector

$\tilde{\mb {P}}_{t} = \mr {Cov}( \pmb {\alpha }_{t} | \mb {Y} )$: Covariance of $\tilde{\pmb {\alpha }}_{t}$

$\binom { \tilde{\pmb {\delta }}}{ \tilde{\pmb {\beta }}} = \mb {S}_{n, p_{n}}^{-1}\mb {b}_{n, p_{n}}$: Full-sample estimate of $\pmb {\delta }$ and $\pmb {\beta }$

$\mb {S}_{n, p_{n}}^{-1}$: Covariance of $\binom { \tilde{\pmb {\delta }}}{ \tilde{\pmb {\beta }}}$

$\mr {AO}_{t, i} = y_{t, i} - \mr {E}( y_{t, i} | \mb {Y}^{t,i} )$: Estimate of additive outlier

$\mr {g}_{t, i}$: Variance of $\mr {AO}_{t, i}$

${\rho _{t}^{*}}^{2}$: Maximal state shock chi-square statistic
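The following Python sketch shows the shape of this backward pass for the nondiffuse case with a single response variable, following the standard fixed-interval smoothing recursion in Durbin and Koopman (2001). The inputs are assumed to be stored filtering outputs, and the diffuse recursion that PROC SSM implements carries additional terms for $\pmb {\delta }$ and $\pmb {\beta }$.

```python
import numpy as np

def smooth(a_pred, P_pred, nu, F, K, Z, T):
    """Backward smoothing pass (nondiffuse, single response variable).
    a_pred[t], P_pred[t]: one-step-ahead state predictions from filtering;
    nu[t], F[t], K[t]: residual, its variance, and the (m, 1) filter gain."""
    n, m = a_pred.shape
    r, N = np.zeros(m), np.zeros((m, m))
    alpha_s, P_s = np.empty_like(a_pred), np.empty_like(P_pred)
    for t in range(n - 1, -1, -1):
        if np.isnan(nu[t]):            # missing response adds no information
            r = T.T @ r
            N = T.T @ N @ T
        else:
            L = T - (T @ K[t]) @ Z     # K[t]: (m, 1) gain, Z: (1, m)
            r = Z.T.ravel() * (nu[t] / F[t]) + L.T @ r
            N = (Z.T @ Z) / F[t] + L.T @ N @ L
        alpha_s[t] = a_pred[t] + P_pred[t] @ r          # full-sample estimate
        P_s[t] = P_pred[t] - P_pred[t] @ N @ P_pred[t]  # and its covariance
    return alpha_s, P_s
```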


Note that if ${y}_{t, i}$ is not missing, then $\tilde{y}_{t, i} = \mr {E}( y_{t, i} | \mb {Y} ) = {y}_{t, i}$ and $\tilde{F}_{t, i} = \mr {Var}( y_{t, i} | \mb {Y} ) = 0$ because ${y}_{t, i}$ is completely known, given $\mb {Y}$. Therefore, $\tilde{y}_{t, i}$ provides nontrivial information only when ${y}_{t, i}$ is missing, in which case $\tilde{y}_{t, i}$ represents the best estimate of ${y}_{t, i}$ based on the available data. The full-sample estimates of components that are specified in the model equations are based on the corresponding linear combinations of $\tilde{\pmb {\alpha }}_{t}$. Similarly, their standard errors are computed by using appropriate functions of $\tilde{\mb {P}}_{t}$.

The estimate of the additive outlier, $\mr {AO}_{t, i} = y_{t, i} - \mr {E}( y_{t, i} | \mb {Y}^{t,i} )$, is the difference between the observed response value $y_{t, i}$ and its prediction based on all the data except $y_{t, i}$; this reduced data set is denoted by $\mb {Y}^{t,i}$. The estimate $\mr {AO}_{t, i}$ is missing when $y_{t, i}$ is missing. $\mr {AO}_{t, i}$ is also called the prediction error, as opposed to the one-step-ahead residual $\nu _{t,i}$. Similar to $\nu _{t,i}$, the prediction errors can be used to check the model adequacy. The prediction errors are normally distributed; however, unlike $\nu _{t,i}$, they are not serially uncorrelated. You can request the printing of the prediction error sum of squares (PRESS) by specifying the PRESS option in the OUTPUT statement.

The maximal state shock chi-square statistic, ${\rho _{t}^{*}}^{2}$, is computed at each distinct time point and is described in de Jong and Penzer (1998) (the second term on the right-hand side of Equation 14). Loosely speaking, ${\rho _{t}^{*}}^{2}$ is a measure of the magnitude of unexpected change in the underlying state at time $t$. A large value of ${\rho _{t}^{*}}^{2}$, which follows a chi-square distribution with degrees of freedom equal to $m$ (the state size), can signify a change in the data generation mechanism at time $t$. For more information about the computation, precise definitions of additive outliers and maximal state shocks, and their use in the detection of structural change in the observation process, see de Jong and Penzer (1998). The computation of ${\rho _{t}^{*}}^{2}$ can be expensive when the state size is large and is not done by default. You can turn on its computation by specifying the MAXSHOCK option in the OUTPUT statement.
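Because ${\rho _{t}^{*}}^{2}$ is referred to a chi-square distribution with $m$ degrees of freedom, time points that are worth investigating can be screened as in the following Python sketch; the rho2 array stands for the MAXSHOCK output values, and the names are illustrative.

```python
import numpy as np
from scipy.stats import chi2

def flag_state_shocks(rho2, m, level=0.01):
    """Indices of time points whose maximal state shock statistic exceeds
    the chi-square critical value with m (the state size) degrees of
    freedom at the given significance level."""
    threshold = chi2.ppf(1.0 - level, df=m)
    return np.flatnonzero(np.asarray(rho2) > threshold), threshold
```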

If the filtering process remains uninitialized until the end of the sample (that is, if $\mb {S}_{n, p_{n}}$ is not invertible), some linear combinations of $\pmb {\delta }$ and $\pmb {\beta }$ are not estimable. This, in turn, implies that some linear combinations of $\pmb {\alpha }_{t}$ are also inestimable. These inestimable quantities are reported as missing. For more information about the estimability of the state effects, see Selukar (2010).