The UCM Procedure

The UCMs as State Space Models

The UCMs considered in PROC UCM can be thought of as special cases of more general models, called (linear) Gaussian state space models (GSSM). A GSSM can be described as follows:

\begin{eqnarray*}  y_ t &  = &  Z_ t \alpha _ t \\ \alpha _{t+1} &  = &  T_ t \alpha _ t + \zeta _{t+1} , \;  \;  \;  \;  \;  \zeta _ t \sim \textrm{N} ( 0, Q_ t ) \\ \alpha _1 &  \sim &  \textrm{N} ( 0, P ) \end{eqnarray*}

The first equation, called the observation equation, relates the response series $y_ t$ to a state vector $\alpha _ t$ that is usually unobserved. The second equation, called the state equation, describes the evolution of the state vector in time. The system matrices $Z_ t$ and $T_ t$ are of appropriate dimensions and are known, except possibly for some unknown elements that become part of the parameter vector of the model. The noise series $\zeta _ t$ consists of independent, zero-mean, Gaussian vectors with covariance matrices $Q_ t$. For most of the UCMs considered here, the system matrices $Z_ t$ and $T_ t$, and the noise covariances $Q_ t$, are time invariant—that is, they do not depend on time. In a few cases, however, some or all of them can depend on time. The initial state vector $\alpha _1$ is assumed to be independent of the noise series, and its covariance matrix $P$ can be partially diffuse. A random vector has a partially diffuse covariance matrix if it can be partitioned such that one part of the vector has a properly defined probability distribution, while the covariance matrix of the other part is infinite—that is, you have no prior information about this part of the vector. The covariance of the initial state $\alpha _1$ is assumed to have the following form:

\[  P = P_* + \kappa P_{\infty }  \]

where $P_*$ and $P_{\infty }$ are nonnegative definite, symmetric matrices and $\kappa $ is a constant that is assumed to be close to $\infty $. In the case of UCMs considered here, $P_{\infty }$ is always a diagonal matrix that consists of zeros and ones, and, if a particular diagonal element of $P_{\infty }$ is one, then the corresponding row and column in $P_*$ are zero.
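To make these equations concrete, the following Python/NumPy sketch simulates a small time-invariant GSSM and forms a partially diffuse initial covariance $P = P_* + \kappa P_{\infty }$ with a large finite $\kappa $. The matrices (a local level plus an irregular term) and the variance values are purely illustrative and are not tied to any particular PROC UCM specification.

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative time-invariant system: state = (irregular, level)
Z = np.array([1.0, 1.0])              # observation vector
T = np.array([[0.0, 0.0],
              [0.0, 1.0]])            # irregular resets each period; level is a random walk
Q = np.diag([0.5, 0.1])               # state noise covariances

# Partially diffuse initial covariance P = P_star + kappa * P_inf:
# the irregular has a proper prior, the level is diffuse.
P_star = np.diag([0.5, 0.0])
P_inf = np.diag([0.0, 1.0])
kappa = 1e8                           # stands in for "close to infinity"
P = P_star + kappa * P_inf

# Simulate n observations from the model (a proper proxy draw is used
# for the initial state, since a truly diffuse draw is not simulable)
n = 200
alpha = rng.multivariate_normal(np.zeros(2), P_star + P_inf)
y = np.empty(n)
for t in range(n):
    y[t] = Z @ alpha
    alpha = T @ alpha + rng.multivariate_normal(np.zeros(2), Q)
```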

The state space formulation of a UCM has many computational advantages. In this formulation there are convenient algorithms for estimating and forecasting the unobserved states $\{  \alpha _ t \} $ by using the observed series $\{  y_ t \} $. These algorithms also yield the in-sample and out-of-sample forecasts and the likelihood of $\{  y_ t \} $. The state space representation of a UCM is not necessarily unique. In the representation used here, the unobserved components in the UCM often appear as elements of the state vector. This makes the elements of the state interpretable and, more importantly, the sample estimates and forecasts of these unobserved components are easily obtained. For additional information about the computational aspects of state space modeling, see Durbin and Koopman (2001). Next, some notation is developed to describe the essential quantities computed during the analysis of state space models.

Let $\{  y_ t , t = 1, \ldots , n \} $ be the observed sample from a series that satisfies a state space model. Next, for $1 \leq t \leq n$, let the one-step-ahead forecasts of the series, the states, and their variances be defined as follows, using the usual notation to denote the conditional expectation and conditional variance:

\begin{eqnarray*}  \hat{\alpha }_ t &  = &  \textrm{E}( \alpha _ t | y_1 , y_2 , \ldots , y_{t-1} ) \nonumber \\ \Gamma _ t &  = &  \textrm{Var}( \alpha _ t | y_1 , y_2 , \ldots , y_{t-1} ) \nonumber \\ \hat{y}_ t &  = &  \textrm{E}( y_ t | y_1 , y_2 , \ldots , y_{t-1} ) \nonumber \\ F_ t &  = &  \textrm{Var}( y_ t | y_1 , y_2 , \ldots , y_{t-1} ) \nonumber \end{eqnarray*}

These are also called the filtered estimates of the series and the states. Similarly, for $t \geq 1$, let the following denote the full-sample estimates of the series and the state values at time $t$:

\begin{eqnarray*}  \tilde{\alpha }_ t &  = &  \textrm{E}( \alpha _ t | y_1 , y_2 , \ldots , y_ n ) \nonumber \\ \Delta _ t &  = &  \textrm{Var}( \alpha _ t | y_1 , y_2 , \ldots , y_ n ) \nonumber \\ \tilde{y}_ t &  = &  \textrm{E}( y_ t | y_1 , y_2 , \ldots , y_ n ) \nonumber \\ G_ t &  = &  \textrm{Var}( y_ t | y_1 , y_2 , \ldots , y_ n ) \nonumber \end{eqnarray*}

If the time $t$ is in the historical period—that is, if $1 \leq t \leq n$—then the full-sample estimates are called the smoothed estimates, and if $t$ lies in the future then they are called out-of-sample forecasts. Note that if $1 \leq t \leq n$, then $\tilde{y}_ t = y_ t$ and $G_ t = 0$, unless $y_ t$ is missing.

All the filtered and smoothed estimates ($\hat{\alpha }_ t , \tilde{\alpha }_ t , \ldots , G_ t$, and so on) are computed by using the Kalman filtering and smoothing (KFS) algorithm, which is an iterative process. If the initial state is diffuse, as is often the case for the UCMs, its treatment requires a modification of the traditional KFS, called the diffuse KFS (DKFS). The details of the DKFS implemented in the UCM procedure can be found in de Jong and Chu-Chun-Lin (2003). Additional information about state space models can be found in Durbin and Koopman (2001). The likelihood formulas described in this section are taken from the latter reference.
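The one-step-ahead quantities $\hat{y}_ t$, $F_ t$, and $\nu _ t$ are produced by the standard Kalman filter recursion. The following sketch implements a minimal nondiffuse filter for a time-invariant model; a proper finite-variance prior stands in for the diffuse initialization that PROC UCM actually performs, and the model matrices are illustrative.

```python
import numpy as np

def kalman_filter(y, Z, T, Q, a1, P1):
    """Return one-step-ahead forecasts yhat_t, variances F_t, residuals nu_t."""
    a, P = a1.astype(float), P1.astype(float)
    n = len(y)
    yhat, F, nu = np.empty(n), np.empty(n), np.empty(n)
    for t in range(n):
        yhat[t] = Z @ a                  # E(y_t | y_1, ..., y_{t-1})
        F[t] = Z @ P @ Z                 # Var(y_t | y_1, ..., y_{t-1})
        nu[t] = y[t] - yhat[t]           # one-step-ahead residual
        K = T @ P @ Z / F[t]             # Kalman gain (prediction-error form)
        a = T @ a + K * nu[t]            # next one-step-ahead state mean
        P = T @ P @ T.T + Q - np.outer(K, K) * F[t]
    return yhat, F, nu

# Local level + irregular example with a proper (nondiffuse) prior
Z = np.array([1.0, 1.0])
T = np.array([[0.0, 0.0], [0.0, 1.0]])
Q = np.diag([0.5, 0.1])
rng = np.random.default_rng(1)
y = np.cumsum(rng.normal(size=100))
yhat, F, nu = kalman_filter(y, Z, T, Q, np.zeros(2), np.eye(2) * 10.0)
```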

In the case of a diffuse initial condition, the effect of the improper prior distribution of $\alpha _1$ manifests itself in the first few filtering iterations. During these initial filtering iterations the distribution of the filtered quantities remains diffuse; that is, during these iterations the one-step-ahead series and state forecast variances $F_ t$ and $\Gamma _ t$ have the following form:

\begin{eqnarray*}  F_ t &  = &  F_{* t} + \kappa F_{\infty t} \nonumber \\ \Gamma _ t &  = &  \Gamma _{* t} + \kappa \Gamma _{\infty t} \nonumber \end{eqnarray*}

The actual number of iterations—say, $I$—affected by this improper prior depends on the nature of the vectors $Z_ t$, the number of nonzero diagonal elements of $P_{\infty }$, and the pattern of missing values in the dependent series. After $I$ iterations, $\Gamma _{\infty t}$ and $F_{\infty t}$ become zero and the one-step-ahead series and state forecasts have proper distributions. These first $I$ iterations constitute the initialization phase of the DKFS algorithm. The post-initialization phase of the DKFS is the same as the traditional KFS. In the state space modeling literature the pre-initialization and post-initialization phases are sometimes called the pre-collapse and post-collapse phases of the diffuse Kalman filtering. For certain missing value patterns it is possible for $I$ to exceed the sample size; that is, the sample information can be insufficient to create a proper prior for the filtering process. In these cases, parameter estimation and forecasting are done on the basis of this improper prior, and some or all of the series and component forecasts can have infinite variances (or zero precision). The forecasts that have infinite variance are set to missing. The same situation can occur if the specified model contains components that are essentially multicollinear. In these situations no residual analysis is possible; in particular, no residuals-based goodness-of-fit statistics are produced.

The log likelihood of the sample ($L_{\infty }$), which takes account of this diffuse initialization step, is computed by using the one-step-ahead series forecasts as follows:

\[  L_{\infty } ( y_1 , \ldots , y_ n ) = - \frac{( n - d )}{2} \log 2 \pi - \frac{1}{2} \sum _{t=1}^{I} w_ t - \frac{1}{2} \sum _{t=I+1}^{n} ( \log F_ t + \frac{ \nu _{t}^{2} }{F_ t} )  \]

where $d$ is the number of diffuse elements in the initial state $\alpha _1$, $\nu _ t = y_ t - Z_ t \hat{\alpha }_ t$ are the one-step-ahead residuals, and

\begin{eqnarray*}  w_ t &  = &  \log F_{\infty t} \;  \;  \;  \;  \;  \;  \;  \;  \;  \;  \;  \;  \;  \;  \textrm{if} \;  F_{\infty t} > 0 \nonumber \\ &  = &  \log F_{* t} + \frac{ \nu _{t}^{2} }{ F_{* t} } \;  \;  \;  \;  \;  \textrm{if} \;  F_{\infty t} = 0 \nonumber \end{eqnarray*}

If $y_ t$ is missing at some time $t$, then the corresponding summand in the log likelihood expression is deleted, and the constant term is adjusted suitably. Moreover, if the initialization step does not complete—that is, if $I$ exceeds the sample size—then the value of $d$ is reduced to the number of diffuse states that are successfully initialized.
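Given the outputs of a filtering pass, the diffuse log-likelihood formula above is a direct sum. The following sketch evaluates it; the values of $w_ t$, $F_ t$, and $\nu _ t$ below are made up purely to exercise the formula.

```python
import numpy as np

def diffuse_loglik(w, F, nu, n, d):
    """L_inf from the initialization terms w_1..w_I and the
    post-initialization one-step-ahead variances F_t and residuals nu_t."""
    return (-(n - d) / 2.0 * np.log(2 * np.pi)
            - 0.5 * np.sum(w)
            - 0.5 * np.sum(np.log(F) + nu**2 / F))

# Made-up filter outputs: I = 2 initialization terms, n = 5, d = 2
w = np.array([2.3, 1.1])
F = np.array([1.5, 1.2, 1.1])       # t = I+1, ..., n
nu = np.array([0.4, -0.2, 0.1])
L = diffuse_loglik(w, F, nu, n=5, d=2)
```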

The portion of the log likelihood that corresponds to the post-initialization period is called the nondiffuse log likelihood ($L_{0}$). The nondiffuse log likelihood is given by

\[  L_{0} ( y_1 , \ldots , y_ n ) = - \frac{1}{2} \sum _{t=I+1}^{n} ( \log F_ t + \frac{ \nu _{t}^{2} }{F_ t} )  \]

In the case of UCMs considered in PROC UCM, it often happens that the diffuse part of the likelihood, $\sum _{t=1}^{I} w_ t$, does not depend on the model parameters, and in these cases the maximization of the nondiffuse and diffuse likelihoods is equivalent. However, in some cases, such as when the model includes lags of the dependent variable, the diffuse part does depend on the model parameters. In these cases the maximization of the diffuse and nondiffuse likelihoods can produce different parameter estimates.

In some situations it is convenient to reparameterize the nondiffuse initial state covariance $P_*$ as $\sigma ^{2} P_*$ and the state noise covariance $Q_ t$ as $\sigma ^{2} Q_ t$ for some common scalar parameter $\sigma ^{2}$. In this case the preceding log-likelihood expression, up to a constant, can be written as

\[  L_{\infty } ( y_1 , \ldots , y_ n ) = - \frac{1}{2} \sum _{t=1}^{I} w_ t - \frac{1}{2} \sum _{t=I+1}^{n} \log F_ t - \frac{1}{2 \sigma ^{2}} \sum _{t=I+1}^{n} \frac{ \nu _{t}^{2} }{F_ t} - \frac{(n - d)}{2} \log \sigma ^{2}  \]

Solving analytically for the optimum shows that the maximum likelihood estimate of $\sigma ^{2}$ is

\[  \hat{\sigma }^{2} = \frac{1}{(n-d)} \sum _{t=I+1}^{n} \frac{ \nu _{t}^{2} }{F_ t}  \]

When this expression for $\hat{\sigma }^{2}$ is substituted back into the likelihood formula, an expression called the profile likelihood ($L_{profile}$) of the data is obtained:

\[  -2 L_{profile} ( y_1 , \ldots , y_ n ) = \sum _{t=1}^{I} w_ t + \sum _{t=I+1}^{n} \log F_ t + (n - d) \log ( \sum _{t=I+1}^{n} \frac{ \nu _{t}^{2} }{F_ t} )  \]

In some situations the parameter estimation is done by optimizing the profile likelihood (see the section Parameter Estimation by Profile Likelihood Optimization and the PROFILE option in the ESTIMATE statement).
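The estimate $\hat{\sigma }^{2}$ and the profile likelihood are likewise direct computations from the filter outputs. The numbers below are made up only to exercise the two formulas.

```python
import numpy as np

def sigma2_hat(nu, F, n, d):
    """MLE of the common scale sigma^2 from the post-initialization terms."""
    return np.sum(nu**2 / F) / (n - d)

def neg2_profile_loglik(w, F, nu, n, d):
    """-2 * L_profile, as in the displayed formula (constant omitted)."""
    return np.sum(w) + np.sum(np.log(F)) + (n - d) * np.log(np.sum(nu**2 / F))

# Made-up filter outputs: I = 2 initialization terms, n = 5, d = 2
w = np.array([2.3, 1.1])
F = np.array([1.5, 1.2, 1.1])
nu = np.array([0.4, -0.2, 0.1])
s2 = sigma2_hat(nu, F, n=5, d=2)
m2lp = neg2_profile_loglik(w, F, nu, n=5, d=2)
```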

In the remainder of this section the state space formulation of UCMs is further explained by using some particular UCMs as examples. The examples show that the state space formulation of the UCMs depends on the components in the model in a simple fashion; for example, the system matrix $T$ is usually a block diagonal matrix with blocks that correspond to the components in the model. The only exception to this pattern is UCMs that include lags of the dependent variable. This case is considered at the end of the section.

In what follows, $Diag \left[ a , b, \ldots \;  \right] $ denotes a diagonal matrix with diagonal entries $ \left[ a , b, \ldots \;  \right]$, and the transpose of a matrix $T$ is denoted as $T^{\prime }$.

Locally Linear Trend Model

Recall that the dynamics of the locally linear trend model are

\begin{eqnarray*}  y_ t &  = &  \mu _{t} + \epsilon _{t} \nonumber \\ \mu _{t} &  = &  \mu _{t-1} + \beta _{t-1} + \eta _ t \; \; \; \;  \nonumber \\ \beta _{t} &  = &  \beta _{t-1} + \xi _{t} \nonumber \end{eqnarray*}

Here $y_{t}$ is the response series and $\epsilon _{t} , \eta _{t}, $ and $\xi _ t$ are independent, zero-mean Gaussian disturbance sequences with variances $\sigma _{\epsilon }^{2} , \sigma _{\eta }^{2}$, and $\sigma _{\xi }^{2}$, respectively. This model can be formulated as a state space model where the state vector $\alpha _ t = \left[ \;  \epsilon _{t} \;  \mu _{t} \;  \beta _{t} \;  \right]^{\prime }$ and the state noise $\zeta _ t = \left[ \;  \epsilon _{t} \;  \eta _{t} \;  \xi _{t} \;  \right]^{\prime }$. Note that the elements of the state vector are precisely the unobserved components in the model. The system matrices $T$ and $Z$ and the noise covariance $Q$ corresponding to this choice of state and state noise vectors can be seen to be time invariant and are given by

\[  Z = \left[ \;  1 \;  1 \;  0 \;  \right] , \; \;  T = \left[ \begin{array}{ccc} 0 &  0 &  0 \\ 0 &  1 &  1 \\ 0 &  0 &  1 \end{array} \right] \; \;  \mr {and} \; \;  Q = Diag \left[ \sigma _{\epsilon }^{2} , \sigma _{\eta }^{2} , \sigma _{\xi }^{2} \right]  \]

The distribution of the initial state vector $\alpha _1$ is diffuse, with $P_* = Diag \left[ \sigma _{\epsilon }^{2} , 0, 0 \right] $ and $P_{\infty } = Diag \left[ 0 , 1, 1 \right] $. The parameter vector $\theta $ consists of all the disturbance variances—that is, $\theta = ( \sigma _{\epsilon }^{2} , \sigma _{\eta }^{2}, \sigma _{\xi }^{2} )$.
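The locally linear trend system matrices can be written out directly in code. The following sketch mirrors the $Z$, $T$, $Q$, $P_*$, and $P_{\infty }$ given above; the variance values are illustrative, not estimates.

```python
import numpy as np

# Illustrative disturbance variances (sigma^2 values)
s_eps, s_eta, s_xi = 0.5, 0.2, 0.05

# State = (irregular, level, slope); mirrors the matrices in the text
Z = np.array([1.0, 1.0, 0.0])
T = np.array([[0.0, 0.0, 0.0],
              [0.0, 1.0, 1.0],
              [0.0, 0.0, 1.0]])
Q = np.diag([s_eps, s_eta, s_xi])

# Partially diffuse prior: irregular proper, level and slope diffuse
P_star = np.diag([s_eps, 0.0, 0.0])
P_inf = np.diag([0.0, 1.0, 1.0])
```

A quick sanity check of the dynamics: applying $T$ to a state $(\epsilon, \mu, \beta)$ resets the irregular to zero and advances the level by the slope.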

Basic Structural Model

The basic structural model (BSM) is obtained by adding a seasonal component, $\gamma _ t$, to the locally linear trend model. To economize on space, the state space formulation of a BSM with a relatively short season length, season length = 4 (quarterly seasonality), is considered here. The pattern for longer season lengths such as 12 (monthly) and 52 (weekly) is easy to see.

Let us first consider the dummy form of seasonality. In this case the state and state noise vectors are $\alpha _ t = \left[ \;  \epsilon _{t} \;  \mu _{t} \;  \beta _{t} \;  \gamma _{1,t} \;  \gamma _{2,t} \;  \gamma _{3,t} \; \right]^{\prime }$ and $\zeta _ t = \left[ \;  \epsilon _{t} \;  \eta _{t} \;  \xi _{t} \;  \omega _ t \;  0 \;  0 \;  \right]^{\prime }$, respectively. The first three elements of the state vector are the irregular, level, and slope components, respectively. The remaining elements, $\gamma _{i,t}$, are lagged versions of the seasonal component $\gamma _ t$: $\gamma _{1,t}$ corresponds to lag zero (that is, the same as $\gamma _ t$), $\gamma _{2,t}$ to lag 1, and $\gamma _{3,t}$ to lag 2. The system matrices are

\[  Z = \left[ \;  1 \;  1 \;  0 \;  1 \;  0 \;  0 \; \right] , \; \;  T = \left[ \begin{tabular}{cccccc} 0   &  0   &  0   &  0   &  0   &  0   \\ 0   &  1   &  1   &  0   &  0   &  0   \\ 0   &  0   &  1   &  0   &  0   &  0   \\ 0   &  0   &  0   &  $-1$   &  $-1$   &  $-1$   \\ 0   &  0   &  0   &  1   &  0   &  0   \\ 0   &  0   &  0   &  0   &  1   &  0   \end{tabular} \right]  \]

and $Q = Diag \left[ \sigma _{\epsilon }^{2} , \sigma _{\eta }^{2} , \sigma _{\xi }^{2} , \sigma _{\omega }^{2} , 0 , 0 \right]$. The distribution of the initial state vector $\alpha _1$ is diffuse, with $P_* = Diag \left[ \sigma _{\epsilon }^{2} , 0, 0, 0, 0, 0 \right] $ and $P_{\infty } = Diag \left[ 0 , 1, 1, 1, 1, 1 \right] $.
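The seasonal sub-block of $T$ shown above generalizes to any season length. The following sketch builds the dummy-seasonal transition block for a given season length $s$, so $s = 4$ reproduces the $3 \times 3$ block above.

```python
import numpy as np

def dummy_seasonal_block(s):
    """Transition block for dummy seasonality of season length s.

    The first row is all -1, encoding
    gamma_t = -(gamma_{t-1} + ... + gamma_{t-s+1}) + omega_t,
    and the subdiagonal identity shifts the lagged seasonal values down.
    """
    m = s - 1
    B = np.zeros((m, m))
    B[0, :] = -1.0
    B[1:, :-1] = np.eye(m - 1)
    return B

B = dummy_seasonal_block(4)   # the quarterly case from the text
```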

In the case of the trigonometric type of seasonality, $\alpha _ t = \left[ \;  \epsilon _{t} \;  \mu _{t} \;  \beta _{t} \;  \gamma _{1,t} \;  \gamma ^{*}_{1,t} \;  \gamma _{2,t} \; \right]^{\prime }$ and $\zeta _ t = \left[ \;  \epsilon _{t} \;  \eta _{t} \;  \xi _{t} \;  \omega _{1,t} \;  \omega ^{*}_{1,t}\;  \omega _{2,t} \;  \right]^{\prime }$. The disturbance sequences, $ \omega _{j,t}, 1 \leq j \leq 2$, and $\omega ^{*}_{1,t}$, are independent, zero-mean, Gaussian sequences with variance $\sigma _{\omega }^{2}$. The system matrices are

\[  Z = \left[ \;  1 \;  1 \;  0 \;  1 \;  0 \;  1 \; \right] , \; \;  T = \left[ \begin{tabular}{cccccc} 0   &  0   &  0   &  0   &  0   &  0   \\ 0   &  1   &  1   &  0   &  0   &  0   \\ 0   &  0   &  1   &  0   &  0   &  0   \\ 0   &  0   &  0   &  $\cos \lambda _1$   &  $\sin \lambda _1$   &  0   \\ 0   &  0   &  0   &  $ -\sin \lambda _1$   &  $ \cos \lambda _1$   &  0   \\ 0   &  0   &  0   &  0   &  0   &  $ \cos \lambda _2 $   \end{tabular} \right]  \]

and $Q = Diag \left[ \sigma _{\epsilon }^{2} , \sigma _{\eta }^{2} , \sigma _{\xi }^{2} , \sigma _{\omega }^{2} , \sigma _{\omega }^{2} , \sigma _{\omega }^{2} \right]$. Here $\lambda _ j = ( 2 \pi j )/4$. The distribution of the initial state vector $\alpha _1$ is diffuse, with $P_* = Diag \left[ \sigma _{\epsilon }^{2} , 0, 0, 0, 0, 0 \right] $ and $P_{\infty } = Diag \left[ 0 , 1, 1, 1, 1, 1 \right] $. The parameter vector in both cases is $\theta = ( \sigma _{\epsilon }^{2} , \sigma _{\eta }^{2}, \sigma _{\xi }^{2}, \sigma _{\omega }^{2} )$.
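The trigonometric seasonal block can be assembled from $2 \times 2$ rotations at the harmonic frequencies $\lambda _ j = 2 \pi j / s$, with a single $\cos \lambda $ entry for the last harmonic when the season length is even. A sketch:

```python
import numpy as np

def trig_seasonal_block(s):
    """Transition block for trigonometric seasonality, season length s.

    Harmonics j = 1..floor(s/2); a full harmonic contributes a 2x2
    rotation, and for even s the last harmonic (lambda = pi) contributes
    the single entry cos(pi) = -1.
    """
    blocks = []
    for j in range(1, s // 2 + 1):
        lam = 2 * np.pi * j / s
        if s % 2 == 0 and j == s // 2:
            blocks.append(np.array([[np.cos(lam)]]))
        else:
            blocks.append(np.array([[np.cos(lam), np.sin(lam)],
                                    [-np.sin(lam), np.cos(lam)]]))
    # Assemble the block-diagonal matrix
    m = sum(b.shape[0] for b in blocks)
    B = np.zeros((m, m))
    i = 0
    for b in blocks:
        k = b.shape[0]
        B[i:i + k, i:i + k] = b
        i += k
    return B

B = trig_seasonal_block(4)   # the quarterly case from the text
```

For $s = 4$ this yields the rotation at $\lambda _1 = \pi /2$ followed by the scalar $\cos \lambda _2 = -1$, matching the lower-right block of the $T$ matrix above.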

Seasons with Blocked Seasonal Values

Block seasonals are seasonal components that impose a block structure on the seasonal effects. Let us consider a BSM with monthly seasonality that has a quarterly block structure—that is, months within the same quarter are assumed to have identical effects except for some random perturbation. Such a seasonal component is a block seasonal with block size $m$ equal to 3 and number of blocks $k$ equal to 4. The state space structure for such a model with dummy-type seasonality is as follows: the state and state noise vectors are $\alpha _ t = \left[ \;  \epsilon _{t} \;  \mu _{t} \;  \beta _{t} \;  \gamma _{1,t} \;  \gamma _{2,t} \;  \gamma _{3,t} \; \right]^{\prime }$ and $\zeta _ t = \left[ \;  \epsilon _{t} \;  \eta _{t} \;  \xi _{t} \;  \omega _ t \;  0 \;  0 \;  \right]^{\prime }$, respectively. The first three elements of the state vector are the irregular, level, and slope components, respectively. The remaining elements, $\gamma _{i,t}$, are lagged versions of the seasonal component $\gamma _ t$: $\gamma _{1,t}$ corresponds to lag zero (that is, the same as $\gamma _ t$), $\gamma _{2,t}$ to lag $m$, and $\gamma _{3,t}$ to lag $2m$. All the system matrices are time invariant except the matrix $T$. They can be seen to be $Z = \left[ \;  1 \;  1 \;  0 \;  1 \;  0 \;  0 \; \right]$, $Q = Diag \left[ \sigma _{\epsilon }^{2} , \sigma _{\eta }^{2} , \sigma _{\xi }^{2} , \sigma _{\omega }^{2} , 0 , 0 \right]$, and

\[  T_{t} = \left[ \begin{tabular}{cccccc} 0   &  0   &  0   &  0   &  0   &  0   \\ 0   &  1   &  1   &  0   &  0   &  0   \\ 0   &  0   &  1   &  0   &  0   &  0   \\ 0   &  0   &  0   &  $-1$   &  $-1$   &  $-1$   \\ 0   &  0   &  0   &  1   &  0   &  0   \\ 0   &  0   &  0   &  0   &  1   &  0   \end{tabular} \right]  \]

when $t$ is a multiple of the block size $m$, and

\[  T_{t} = \left[ \begin{tabular}{cccccc} 0   &  0   &  0   &  0   &  0   &  0   \\ 0   &  1   &  1   &  0   &  0   &  0   \\ 0   &  0   &  1   &  0   &  0   &  0   \\ 0   &  0   &  0   &  1   &  0   &  0   \\ 0   &  0   &  0   &  0   &  1   &  0   \\ 0   &  0   &  0   &  0   &  0   &  1   \end{tabular} \right]  \]

otherwise. Note that when $t$ is not a multiple of $m$, the portion of the $T_ t$ matrix corresponding to the seasonal is identity. The distribution of the initial state vector $\alpha _1$ is diffuse, with $P_* = Diag \left[ \sigma _{\epsilon }^{2} , 0, 0, 0, 0, 0 \right] $ and $P_{\infty } = Diag \left[ 0 , 1, 1, 1, 1, 1 \right] $.
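The time variation in $T_ t$ amounts to a simple test on $t$: the seasonal sub-block advances only when $t$ is a multiple of the block size $m$ and is the identity otherwise. A sketch for the dummy-type block seasonal above ($m = 3$, $k = 4$):

```python
import numpy as np

def seasonal_subblock(t, m, k):
    """Seasonal part of T_t for a dummy block seasonal with k blocks.

    The seasonal effects advance only when t is a multiple of the block
    size m; otherwise the sub-block is the identity, freezing them.
    """
    if t % m == 0:
        B = np.zeros((k - 1, k - 1))
        B[0, :] = -1.0                 # dummy-seasonal sum-to-zero row
        B[1:, :-1] = np.eye(k - 2)     # shift the lagged values down
        return B
    return np.eye(k - 1)
```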

Similarly, in the case of the trigonometric form of seasonality, $\alpha _ t = \left[ \;  \epsilon _{t} \;  \mu _{t} \;  \beta _{t} \;  \gamma _{1,t} \;  \gamma ^{*}_{1,t} \;  \gamma _{2,t} \; \right]^{\prime }$ and $\zeta _ t = \left[ \;  \epsilon _{t} \;  \eta _{t} \;  \xi _{t} \;  \omega _{1,t} \;  \omega ^{*}_{1,t}\;  \omega _{2,t} \;  \right]^{\prime }$. The disturbance sequences, $ \omega _{j,t}, 1 \leq j \leq 2$, and $\omega ^{*}_{1,t}$, are independent, zero-mean, Gaussian sequences with variance $\sigma _{\omega }^{2}$. $ Z = \left[ \;  1 \;  1 \;  0 \;  1 \;  0 \;  1 \; \right] $, $Q = Diag \left[ \sigma _{\epsilon }^{2} , \sigma _{\eta }^{2} , \sigma _{\xi }^{2} , \sigma _{\omega }^{2} , \sigma _{\omega }^{2} , \sigma _{\omega }^{2} \right]$, and

\[  T_{t} = \left[ \begin{tabular}{cccccc} 0   &  0   &  0   &  0   &  0   &  0   \\ 0   &  1   &  1   &  0   &  0   &  0   \\ 0   &  0   &  1   &  0   &  0   &  0   \\ 0   &  0   &  0   &  $\cos \lambda _1$   &  $\sin \lambda _1$   &  0   \\ 0   &  0   &  0   &  $ -\sin \lambda _1$   &  $ \cos \lambda _1$   &  0   \\ 0   &  0   &  0   &  0   &  0   &  $ \cos \lambda _2 $   \end{tabular} \right]  \]

when $t$ is a multiple of the block size $m$, and

\[  T_{t} = \left[ \begin{tabular}{cccccc} 0   &  0   &  0   &  0   &  0   &  0   \\ 0   &  1   &  1   &  0   &  0   &  0   \\ 0   &  0   &  1   &  0   &  0   &  0   \\ 0   &  0   &  0   &  1   &  0   &  0   \\ 0   &  0   &  0   &  0   &  1   &  0   \\ 0   &  0   &  0   &  0   &  0   &  1   \end{tabular} \right]  \]

otherwise. As before, when $t$ is not a multiple of $m$, the portion of the $T_ t$ matrix corresponding to the seasonal is identity. Here $\lambda _ j = ( 2 \pi j )/4$. The distribution of the initial state vector $\alpha _1$ is diffuse, with $P_* = Diag \left[ \sigma _{\epsilon }^{2} , 0, 0, 0, 0, 0 \right] $ and $P_{\infty } = Diag \left[ 0 , 1, 1, 1, 1, 1 \right] $. The parameter vector in both cases is $\theta = ( \sigma _{\epsilon }^{2} , \sigma _{\eta }^{2}, \sigma _{\xi }^{2}, \sigma _{\omega }^{2} )$.

Cycles and Autoregression

The preceding examples have illustrated how to build a state space model corresponding to a UCM that includes components such as irregular, trend, and seasonal. There you can see that the state vector and the system matrices have a simple block structure with blocks corresponding to the components in the model. Therefore, here only a simple model consisting of a single cycle and an irregular component is considered. The state space form for more complex UCMs consisting of multiple cycles and other components can be easily deduced from this example.

Recall that a stochastic cycle $\psi _ t$ with frequency $\lambda $, $0 < \lambda < \pi $, and damping coefficient $\rho $ can be modeled as

\[  \left[ \begin{array}{c} \psi _{t} \\ \psi ^{*}_{t} \end{array} \right] = \rho \left[ \begin{array}{lr} \cos \lambda &  \sin \lambda \\ - \sin \lambda &  \cos \lambda \end{array} \right] \left[ \begin{array}{c} \psi _{t-1} \\ \psi ^{*}_{t-1} \end{array} \right] + \left[ \begin{array}{c} \nu _{t} \\ \nu ^{*}_{t} \end{array} \right]  \]

where $\nu _{t}$ and $\nu ^{*}_{t}$ are independent, zero-mean, Gaussian disturbances with variance $\sigma _{\nu }^{2}$. In what follows, a state space form for a model consisting of such a stochastic cycle and an irregular component is given.

The state vector $\alpha _ t = \left[ \;  \epsilon _{t} \;  \psi _{t} \;  \psi ^{*}_{t} \;  \right]^{\prime }$, and the state noise vector $\zeta _ t = \left[ \;  \epsilon _{t} \;  \nu _{t} \;  \nu ^{*}_{t} \;  \right]^{\prime }$. The system matrices are

\[  Z = \left[ \;  1 \;  1 \;  0 \; \right] \; \;  T = \left[ \begin{tabular}{ccc} 0   &  0   &  0   \\ 0   &  $\rho \cos \lambda $   &  $\rho \sin \lambda $   \\ 0   &  $-\rho \sin \lambda $   &  $\rho \cos \lambda $   \end{tabular} \right] \; \;  Q = Diag \left[ \sigma _{\epsilon }^{2} , \sigma _{\nu }^{2} , \sigma _{\nu }^{2} \right]  \]

The distribution of the initial state vector $\alpha _1$ is proper, with $P_* = Diag \left[ \sigma _{\epsilon }^{2} , \sigma _{\psi }^{2} , \sigma _{\psi }^{2} \right] $, where $\sigma _{\psi }^{2} = \sigma _{\nu }^{2} (1 - \rho ^{2} )^{-1}$. The parameter vector $\theta = ( \sigma _{\epsilon }^{2} , \rho , \lambda , \sigma _{\nu }^{2} )$.
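The stationary initial variance $\sigma _{\psi }^{2} = \sigma _{\nu }^{2} (1 - \rho ^{2})^{-1}$ can be checked numerically: it satisfies the steady-state equation $V = R V R^{\prime } + \sigma _{\nu }^{2} I$ for the damped rotation block $R$. A sketch with illustrative parameter values:

```python
import numpy as np

# Illustrative cycle parameters (not estimates)
rho, lam = 0.9, np.pi / 6
s_eps, s_nu = 0.3, 0.1

# System matrices for the cycle-plus-irregular model in the text
Z = np.array([1.0, 1.0, 0.0])
R = rho * np.array([[np.cos(lam), np.sin(lam)],
                    [-np.sin(lam), np.cos(lam)]])
T = np.zeros((3, 3))
T[1:, 1:] = R
Q = np.diag([s_eps, s_nu, s_nu])

# Stationary cycle variance and the proper initial covariance
s_psi = s_nu / (1 - rho**2)
P_star = np.diag([s_eps, s_psi, s_psi])
```

Because $R / \rho $ is orthogonal, $R V R^{\prime } = \rho ^{2} V$ for $V = \sigma _{\psi }^{2} I$, which is exactly why the closed form $\sigma _{\nu }^{2} (1 - \rho ^{2})^{-1}$ holds.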

An autoregression $r_ t$ can be considered a special case of a cycle with frequency $\lambda $ equal to $0$ or $\pi $. In this case the equation for $\psi ^{*}_{t}$ is not needed. Therefore, for a UCM that consists of an autoregressive component and an irregular component, the state space model simplifies to the following form.

The state vector $\alpha _ t = \left[ \;  \epsilon _{t} \;  r_{t} \;  \right]^{\prime }$, and the state noise vector $\zeta _ t = \left[ \;  \epsilon _{t} \;  \nu _{t} \;  \right]^{\prime }$. The system matrices are

\[  Z = \left[ \;  1 \;  1 \;  \right] , \; \;  T = \left[ \begin{tabular}{cc} 0   &  0   \\ 0   &  $\rho $   \end{tabular} \right] \; \;  \mr {and} \; \;  Q = Diag \left[ \sigma _{\epsilon }^{2} , \sigma _{\nu }^{2} \right]  \]

The distribution of the initial state vector $\alpha _1$ is proper, with $P_* = Diag \left[ \sigma _{\epsilon }^{2} , \sigma _{r}^{2} \right] $, where $\sigma _{r}^{2} = \sigma _{\nu }^{2} (1 - \rho ^{2} )^{-1}$. The parameter vector $\theta = ( \sigma _{\epsilon }^{2} , \rho , \sigma _{\nu }^{2} )$.

Incorporating Predictors of Different Kinds

In the UCM procedure, predictors can be incorporated in a UCM in a variety of ways: simple time-invariant linear predictors are specified in the MODEL statement, predictors with time-varying coefficients can be specified in the RANDOMREG statement, and predictors that have a nonlinear relationship with the response variable can be specified in the SPLINEREG statement. As in the earlier examples, how to obtain a state space form of a UCM that contains such a variety of predictors is illustrated by using a simple special case. Consider a random walk trend model with predictors $x, u_{1}, u_{2}$, and $v$. Let us assume that $x$ is a simple regressor specified in the MODEL statement, $u_1$ and $u_2$ are random regressors with time-varying regression coefficients that are specified in the same RANDOMREG statement, and $v$ is a nonlinear regressor specified in a SPLINEREG statement. Let us further assume that the spline associated with $v$ has degree one and is based on two internal knots. As explained in the section SPLINEREG Statement, using $v$ is equivalent to using $(nknots + degree) = (2+1) = 3$ derived (random) regressors: say, $s_{1}, s_{2}, s_{3}$. In all there are $(1 + 2 + 3) = 6$ regressors, the first one being a simple regressor and the others being time-varying coefficient regressors. The time-varying regressors fall into two groups: the first consists of $u_1$ and $u_2$, and the other consists of $s_{1}, s_{2}$, and $s_{3}$. The dynamics of this model are as follows:

\begin{eqnarray*}  y_ t &  = &  \mu _{t} + \beta x_{t} + \kappa _{1t} u_{1t} + \kappa _{2t} u_{2t} + \sum _{i=1}^{3} \gamma _{it} s_{it} + \epsilon _{t} \nonumber \\ \mu _{t} &  = &  \mu _{t-1} + \eta _ t \nonumber \\ \kappa _{1t} &  = &  \kappa _{1 (t-1)} + \xi _{1t} \nonumber \\ \kappa _{2t} &  = &  \kappa _{2 (t-1)} + \xi _{2t} \nonumber \\ \gamma _{1t} &  = &  \gamma _{1 (t-1)} + \zeta _{1t} \nonumber \\ \gamma _{2t} &  = &  \gamma _{2 (t-1)} + \zeta _{2t} \nonumber \\ \gamma _{3t} &  = &  \gamma _{3 (t-1)} + \zeta _{3t} \nonumber \end{eqnarray*}

All the disturbances $\epsilon _{t}, \eta _{t}, \xi _{1t}, \xi _{2t}, \zeta _{1t}, \zeta _{2t},$ and $\zeta _{3t} $ are independent, zero-mean, Gaussian variables, where $\xi _{1t}, \xi _{2t}$ share a common variance parameter $\sigma _{\xi }^{2}$ and $\zeta _{1t}, \zeta _{2t},\zeta _{3t}$ share a common variance $\sigma _{\zeta }^{2}$. These dynamics can be captured in the state space form by taking state $\alpha _ t = \left[ \;  \epsilon _{t} \;  \mu _{t} \;  \beta \;  \kappa _{1t} \;  \kappa _{2t} \;  \gamma _{1t} \;  \gamma _{2t} \;  \gamma _{3t} \; \right]^{\prime }$, state disturbance $\zeta _ t = \left[ \;  \epsilon _{t} \;  \eta _{t} \;  0 \;  \xi _{1t} \;  \xi _{2t} \;  \zeta _{1t} \;  \zeta _{2t} \;  \zeta _{3t} \;  \right]^{\prime }$, and the system matrices

\begin{eqnarray*}  Z_ t &  = &  \left[ \;  1 \;  1 \;  x_{t} \;  u_{1t} \;  u_{2t} \;  s_{1t} \;  s_{2t} \;  s_{3t} \;  \right] \nonumber \\ T &  = &  Diag \left[ 0, \;  1, \;  1, \;  1, \;  1, \;  1, \;  1, \;  1 \right] \nonumber \\ Q &  = &  Diag \left[ \sigma _{\epsilon }^{2},\;  \sigma _{\eta }^{2}, \;  0, \;  \sigma _{\xi }^{2},\;  \sigma _{\xi }^{2}, \;  \sigma _{\zeta }^{2}, \;  \sigma _{\zeta }^{2}, \;  \sigma _{\zeta }^{2} \right] \nonumber \end{eqnarray*}

Note that the regression coefficients are elements of the state vector and that the system vector $Z_ t$ is not time invariant. The distribution of the initial state vector $\alpha _1$ is diffuse, with $P_* = Diag \left[ \sigma _{\epsilon }^{2}, 0, 0, 0, 0, 0, 0, 0 \right] $ and $P_{\infty } = Diag \left[ 0 , 1, 1, 1, 1, 1, 1, 1 \right] $. The parameters of this model are the disturbance variances, $\sigma _{\epsilon }^{2}$, $\sigma _{\eta }^{2},$ $\sigma _{\xi }^{2},$ and $\sigma _{\zeta }^{2}$, which are estimated by maximizing the likelihood. The regression coefficients—the time-invariant $\beta $ and the time-varying $\kappa _{1t}, \kappa _{2t}, \gamma _{1t}, \gamma _{2t}$, and $\gamma _{3t}$—are implicitly estimated during the state estimation (smoothing).
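The time-varying observation vector $Z_ t$ for this example simply concatenates the constants with the regressor values at time $t$. The following sketch uses simulated regressor series and illustrative variances in place of real data.

```python
import numpy as np

# Simulated regressor series standing in for real data: x (simple),
# u1, u2 (random regressors), s1..s3 (spline-derived regressors)
rng = np.random.default_rng(2)
n = 50
x = rng.normal(size=n)
u = rng.normal(size=(n, 2))
s = rng.normal(size=(n, 3))

def Z_t(t):
    """Observation vector [1 1 x_t u_{1t} u_{2t} s_{1t} s_{2t} s_{3t}]."""
    return np.concatenate(([1.0, 1.0, x[t]], u[t], s[t]))

# T: the irregular resets; everything else is a (possibly zero-variance)
# random walk. Q: zero variance for the time-invariant coefficient beta.
T = np.diag([0.0] + [1.0] * 7)
Q = np.diag([0.4, 0.2, 0.0, 0.1, 0.1, 0.05, 0.05, 0.05])  # illustrative
```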

Reporting Parameter Estimates for Random Regressors

If the random walk disturbance variance associated with a random regressor is held fixed at zero, then its coefficient is no longer time-varying, and the UCM procedure reports the parameter estimates for that regressor differently. The following points explain how the parameter estimates are reported in the parameter estimates table and in the OUTEST= data set.

  • If the random walk disturbance variance associated with a random regressor is not held fixed, then its estimate is reported in the parameter estimates table and in the OUTEST= data set.

  • If more than one random regressor is specified in a RANDOMREG statement, then the first regressor in the list is used as a representative of the list when the corresponding common variance parameter estimate is reported.

  • If the random walk disturbance variance is held fixed at zero, then the parameter estimates table and the OUTEST= data set contain the corresponding regression parameter estimate rather than the variance parameter estimate.

  • Similar considerations apply in the case of the derived random regressors associated with a spline-regressor.

ARMA Irregular Component

The state space form for the irregular component that follows an ARMA(p,q)${\times }$(P,Q)$_{\mi {s}}$ model is described in this section. The notation for ARMA models is explained in the IRREGULAR statement. A number of alternate state space forms are possible in this case; the one given here is based on Jones (1980). With slight abuse of notation, let $p = p + s P$ denote the effective autoregressive order and $q = q + s Q$ denote the effective moving average order of the model. Similarly, let $\phi $ be the effective autoregressive polynomial and $\theta $ be the effective moving average polynomial in the backshift operator with coefficients $\phi _{1}, \;  \ldots , \;  \phi _{p}$ and $\theta _{1}, \;  \ldots , \;  \theta _{q}$, obtained by multiplying the respective nonseasonal and seasonal factors. Then, a random sequence $\epsilon _ t$ that follows an ARMA(p,q)${\times }$(P,Q)$_{\mi {s}}$ model with a white noise sequence $a_ t$ has a state space form with state vector of size $m = \max (p, q+1)$. The system matrices, which are time invariant, are as follows: $ Z = \left[1 \;  0 \;  \ldots \;  0 \right]$. The state transition matrix $T$, in a blocked form, is given by

\[  T = \left[ \begin{array}{cc} 0 &  I_{m-1} \\ \phi _{m} &  \phi _{m-1} \;  \ldots \;  \phi _{1} \end{array} \right]  \]

where $\phi _{i} = 0$ if $i > p$ and $I_{m-1}$ is an $(m-1)$ dimensional identity matrix. The covariance of the state disturbance matrix $ Q = \sigma ^{2} \psi \psi ^{\prime } $ where $\sigma ^{2}$ is the variance of the white noise sequence $a_ t$ and the vector $\psi = \left[\psi _{0} \ldots \psi _{m-1} \right]^{\prime } $ contains the first $m$ values of the impulse response function—that is, the first $m$ coefficients in the expansion of the ratio ${\theta } / {\phi }$. Since $\epsilon _ t$ is a stationary sequence, the initial state is nondiffuse and $P_{\infty } = 0$. The description of $P_{*}$, the covariance matrix of the initial state, is a little involved; the details are given in Jones (1980).
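The ingredients of this representation—the transition matrix $T$ and the impulse-response weights $\psi _0, \ldots , \psi _{m-1}$—can be computed from the effective AR and MA coefficients. The following sketch does this for an illustrative ARMA(1,1) with $\phi _1 = 0.5$ and $\theta _1 = 0.3$; the $\psi $ recursion is the standard expansion of $\theta / \phi $.

```python
import numpy as np

def arma_state_space(phi, theta):
    """T and the first m psi weights for the Jones-style ARMA form."""
    p, q = len(phi), len(theta)
    m = max(p, q + 1)
    phi_pad = np.concatenate([phi, np.zeros(m - p)])
    # T: shifted identity on top, AR coefficients (phi_m, ..., phi_1)
    # in the last row, as in the blocked form above
    T = np.zeros((m, m))
    T[:-1, 1:] = np.eye(m - 1)
    T[-1, :] = phi_pad[::-1]
    # psi weights: psi_0 = 1, psi_j = theta_j + sum_i phi_i psi_{j-i}
    th_pad = np.concatenate([[1.0], theta, np.zeros(m)])
    psi = np.zeros(m)
    psi[0] = 1.0
    for j in range(1, m):
        psi[j] = th_pad[j] + sum(phi_pad[i - 1] * psi[j - i]
                                 for i in range(1, j + 1))
    return T, psi

T, psi = arma_state_space(np.array([0.5]), np.array([0.3]))
```

For an ARMA(1,1), $m = 2$ and $\psi _1 = \phi _1 + \theta _1$, which the recursion reproduces.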

Models with Dependent Lags

The state space form of a UCM that includes lags of the dependent variable is quite different from the state space forms considered so far. Let us consider an example to illustrate this situation. Consider a model that has a random walk trend, two simple time-invariant regressors, and $k$ lags of the dependent variable. That is,

\begin{eqnarray*}  y_ t &  = &  \sum _{i=1}^{k} \phi _{i} y_{t-i} + \mu _{t} + \beta _1 x_{1t} + \beta _2 x_{2t} + \epsilon _{t} \nonumber \\ \mu _{t} &  = &  \mu _{t-1} + \eta _ t \nonumber \end{eqnarray*}

The state space form of this augmented model can be described in terms of the state space form of a model that has random walk trend with two simple time-invariant regressors. A superscript dagger ($\dagger $) has been added to distinguish the augmented model state space entities from the corresponding entities of the state space form of the random walk with predictors model. With this notation, the state vector of the augmented model $\alpha ^{\dagger }_{t} = \left[ \;  \alpha ^{\prime }_{t} \;  y_ t \;  y_{t-1} \;  \ldots \;  y_{t-k+1} \; \right]^{\prime } $ and the new state noise vector $\zeta ^{\dagger }_{t} = \left[ \;  \zeta ^{\prime }_{t} \;  u_ t \;  0 \;  \ldots \;  0 \; \right]^{\prime } $, where $u_ t$ is the matrix product $ Z_{t} \zeta _ t$. Note that the length of the new state vector is $k + \mr {length} ( \alpha _{t} ) = k + 4$. The new system matrices, in block form, are

\[  Z^{\dagger }_{t} = \left[ \;  0 \;  0 \;  0 \;  0 \;  1 \;  \ldots \;  0 \;  \right] , \; \;  T^{\dagger }_{t} = \left[ \begin{tabular}{cccc} $T_{t}$   &  0   &  \ldots   &  0   \\ $Z_{t+1} T_{t}$   &  $\phi _1$   &  \ldots   &  $ \phi _ k$   \\ 0   &  $I_{k-1, k-1}$   & &  0   \end{tabular} \right]  \]

where $I_{k-1, k-1}$ is the $k-1$ dimensional identity matrix and

\[  Q^{\dagger }_{t} = \left[ \begin{tabular}{ccc} $Q_ t$   &  $Q_{t} Z^{\prime }_{t}$   &  0   \\ $Z_{t} Q_{t}$   &  $ Z_{t} Q_{t} Z^{\prime }_{t} $  &  0   \\ 0   &  0   &  0   \end{tabular} \right]  \]

Note that the $T$ and $Q$ matrices of the random walk with predictors model are time invariant, and in the expressions above their time indices are kept because they illustrate the pattern for more general models. The initial state vector is diffuse, with

\[  P^{\dagger }_{*} = \left[ \begin{tabular}{cc} $P_{*}$   &  0   \\ 0   &  0   \end{tabular} \right] , \; \;  P^{\dagger }_{\infty } = \left[ \begin{tabular}{cc} $P_{\infty }$   &  0   \\ 0   &  $I_{k, k}$   \end{tabular} \right] \; \;   \]

The parameters of this model are the disturbance variances $\sigma _{\epsilon }^{2}$ and $\sigma _{\eta }^{2}$, the lag coefficients $\phi _{1} , \phi _{2} , \ldots , \phi _{k}$, and the regression coefficients $\beta _1$ and $\beta _2$. As before, the regression coefficients are estimated during the state smoothing, and the other parameters are estimated by maximizing the likelihood.
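The block structure of $T^{\dagger }_{t}$ can be assembled mechanically from the base model's $T$ and $Z_{t+1}$. The following sketch does this for $k = 2$ lags with a 4-dimensional base state (random walk trend plus two regressors); the $\phi $ values and the $Z_{t+1}$ vector are illustrative.

```python
import numpy as np

k = 2
phi = np.array([0.6, 0.2])                 # illustrative lag coefficients
T_base = np.diag([0.0, 1.0, 1.0, 1.0])     # base model T (irregular resets)

def T_dagger(Z_next):
    """Augmented transition matrix; Z_next is the base model's Z_{t+1}."""
    m = len(Z_next)
    Td = np.zeros((m + k, m + k))
    Td[:m, :m] = T_base                    # base block T
    Td[m, :m] = Z_next @ T_base            # Z_{t+1} T_t block
    Td[m, m:] = phi                        # phi_1, ..., phi_k
    Td[m + 1:, m:-1] = np.eye(k - 1)       # shift the lagged y values down
    return Td

# Z_{t+1} = [1 1 x_{1,t+1} x_{2,t+1}] with illustrative regressor values
Td = T_dagger(np.array([1.0, 1.0, 0.5, -0.3]))
```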