The STATESPACE Procedure

Preliminary Autoregressive Models

After computing the sample autocovariance matrices, PROC STATESPACE fits a sequence of vector autoregressive models. These preliminary autoregressive models are used to estimate the autoregressive order of the process and limit the order of the autocovariances considered in the state vector selection process.

Yule-Walker Equations for Forward and Backward Models

Unlike a univariate autoregressive model, a multivariate autoregressive model has different forms, depending on whether the present observation is being predicted from the past observations or from the future observations.

Let ${\mb {x}_{t}}$ be the r-component stationary time series given by the VAR statement after differencing and subtracting the vector of sample means. (If the NOCENTER option is specified, the mean is not subtracted.) Let n be the number of observations of ${\mb {x}_{t}}$ from the input data set.

Let ${\mb {e}_{t}}$ be a vector white noise sequence with mean vector 0 and variance matrix ${\bSigma _{p}}$, and let ${\mb {n}_{t}}$ be a vector white noise sequence with mean vector 0 and variance matrix ${\bOmega _{p}}$. Let p be the order of the vector autoregressive model for ${\mb {x}_{t}}$.

The forward autoregressive form based on the past observations is written as follows:

\[  \mb {x}_{t}=\sum _{i=1}^{p}{\bPhi ^{p}_{i}\mb {x}_{t-i}}+\mb {e}_{t}  \]

The backward autoregressive form based on the future observations is written as follows:

\[  \mb {x}_{t}=\sum _{i=1}^{p}{\bPsi ^{p}_{i}\mb {x}_{t+i}}+\mb {n}_{t}  \]
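The forward form can be illustrated with a short simulation sketch in Python (a hypothetical illustration, not part of PROC STATESPACE; the function name and signature are invented for this example):

```python
import numpy as np

# Hypothetical sketch: generate x_t = sum_i Phi_i x_{t-i} + e_t
# from given coefficient matrices and pre-sample values.
def simulate_var(phis, x_init, n, noise=None):
    """phis: list of (r, r) matrices Phi_1..Phi_p;
    x_init: (p, r) pre-sample values, oldest first;
    noise: optional (n, r) array of innovations e_t (None -> zero noise)."""
    p, r = len(phis), x_init.shape[1]
    x = list(x_init)
    for t in range(n):
        e = noise[t] if noise is not None else np.zeros(r)
        # x[-(i+1)] is x_{t-i-1}, so Phi_1 multiplies the most recent value
        x.append(sum(phis[i] @ x[-(i + 1)] for i in range(p)) + e)
    return np.array(x[p:])
```

With the noise suppressed, the recursion can be checked by hand: for a diagonal $\bPhi_1$ each component decays independently.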

Letting $E$ denote the expected value operator, the autocovariance sequence for the ${\mb {x}_{t}}$ series, ${\bGamma _{i}}$, is

\[  \bGamma _{i} = {E} \mb {x}_{t} \mb {x} ^{{\prime }}_{t-i}  \]

The Yule-Walker equations for the autoregressive model that matches the first p elements of the autocovariance sequence are

\begin{eqnarray*}  \left[\begin{matrix}  \bGamma _{0}   &  \bGamma _{1}   &  {\cdots }   &  \bGamma _{p-1}   \\ \bGamma ^{{\prime }}_{1}   &  \bGamma _{0}   &  {\cdots }   &  \bGamma _{p-2}   \\ {\vdots }   &  {\vdots }   & &  {\vdots }   \\ \bGamma ^{{\prime }}_{p-1}   &  \bGamma ^{{\prime }}_{p-2}   &  {\cdots }   &  \bGamma _{0}   \\ \end{matrix}\right] \left[\begin{matrix}  \bPhi ^{p}_{1}   \\ \bPhi ^{p}_{2}   \\ {\vdots }   \\ \bPhi ^{p}_{p}   \\ \end{matrix} \right] = \left[\begin{matrix}  \bGamma _{1}   \\ \bGamma _{2}   \\ {\vdots }   \\ \bGamma _{p} \nonumber   \end{matrix} \right] \end{eqnarray*}

and

\begin{eqnarray*}  \left[\begin{matrix}  \bGamma _{0}   &  \bGamma ^{{\prime }}_{1}   &  {\cdots }   &  \bGamma ^{{\prime }}_{p-1}   \\ \bGamma _{1}   &  \bGamma _{0}   &  {\cdots }   &  \bGamma ^{{\prime }}_{p-2}   \\ {\vdots }   &  {\vdots }   & &  {\vdots }   \\ \bGamma _{p-1}   &  \bGamma _{p-2}   &  {\cdots }   &  \bGamma _{0}   \\ \end{matrix} \right] \left[\begin{matrix}  \bPsi ^{p}_{1}   \\ \bPsi ^{p}_{2}   \\ {\vdots }   \\ \bPsi ^{p}_{p}   \\ \end{matrix} \right] = \left[\begin{matrix}  \bGamma ^{{\prime }}_{1}   \\ \bGamma ^{{\prime }}_{2}   \\ {\vdots }   \\ \bGamma ^{{\prime }}_{p} \nonumber   \end{matrix} \right] \end{eqnarray*}

Here ${\bPhi ^{p}_{i}}$ are the coefficient matrices for the past observation form of the vector autoregressive model, and ${ \bPsi ^{p}_{i}}$ are the coefficient matrices for the future observation form. More information about the Yule-Walker equations in the multivariate setting can be found in Whittle (1963) and in Ansley and Newbold (1979).
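The forward Yule-Walker system can be solved numerically by stacking the autocovariance matrices into a block Toeplitz system. The following Python sketch (function names are invented for this example) solves for the ${\bPhi ^{p}_{i}}$ and computes the innovation variance ${\bSigma _{p}}$:

```python
import numpy as np

# Hypothetical sketch: solve the forward Yule-Walker equations for
# Phi_1..Phi_p given autocovariance matrices Gamma_0..Gamma_p.
def yule_walker_forward(gammas, p):
    """gammas: list [Gamma_0, ..., Gamma_p] of (r, r) arrays.
    Returns (phis, sigma): coefficient matrices and innovation covariance."""
    r = gammas[0].shape[0]
    # Block Toeplitz matrix: block (i, j) is Gamma_{j-i} on or above the
    # diagonal and Gamma'_{i-j} below it, matching the left-hand side.
    G = np.block([[gammas[j - i] if j >= i else gammas[i - j].T
                   for j in range(p)] for i in range(p)])
    rhs = np.vstack(gammas[1:p + 1])
    sol = np.linalg.solve(G, rhs)
    phis = [sol[i * r:(i + 1) * r] for i in range(p)]
    # Innovation variance: Sigma_p = Gamma_0 - sum_i Phi_i Gamma'_i
    sigma = gammas[0] - sum(phi @ gammas[i + 1].T
                            for i, phi in enumerate(phis))
    return phis, sigma
```

For two independent AR(1) components with coefficients 0.5 and 0.3 and unit innovation variances, the $p=1$ solution recovers the diagonal coefficient matrix and an identity ${\bSigma _{1}}$.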

The innovation variance matrices for the two forms can be written as follows:

\[  {\bSigma }_{p} = \bGamma _{0}- \sum _{i=1}^{p}{\bPhi ^{p}_{i} \bGamma ^{{\prime }}_{i} }  \]
\[  \bOmega _{p} = \bGamma _{0} - \sum _{i=1}^{p} { \bPsi ^{p}_{i}\bGamma _{i} }  \]

The autoregressive models are fit to the data by using the preceding Yule-Walker equations with ${\bGamma _{i}}$ replaced by the sample covariance sequence $\mb {C_{i}} $. The covariance matrices are calculated as

\[  \mb {C}_{i} = \frac{1}{n-1} \sum _{t=i+1}^{n}{\mb {x}_{t}\mb {x}_{t-i}^{\prime } }  \]
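A direct translation of this formula into Python might look as follows (a hypothetical sketch; it assumes, as the procedure does by default, that the series has already been differenced and mean-centered):

```python
import numpy as np

# Hypothetical sketch of the sample autocovariance formula above.
def sample_autocov(x, maxlag):
    """x: (n, r) array of centered observations.
    Returns [C_0, C_1, ..., C_maxlag], each an (r, r) matrix."""
    n = x.shape[0]
    # C_i = (1/(n-1)) * sum over t of x_t x'_{t-i}
    return [x[i:].T @ x[:n - i] / (n - 1) for i in range(maxlag + 1)]
```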

Let ${\widehat{\bPhi }_{p}}$, ${\widehat{\bPsi }_{p}}$, ${\widehat{\bSigma }_{p}}$, and ${\widehat{\bOmega }_{p}}$ represent the Yule-Walker estimates of ${\bPhi _{p}}$, ${\bPsi _{p}}$, ${\bSigma _{p}}$, and ${\bOmega _{p}}$, respectively. These matrices are written to an output data set when the OUTAR= option is specified.

When the PRINTOUT=LONG option is specified, the sequence of matrices ${\widehat{\bSigma }_{p}}$ and the corresponding correlation matrices are printed. The sequence of matrices ${\widehat{\bSigma }_{p}}$ is used to compute Akaike information criteria for selection of the autoregressive order of the process.

Akaike Information Criterion

The Akaike information criterion (AIC) is defined as –2(maximum of log likelihood) + 2(number of parameters). Since the vector autoregressive models are estimated from the Yule-Walker equations, not by maximum likelihood, the exact likelihood values are not available for computing the AIC. However, for the vector autoregressive model the maximum of the log likelihood can be approximated as

\[  {\ln }( L ) {\approx } -\frac{n}{2} {\ln }( {|\widehat{\bSigma }_{p}|} )  \]

Thus, the AIC for the order p model is computed as

\[  AIC_{p} = n {\ln }( {|\widehat{\bSigma }_{p}|} ) + 2pr^{2}  \]

You can use the printed AIC array to compute a likelihood ratio test of the autoregressive order. The log-likelihood ratio test statistic for testing the order p model against the order ${p-1}$ model is

\[  - n {\ln }( {|\widehat{\bSigma }_{p}|} ) + n {\ln }( {|\widehat{\bSigma }_{p-1}|} )  \]

This quantity is asymptotically distributed as a ${{\chi }^{2}}$ with ${\mi {r} ^{2}}$ degrees of freedom if the series is autoregressive of order ${p-1}$. It can be computed from the AIC array as

\[  AIC_{p-1}-AIC_{p}+2r^{2}  \]
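This identity is easy to verify numerically. The following Python sketch uses made-up determinant values (they are illustrative only, not output of the procedure) to confirm that the statistic recovered from the AIC array matches the direct computation:

```python
import math

# Hypothetical check of the identity above, with invented numbers.
def aic(n, r, p, det_sigma):
    """AIC_p = n ln|Sigma_p| + 2 p r^2."""
    return n * math.log(det_sigma) + 2 * p * r * r

n, r = 100, 2
det_p, det_pm1 = 2.0, 3.0        # assumed |Sigma_p| and |Sigma_{p-1}|

# Direct log-likelihood ratio statistic for order p=2 vs p-1=1
lr_direct = -n * math.log(det_p) + n * math.log(det_pm1)
# Same statistic recovered from the AIC array
lr_from_aic = aic(n, r, 1, det_pm1) - aic(n, r, 2, det_p) + 2 * r * r
```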

You can evaluate the significance of these test statistics with the PROBCHI function in a SAS DATA step or with a ${{\chi }^{2}}$ table.

Determining the Autoregressive Order

Although the autoregressive models can be used for prediction, their primary value is to aid in the selection of a suitable portion of the sample covariance matrix for use in computing canonical correlations. If the multivariate time series ${\mb {x}_{t}}$ is of autoregressive order p, then the vector of past values to lag p is considered to contain essentially all the information relevant for prediction of future values of the time series.

By default, PROC STATESPACE selects the order p that produces the autoregressive model with the smallest ${AIC_{p}}$. If the value p for the minimum ${AIC_{p}}$ is less than the value of the PASTMIN= option, then p is set to the PASTMIN= value. Alternatively, you can use the ARMAX= and PASTMIN= options to force PROC STATESPACE to use an order you select.
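The default selection rule can be sketched in a few lines of Python (a hypothetical illustration of the rule just described, not the procedure's internal code):

```python
# Hypothetical sketch: pick the order with the smallest AIC_p,
# then enforce the PASTMIN= lower bound.
def select_order(aic_values, pastmin=0):
    """aic_values: [AIC_0, AIC_1, ..., AIC_armax]."""
    p = min(range(len(aic_values)), key=aic_values.__getitem__)
    return max(p, pastmin)
```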

Significance Limits for Partial Autocorrelations

The STATESPACE procedure prints a schematic representation of the partial autocorrelation matrices that indicates which partial autocorrelations are significantly greater than or significantly less than 0. Figure 28.11 shows an example of this table.

Figure 28.11: Significant Partial Autocorrelations

Schematic Representation of Partial Autocorrelations
Name/Lag 1 2 3 4 5 6 7 8 9 10
x ++ +. .. .. .. .. .. .. .. ..
y ++ .. .. .. .. .. .. .. .. ..
+ is > 2*std error,  - is < -2*std error,  . is between


The partial autocorrelations are from the sample partial autoregressive matrices ${ \widehat{\bPhi }^{p}_{p}}$. The standard errors used for the significance limits of the partial autocorrelations are computed from the sequence of matrices ${\bSigma _{p}}$ and ${\bOmega _{p}}$.

Under the assumption that the observed series arises from an autoregressive process of order ${p-1}$, the pth sample partial autoregressive matrix ${ \widehat{\bPhi }^{p}_{p}}$ has an asymptotic variance matrix ${\frac{1}{n} \bOmega ^{-1}_{p}{\otimes }\bSigma _{p}}$.

The significance limits for ${ \widehat{\bPhi }^{p}_{p}}$ used in the schematic plot of the sample partial autoregressive sequence are derived by replacing ${\bOmega _{p}}$ and ${\bSigma _{p}}$ with their sample estimators to produce the variance estimate, as follows:

\begin{eqnarray*}  \widehat{{Var}}\left( \widehat{\bPhi }^{p}_{p} \right) = \left(\frac{1}{n-rp}\right) \widehat{\bOmega }^{-1}_{p}{\otimes }\widehat{\bSigma }_{p} \nonumber \end{eqnarray*}
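The schematic symbols can be derived from this variance estimate as in the following Python sketch (hypothetical code; the mapping of the Kronecker-product diagonal onto the elements of ${\widehat{\bPhi }^{p}_{p}}$ is an assumption of this illustration):

```python
import numpy as np

# Hypothetical sketch: 2-standard-error limits from the variance estimate
# above, and the + / - / . symbols used in the schematic table.
def partial_ar_symbols(phi_hat, omega_hat, sigma_hat, n, r, p):
    """phi_hat: (r, r) sample partial AR matrix Phi^p_p to be tested."""
    # Var(Phi^p_p) ~ (1/(n - r p)) * inv(Omega_p) (x) Sigma_p
    var = np.kron(np.linalg.inv(omega_hat), sigma_hat) / (n - r * p)
    se = np.sqrt(np.diag(var)).reshape(r, r)   # assumed element ordering
    return np.where(phi_hat > 2 * se, '+',
                    np.where(phi_hat < -2 * se, '-', '.'))
```

In the scalar case with unit variances, $n = 102$, and $p = 2$, the standard error is $0.1$, so a partial autocorrelation of $0.25$ prints as `+` and one of $0.05$ prints as `.`.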