The VARMAX Procedure

Vector Error Correction Modeling

This section discusses the implications of cointegration for the autoregressive representation.

Consider the vector autoregressive process that has Gaussian errors defined by

\begin{eqnarray*} \mb{y} _ t = \sum _{i=1}^ p\Phi _ i\mb{y} _{t-i} + \bepsilon _ t \end{eqnarray*}

or

\begin{eqnarray*} \Phi (B) \mb{y} _ t = \bepsilon _ t \end{eqnarray*}

where the initial values, $\mb{y} _{-p+1},\ldots ,\mb{y} _0$, are fixed and $\bepsilon _ t \sim N(0,\Sigma )$. The AR operator $\Phi (B)$ can be re-expressed as

\[ \Phi (B) = \Phi ^*(B)(1-B)+\Phi (1)B \]

where

\[ \Phi (1)= I_ k-\Phi _{1}-\Phi _{2}-\cdots -\Phi _{p}, \Phi ^*(B)=I_ k-\sum _{i=1}^{p-1}\Phi ^*_ iB^ i, \Phi ^*_ i= - \sum _{j=i+1}^ p \Phi _ j \]

The vector error correction model (VECM), also called the vector equilibrium correction model, is defined as

\[ \Phi ^*(B)(1-B)\mb{y} _ t=\balpha \bbeta ’\mb{y} _{t-1} +\bepsilon _ t \]

or

\[ \Delta \mb{y} _ t = \balpha \bbeta ’\mb{y} _{t-1} + \sum _{i=1}^{p-1} \Phi ^*_ i \Delta \mb{y} _{t-i} + \bepsilon _ t \]

where $\balpha \bbeta ’ = -\Phi (1)$.
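The mapping from VAR coefficients to VECM coefficients can be sketched numerically. The following Python fragment (illustrative only; the $\Phi _ i$ values are made up and are not produced by PROC VARMAX) computes $\Pi = \balpha \bbeta ’ = -\Phi (1)$ and $\Phi ^*_1$ for a bivariate VAR(2):

```python
import numpy as np

# Hypothetical VAR(2) coefficient matrices (k = 2, p = 2); these values
# are made up purely to illustrate the algebra.
Phi1 = np.array([[0.6, 0.2],
                 [0.1, 0.5]])
Phi2 = np.array([[0.2, 0.1],
                 [0.0, 0.3]])
k = 2

# Phi(1) = I_k - Phi_1 - ... - Phi_p, and Pi = alpha * beta' = -Phi(1)
Phi_at_one = np.eye(k) - Phi1 - Phi2
Pi = -Phi_at_one

# Phi*_i = -(Phi_{i+1} + ... + Phi_p); for p = 2 only Phi*_1 exists
Phi_star_1 = -Phi2

print(Pi)          # error correction coefficient alpha * beta'
print(Phi_star_1)  # coefficient of Delta y_{t-1}
```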

Granger Representation Theorem

Engle and Granger (1987) define

\[ \Pi (z) \equiv (1-z)I_ k - \balpha \bbeta ’ z - \sum _{i=1}^{p-1}{\Phi ^*_ i (1-z)z^ i} \]

and the following assumptions hold:

  1. $|\Pi (z)| = 0 \Rightarrow |z|>1$ or $z=1$.

  2. The number of unit roots, $z=1$, is exactly $k-r$.

  3. $\balpha $ and $\bbeta $ are $k \times r$ matrices, and their ranks are both r.

Then $y_ t$ has the representation

\[ y_ t = C \sum _{i=1}^{t}{\bepsilon _ i} + C^*(B)\bepsilon _ t + y_0^* \]

where the Granger representation coefficient, C, is

\[ C = \bbeta _{\bot } \left[ \balpha ’_{\bot } \Phi ^*(1) \bbeta _{\bot } \right]^{-1} \balpha ’_{\bot } \]

where the full-rank $k \times (k-r)$ matrix $\bbeta _{\bot }$ is orthogonal to $\bbeta $ and the full-rank $k \times (k-r)$ matrix $\balpha _{\bot }$ is orthogonal to $\balpha $. $C^*(B)\bepsilon _ t = \sum _{j=1}^{\infty }{C_ j^*\bepsilon _{t-j}}$ is an $I(0)$ process, and $y_0^*$ depends on the initial values.

The Granger representation coefficient C can be defined only when the $(k-r) \times (k-r)$ matrix $\balpha ’_{\bot } \Phi ^*(1) \bbeta _{\bot }$ is invertible, where $\Phi ^*(1) = I_ k - \sum _{i=1}^{p-1}\Phi ^*_ i$.
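As a small numerical sketch (all values are made up for illustration), the following computes the Granger representation coefficient for a bivariate system with $r=1$, using $\Phi ^*(1)=I_ k-\sum _{i=1}^{p-1}\Phi ^*_ i$. The orthogonal complements are chosen by hand here:

```python
import numpy as np

# Hypothetical k = 2, r = 1 system; alpha, beta, Phi*_1 are made-up values.
alpha = np.array([[-0.4], [0.1]])
beta = np.array([[1.0], [-2.0]])
Phi_star_1 = np.array([[0.3, 0.1], [0.0, 0.2]])

# Orthogonal complements (k x (k-r)): beta' beta_perp = 0, alpha' alpha_perp = 0
beta_perp = np.array([[2.0], [1.0]])
alpha_perp = np.array([[0.1], [0.4]])

# Phi*(1) = I_k - sum_i Phi*_i
Phi_star_at_one = np.eye(2) - Phi_star_1

inner = alpha_perp.T @ Phi_star_at_one @ beta_perp
C = beta_perp @ np.linalg.inv(inner) @ alpha_perp.T

# beta' C = 0: the common stochastic trends drop out of the
# cointegrating relations
print(beta.T @ C)
```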

One motivation for the VECM(p) form is to consider the relation $\bbeta ’\mb{y} _{t} = \mb{c} $ as defining the underlying economic relations. Assume that agents react to the disequilibrium error $\bbeta ’\mb{y} _{t} - \mb{c} $ through the adjustment coefficient $\balpha $ to restore equilibrium. The cointegrating vector, $\bbeta $, is sometimes called the long-run parameter.

Consider a vector error correction model that has a deterministic term, $D_ t$, which can contain a constant, a linear trend, and seasonal dummy variables. Exogenous variables can also be included in the model. The model has the form

\begin{eqnarray*} \Delta \mb{y} _ t = \Pi \mb{y} _{t-1} + \sum _{i=1}^{p-1} \Phi ^*_ i \Delta \mb{y} _{t-i}+ A D_ t + \sum _{i=0}^{s}\Theta ^*_ i\mb{x} _{t-i} + \bepsilon _ t \end{eqnarray*}

where $\Pi = \balpha \bbeta ’$.

The alternative vector error correction representation considers the error correction term at lag $t-p$ and is written as

\[ \Delta \mb{y} _ t=\sum _{i=1}^{p-1}\Phi ^{\sharp }_ i\Delta \mb{y} _{t-i} +\Pi ^{\sharp } \mb{y} _{t-p} + A D_ t +\sum _{i=0}^{s}\Theta ^*_ i\mb{x} _{t-i} +\bepsilon _ t \]

If the matrix $\Pi $ has full rank ($r=k$), all components of $\mb{y} _ t$ are $I(0)$. On the other hand, $\mb{y} _ t$ is stationary in differences if $rank(\Pi )=0$. When the rank of the matrix $\Pi $ is $r < k$, there are $k-r$ linear combinations that are nonstationary and r stationary cointegrating relations. Note that the linearly independent vector $\mb{z} _ t=\bbeta ’\mb{y} _ t$ is stationary, but this transformation is not unique unless $r=1$. There does not exist a unique cointegrating matrix $\bbeta $ because the coefficient matrix $\Pi $ can also be decomposed as

\begin{eqnarray*} \Pi = \balpha MM^{-1}\bbeta ’ = \balpha ^{*}\bbeta ^{*'} \end{eqnarray*}

where M is an $r\times r$ nonsingular matrix.
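This non-uniqueness is easy to verify numerically. The following sketch (with made-up $\balpha $, $\bbeta $, and M values) shows that a rotated pair produces the same $\Pi $:

```python
import numpy as np

# Made-up alpha, beta (k = 2, r = 1) and an arbitrary nonsingular 1 x 1 M
alpha = np.array([[-0.4], [0.1]])
beta = np.array([[1.0], [-2.0]])
M = np.array([[2.0]])

Pi = alpha @ beta.T
alpha_star = alpha @ M                       # alpha* = alpha M
beta_star = beta @ np.linalg.inv(M).T        # beta*' = M^{-1} beta'

# The rotated pair reproduces the same Pi
print(np.allclose(Pi, alpha_star @ beta_star.T))
```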

Test for Cointegration

The cointegration rank test determines the linearly independent columns of $\Pi $. Johansen and Juselius proposed the cointegration rank test by using the reduced rank regression (Johansen 1988, 1995b; Johansen and Juselius 1990).

Different Specifications of Deterministic Trends

When you construct the VECM(p) form from the VAR(p) model, the deterministic terms in the VECM(p) form can differ from those in the VAR(p) model. When there are deterministic cointegrated relationships among variables, deterministic terms in the VAR(p) model are not present in the VECM(p) form. On the other hand, if there are stochastic cointegrated relationships in the VAR(p) model, deterministic terms appear in the VECM(p) form via the error correction term or as an independent term in the VECM(p) form. There are five different specifications of deterministic trends in the VECM(p) form.

  • Case 1: There is no separate drift in the VECM(p) form.

    \[ \Delta \mb{y} _ t = \balpha \bbeta ’\mb{y} _{t-1} + \sum _{i=1}^{p-1} \Phi ^*_ i \Delta \mb{y} _{t-i} +\bepsilon _ t \]
  • Case 2: There is no separate drift in the VECM(p) form, but a constant enters only via the error correction term.

    \[ \Delta \mb{y} _ t = \balpha (\bbeta ’, \bbeta _0)(\mb{y} _{t-1}’,1)’ + \sum _{i=1}^{p-1} \Phi ^*_ i \Delta \mb{y} _{t-i} + \bepsilon _ t \]
  • Case 3: There is a separate drift and no separate linear trend in the VECM(p) form.

    \[ \Delta \mb{y} _ t = \balpha \bbeta ’\mb{y} _{t-1} + \sum _{i=1}^{p-1} \Phi ^*_ i \Delta \mb{y} _{t-i} + \bdelta _0 + \bepsilon _ t \]
  • Case 4: There is a separate drift and no separate linear trend in the VECM(p) form, but a linear trend enters only via the error correction term.

    \[ \Delta \mb{y} _ t = \balpha (\bbeta ’, \bbeta _1)(\mb{y} _{t-1}’,t)’ + \sum _{i=1}^{p-1} \Phi ^*_ i \Delta \mb{y} _{t-i} + \bdelta _0 + \bepsilon _ t \]
  • Case 5: There is a separate linear trend in the VECM(p) form.

    \[ \Delta \mb{y} _ t = \balpha \bbeta ’\mb{y} _{t-1} + \sum _{i=1}^{p-1} \Phi ^*_ i \Delta \mb{y} _{t-i} + \bdelta _0 + \bdelta _1t + \bepsilon _ t \]

First, focus on Cases 1, 3, and 5 to test the null hypothesis that there are at most r cointegrating vectors. Let

\begin{eqnarray*} Z_{0t}& =& \Delta \mb{y} _ t \\ Z_{1t}& =& \mb{y} _{t-1} \\ Z_{2t}& =& [\Delta \mb{y} _{t-1}’,\ldots ,\Delta \mb{y} _{t-p+1}’,D_ t]’\\ Z_{0} & =& [Z_{01}, \ldots , Z_{0T}]’ \\ Z_{1} & =& [Z_{11}, \ldots , Z_{1T}]’ \\ Z_{2} & =& [Z_{21}, \ldots , Z_{2T}]’ \end{eqnarray*}

where $D_ t$ can be empty for Case 1, 1 for Case 3, and $(1,t)$ for Case 5.

In Case 2, $Z_{1t}$ and $Z_{2t}$ are defined as

\begin{eqnarray*} Z_{1t}& =& [ \mb{y} _{t-1}’, 1]’ \\ Z_{2t}& =& [\Delta \mb{y} _{t-1}’,\ldots ,\Delta \mb{y} _{t-p+1}’]’\\ \end{eqnarray*}

In Case 4, $Z_{1t}$ and $Z_{2t}$ are defined as

\begin{eqnarray*} Z_{1t}& =& [ \mb{y} _{t-1}’, t]’ \\ Z_{2t}& =& [\Delta \mb{y} _{t-1}’,\ldots ,\Delta \mb{y} _{t-p+1}’, 1]’\\ \end{eqnarray*}

Let $\Psi $ be the matrix of parameters consisting of $\Phi ^{*}_1$, …, $\Phi ^{*}_{p-1}$, A, and $\Theta ^*_0$, …, $\Theta ^{*}_ s$, where the parameter A corresponds to the regressors $D_ t$. Then the VECM(p) form is rewritten in these variables as

\[ Z_{0t}=\balpha \bbeta ’ Z_{1t} +\Psi Z_{2t} +\bepsilon _ t \]

The log-likelihood function is given by

\begin{eqnarray*} \ell & =& - \frac{kT}{2} \log 2\pi -\frac{T}{2} \log |\Sigma | \\ & & - \frac{1}{2} \sum _{t=1}^ T(Z_{0t} - \balpha \bbeta ’ Z_{1t} -\Psi Z_{2t})’\Sigma ^{-1} (Z_{0t} -\balpha \bbeta ’ Z_{1t} -\Psi Z_{2t}) \end{eqnarray*}

The residuals, $R_{0t}$ and $R_{1t}$, are obtained by regressing $Z_{0t}$ and $Z_{1t}$ on $Z_{2t}$, respectively. The regression equation of residuals is

\[ R_{0t} = \balpha \bbeta ’ R_{1t} + \hat{ \bepsilon }_ t \]

The crossproducts matrices are computed

\[ S_{ij} = \frac{1}{T}\sum _{t=1}^{T}R_{it}R_{jt}’,~ ~ i,j=0,1 \]

Then the maximum likelihood estimator for $\bbeta $ is obtained from the eigenvectors that correspond to the r largest eigenvalues of the following equation:

\[ |\lambda S_{11} - S_{10}S_{00}^{-1}S_{01}| = 0 \]

The eigenvalues of the preceding equation are the squared canonical correlations between $R_{0t}$ and $R_{1t}$, and the eigenvectors that correspond to the r largest eigenvalues are the r linear combinations of $\mb{y} _{t-1}$ that have the largest squared partial correlations with the stationary process $\Delta \mb{y} _{t}$ after correcting for lags and deterministic terms. Such an analysis calls for a reduced rank regression of $\Delta \mb{y} _{t}$ on $\mb{y} _{t-1}$ corrected for $(\Delta \mb{y} _{t-1},\ldots ,\Delta \mb{y} _{t-p+1},D_ t)$, as discussed by Anderson (1951). Johansen (1988) suggests two test statistics to test the null hypothesis that there are at most r cointegrating vectors:

\[ \mbox{H}_0: \lambda _ i=0 \mr{~ ~ for~ ~ } i=r+1,\ldots ,k \]
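The reduced rank regression can be sketched end to end. The following Python fragment is an illustrative simplification (simulated data and made-up coefficients, not the VARMAX implementation) that builds $Z_{0t}$, $Z_{1t}$, and $Z_{2t}$ for Case 3, forms the crossproduct matrices $S_{ij}$, and solves $|\lambda S_{11} - S_{10}S_{00}^{-1}S_{01}| = 0$:

```python
import numpy as np

# Made-up cointegrated VECM(2) used only to generate data (k = 2, p = 2)
rng = np.random.default_rng(0)
T, k, p = 200, 2, 2
alpha = np.array([[-0.4], [0.1]])
beta = np.array([[1.0], [-1.0]])
Phi_star_1 = np.array([[0.2, 0.0], [0.0, 0.1]])

y = np.zeros((T + p, k))
for t in range(p, T + p):
    dy_t = (alpha @ beta.T) @ y[t - 1] \
           + Phi_star_1 @ (y[t - 1] - y[t - 2]) \
           + 0.1 * rng.standard_normal(k)
    y[t] = y[t - 1] + dy_t

dy = np.diff(y, axis=0)
Z0 = dy[p - 1:]                                  # Delta y_t
Z1 = y[p - 1:-1]                                 # y_{t-1}
Z2 = np.hstack([dy[p - 2:-1], np.ones((T, 1))])  # [Delta y_{t-1}', 1]' (Case 3)

# Residuals from regressing Z0 and Z1 on Z2
R0 = Z0 - Z2 @ np.linalg.lstsq(Z2, Z0, rcond=None)[0]
R1 = Z1 - Z2 @ np.linalg.lstsq(Z2, Z1, rcond=None)[0]

S00 = R0.T @ R0 / T
S01 = R0.T @ R1 / T
S11 = R1.T @ R1 / T

# Solve |lambda*S11 - S10 S00^{-1} S01| = 0 via a symmetric transform
L = np.linalg.cholesky(S11)
Linv = np.linalg.inv(L)
M = Linv @ S01.T @ np.linalg.inv(S00) @ S01 @ Linv.T
eigvals = np.sort(np.linalg.eigvalsh(M))[::-1]   # squared canonical correlations
print(eigvals)
```

The eigenvalues lie in $[0,1]$ because they are squared canonical correlations; the eigenvectors of the transformed problem, mapped back through $L^{-1}$, give the cointegrating directions.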

Trace Test

The trace statistic for testing the null hypothesis that there are at most r cointegrating vectors is as follows:

\[ \lambda _{trace} = -T\sum _{i=r+1}^{k}\log (1-\lambda _ i) \]

The asymptotic distribution of this statistic is given by

\[ tr\left\{ \int _0^1 (dW){\tilde W}’ \left(\int _0^1 {\tilde W}{\tilde W}’dr\right)^{-1}\int _0^1 {\tilde W}(dW)’ \right\} \]

where $tr(A)$ is the trace of a matrix A, W is the $(k-r)$-dimensional Brownian motion, and $\tilde W$ is the Brownian motion itself, the demeaned Brownian motion, or the detrended Brownian motion, according to the different specifications of deterministic trends in the vector error correction model.

Maximum Eigenvalue Test

The maximum eigenvalue statistic for testing the null hypothesis that there are at most r cointegrating vectors is as follows:

\[ \lambda _{max} = -T\log (1-\lambda _{r+1}) \]

The asymptotic distribution of this statistic is given by

\[ max\left\{  \int _0^1 (dW){\tilde W}’ \left(\int _0^1 {\tilde W}{\tilde W}’dr\right)^{-1}\int _0^1 {\tilde W}(dW)’ \right\}  \]

where $max(A)$ is the maximum eigenvalue of a matrix A. Osterwald-Lenum (1992) provided detailed tables of the critical values of these statistics.
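Given the eigenvalues of the reduced rank regression, both statistics are one-line computations. The following sketch uses eigenvalues chosen to match the magnitudes in Figure 42.68 and a hypothetical sample size of $T=100$ (so the resulting statistics are only illustrative):

```python
import numpy as np

# Illustrative eigenvalues (k = 2) and a hypothetical sample size T
T = 100
lam = np.array([0.4644, 0.0056])
k = len(lam)

def trace_stat(lam, r, T):
    # -T * sum_{i=r+1}^{k} log(1 - lambda_i)
    return -T * np.sum(np.log(1.0 - lam[r:]))

def max_eig_stat(lam, r, T):
    # -T * log(1 - lambda_{r+1})
    return -T * np.log(1.0 - lam[r])

for r in range(k):
    print(r, trace_stat(lam, r, T), max_eig_stat(lam, r, T))
```

For $r = k-1$ the two statistics coincide, because only one eigenvalue remains in the trace sum.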

The following statements use the JOHANSEN option to compute the Johansen cointegration rank trace test of integrated order 1:

proc varmax data=simul2;
   model y1 y2 / p=2 cointtest=(johansen=(normalize=y1));
run;

Figure 42.68 shows the output based on the model specified in the MODEL statement. An intercept term is assumed. In the "Cointegration Rank Test Using Trace" table, the column Drift in ECM indicates that there is no separate drift in the error correction model, and the column Drift in Process indicates that the process has a constant drift before differencing. The "Cointegration Rank Test Using Trace" table shows the trace statistics and p-values based on Case 3, and the "Cointegration Rank Test Using Trace under Restriction" table shows the trace statistics and p-values based on Case 2. For a specified significance level, such as 5%, the output indicates that the null hypothesis that the series are not cointegrated (H0: Rank = 0) can be rejected, because the p-values for both Case 2 and Case 3 are less than 0.05. The output also shows that the null hypothesis that the series are cointegrated with rank 1 (H0: Rank = 1) cannot be rejected for either Case 2 or Case 3, because the p-values for these tests are both greater than 0.05.

Figure 42.68: Cointegration Rank Test (COINTTEST=(JOHANSEN=) Option)

The VARMAX Procedure

Cointegration Rank Test Using Trace

H0: Rank=r   H1: Rank>r   Eigenvalue     Trace   Pr > Trace   Drift in ECM   Drift in Process
         0            0       0.4644   61.7522       <.0001   Constant       Linear
         1            1       0.0056    0.5552       0.4559

Cointegration Rank Test Using Trace Under Restriction

H0: Rank=r   H1: Rank>r   Eigenvalue     Trace   Pr > Trace   Drift in ECM   Drift in Process
         0            0       0.5209   76.3788       <.0001   Constant       Constant
         1            1       0.0426    4.2680       0.3741



Figure 42.69 shows which case, Case 2 (the null hypothesis H0) or Case 3 (the alternative hypothesis H1), is appropriate depending on the significance level. Because the cointegration rank is chosen to be 1 by the result in Figure 42.68, look at the last row, which corresponds to rank=1. Because the p-value is 0.0540, Case 2 cannot be rejected at the 5% significance level, but it can be rejected at the 10% significance level. For the models fitted under Case 2 and Case 3, see Figure 42.72 and Figure 42.73.

Figure 42.69: Cointegration Rank Test, Continued

Hypothesis of the Restriction

Hypothesis   Drift in ECM   Drift in Process
H0(Case 2)   Constant       Constant
H1(Case 3)   Constant       Linear

Hypothesis Test of the Restriction

Rank   Eigenvalue   Restricted Eigenvalue   DF   Chi-Square   Pr > ChiSq
   0       0.4644                  0.5209    2        14.63       0.0007
   1       0.0056                  0.0426    1         3.71       0.0540



Figure 42.70 shows the estimates of long-run parameter (Beta) and adjustment coefficients (Alpha) based on Case 3.

Figure 42.70: Cointegration Rank Test, Continued

Beta

Variable          1          2
y1          1.00000    1.00000
y2         -2.04869   -0.02854

Alpha

Variable          1          2
y1         -0.46421   -0.00502
y2          0.17535   -0.01275



Because of the NORMALIZE= option, the first row of the "Beta" table is normalized to 1. Considering that the cointegration rank is 1, the long-run relationship of the series is

\begin{eqnarray*} {\bbeta }’y_ t & =& \left[ \begin{array}{rr} 1 & -2.04869 \\ \end{array} \right] \left[ \begin{array}{r} y_1 \\ y_2 \\ \end{array} \right] \\ & =& y_{1t} - 2.04869 y_{2t} \\ y_{1t} & =& 2.04869 y_{2t} \end{eqnarray*}

Figure 42.71 shows the estimates of long-run parameter (Beta) and adjustment coefficients (Alpha) based on Case 2.

Figure 42.71: Cointegration Rank Test, Continued

Beta Under Restriction

Variable          1           2
y1          1.00000     1.00000
y2         -2.04366    -2.75773
1           6.75919   101.37051

Alpha Under Restriction

Variable          1          2
y1         -0.48015    0.01091
y2          0.12538    0.03722



Considering that the cointegration rank is 1, the long-run relationship of the series is

\begin{eqnarray*} {\bbeta }’y_ t & =& \left[ \begin{array}{rrr} 1 & -2.04366 & 6.75919 \\ \end{array} \right] \left[ \begin{array}{r} y_1 \\ y_2 \\ 1 \end{array} \right] \\ & =& y_{1t} - 2.04366~ y_{2t} + 6.75919 \\ y_{1t} & =& 2.04366~ y_{2t} - 6.75919 \end{eqnarray*}

Estimation of Vector Error Correction Model

The preceding log-likelihood function is maximized for

\begin{eqnarray*} \hat{\bbeta } & =& S_{11}^{-1/2} [v_1,\ldots ,v_ r] \\ \hat{\balpha } & =& S_{01}\hat{\bbeta }(\hat{\bbeta }’ S_{11}\hat{\bbeta })^{-1} \\ \hat\Pi & =& \hat{\balpha } \hat{\bbeta }’ \\ \hat\Psi ’ & =& (Z_{2}’Z_{2})^{-1} Z_{2}’(Z_{0} - Z_{1} \hat\Pi ’) \\ \hat\Sigma & =& (Z_{0} - Z_{2} \hat\Psi ’ - Z_{1} \hat\Pi ’)’ (Z_{0} - Z_{2} \hat\Psi ’ - Z_{1} \hat\Pi ’)/T \end{eqnarray*}

The estimators of the orthogonal complements of $\balpha $ and $\bbeta $ are

\[ \hat{\bbeta }_{\bot } = S_{11} [v_{r+1},\ldots ,v_{k}] \]

and

\[ \hat{\balpha }_{\bot } = S_{00}^{-1} S_{01} [v_{r+1},\ldots ,v_{k}] \]

Let ${\vartheta }$ denote the parameter vector $(\mr{vec}(\balpha ,\Psi )’,\mr{vech}(\Sigma )’)’$. The covariance of parameter estimates $\hat{\vartheta }$ is obtained as the inverse of the negative Hessian matrix $H \equiv \frac{\partial ^2 \ell }{\partial \vartheta \partial \vartheta '}$. Because $\hat{\Pi }=\hat{\balpha }\hat{\bbeta }’$, the variance of $\hat{\Pi }$ and the covariance between $\hat{\Pi }$ and $\hat{\vartheta }$ are calculated as follows:

\[ \mr{cov}(\mr{vec}(\hat{\Pi }), \mr{vec}(\hat{\Pi })) = (\hat{\bbeta } \otimes I_ k) \mr{cov}(\mr{vec}(\hat{\balpha }), \mr{vec}(\hat{\balpha })) (\hat{\bbeta } \otimes I_ k)’ \]
\[ \mr{cov}(\mr{vec}(\hat{\Pi }), \hat{\vartheta }) = (\hat{\bbeta } \otimes I_ k) \mr{cov}(\mr{vec}(\hat{\balpha }), \hat{\vartheta }) \]

For Case 2 (Case 4), because the coefficient vector $\hat{\bdelta }_0$ ($\hat{\bdelta }_1$) for the constant term (the linear trend term) is the product of $\hat{\balpha }$ and $\hat{\bbeta }_0$ ($\hat{\bbeta }_1$), the variance of $\hat{\bdelta }_0$ ($\hat{\bdelta }_1$) and the covariance between $\hat{\bdelta }_0$ ($\hat{\bdelta }_1$) and $\hat{\vartheta }$ are calculated as follows:

\[ \mr{cov}(\hat{\bdelta }_ i, \hat{\bdelta }_ i) = (\hat{\bbeta }_ i’ \otimes I_ k) \mr{cov}(\mr{vec}(\hat{\balpha }), \mr{vec}(\hat{\balpha })) (\hat{\bbeta }_ i’ \otimes I_ k)’,~ ~ i=0~ \mr{or}~ 1 \]
\[ \mr{cov}(\hat{\bdelta }_ i, \hat{\vartheta }) = (\hat{\bbeta }_ i’ \otimes I_ k) \mr{cov}(\mr{vec}(\hat{\balpha }), \hat{\vartheta }),~ ~ i=0~ \mr{or}~ 1 \]
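The Kronecker-product transform behind these covariance formulas can be sketched directly. The following fragment (with a made-up $\hat{\bbeta }$ and a made-up covariance matrix for $\mr{vec}(\hat{\balpha })$; these are not VARMAX outputs) computes $\mr{cov}(\mr{vec}(\hat{\Pi }))$ for $k=2$, $r=1$:

```python
import numpy as np

# Made-up beta and a made-up covariance of vec(alpha_hat), k = 2, r = 1
k, r = 2, 1
beta = np.array([[1.0], [-2.0]])
cov_vec_alpha = 0.01 * np.eye(k * r)

# vec(Pi) = (beta kron I_k) vec(alpha), so the covariance transforms by
# the same Kronecker factor on both sides
Kron = np.kron(beta, np.eye(k))
cov_vec_Pi = Kron @ cov_vec_alpha @ Kron.T
print(cov_vec_Pi.shape)
```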

The following statements are examples of fitting the five different cases of the vector error correction models mentioned in the previous section.

For fitting Case 1,

model y1 y2 / p=2 noint;
cointeg rank=1 normalize=y1;

For fitting Case 2,

model y1 y2 / p=2;
cointeg rank=1 normalize=y1 ectrend;

For fitting Case 3,

model y1 y2 / p=2;
cointeg rank=1 normalize=y1;

For fitting Case 4,

model y1 y2 / p=2 trend=linear;
cointeg rank=1 normalize=y1 ectrend;

For fitting Case 5,

model y1 y2 / p=2 trend=linear;
cointeg rank=1 normalize=y1;

In the previous example, the output from the COINTTEST=(JOHANSEN) option shown in Figure 42.69 indicates that you can fit the model by using either Case 2 or Case 3, because the test of the restriction is not significant at the 0.05 level but is significant at the 0.10 level. In the following, both models are fit to show the differences in the displayed output: Figure 42.72 is for Case 2, and Figure 42.73 is for Case 3.

For Case 2,

proc varmax data=simul2;
   model y1 y2 / p=2 print=(estimates);
   cointeg rank=1 normalize=y1 ectrend;
run;

Figure 42.72: Parameter Estimation with the ECTREND Option

The VARMAX Procedure

Parameter Alpha * Beta' Estimates

Variable         y1         y2          1
y1         -0.48015    0.98126   -3.24543
y2          0.12538   -0.25624    0.84748

AR Coefficients of Differenced Lag

DIF Lag   Variable         y1         y2
      1   y1         -0.72759   -0.77463
          y2          0.38982   -0.55173

Model Parameter Estimates

Equation   Parameter   Estimate   Standard Error   t Value   Pr > |t|   Variable
D_y1       CONST1      -3.24543          0.33022     -9.83     <.0001   1, EC
           AR1_1_1     -0.48015          0.04886     -9.83     <.0001   y1(t-1)
           AR1_1_2      0.98126          0.09984      9.83     <.0001   y2(t-1)
           AR2_1_1     -0.72759          0.04623    -15.74     <.0001   D_y1(t-1)
           AR2_1_2     -0.77463          0.04978    -15.56     <.0001   D_y2(t-1)
D_y2       CONST2       0.84748          0.35394      2.39     0.0187   1, EC
           AR1_2_1      0.12538          0.05236      2.39     0.0187   y1(t-1)
           AR1_2_2     -0.25624          0.10702     -2.39     0.0187   y2(t-1)
           AR2_2_1      0.38982          0.04955      7.87     <.0001   D_y1(t-1)
           AR2_2_2     -0.55173          0.05336    -10.34     <.0001   D_y2(t-1)



Figure 42.72 can be reported as follows:

\begin{eqnarray*} \Delta \mb{y} _ t & =& \left[ \begin{array}{rrr} -0.48015 & 0.98126 & -3.24543 \\ 0.12538 & -0.25624& 0.84748 \end{array} \right] \left[ \begin{array}{c} y_{1,t-1} \\ y_{2,t-1} \\ 1 \end{array} \right] \\ & & + \left[ \begin{array}{rr} -0.72759 & -0.77463 \\ 0.38982 & -0.55173 \end{array} \right] \Delta \mb{y} _{t-1} + \bepsilon _ t \end{eqnarray*}

The keyword "EC" in the "Model Parameter Estimates" table means that the ECTREND option is used for fitting the model.

For fitting Case 3,

proc varmax data=simul2;
   model y1 y2 / p=2 print=(estimates);
   cointeg rank=1 normalize=y1;
run;

Figure 42.73: Parameter Estimation without the ECTREND Option

The VARMAX Procedure

Parameter Alpha * Beta' Estimates

Variable         y1         y2
y1         -0.46421    0.95103
y2          0.17535   -0.35923

AR Coefficients of Differenced Lag

DIF Lag   Variable         y1         y2
      1   y1         -0.74052   -0.76305
          y2          0.34820   -0.51194

Model Parameter Estimates

Equation   Parameter   Estimate   Standard Error   t Value   Pr > |t|   Variable
D_y1       CONST1      -2.60825          1.32398     -1.97     0.0518   1
           AR1_1_1     -0.46421          0.05474     -8.48     <.0001   y1(t-1)
           AR1_1_2      0.95103          0.11215      8.48     <.0001   y2(t-1)
           AR2_1_1     -0.74052          0.05060    -14.63     <.0001   D_y1(t-1)
           AR2_1_2     -0.76305          0.05352    -14.26     <.0001   D_y2(t-1)
D_y2       CONST2       3.43005          1.39587      2.46     0.0159   1
           AR1_2_1      0.17535          0.05771      3.04     0.0031   y1(t-1)
           AR1_2_2     -0.35923          0.11824     -3.04     0.0031   y2(t-1)
           AR2_2_1      0.34820          0.05335      6.53     <.0001   D_y1(t-1)
           AR2_2_2     -0.51194          0.05643     -9.07     <.0001   D_y2(t-1)



Figure 42.73 can be reported as follows:

\begin{eqnarray*} \Delta \mb{y} _ t & =& \left[ \begin{array}{rr} -0.46421 & 0.95103 \\ 0.17535 & -0.35923 \end{array} \right] \mb{y} _{t-1} + \left[ \begin{array}{rr} -0.74052 & -0.76305 \\ 0.34820 & -0.51194 \end{array} \right] \Delta \mb{y} _{t-1} \\ & & + \left[ \begin{array}{r} -2.60825 \\ 3.43005 \end{array} \right] + \bepsilon _ t \end{eqnarray*}

A Test for the Long-run Relations

Consider an example that has the variables $m_ t$ (log real money), $y_ t$ (log real income), $i^ d_ t$ (deposit interest rate), and $i^ b_ t$ (bond interest rate). A natural hypothesis is that in the long-run relation, money and income have equal coefficients with opposite signs; that is, the cointegrating relation contains $m_ t$ and $y_ t$ only through $m_ t - y_ t$. For the analysis, you can express these restrictions in the parameterization of $\bH $ such that $\bbeta = H\phi $, where $\bH $ is a known $k\times s$ matrix and $\phi $ is the $s\times r$ ($r\leq s < k$) parameter matrix to be estimated. For this example, $\bH $ is given by

\[ H = \left[ \begin{array}{rrr} 1 & 0 & 0 \\ -1 & 0 & 0 \\ 0 & 1 & 0 \\ 0 & 0 & 1 \\ \end{array} \right] \]

Restriction $H_0\colon \bbeta = H\phi $

When the linear restriction $\bbeta = H\phi $ is given, it implies that the same restrictions are imposed on all cointegrating vectors. You obtain the maximum likelihood estimator of $\bbeta $ by reduced rank regression of $\Delta \mb{y} _ t$ on $H\mb{y} _{t-1}$ corrected for $(\Delta \mb{y} _{t-1},\ldots ,\Delta \mb{y} _{t-p+1}, D_ t)$, solving the following equation

\begin{eqnarray*} |\rho H’S_{11}H - H’S_{10}S^{-1}_{00}S_{01}H| = 0 \end{eqnarray*}

for the eigenvalues $1>\rho _1>\cdots >\rho _ s>0$ and eigenvectors $(v_1,\ldots ,v_ s)$, where the $S_{ij}$ are given in the preceding section. Then choose $\hat\phi =(v_1,\ldots ,v_ r)$, which corresponds to the r largest eigenvalues, and $\hat{\bbeta } = H\hat\phi $.

The test statistic for $H_0\colon \bbeta = H\phi $ is given by

\[ T\sum _{i=1}^ r \log \{ (1-\rho _ i)/(1-\lambda _ i)\} \stackrel{d}{\rightarrow } \chi ^2_{r(k-s)} \]
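Given the restricted and unrestricted eigenvalues, the LR statistic is a simple sum of log ratios. The following sketch uses the eigenvalues reported in Figure 42.74 and a hypothetical sample size of $T=100$, so the resulting value is only illustrative:

```python
import math

# r = 1, k = 2, s = 1; T is a hypothetical sample size
T, r, k, s = 100, 1, 2, 1
lam = [0.4644]   # unrestricted eigenvalues, i = 1..r
rho = [0.4616]   # restricted eigenvalues under beta = H*phi

# T * sum_i log{(1 - rho_i) / (1 - lambda_i)}
lr = T * sum(math.log((1 - p_) / (1 - l_)) for p_, l_ in zip(rho, lam))
df = r * (k - s)
print(lr, df)    # compare lr to a chi-square(df) critical value
```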

If the series has no deterministic trend, the constant term should be restricted by $\balpha _{\bot }’\bdelta _0 = 0$ as in Case 2. Then $\bH $ is given by

\[ H = \left[ \begin{array}{rrrr} 1 & 0 & 0 & 0\\ -1 & 0 & 0 & 0\\ 0 & 1 & 0 & 0\\ 0 & 0 & 1 & 0\\ 0 & 0 & 0 & 1\\ \end{array} \right] \]

The following statements test that $2\beta _1 + \beta _2 = 0$:

proc varmax data=simul2;
   model y1 y2 / p=2;
   cointeg rank=1 h=(1,-2) normalize=y1;
run;

Figure 42.74 shows the results of testing $H_0\colon 2 \beta _1 +\beta _2 =0$. The input $\bH $ matrix is $H=(1, -2)’$. The adjustment coefficient is reestimated under the restriction, and the test indicates that you cannot reject the null hypothesis.

Figure 42.74: Testing of Linear Restriction (H= Option)

The VARMAX Procedure

Beta Under Restriction

Variable         1
y1         1.00000
y2        -2.00000

Alpha Under Restriction

Variable          1
y1         -0.47404
y2          0.17534

Hypothesis Test

Index   Eigenvalue   Restricted Eigenvalue   DF   Chi-Square   Pr > ChiSq
    1       0.4644                  0.4616    1         0.51       0.4738



Test for the Weak Exogeneity and Restrictions of Alpha

Consider a vector error correction model:

\[ \Delta \mb{y} _ t = \balpha \bbeta ’\mb{y} _{t-1} + \sum _{i=1}^{p-1} \Phi ^*_ i \Delta \mb{y} _{t-i} + AD_ t + \bepsilon _ t \]

Divide the process $\mb{y} _ t$ into $(\mb{y} _{1t}’,\mb{y} _{2t}’)’$ with dimensions $k_1$ and $k_2$, and divide $\Sigma $ into

\begin{eqnarray*} \Sigma = \left[ \begin{array}{cc} \Sigma _{11} & \Sigma _{12} \\ \Sigma _{21} & \Sigma _{22} \end{array} \right] \end{eqnarray*}

Similarly, the parameters can be decomposed as follows:

\begin{eqnarray*} \balpha = \left[ \begin{array}{c} \balpha _1 \\ \balpha _2 \end{array} \right] ~ ~ \Phi ^*_ i = \left[ \begin{array}{c} \Phi ^*_{1i} \\ \Phi ^*_{2i} \end{array} \right] ~ ~ A = \left[ \begin{array}{c} A_{1} \\ A_{2} \end{array} \right] \end{eqnarray*}

Then the VECM(p) form can be rewritten by using the decomposed parameters and processes:

\begin{eqnarray*} \left[ \begin{array}{c} \Delta \mb{y} _{1t} \\ \Delta \mb{y} _{2t} \end{array} \right] = \left[ \begin{array}{c} \balpha _1 \\ \balpha _2 \end{array} \right] \bbeta ’\mb{y} _{t-1} + \sum _{i=1}^{p-1} \left[ \begin{array}{c} \Phi ^*_{1i} \\ \Phi ^*_{2i} \end{array} \right] \Delta \mb{y} _{t-i} + \left[ \begin{array}{c} A_{1} \\ A_{2} \end{array} \right] D_ t + \left[ \begin{array}{c} \bepsilon _{1t} \\ \bepsilon _{2t} \end{array} \right] \end{eqnarray*}

The conditional model for $\mb{y} _{1t}$ given $\mb{y} _{2t}$ is

\begin{eqnarray*} \Delta \mb{y} _{1t} & =& \omega \Delta \mb{y} _{2t} + (\alpha _1-\omega \alpha _2)\bbeta ’\mb{y} _{t-1} + \sum _{i=1}^{p-1}(\Phi ^{*}_{1i} - \omega \Phi ^{*}_{2i})\Delta \mb{y} _{t-i} \\ & & + (A_1 - \omega A_2) D_ t + \bepsilon _{1t} - \omega \bepsilon _{2t} \end{eqnarray*}

and the marginal model of $\mb{y} _{2t}$ is

\[ \Delta \mb{y} _{2t} =\alpha _2\bbeta ’\mb{y} _{t-1} + \sum _{i=1}^{p-1} \Phi ^{*}_{2i}\Delta \mb{y} _{t-i} + A_2 D_ t + \bepsilon _{2t} \]

where $\omega =\Sigma _{12}\Sigma _{22}^{-1}$.
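As a small numerical sketch (the covariance entries are made up), the following computes $\omega $ and the variance of the conditional error $\bepsilon _{1t} - \omega \bepsilon _{2t}$ implied by the partition:

```python
import numpy as np

# Made-up innovation covariance, partitioned with k1 = k2 = 1
Sigma = np.array([[1.0, 0.4],
                  [0.4, 2.0]])
Sigma11, Sigma12 = Sigma[:1, :1], Sigma[:1, 1:]
Sigma21, Sigma22 = Sigma[1:, :1], Sigma[1:, 1:]

# omega = Sigma_12 Sigma_22^{-1}
omega = Sigma12 @ np.linalg.inv(Sigma22)

# variance of eps_1t - omega * eps_2t
cond_var = Sigma11 - Sigma12 @ np.linalg.inv(Sigma22) @ Sigma21
print(omega, cond_var)
```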

The test of weak exogeneity of $\mb{y} _{2t}$ for the parameters $(\alpha _1, \bbeta )$ determines whether $\alpha _2=0$. Weak exogeneity means that there is no information about $\bbeta $ in the marginal model or that the variables $\mb{y} _{2t}$ do not react to a disequilibrium.

Restriction $H_0\colon \balpha =J\psi $

Consider the null hypothesis $H_0\colon \balpha = J\psi $, where J is a $k\times m$ matrix with $r \leq m < k$.

From the previous residual regression equation

\begin{eqnarray*} {R}_{0t} = \balpha \bbeta ’{R}_{1t} + \hat{\bepsilon }_ t = J\psi \bbeta ’{R}_{1t} + \hat{\bepsilon }_ t \end{eqnarray*}

you can obtain

\begin{eqnarray*} \bar{J}’{R}_{0t} & =& \psi \bbeta ’{R}_{1t} +\bar{J}’\hat{\bepsilon }_ t \\ J_{\bot }’{R}_{0t}& =& J_{\bot }’\hat{\bepsilon }_ t \end{eqnarray*}

where $\bar{J}=J(J’J)^{-1}$ and $J_{\bot }$ is orthogonal to J such that $J_{\bot }’J=0$.

Define

\[ \Sigma _{JJ_{\bot }} = \bar{J}’\Sigma J_{\bot } \mr{~ ~ and~ ~ } \Sigma _{J_{\bot }J_{\bot }} = J_{\bot }’\Sigma J_{\bot } \]

and let $\omega =\Sigma _{JJ_{\bot }}\Sigma _{J_{\bot }J_{\bot }}^{-1}$. Then $\bar{J}’{R}_{0t}$ can be written as

\begin{eqnarray*} \bar{J}’{R}_{0t} = \psi \bbeta ’{R}_{1t} + \omega J_{\bot }’{R}_{0t} + \bar{J}’\hat{\bepsilon }_ t - \omega J_{\bot }’ \hat{\bepsilon }_ t \end{eqnarray*}

Using the marginal distribution of $J_{\bot }’{R}_{0t}$ and the conditional distribution of $\bar{J}’{R}_{0t}$, the new residuals are computed as

\begin{eqnarray*} \tilde{R}_{Jt} & = & \bar{J}’{R}_{0t} - S_{JJ_{\bot }} S_{J_{\bot }J_{\bot }}^{-1}J_{\bot }’{R}_{0t} \\ \tilde{R}_{1t} & = & {R}_{1t} - S_{1J_{\bot }} S_{J_{\bot }J_{\bot }}^{-1}J_{\bot }’{R}_{0t} \end{eqnarray*}

where

\[ S_{JJ_{\bot }} = \bar{J}’S_{00}J_{\bot }, ~ ~ S_{J_{\bot }J_{\bot }} = J_{\bot }’S_{00}J_{\bot }, ~ ~ \mr{and ~ ~ } S_{J_{\bot }1} = J_{\bot }’S_{01} \]

In terms of $\tilde{R}_{Jt}$ and $\tilde{R}_{1t}$, the MLE of $\bbeta $ is computed by using the reduced rank regression. Let

\[ S_{ij\mb{.} J_{\bot }}=\frac{1}{T}\sum _{t=1}^{T}\tilde{{R}}_{it} \tilde{{R}}_{jt}’, \mr{~ ~ for~ ~ } i,j=1,J \]

Under the null hypothesis $H_0\colon \balpha =J\psi $, the MLE $\tilde{\bbeta }$ is computed by solving the equation

\begin{eqnarray*} |\rho S_{11\mb{.} J_{\bot }} - S_{1J\mb{.} J_{\bot }}S_{JJ\mb{.} J_{\bot }}^{-1} S_{J1\mb{.} J_{\bot }}| = 0 \end{eqnarray*}

Then $\tilde{\bbeta }=(v_1,\ldots , v_ r)$, where the eigenvectors correspond to the r largest eigenvalues and are normalized such that $ \tilde{\bbeta }’ S_{11\mb{.} J_{\bot }} \tilde{\bbeta } = I_ r $; $\tilde{\balpha }=J S_{J1\mb{.} J_{\bot }} \tilde{\bbeta }$. The likelihood ratio test for $H_0\colon \balpha =J\psi $ is

\[ T\sum _{i=1}^ r\log \{ (1-\rho _ i)/(1-\lambda _ i)\} \stackrel{d}{\rightarrow } \chi ^2_{r(k-m)} \]

See Theorem 6.1 in Johansen and Juselius (1990) for more details.

The test of weak exogeneity of $\mb{y} _{2t}$ is a special case of the test $\balpha =J\psi $ with $J=(I_{k_1},0)’$. Consider the previous example with four variables ( $m_ t, y_ t, i_ t^ b, i_ t^ d$ ) and $r=1$. The weak exogeneity of $m_ t$ (that is, the row of $\balpha $ that corresponds to $m_ t$ is zero) corresponds to $J=[0, I_3]’$, and the weak exogeneity of $i_ t^ d$ corresponds to $J = [I_3,0]’$.

The following statements test the weak exogeneity of other variables, assuming $r=1$:

proc varmax data=simul2;
   model y1 y2 / p=2;
   cointeg rank=1 exogeneity normalize=y1;
run;

Figure 42.75 shows that the null hypothesis of weak exogeneity is rejected for each variable.

Figure 42.75: Testing of Weak Exogeneity (EXOGENEITY Option)

The VARMAX Procedure

Testing Weak Exogeneity of Each Variable

Variable   DF   Chi-Square   Pr > ChiSq
y1          1        53.46       <.0001
y2          1         8.76       0.0031



General Tests and Restrictions on Parameters

The previous sections discuss some special forms of tests on $\bbeta $ and $\balpha $: namely, the tests of long-run relations that are expressed in the form $H_0\colon \bbeta = \bH \bphi $, the weak exogeneity test, and the null hypotheses on $\balpha $ in the form $H_0\colon \balpha = \bJ \bpsi $. In fact, with the help of the RESTRICT and BOUND statements, you can estimate models that have linear restrictions on any parameters to be estimated, which means that you can implement the likelihood ratio (LR) test for any linear relationship between the parameters.

The restricted error correction model must be estimated through numerical optimization. You might need to use the NLOPTIONS statement to try different options for the optimizer and the INITIAL statement to try different starting points. This is especially important because $\balpha $ and $\bbeta $ are usually not identifiable.

You can also use the TEST statement to apply the Wald test to any linear relationships between parameters other than the long-run parameters. Moreover, you can test constraints on $\Pi (=\balpha \bbeta ’)$ and on $\bdelta _0(=\balpha \bbeta _0)$ in Case 2 or $\bdelta _1(=\balpha \bbeta _1)$ in Case 4 when the constant term or the linear trend is restricted to the error correction term.

For more information and examples, see the section Analysis of Restricted Cointegrated Systems.

Forecasting of the VECM

Consider the cointegrated moving-average representation of the differenced process of $\mb{y}_ t$

\begin{eqnarray*} \Delta \mb{y} _ t = \bdelta + \Psi (B)\bepsilon _ t \end{eqnarray*}

Assume that $\mb{y} _0=0$. The linear process $\mb{y} _ t$ can be written as

\begin{eqnarray*} \mb{y}_ t = \bdelta t + \sum _{i=1}^ t\sum _{j=0}^{t-i}\Psi _ j\bepsilon _ i \end{eqnarray*}

Therefore, for any $l > 0$,

\begin{eqnarray*} \mb{y} _{t+l} = \bdelta (t+l) + \sum _{i=1}^ t\sum _{j=0}^{t+l-i}\Psi _ j\bepsilon _ i + \sum _{i=1}^ l\sum _{j=0}^{l-i}\Psi _ j\bepsilon _{t+i} \end{eqnarray*}

The l-step-ahead forecast is derived from the preceding equation:

\begin{eqnarray*} \mb{y} _{t+l|t} = \bdelta (t+l) + \sum _{i=1}^ t\sum _{j=0}^{t+l-i}\Psi _ j\bepsilon _ i \end{eqnarray*}

Note that

\[ \lim _{l\rightarrow \infty } \bbeta ’\mb{y} _{t+l|t} = 0 \]

since $\lim _{l\rightarrow \infty }\sum _{j=0}^{t+l-i}\Psi _ j = \Psi (1)$ and $\bbeta ’ \Psi (1) = 0$. The long-run forecast of the cointegrated system shows that the cointegrating relationship holds, although there might exist some deviations from the equilibrium status in the short run. The covariance matrix of the prediction error $\mb{e} _{t+l|t}=\mb{y} _{t+l}-\mb{y} _{t+l|t}$ is

\[ \Sigma (l) = \sum _{i=1}^{l}[(\sum _{j=0}^{l-i}\Psi _ j)\Sigma (\sum _{j=0}^{l-i}\Psi _ j’)] \]
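The prediction error covariance is a double sum over the MA matrices. The following Python sketch (the $\Psi _ j$ matrices and $\Sigma $ are made-up values, not derived from a fitted model) evaluates $\Sigma (l)$:

```python
import numpy as np

# Made-up MA matrices Psi_j (Psi_0 = I) and innovation covariance Sigma
Sigma = np.array([[1.0, 0.3], [0.3, 1.0]])
Psi = [np.eye(2),
       np.array([[0.5, 0.1], [0.0, 0.4]]),
       np.array([[0.25, 0.09], [0.0, 0.16]])]

def forecast_mse(l, Psi, Sigma):
    # Sigma(l) = sum_{i=1}^{l} (sum_{j=0}^{l-i} Psi_j) Sigma (sum_{j=0}^{l-i} Psi_j)'
    total = np.zeros_like(Sigma)
    for i in range(1, l + 1):
        partial = sum(Psi[j] for j in range(l - i + 1))
        total += partial @ Sigma @ partial.T
    return total

print(forecast_mse(1, Psi, Sigma))  # equals Sigma for l = 1
```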

When the linear process is represented as a VECM(p) model, you can obtain

\begin{eqnarray*} \Delta \mb{y} _ t = \Pi \mb{y} _{t-1} + \sum _{j=1}^{p-1} \Phi ^{*}_ j\Delta \mb{y} _{t-j} + \bdelta + \bepsilon _ t \end{eqnarray*}

The transition equation is defined as

\begin{eqnarray*} \mb{z} _{t} = F \mb{z} _{t-1} + \mb{e} _{t} \end{eqnarray*}

where $\mb{z} _ t=(\mb{y} _{t-1}’,\Delta \mb{y} _{t}’, \Delta \mb{y} _{t-1}’, \cdots ,\Delta \mb{y} _{t-p+2}’)’$ is a state vector and the transition matrix is

\begin{eqnarray*} F = \left[ \begin{array}{ccccc} I_ k & I_ k & 0 & \cdots & 0 \\ \Pi & \Pi +\Phi ^*_1 & \Phi ^*_2 & \cdots & \Phi ^*_{p-1} \\ 0 & I_ k & 0 & \cdots & 0 \\ \vdots & \vdots & \vdots & \ddots & \vdots \\ 0 & 0 & \cdots & I_ k & 0 \\ \end{array} \right] \end{eqnarray*}

where 0 is a $k \times k$ zero matrix. The observation equation can be written

\[ \mb{y} _ t = \bdelta t + H \mb{z} _ t \]

where $H=[I_ k,I_ k,0,\ldots ,0]$.

The l-step-ahead forecast is computed as

\begin{eqnarray*} \mb{y} _{t+l|t} = \bdelta (t+l) + H F^ l \mb{z} _ t \end{eqnarray*}
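The state-space forecast recursion above can be sketched for the $p=2$ case. The following fragment (all coefficient and state values are made up) builds the transition matrix F, the observation matrix H, and the l-step-ahead forecast:

```python
import numpy as np

# Made-up VECM(2) coefficients, k = 2
k = 2
Pi = np.array([[-0.4, 0.8], [0.1, -0.2]])
Phi_star_1 = np.array([[0.2, 0.0], [0.0, 0.1]])
delta = np.array([0.1, 0.0])

# For p = 2, the state is z_t = (y_{t-1}', Delta y_t')' and
# F = [[I_k, I_k], [Pi, Pi + Phi*_1]], H = [I_k, I_k]
F = np.block([[np.eye(k), np.eye(k)],
              [Pi, Pi + Phi_star_1]])
H = np.hstack([np.eye(k), np.eye(k)])

y_prev = np.array([1.0, 0.5])      # y_{t-1} (made up)
dy_curr = np.array([0.05, -0.02])  # Delta y_t (made up)
z = np.concatenate([y_prev, dy_curr])

# l-step-ahead forecast: y_{t+l|t} = delta*(t+l) + H F^l z_t
t, l = 100, 3
y_forecast = delta * (t + l) + H @ np.linalg.matrix_power(F, l) @ z
print(y_forecast)
```

Note that $H\mb{z} _ t = \mb{y} _{t-1} + \Delta \mb{y} _ t = \mb{y} _ t$, so the observation equation recovers the current value from the state.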

Cointegration with Exogenous Variables

The error correction model with exogenous variables can be written as follows:

\begin{eqnarray*} \Delta \mb{y} _{t} = \balpha \bbeta ’ \mb{y} _{t-1} + \sum _{i=1}^{p-1} \Phi ^*_ i \Delta \mb{y} _{t-i} + A D_ t + \sum _{i=0}^{s}\Theta ^*_ i\mb{x} _{t-i} + \bepsilon _ t \end{eqnarray*}

The following statements demonstrate how to fit a VECMX($p,s$) model, where $p=2$ and $s=1$, by using the P=2 and XLAG=1 options:

proc varmax data=simul3;
   model y1 y2 = x1 / p=2 xlag=1;
   cointeg rank=1;
run;

The following statements demonstrate how to fit a BVECMX(2,1) model:

proc varmax data=simul3;
   model y1 y2 = x1 / p=2 xlag=1
                      prior=(lambda=0.9 theta=0.1);
   cointeg rank=1;
run;