The VARMAX Procedure

I(2) Model

The VARX(p,s) model can be written in the error correction form:

\begin{eqnarray*}  \Delta \mb{y} _{t} = \balpha \bbeta ’ \mb{y} _{t-1} + \sum _{i=1}^{p-1} \Phi ^*_ i \Delta \mb{y} _{t-i} + A D_ t + \sum _{i=0}^{s}\Theta ^*_ i\mb{x} _{t-i} + \bepsilon _ t \end{eqnarray*}

Let $\Phi ^* = I_ k - \sum _{i=1}^{p-1} \Phi ^*_ i$.

Let $\balpha _{\bot }$ and $\bbeta _{\bot }$ be $k\times (k-r)$ matrices of full rank such that $\balpha ’\balpha _{\bot }=0$ and $\bbeta ’\bbeta _{\bot }=0$. If $\balpha $ and $\bbeta $ have full rank $r$ and $ rank(\balpha ’_{\bot } \Phi ^* \bbeta _{\bot }) =k-r$, then $\mb{y} _{t}$ is an $I(1)$ process.

If the condition $rank(\balpha ’_{\bot } \Phi ^* \bbeta _{\bot }) =k-r$ fails, then $\balpha ’_{\bot } \Phi ^* \bbeta _{\bot }$ has the reduced-rank representation $\balpha ’_{\bot } \Phi ^* \bbeta _{\bot }=\bxi \bm {\eta }’$, where $\bxi $ and $\bm {\eta }$ are $(k-r)\times s$ matrices with $s\leq k-r$.

If $\bxi $ and $\bm {\eta }$ have full rank $s$, then the process $\mb{y} _ t$ is $I(2)$. The $I(2)$ model implies the moving-average representation

\begin{eqnarray*}  \mb{y} _ t = B_0 + B_1 t + C_2\sum _{j=1}^ t\sum _{i=1}^ j\bepsilon _ i + C_1\sum _{i=1}^ t\bepsilon _ i + C_0(B)\bepsilon _ t \end{eqnarray*}

The matrices $C_1$, $C_2$, and $C_0(B)$ are determined by the cointegration properties of the process, and $B_0$ and $B_1$ are determined by the initial values. For details, see Johansen (1995b).

The implication of the $I(2)$ model for the autoregressive representation is given by

\begin{eqnarray*}  \Delta ^2 \mb{y} _{t} = \Pi \mb{y} _{t-1} -\Phi ^* \Delta \mb{y} _{t-1} + \sum _{i=1}^{p-2} \Psi _ i \Delta ^2 \mb{y} _{t-i} + A D_ t + \sum _{i=0}^{s}\Theta ^*_ i\mb{x} _{t-i} +\bepsilon _ t \end{eqnarray*}

where $\Psi _ i = -\sum _{j=i+1}^{p-1} \Phi ^*_ j$ and $\Phi ^* = I_ k - \sum _{i=1}^{p-1} \Phi ^*_ i$.
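To make the coefficient mapping concrete, the following Python/NumPy sketch computes $\Phi ^*$ and the $\Psi _ i$ from the error correction coefficients $\Phi ^*_ i$; the coefficient matrices here are arbitrary placeholders, not estimates from any model:

```python
import numpy as np

k, p = 2, 4
rng = np.random.default_rng(0)
# placeholder coefficient matrices Phi*_1, ..., Phi*_{p-1}
phi_star_i = [0.1 * rng.standard_normal((k, k)) for _ in range(p - 1)]

# Phi* = I_k - sum_{i=1}^{p-1} Phi*_i
phi_star = np.eye(k) - sum(phi_star_i)

# Psi_i = -sum_{j=i+1}^{p-1} Phi*_j, for i = 1, ..., p-2
psi = [-sum(phi_star_i[i:]) for i in range(1, p - 1)]
```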

Test for I(2)

The $I(2)$ cointegrated model is given by the following parameter restrictions:

\begin{eqnarray*}  H_{r,s}\colon \Pi =\balpha \bbeta ’\; \mbox{and}\;  \balpha _{\bot }’\Phi ^* \bbeta _{\bot } = \bxi \bm {\eta }’ \end{eqnarray*}

where $\bxi $ and $\bm {\eta }$ are $(k-r)\times s$ matrices with $0\leq s \leq k-r$. Let $H_ r^0$ represent the $I(1)$ model, in which $\balpha $ and $\bbeta $ have full rank $r$; let $H_{r,s}^0$ represent the $I(2)$ model, in which $\bxi $ and $\bm {\eta }$ have full rank $s$; and let $H_{r,s}$ represent the $I(2)$ model, in which $\bxi $ and $\bm {\eta }$ have rank $\leq s$. The following table shows the relation between the $I(1)$ models and the $I(2)$ models.

Table 35.6: Relation between the $I(1)$ and $I(2)$ Models

For each rank $r$, the $I(2)$ models $H_{r,s}$ are nested as $s$ increases (the column index $k-r-s$ decreases from $k$ to $1$), ending in the $I(1)$ model $H_{r,k-r}=H_ r^0$:

\begin{eqnarray*} r=0\colon &  &  H_{00} \subset H_{01} \subset \cdots \subset H_{0,k-1} \subset H_{0k} = H_{0}^0 \\ r=1\colon &  &  H_{10} \subset \cdots \subset H_{1,k-2} \subset H_{1,k-1} = H_{1}^0 \\ &  \vdots &  \\ r=k-1\colon &  &  H_{k-1,0} \subset H_{k-1,1} = H_{k-1}^0 \end{eqnarray*}


Johansen (1995b) proposed a two-step procedure to analyze the $I(2)$ model. In the first step, the values of $(r, \balpha , \bbeta )$ are estimated by reduced rank regression analysis, regressing $\Delta ^2\mb{y} _{t}$, $\Delta \mb{y} _{t-1}$, and $\mb{y} _{t-1}$ on $\Delta ^2\mb{y} _{t-1},\ldots ,\Delta ^2\mb{y} _{t-p+2},$ and $D_ t$. This gives residuals $R_{0t}$, $R_{1t}$, and $R_{2t}$ and the residual product moment matrices

\[  M_{ij} = \frac{1}{T} \sum _{t=1}^ TR_{it}R_{jt}’ ~ ~ \mr{for~ ~ } i,j=0,1,2  \]
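In matrix form, each $M_{ij}$ is simply a cross-moment of two residual matrices. A minimal Python/NumPy sketch (not part of the VARMAX documentation; the residuals below are random placeholders, not output of the actual first-step regressions):

```python
import numpy as np

def product_moment(Ri, Rj):
    """M_ij = (1/T) * sum_t R_it R_jt' for two T x k residual matrices."""
    T = Ri.shape[0]
    return Ri.T @ Rj / T

# placeholder residuals standing in for R_{0t}, R_{1t}, R_{2t}
rng = np.random.default_rng(0)
T, k = 200, 2
R0, R1, R2 = (rng.standard_normal((T, k)) for _ in range(3))

M00 = product_moment(R0, R0)
M01 = product_moment(R0, R1)
M10 = product_moment(R1, R0)
```

Because $M_{ij}’ = M_{ji}$, only one of each pair needs to be computed.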

Perform the reduced rank regression analysis of $\Delta ^2\mb{y} _{t}$ on $\mb{y} _{t-1}$, corrected for $\Delta \mb{y} _{t-1}$, $\Delta ^2\mb{y} _{t-1},\ldots ,\Delta ^2\mb{y} _{t-p+2},$ and $D_ t$, by solving the eigenvalue problem of the equation

\[  |\lambda M_{22\mb{.} 1} - M_{20\mb{.} 1}M_{00\mb{.} 1}^{-1}M_{02\mb{.} 1}| = 0  \]

where $M_{ij\mb{.} 1} = M_{ij} - M_{i1}M_{11}^{-1}M_{1j}$ for $i,j=0,2$.
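As an illustration of this eigenvalue problem, the following Python/NumPy sketch solves $|\lambda M_{22.1} - M_{20.1}M_{00.1}^{-1}M_{02.1}| = 0$ as a generalized eigenvalue problem; the residuals are random placeholders, so the eigenvalues are squared partial canonical correlations of noise, not meaningful test statistics:

```python
import numpy as np

# toy residuals standing in for R_{0t}, R_{1t}, R_{2t} (each T x k)
rng = np.random.default_rng(1)
T, k = 200, 2
R = [rng.standard_normal((T, k)) for _ in range(3)]
M = {(i, j): R[i].T @ R[j] / T for i in range(3) for j in range(3)}

def corrected(i, j):
    """M_{ij.1} = M_ij - M_i1 M_11^{-1} M_1j (corrected for R_1)."""
    return M[i, j] - M[i, 1] @ np.linalg.solve(M[1, 1], M[1, j])

# |lambda * M_{22.1} - M_{20.1} M_{00.1}^{-1} M_{02.1}| = 0
A = corrected(2, 0) @ np.linalg.solve(corrected(0, 0), corrected(0, 2))
B = corrected(2, 2)
# eigenvalues of B^{-1} A, sorted in descending order
lam = np.sort(np.linalg.eigvals(np.linalg.solve(B, A)).real)[::-1]
```

The eigenvalues are squared partial canonical correlations and therefore lie in $[0, 1)$.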

In the second step, with $(r, \balpha , \bbeta )$ fixed at their first-step estimates, the values of $(s, \bxi , \bm {\eta })$ are determined by the reduced rank regression analysis, regressing $\hat{\balpha }_{\bot }’\Delta ^2\mb{y} _{t}$ on $\hat{\bbeta }_{\bot }’\Delta \mb{y} _{t-1}$ corrected for $\Delta ^2\mb{y} _{t-1},\ldots ,\Delta ^2\mb{y} _{t-p+2},D_ t$, and $\hat{\bbeta }’\Delta \mb{y} _{t-1}$.

The reduced rank regression analysis reduces to the solution of an eigenvalue problem for the equation

\begin{eqnarray*}  |\rho M_{\bbeta _{\bot }\bbeta _{\bot }\mb{.} \bbeta } - M_{\bbeta _{\bot }\balpha _{\bot }\mb{.} \bbeta } M_{\balpha _{\bot }\balpha _{\bot }\mb{.} \bbeta }^{-1} M_{\balpha _{\bot }\bbeta _{\bot }\mb{.} \bbeta }| = 0 \end{eqnarray*}

where

\begin{eqnarray*}  M_{\bbeta _{\bot }\bbeta _{\bot }\mb{.} \bbeta } &  = &  \bbeta _{\bot }’(M_{11} - M_{11}\bbeta (\bbeta ’M_{11}\bbeta )^{-1}\bbeta ’M_{11})\bbeta _{\bot } \\ M_{\bbeta _{\bot }\balpha _{\bot }\mb{.} \bbeta }’ &  = &  M_{\balpha _{\bot }\bbeta _{\bot }\mb{.} \bbeta } ~ =~  \bar{\balpha }_{\bot }’(M_{01} - M_{01}\bbeta (\bbeta ’M_{11}\bbeta )^{-1}\bbeta ’M_{11})\bbeta _{\bot } \\ M_{\balpha _{\bot }\balpha _{\bot }\mb{.} \bbeta } &  = &  \bar{\balpha }_{\bot }’(M_{00} - M_{01}\bbeta (\bbeta ’M_{11}\bbeta )^{-1}\bbeta ’M_{10})\bar{\balpha }_{\bot } \end{eqnarray*}

where $\bar{\balpha }_{\bot }=\balpha _{\bot }(\balpha _{\bot }’\balpha _{\bot })^{-1}$.

The solution gives eigenvalues $1>\rho _1>\cdots >\rho _ s>0$ and eigenvectors $ (v_1,\ldots , v_ s)$. Then, the ML estimators are

\begin{eqnarray*}  \hat{\bm {\eta }} &  = &  (v_1,\ldots , v_ s) \\ \hat{\bxi } &  = &  M_{\balpha _{\bot }\bbeta _{\bot }\mb{.} \bbeta }\hat{\bm {\eta }} \end{eqnarray*}

The likelihood ratio test for the reduced rank model $H_{r,s}$ with rank $\leq s$ in the model $H_{r,k-r} = H_ r^0$ is given by

\begin{eqnarray*}  Q_{r,s} = -T\sum _{i=s+1}^{k-r}\log (1-\rho _ i), ~ ~ s=0,\ldots ,k-r-1 \end{eqnarray*}
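Given the eigenvalues $\rho _ i$, the statistic itself is straightforward to compute. A Python sketch with hypothetical eigenvalues (the values below are made up for illustration, not output of any estimation):

```python
import numpy as np

def trace_stat_i2(rho, T, s):
    """Q_{r,s} = -T * sum_{i=s+1}^{k-r} log(1 - rho_i); rho sorted descending."""
    rho = np.asarray(rho, dtype=float)
    return -T * np.sum(np.log1p(-rho[s:]))

rho = [0.6, 0.2, 0.05]  # hypothetical eigenvalues, k - r = 3
T = 200
q0 = trace_stat_i2(rho, T, s=0)  # tests rank <= 0
q2 = trace_stat_i2(rho, T, s=2)  # tests rank <= 2
```

Dropping the largest eigenvalues from the sum (increasing $s$) can only decrease the statistic.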

The following statements simulate an $I(2)$ process and perform the cointegration rank test for integration of order 2:

proc iml;
   alpha  = { 1, 1};    * alphaOrthogonal = { 1, -1};
   beta   = { 1, -0.5}; * betaOrthogonal  = { 1, 2};
   * alphaOrthogonal' * phiStar * betaOrthogonal = 0;
   phiStar = { 1 0, 0 0.5};
   A1 = 2 * I(2) + alpha * beta` - phiStar;
   A2 = phiStar - I(2);
   phi = A1 // A2;
   sig = I(2);
   /* to simulate the vector time series */
   call varmasim(y,phi) sigma=sig n=200 seed=2;
   cn = {'y1' 'y2'};
   create simul4 from y[colname=cn];
   append from y;
   close;
quit;

proc varmax data=simul4;
   model y1 y2 / noint p=2 cointtest=(johansen=(iorder=2));
run;
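As a cross-check outside SAS, the rank conditions that make this simulated process $I(2)$ can be verified numerically. The following Python sketch reuses the same $\balpha $, $\bbeta $, $\Phi ^*$, and orthogonal complements as the IML code above:

```python
import numpy as np

alpha      = np.array([[1.0], [1.0]])
alpha_perp = np.array([[1.0], [-1.0]])
beta       = np.array([[1.0], [-0.5]])
beta_perp  = np.array([[1.0], [2.0]])
phi_star   = np.array([[1.0, 0.0], [0.0, 0.5]])

# orthogonal complements: alpha' alpha_perp = 0 and beta' beta_perp = 0
assert abs((alpha.T @ alpha_perp).item()) < 1e-12
assert abs((beta.T @ beta_perp).item()) < 1e-12

# the I(1) condition rank(alpha_perp' Phi* beta_perp) = k - r fails:
# the product is exactly zero, so the simulated process is I(2)
xi_eta = (alpha_perp.T @ phi_star @ beta_perp).item()
assert abs(xi_eta) < 1e-12
```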

The last two columns in Figure 35.60 report the cointegration rank test for integration of order 1. At a 5% significance level, the null hypothesis that the series are not cointegrated (H0: $r=0$) is rejected, because the p-value for this test, shown in the column Pr > Trace of I(1), is less than 0.05. The null hypothesis of a cointegrating relationship with cointegration rank 1 (H0: $r=1$) cannot be rejected, because the p-value for that test statistic, 0.7961, is greater than 0.05. Because of this latter result, the rows of the table that are associated with $r=1$ are examined further. The test statistic 0.0257 tests the null hypothesis that the series are integrated of order 2. The associated p-value, 0.8955, indicates that this null hypothesis cannot be rejected at the 5% significance level.

Figure 35.60: Cointegrated I(2) Test (IORDER= Option)

The VARMAX Procedure

                  Cointegration Rank Test for I(2)

 r\k-r-s                      2           1       Trace      Pr > Trace
                                                  of I(1)    of I(1)
 0                     575.3784      1.1833      215.3011      <.0001
   Pr > Trace of I(2)    0.0000      0.3223
 1                                   0.0257        0.0986      0.7961
   Pr > Trace of I(2)                0.8955