The VARMAX Procedure

Model Diagnostic Checks

Multivariate Model Diagnostic Checks

Log Likelihood

The log-likelihood function for the fitted model is reported in the LogLikelihood ODS table. The log-likelihood functions for different models are defined as follows:

  • For VARMAX models that are estimated through the (conditional) maximum likelihood method, see the section VARMA and VARMAX Modeling.

  • For Bayesian VAR and VARX models, see the section Bayesian VAR and VARX Modeling.

  • For (Bayesian) vector error correction models, see the section Vector Error Correction Modeling.

  • For multivariate GARCH models, see the section Multivariate GARCH Modeling.

  • For VAR and VARX models that are estimated through the least squares (LS) method, the log likelihood is defined as

    \[ \ell = -\frac{1}{2}(T \log |\tilde{\Sigma }| + k T) \]

    where $\tilde{\Sigma }$ is the maximum likelihood estimate of the innovation covariance matrix, k is the number of dependent variables, and T is the number of observations used in the estimation.
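As a concrete illustration, the LS-based log likelihood above can be sketched in Python with NumPy. The function name and the (T, k) array layout are assumptions for illustration, not part of the procedure:

```python
import numpy as np

def var_loglik_ls(residuals):
    """Log likelihood for an LS-estimated VAR/VARX model (sketch).

    residuals : (T, k) array of fitted innovations, one row per
                observation and one column per dependent variable.
    """
    T, k = residuals.shape
    # ML estimate of the innovation covariance: Sigma_tilde = E'E / T
    sigma_tilde = residuals.T @ residuals / T
    # log|Sigma_tilde| via slogdet for numerical stability
    _, logdet = np.linalg.slogdet(sigma_tilde)
    # ell = -(1/2) * (T * log|Sigma_tilde| + k * T)
    return -0.5 * (T * logdet + k * T)
```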

Information Criteria

The information criteria include Akaike’s information criterion (AIC), the corrected Akaike’s information criterion (AICC), the final prediction error criterion (FPE), the Hannan-Quinn criterion (HQC), and the Schwarz Bayesian criterion (SBC, also referred to as BIC). These criteria are defined as

\begin{eqnarray*}
\mbox{AIC}  & = & -2\ell + 2r \\
\mbox{AICC} & = & -2\ell + 2rT/(T-r-1) \\
\mbox{FPE}  & = & \left(\frac{T+r_b}{T-r_b}\right)^{k}|\tilde{\Sigma}| \\
\mbox{HQC}  & = & -2\ell + 2r\log(\log(T)) \\
\mbox{SBC}  & = & -2\ell + r\log(T)
\end{eqnarray*}

where $\ell $ is the log likelihood, r is the total number of parameters in the model, k is the number of dependent variables, T is the number of observations that are used to estimate the model, $r_ b$ is the number of parameters in each mean equation, and $\tilde{\Sigma }$ is the maximum likelihood estimate of $\Sigma $. As suggested by Burnham and Anderson (2004) for least squares estimation, the total number of parameters, r, must include the parameters in the innovation covariance matrix. When comparing models, choose the model that has the smallest criterion values.

See Figure 42.4 earlier in this chapter for an example of the output.
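The five criteria above can be computed directly from the fitted model's summary quantities. The following Python sketch assumes a hypothetical helper with the same symbols as the formulas; it is not the procedure's implementation:

```python
import numpy as np

def info_criteria(loglik, T, k, r, r_b, det_sigma_tilde):
    """Information criteria for a fitted multivariate model (sketch).

    loglik          : log likelihood of the fitted model
    T               : number of observations used in estimation
    k               : number of dependent variables
    r               : total number of parameters in the model
    r_b             : number of parameters in each mean equation
    det_sigma_tilde : |Sigma_tilde|, determinant of the ML estimate
                      of the innovation covariance matrix
    """
    return {
        "AIC":  -2 * loglik + 2 * r,
        "AICC": -2 * loglik + 2 * r * T / (T - r - 1),
        "FPE":  ((T + r_b) / (T - r_b)) ** k * det_sigma_tilde,
        "HQC":  -2 * loglik + 2 * r * np.log(np.log(T)),
        "SBC":  -2 * loglik + r * np.log(T),
    }
```

When comparing candidate models with such a helper, the model with the smallest criterion values is preferred, consistent with the guidance above.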

Portmanteau Statistic

The portmanteau statistic, $Q_s$, is used to test whether serial correlation remains in the model residuals. The null hypothesis is that the residuals are uncorrelated. Let $C_{\epsilon}(l)$ denote the residual cross-covariance matrices and $\hat\rho_{\epsilon}(l)$ the residual cross-correlation matrices, defined as

\begin{eqnarray*} C_{\epsilon }(l) = T^{-1} \sum _{t=1}^{T-l} \bepsilon _ t \bepsilon _{t+l}’ \end{eqnarray*}

and

\begin{eqnarray*} \hat\rho _{\epsilon }(l) = \hat V_{\epsilon }^{-1/2} C_{\epsilon }(l) \hat V_{\epsilon }^{-1/2} ~ ~ \mr{and} ~ ~ \hat\rho _{\epsilon }(-l) = \hat\rho _{\epsilon }(l)’ \end{eqnarray*}

where $\hat V_{\epsilon } = \mr{Diag} (\hat\sigma ^2_{11}, \ldots , \hat\sigma ^2_{kk} )$ and $\hat\sigma ^2_{ii}$ are the diagonal elements of $\hat\Sigma $. The multivariate portmanteau test statistic defined in Hosking (1980) is

\begin{eqnarray*} Q_ s = T^2 \sum _{l=1}^ s (T-l)^{-1} \mr{tr} \{ \hat\rho _{\epsilon }(l)\hat\rho _{\epsilon }(0)^{-1} \hat\rho _{\epsilon }(-l)\hat\rho _{\epsilon }(0)^{-1} \} \end{eqnarray*}

The statistic $Q_s$ asymptotically follows a chi-square distribution with $k^2(s-p-q)$ degrees of freedom, where p and q are the AR and MA orders of the model. An example of the output is displayed in Figure 42.7.
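The statistic can be sketched in Python directly from the three formulas above. The function name and array layout are illustrative assumptions; residuals are assumed to be already centered:

```python
import numpy as np

def portmanteau(residuals, s):
    """Hosking's multivariate portmanteau statistic Q_s (sketch).

    residuals : (T, k) array of model residuals, assumed centered
    s         : number of lags to include in the sum
    """
    T, k = residuals.shape
    eps = residuals

    # C(l) = T^{-1} sum_{t=1}^{T-l} eps_t eps_{t+l}'
    def cross_cov(l):
        return eps[:T - l].T @ eps[l:] / T

    # V = Diag(sigma^2_11, ..., sigma^2_kk), taken from Sigma_hat = C(0)
    v_inv_sqrt = np.diag(1.0 / np.sqrt(np.diag(cross_cov(0))))

    # rho(l) = V^{-1/2} C(l) V^{-1/2}
    def rho(l):
        return v_inv_sqrt @ cross_cov(l) @ v_inv_sqrt

    rho0_inv = np.linalg.inv(rho(0))
    q = 0.0
    for l in range(1, s + 1):
        r_l = rho(l)
        # rho(-l) = rho(l)', so the trace term uses r_l and its transpose
        q += np.trace(r_l @ rho0_inv @ r_l.T @ rho0_inv) / (T - l)
    return T ** 2 * q
```

In the univariate case (k = 1) this reduces to $T^2 \sum_{l=1}^{s}(T-l)^{-1}\hat\rho_l^2$, the familiar Ljung-Box form.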

Univariate Model Diagnostic Checks

There are various ways to perform diagnostic checks for a univariate model. For details, see the section Testing for Nonlinear Dependence: Heteroscedasticity Tests in Chapter 9: The AUTOREG Procedure. An example of the output is displayed in Figure 42.8 and Figure 42.9.

  • Durbin-Watson (DW) statistics: The DW test statistics test for first-order autocorrelation in the residuals.

  • Jarque-Bera normality test: This test is helpful in determining whether the model residuals represent a white noise process. It tests the null hypothesis that the residuals are normally distributed.

  • F tests for autoregressive conditional heteroscedastic (ARCH) disturbances: These test statistics test for heteroscedastic disturbances in the residuals. They test the null hypothesis that the residuals have equal variances (no ARCH effects).

  • F tests for AR disturbances: These test statistics are computed from the residuals of the univariate AR(1), AR(1,2), AR(1,2,3), and AR(1,2,3,4) models to test the null hypothesis that the residuals are uncorrelated.
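Two of the univariate checks above are simple enough to sketch directly. The following Python functions use the textbook definitions; function names are illustrative, and PROC AUTOREG's implementations may differ in small-sample details:

```python
import numpy as np

def durbin_watson(resid):
    """First-order Durbin-Watson statistic (sketch).

    Values near 2 suggest no first-order autocorrelation; values near
    0 (near 4) suggest positive (negative) autocorrelation.
    """
    d = np.diff(resid)
    return (d @ d) / (resid @ resid)

def jarque_bera(resid):
    """Jarque-Bera normality statistic (sketch):
    T/6 * (S^2 + (K - 3)^2 / 4), where S is the sample skewness and
    K is the sample kurtosis. Approximately chi-square with 2 degrees
    of freedom under the null hypothesis of normality.
    """
    T = resid.size
    e = resid - resid.mean()
    m2 = np.mean(e ** 2)
    skew = np.mean(e ** 3) / m2 ** 1.5
    kurt = np.mean(e ** 4) / m2 ** 2
    return T / 6.0 * (skew ** 2 + (kurt - 3.0) ** 2 / 4.0)
```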