The AUTOREG Procedure

Goodness-of-Fit Measures and Information Criteria

This section discusses various goodness-of-fit statistics produced by the AUTOREG procedure.

Total R-Square Statistic

The total R-Square statistic (Total Rsq) is computed as

\[  \mr {R}^{2}_{\mr {tot}} = 1-\frac{\mr {SSE}}{\mr {SST}}  \]

where SST is the sum of squares for the original response variable corrected for the mean and SSE is the final error sum of squares. The Total Rsq is a measure of how well the next value can be predicted using the structural part of the model and the past values of the residuals. If the NOINT option is specified, SST is the uncorrected sum of squares.
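The following Python sketch (illustrative only, not PROC AUTOREG code; the array names y and resid are assumed inputs holding the original response and the final model residuals) shows how the total R-Square can be computed:

```python
import numpy as np

def total_rsq(y, resid, intercept=True):
    """Total R-square: 1 - SSE/SST. SST is corrected for the mean
    unless the model is fit without an intercept (NOINT)."""
    sse = np.sum(resid ** 2)                      # final error sum of squares
    if intercept:
        sst = np.sum((y - y.mean()) ** 2)         # corrected total sum of squares
    else:
        sst = np.sum(y ** 2)                      # uncorrected sum of squares (NOINT)
    return 1.0 - sse / sst
```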

Regression R-Square Statistic

The regression R-Square statistic (Reg RSQ) is computed as

\[  \mr {R}^{2}_{\mr {reg}} = 1-\frac{\mr {TSSE}}{\mr {TSST}}  \]

where TSST is the total sum of squares of the transformed response variable corrected for the transformed intercept, and TSSE is the error sum of squares for this transformed regression problem. If the NOINT option is requested, no correction for the transformed intercept is made. The Reg RSQ is a measure of the fit of the structural part of the model after transforming for the autocorrelation and is the R-Square for the transformed regression.

The regression R-Square and the total R-Square should be the same when there is no autocorrelation correction (OLS regression).
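As a rough illustration of the transformed-regression computation, the following sketch assumes an AR(1) error model and a simple Cochrane-Orcutt style transform that drops the first observation; PROC AUTOREG applies the full transformation for the estimated autoregressive process, so this is not the procedure's internal algorithm:

```python
import numpy as np

def reg_rsq_ar1(y, X, beta, rho, intercept=True):
    """Regression R-square sketched for an AR(1) model: transform the
    data by y*_t = y_t - rho*y_{t-1}, X*_t = X_t - rho*X_{t-1}, then
    compute 1 - TSSE/TSST on the transformed regression problem."""
    y_star = y[1:] - rho * y[:-1]                 # transformed response
    X_star = X[1:] - rho * X[:-1]                 # transformed regressors (column 0 assumed to be the intercept)
    e_star = y_star - X_star @ beta               # transformed residuals
    tsse = np.sum(e_star ** 2)
    if intercept:
        # correct TSST for the transformed intercept column
        c = X_star[:, 0]
        fitted = c * (c @ y_star) / (c @ c)
        tsst = np.sum((y_star - fitted) ** 2)
    else:
        tsst = np.sum(y_star ** 2)                # no correction under NOINT
    return 1.0 - tsse / tsst
```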

Mean Absolute Error and Mean Absolute Percentage Error

The mean absolute error (MAE) is computed as

\[  \mr {MAE}=\frac{1}{T}\sum _{t=1}^ T|e_ t|  \]

where $e_ t$ are the estimated model residuals and $T$ is the number of observations.

The mean absolute percentage error (MAPE) is computed as

\[  \mr {MAPE}=\frac{1}{T'}\sum _{t=1}^{T}\delta _{y_ t\ne 0}\frac{|e_{t}|}{|y_{t}|}  \]

where $e_{t}$ are the estimated model residuals, $y_{t}$ are the original response variable observations, $\delta _{y_ t\ne 0} = 1$ if $y_ t\ne 0$, the term $\delta _{y_ t\ne 0}\left|{e_{t}}/{y_{t}}\right|$ is taken to be 0 if $y_ t=0$, and $T'$ is the number of nonzero original response variable observations.
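Both statistics follow directly from the residuals; a minimal Python sketch (illustrative, with y and resid as assumed input arrays):

```python
import numpy as np

def mae(resid):
    """Mean absolute error over all T observations."""
    return np.mean(np.abs(resid))

def mape(y, resid):
    """Mean absolute percentage error: terms with y_t = 0 are dropped,
    and the average is taken over the T' nonzero responses."""
    nonzero = y != 0
    return np.mean(np.abs(resid[nonzero] / y[nonzero]))
```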

Calculation of Recursive Residuals and CUSUM Statistics

The recursive residuals ${w_{t}}$ are computed as

\[  w_{t} = \frac{e_{t}}{\sqrt {v_{t}}}  \]
\[  e_{t} = y_ t-\mb {x} _{t}'\beta ^{(t)}  \]
\[  \beta ^{(t)}= \left[ \sum _{i=1}^{t-1} \mb {x} _{i}\mb {x} _{i}'\right]^{-1}\left(\sum _{i=1}^{t-1} \mb {x} _{i}y_{i}\right)  \]
\[  v_{t} = 1 + \mb {x} _{t}' \left[ \sum _{i=1}^{t-1} \mb {x} _{i}\mb {x} _{i}'\right]^{-1}\mb {x} _{t}  \]

Note that the first $\beta ^{(t)}$ can be computed for $t=p+1$, where $p$ is the number of regression coefficients. As a result, the first $p$ recursive residuals are not defined. Note also that the forecast error variance of ${e_{t}}$ is a scalar multiple of ${v_{t}}$, namely ${V(e_{t})= {\sigma }^{2} v_{t}}$.
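A direct (unoptimized) Python sketch of these formulas follows; the function name is illustrative and PROC AUTOREG's internal computation may differ:

```python
import numpy as np

def recursive_residuals(y, X):
    """Recursive residuals w_t for t = p+1, ..., T, computed directly
    from the formulas above. X is the T x p regressor matrix
    (including the intercept column); the first p entries are NaN."""
    T, p = X.shape
    w = np.full(T, np.nan)
    for t in range(p, T):                         # 0-based t corresponds to observation t+1
        X_prev, y_prev = X[:t], y[:t]             # observations 1, ..., t-1 in the notation above
        XtX_inv = np.linalg.inv(X_prev.T @ X_prev)
        beta_t = XtX_inv @ (X_prev.T @ y_prev)    # beta^(t)
        e_t = y[t] - X[t] @ beta_t                # one-step prediction error
        v_t = 1.0 + X[t] @ XtX_inv @ X[t]
        w[t] = e_t / np.sqrt(v_t)
    return w
```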

The CUSUM and CUSUMSQ statistics are computed using the preceding recursive residuals.

\[  \mr {CUSUM}_{t} = \sum _{i=k+1}^{t}{\frac{w_{i}}{{\sigma }_{w}}}  \]
\[  \mr {CUSUMSQ}_{t} = \frac{\sum _{i=k+1}^{t}{w^{2}_{i}}}{\sum _{i=k+1}^{T}{w^{2}_{i}}}  \]

where ${w_{i}}$ are the recursive residuals,

\[  {\sigma }_{w} = \sqrt {\frac{\sum _{i=k+1}^{T}{(w_{i}-\hat{w})^{2}}}{(\mi {T} -k-1)}}  \]
\[  \hat{w} = \frac{1}{\mi {T} -k} \sum _{i=k+1}^{T}{w_{i}}  \]

and ${k}$ is the number of regressors.
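Given the recursive residuals, the statistics themselves are cumulative sums; a short Python sketch (illustrative names, continuing from the recursive_residuals sketch above):

```python
import numpy as np

def cusum_statistics(w, k):
    """CUSUM_t and CUSUMSQ_t from the recursive residuals w (the first
    k entries of w are undefined); k is the number of regressors."""
    w = w[k:]                                     # w_{k+1}, ..., w_T
    n = len(w)                                    # n = T - k
    w_bar = w.mean()
    sigma_w = np.sqrt(np.sum((w - w_bar) ** 2) / (n - 1))
    cusum = np.cumsum(w / sigma_w)
    cusumsq = np.cumsum(w ** 2) / np.sum(w ** 2)
    return cusum, cusumsq
```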

The CUSUM statistics can be used to test for misspecification of the model. The upper and lower critical values for CUSUM$_{t}$ are

\[  {\pm } a \left[ \sqrt {\mi {T} -k} + 2\frac{(t-k)}{(\mi {T} -k)^{\frac{1}{2}}}\right]  \]

where a = 1.143 for a significance level of 0.01, 0.948 for 0.05, and 0.850 for 0.10. These critical values are output by the CUSUMLB= and CUSUMUB= options for the significance level specified by the ALPHACSM= option.
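For illustration, the CUSUM critical lines can be evaluated directly from this expression (a sketch; the default a = 0.948 corresponds to the 0.05 level):

```python
import numpy as np

def cusum_bounds(T, k, a=0.948):
    """Upper and lower CUSUM critical values at t = k+1, ..., T."""
    t = np.arange(k + 1, T + 1)
    band = a * (np.sqrt(T - k) + 2.0 * (t - k) / np.sqrt(T - k))
    return band, -band
```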

The upper and lower critical values of CUSUMSQ$_{t}$ are given by

\[  {\pm } a + \frac{(t-k)}{\mi {T} -k}  \]

where the value of a is obtained from the table given by Durbin (1969) if ${\frac{1}{2}(\mi {T} -k)-1 \le 60}$. Edgerton and Wells (1994) provide a method for obtaining the value of a for larger samples.

These critical values are output by the CUSUMSQLB= and CUSUMSQUB= options for the significance level specified by the ALPHACSM= option.

Information Criteria AIC, AICC, SBC, and HQC

Akaike’s information criterion (AIC), the corrected Akaike’s information criterion (AICC), Schwarz’s Bayesian information criterion (SBC), and the Hannan-Quinn information criterion (HQC) are computed as follows:

\[  \mr {AIC} = -2{\ln }(L) + 2 k  \]
\[  \mr {AICC} = \mr {AIC} + 2\frac{k(k+1)}{N-k-1}  \]
\[  \mr {SBC} = -2{\ln }(L) + {\ln }(N) k  \]
\[  \mr {HQC} = -2{\ln }(L) + 2 {\ln }({\ln }(N)) k  \]

In these formulas, L is the value of the likelihood function evaluated at the parameter estimates, N is the number of observations, and k is the number of estimated parameters. Refer to Judge et al. (1985), Hurvich and Tsai (1989), Schwarz (1978), and Hannan and Quinn (1979) for additional details.
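The four criteria are simple functions of the maximized log likelihood; the following Python sketch (illustrative, not PROC AUTOREG code) evaluates them:

```python
import numpy as np

def information_criteria(log_lik, N, k):
    """AIC, AICC, SBC, and HQC from the log likelihood ln(L), the
    number of observations N, and the number of parameters k."""
    aic = -2.0 * log_lik + 2.0 * k
    return {
        "AIC":  aic,
        "AICC": aic + 2.0 * k * (k + 1) / (N - k - 1),
        "SBC":  -2.0 * log_lik + np.log(N) * k,
        "HQC":  -2.0 * log_lik + 2.0 * np.log(np.log(N)) * k,
    }
```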