The HPQUANTSELECT Procedure

Statistical Tests for Significance Level

The HPQUANTSELECT procedure supports the significance level (SL) criterion for effect selection. Consider the general form of a linear quantile regression model:

\[ Q_ Y(\tau |\mb{x}_1,\mb{x}_2)=\mb{x}_1'\bbeta _1(\tau )+\mb{x}_2'\bbeta _2(\tau ) \]

At each step of the effect-selection process, a candidate effect can be represented as $\mb{x}_2$, and the significance level of the candidate effect can be calculated by testing the null hypothesis $H_0:\bbeta _2(\tau )=\mb{0}$. Under $H_0$, the model reduces to $Q_ Y(\tau |\mb{x}_1)=\mb{x}_1'\bbeta _1(\tau )$, which is called the reduced model; the model that also includes $\mb{x}_2$ is called the extended model.

When you use SL as a criterion for effect selection, you can further use the TEST= option in the SELECTION statement to specify the statistical test method that is used to compute the significance-level values, as follows (a sketch of the score computations appears after this list):

  • The TEST=WALD option specifies the Wald test. Let $\hat{\bbeta }(\tau )=\left(\hat{\bbeta }'_1(\tau ),\hat{\bbeta }'_2(\tau )\right)'$ be the parameter estimates for the extended model, and denote the estimated covariance matrix of $\hat{\bbeta }(\tau )$ as

    \[ \hat{\Sigma }(\tau )=\left[ \begin{array}{cc} \hat{\Sigma }_{11}(\tau )& \hat{\Sigma }_{12}(\tau )\\ \hat{\Sigma }_{21}(\tau )& \hat{\Sigma }_{22}(\tau ) \end{array} \right] \]

    where $\hat{\Sigma }_{22}(\tau )$ is the covariance matrix for $\hat{\bbeta }_2(\tau )$. Then the Wald test score is defined as

    \[ \hat{\bbeta }'_2(\tau )\hat{\Sigma }_{22}^{-1}(\tau )\hat{\bbeta }_2(\tau ) \]

    If you specify the SPARSITY(IID) option in the MODEL statement, $\hat{\Sigma }(\tau )$ is estimated under the iid errors assumption. Otherwise, $\hat{\Sigma }(\tau )$ is estimated by using non-iid settings. For more information about the linear model with iid errors and non-iid settings, see the section Quantile Regression.

  • The TEST=LR1 or TEST=LR2 option specifies the Type I or Type II quasi-likelihood ratio test, respectively. Under the iid assumption, Koenker and Machado (1999) propose two types of quasi-likelihood ratio tests for quantile regression; these tests allow a flexible error distribution and do not require the asymmetric Laplace distribution. The Type I test score, LR1, is defined as

    \[ {2(D_1(\tau )-D_2(\tau ))\over \tau (1-\tau )\hat{s}} \]

    where $D_1(\tau )=\sum \rho _\tau \left(y_ i-\mb{x}_{1i}\hat{\bbeta }_{1_1}(\tau )\right)$ is the sum of check losses for the reduced model, with $\hat{\bbeta }_{1_1}(\tau )$ denoting the estimate of $\bbeta _1(\tau )$ under the reduced model; $D_2(\tau )=\sum \rho _\tau \left(y_ i-\mb{x}_{1i}\hat{\bbeta }_{1_2}(\tau )- \mb{x}_{2i}\hat{\bbeta }_2(\tau )\right)$ is the sum of check losses for the extended model, with $\hat{\bbeta }_{1_2}(\tau )$ denoting the estimate of $\bbeta _1(\tau )$ under the extended model; and $\hat{s}$ is the estimated sparsity function. The Type II test score, LR2, is defined as

    \[ {2D_2(\tau )\left(\log (D_1(\tau ))-\log (D_2(\tau ))\right)\over \tau (1-\tau )\hat{s}} \]
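
The following Python sketch illustrates how these test scores could be computed from quantities that are assumed to be already available; it is not the procedure's implementation, and the function and argument names are illustrative. The Wald score uses the estimated subvector $\hat{\bbeta }_2(\tau )$ and its covariance block $\hat{\Sigma }_{22}(\tau )$; LR1 and LR2 use the check-loss sums $D_1(\tau )$ and $D_2(\tau )$ together with an estimated sparsity value $\hat{s}$.

import numpy as np

def wald_score(beta2_hat, sigma22_hat):
    """Wald test score: beta2' * inverse(Sigma22) * beta2."""
    beta2_hat = np.asarray(beta2_hat, dtype=float)
    sigma22_hat = np.asarray(sigma22_hat, dtype=float)
    # Solve a linear system instead of forming the explicit inverse.
    return float(beta2_hat @ np.linalg.solve(sigma22_hat, beta2_hat))

def lr1_score(d1, d2, tau, sparsity):
    """Type I quasi-likelihood ratio score: 2*(D1 - D2) / (tau*(1 - tau)*s_hat)."""
    return 2.0 * (d1 - d2) / (tau * (1.0 - tau) * sparsity)

def lr2_score(d1, d2, tau, sparsity):
    """Type II quasi-likelihood ratio score: 2*D2*(log(D1) - log(D2)) / (tau*(1 - tau)*s_hat)."""
    return 2.0 * d2 * (np.log(d1) - np.log(d2)) / (tau * (1.0 - tau) * sparsity)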

Under the null hypothesis that the reduced model is the true model, the Wald score, LR1 score, and LR2 score all follow a $\chi ^2$ distribution with degrees of freedom $\mathit{df}=\mathit{df}_2-\mathit{df}_1$, where $\mathit{df}_1$ and $\mathit{df}_2$ are the degrees of freedom for the reduced model and the extended model, respectively.
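
Continuing the sketch, the significance level is the right-tail chi-square probability of the test score. The code below assumes that SciPy is available; the function name is illustrative.

from scipy.stats import chi2

def significance_level(score, df_reduced, df_extended):
    """Right-tail chi-square probability for a Wald, LR1, or LR2 score."""
    df = df_extended - df_reduced  # degrees of freedom added by the candidate effect
    return float(chi2.sf(score, df))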

When you use SL as a criterion for effect selection, the algorithm for estimating the sparsity function depends on whether an effect is being considered as an add candidate or a drop candidate. For testing an add candidate effect, the sparsity function, which is $s(\tau )$ under the iid error assumption or $s_ i(\tau )$ for non-iid settings, is estimated on the reduced model, which does not include the add candidate effect. For testing a drop candidate effect, the sparsity function is estimated on the extended model, which still includes the drop candidate effect. These estimated sparsity function values are then used to compute LR1 or LR2 and the covariance matrix of the parameter estimates for the extended model. However, for the model that is selected at each step, the sparsity function for estimating standard errors and confidence limits of the parameter estimates is estimated on that model itself, rather than on the model that was selected at the preceding step.
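
The following minimal sketch restates this bookkeeping in code (hypothetical names, not the procedure's algorithm): the model on which the sparsity function is estimated for a test depends only on whether the candidate effect is being added or dropped.

def sparsity_basis_model(candidate_action, reduced_model, extended_model):
    """Return the model on which the sparsity function is estimated
    when a candidate effect is tested during SL-based selection."""
    if candidate_action == "add":
        # The add candidate is excluded, so use the reduced model.
        return reduced_model
    if candidate_action == "drop":
        # The drop candidate is still included, so use the extended model.
        return extended_model
    raise ValueError("candidate_action must be 'add' or 'drop'")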

Because the null hypotheses usually do not hold, the SLENTRY and SLSTAY values cannot reliably be interpreted as probabilities. One way to address this difficulty is to select models by using information criteria or out-of-sample prediction criteria instead of hypothesis tests.

Table 59.6 provides formulas and definitions for the fit statistics that the HPQUANTSELECT procedure computes for single-quantile effect selection.

Table 59.6: Formulas and Definitions for Model Fit Summary Statistics for Single Quantile Effect Selection

Statistic: Definition or Formula

n: Number of observations

p: Number of parameters, including the intercept

$r_ i(\tau )$: Residual for the ith observation; $r_ i(\tau ) = y_ i-\mb{x}_ i\hat{\bbeta }(\tau )$

$D(\tau )$: Total sum of check losses; $D(\tau )=\sum _{i=1}^ n \rho _\tau (r_ i(\tau ))$. $D(\tau )$ is labeled as Objective Function in the "Fit Statistics" table.

$D_0(\tau )$: Total sum of check losses for the intercept-only model if the intercept is a forced-in effect; otherwise, for the empty model

$\mbox{ACL}(\tau )$: Average check loss; $\mbox{ACL}(\tau ) ={D(\tau )\over n}$

$\mbox{R1}(\tau )$: Counterpart of the linear regression R square for quantile regression; $\mbox{R1}(\tau )=1- {D(\tau )\over D_0(\tau )}$

$\mbox{ADJR1}(\tau )$: Adjusted R1; $1-{(n-1)D(\tau )\over (n-p)D_0(\tau )}$ if the intercept is a forced-in effect; otherwise $1-{nD(\tau )\over (n-p)D_0(\tau )}$

$\mbox{AIC}(\tau )$: $2n\ln \left( \mbox{ACL}(\tau ) \right) + 2p$

$\mbox{AICC}(\tau )$: $2n\ln \left( \mbox{ACL}(\tau ) \right) + {2pn\over n-p-1}$

$\mbox{SBC}(\tau )$: $2n\ln \left( \mbox{ACL}(\tau ) \right) + p \ln (n)$


The $\mbox{ADJR1}(\tau )$ criterion is equivalent to the generalized approximate cross validation (GACV) criterion for quantile regression (Yuan 2006). The GACV criterion is defined as

\[ \mbox{GACV}(\tau )=D(\tau )/ (n-p) \]

which is proportional to $1-\mbox{ADJR1}(\tau )$. Because the proportionality factor involves only $n$ and $D_0(\tau )$, which do not change across candidate models, GACV and ADJR1 rank candidate models identically.
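
To make the equivalence concrete, the following continuation of the sketch (with hypothetical numbers) computes the ratio of $1-\mbox{ADJR1}(\tau )$ to GACV for two candidate models; the ratio is the same for both because it depends only on $n$ and $D_0(\tau )$.

def gacv(d, n, p):
    """GACV(tau) = D(tau) / (n - p)."""
    return d / (n - p)

d0 = 120.0  # hypothetical D0(tau) for the intercept-only model
n = 100
for d, p in [(80.0, 3), (70.0, 5)]:  # two hypothetical candidate models
    one_minus_adjr1 = (n - 1) * d / ((n - p) * d0)
    print(one_minus_adjr1 / gacv(d, n, p))  # prints (n - 1)/d0 = 0.825 for both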