The QUANTREG Procedure

Optimization Algorithms

The optimization problem for median regression has been formulated and solved as a linear programming (LP) problem since the 1950s. Variations of the simplex algorithm, especially the method of Barrodale and Roberts (1973), have been widely used to solve this problem. The simplex algorithm is computationally demanding in large statistical applications, and in theory the number of iterations can increase exponentially with the sample size. This algorithm is often useful with data containing no more than tens of thousands of observations.

Several alternatives have been developed to handle $L_1$ regression for larger data sets. The interior point approach of Karmarkar (1984) solves a sequence of quadratic problems in which the relevant interior of the constraint set is approximated by an ellipsoid. The worst-case performance of the interior point algorithm has been proved to be better than that of the simplex algorithm. More important, experience has shown that the interior point algorithm is advantageous for larger problems.

Like $L_1$ regression, general quantile regression fits nicely into the standard primal-dual formulations of linear programming.

In addition to the interior point method, various heuristic approaches are available for computing $L_1$-type solutions. Among these, the finite smoothing algorithm of Madsen and Nielsen (1993) is the most useful. It approximates the $L_1$-type objective function with a smoothing function, so that the Newton-Raphson algorithm can be used iteratively to obtain a solution after a finite number of iterations. The smoothing algorithm extends naturally to general quantile regression.

The QUANTREG procedure implements the simplex, interior point, and smoothing algorithms. The remainder of this section describes these algorithms in more detail.

Simplex Algorithm

Let $\bm {\mu } = [\mb {y}-\bA ^{\prime }\bbeta ]_+$, $\bm {\nu } = [\bA ^{\prime }\bbeta -\mb {y}]_+$, $\bm {\phi }=[\bbeta ]_+$, and $\bm {\varphi }=[-\bbeta ]_+$, where $[z]_+$ denotes the componentwise nonnegative part of $z$.

Let $D_{\mathit{LAR}}(\bbeta ) = \sum _{i=1}^ n |y_ i - \mb {x}_ i^{\prime }\bbeta |$. For the $L_1$ problem, the simplex approach solves $\min _{\bbeta } D_{\mathit{LAR}}(\bbeta )$ by reformulating it as the constrained minimization problem

\[ \min _{\bbeta } \{  \mb {e}^{\prime } \bm {\mu } + \mb {e}^{\prime } \bm {\nu } | \mb {y} = \bA ^{\prime }\bbeta + \bm {\mu } - \bm {\nu }, \{ \bm {\mu }, \bm {\nu }\} \in \mb {R}_{+}^{n} \}  \]

where $\mb {e}$ denotes an $(n \times 1)$ vector of ones.
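To make the reformulation concrete, the following sketch solves the $L_1$ problem in exactly this LP form. It is an illustration only (Python with scipy.optimize.linprog, not PROC QUANTREG's simplex code), and it keeps $\bbeta $ as a free variable instead of splitting it into the nonnegative parts $\bm {\phi } - \bm {\varphi }$, which is a simplification:

```python
# Median (L1) regression via the LP reformulation above:
# minimize e'mu + e'nu subject to y = X beta + mu - nu, with mu, nu >= 0.
# Illustration only; beta is left free rather than split into phi - varphi.
import numpy as np
from scipy.optimize import linprog

def l1_fit(X, y):
    n, p = X.shape
    # Decision vector: [beta (free), mu (n entries), nu (n entries)]
    c = np.concatenate([np.zeros(p), np.ones(n), np.ones(n)])
    A_eq = np.hstack([X, np.eye(n), -np.eye(n)])
    bounds = [(None, None)] * p + [(0, None)] * (2 * n)
    res = linprog(c, A_eq=A_eq, b_eq=y, bounds=bounds, method="highs")
    return res.x[:p]

# Intercept-only design: the L1 fit is a sample median.
y = np.array([1.0, 2.0, 3.0, 10.0, 20.0])
X = np.ones((5, 1))
beta = l1_fit(X, y)   # beta[0] is close to 3.0, the median of y
```

For a design with an odd number of observations and an intercept only, the minimizer of $\sum |y_ i - b|$ is the sample median, which the LP recovers.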

Let $\bB =[ \bA ^{\prime }\  -\bA ^{\prime }\  \bI \  -\bI ]$, $\bm {\theta } = (\bm {\phi }^{\prime }\   {\bm {\varphi }}^{\prime }\  \bm {\mu }^{\prime }\   \bm {\nu }^{\prime })^{\prime }$, and $\mb {d}=(\mb {0}^{\prime }\   \mb {0}^{\prime }\   \mb {e}^{\prime }\   \mb {e}^{\prime })^{\prime }$, where $\mb {0}^{\prime } = {(0\  0\  \ldots \  0)}_ p$. The reformulation presents a standard LP problem:

\[ (\mathrm{P}) \quad \min _{\bm {\theta }} \mb {d}^{\prime }\bm {\theta } \quad {\mbox{subject to }} \bB \bm {\theta } = \mb {y}, \  \bm {\theta } \geq \mb {0} \]

This problem has the dual formulation

\[ (\mathrm{D}) \quad \max _{\mb {z}} \mb {y}^{\prime }\mb {z} \quad {\mbox{subject to }} \bB ^{\prime }\mb {z} \leq \mb {d} \]

which can be simplified as

\[  \max _{z} \mb {y}^{\prime } \mb {z} ; \  \  \mbox{subject to } \bA \mb {z} = \mb {0}, \mb {z} \in [-1, 1]^ n  \]

By setting $\bm {\eta } ={\frac12}\mb {z} + {\frac12}\mb {e}$ and $\mb {b}={\frac12} \bA \mb {e}$, the problem becomes

\[  \max _{\bm {\eta }} \mb {y}^{\prime }\bm {\eta }; \  \   \mbox{subject to } \bA \bm {\eta } = \mb {b}, \bm {\eta } \in [0, 1]^ n  \]

For quantile regression, the minimization problem is $\min _\bbeta \sum _{i=1}^ n \rho _\tau (y_ i-\mb {x}_ i^{\prime }\bbeta )$, and a similar set of steps leads to the dual formulation

\[ \max _{z} \mb {y}^{\prime }\mb {z}; \  \   \mbox{subject to } \bA \mb {z} = (1-\tau ) \bA \mb {e}, \mb {z} \in [0, 1]^ n  \]
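This dual can be checked numerically on a small example. The sketch below (an illustration in Python with scipy.optimize.linprog, not the procedure's algorithm) solves the bounded-variable dual for an intercept-only design; in this example the dual optimum exceeds $\min _ b \sum \rho _\tau (y_ i - b)$ by exactly the constant $(1-\tau ) \sum _ i y_ i$, which is the offset introduced by shifting the dual vector into $[0,1]^ n$:

```python
# Dual of quantile regression: max y'z s.t. A z = (1 - tau) A e, z in [0,1]^n,
# for an intercept-only model (A is a single row of ones).
# Illustration only; linprog minimizes, so the objective sign is flipped.
import numpy as np
from scipy.optimize import linprog

tau = 0.25
y = np.array([1.0, 2.0, 3.0, 4.0, 10.0])
n = len(y)
A = np.ones((1, n))                    # design matrix A (p x n), p = 1
b_eq = (1 - tau) * A @ np.ones(n)      # (1 - tau) A e
res = linprog(-y, A_eq=A, b_eq=b_eq, bounds=[(0, 1)] * n, method="highs")
dual_opt = -res.fun

# Primal optimum: min_b sum rho_tau(y_i - b); the minimizer is a sample
# tau-quantile (here b = 2), so it suffices to search over the data points.
rho = lambda u: u * (tau - (u < 0))
primal_opt = min(np.sum(rho(y - b)) for b in y)

# The shift of z into [0,1]^n adds the constant (1 - tau) * sum(y):
gap = dual_opt - (primal_opt + (1 - tau) * y.sum())   # gap is 0
```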

The QUANTREG procedure solves this LP problem by using the simplex algorithm of Barrodale and Roberts (1973). This algorithm solves the primal LP problem (P) in two stages, which exploit the special structure of the coefficient matrix $\bB $. The first stage picks the columns in $\bA ^{\prime }$ or $-\bA ^{\prime }$ as pivotal columns. The second stage interchanges the columns in $\bI $ or $-\bI $ as basis or nonbasis columns, respectively. The algorithm obtains an optimal solution by alternating between these two stages. Moreover, because of the special structure of $\bB $, only the main data matrix $\bA $ needs to be stored in memory.

Although this special version of the simplex algorithm was introduced for median regression, it extends naturally to quantile regression for any given quantile and even to the entire quantile process (Koenker and d’Orey, 1994). It greatly reduces the computing time required by the general simplex algorithm, and it is suitable for data sets with fewer than 5,000 observations and 50 variables.

Interior Point Algorithm

There are many variations of interior point algorithms. The QUANTREG procedure uses the primal-dual predictor-corrector algorithm implemented by Lustig, Marsten, and Shanno (1992). The text by Roos, Terlaky, and Vial (1997) provides more information about this particular algorithm. The following brief introduction to the algorithm uses the notation of Lustig, Marsten, and Shanno (1992).

To be consistent with the conventional LP setting, let $\mb {c}=-\mb {y}$, $\mb {b}=(1-\tau ) \bA \mb {e}$, and let $\mb {u}$ be the vector of general upper bounds. The linear program to be solved is

$\min \{ \mb {c}^{\prime } \mb {z}\} $

subject to

$\bA \mb {z} = \mb {b}$

$\mb {0} \leq \mb {z} \leq \mb {u}$

To simplify the computation, this is treated as the primal problem. The problem has n variables. The index i denotes a variable number, and k denotes an iteration number. When k is used as a subscript or superscript, it denotes quantities from iteration k.

Let $\mb {v}$ be the primal slack so that $\mb {z} + \mb {v} = \mb {u}$. Associate dual variables $\mb {w}$ with these constraints. The interior point algorithm solves the following system of equations, which expresses the Karush-Kuhn-Tucker (KKT) conditions for optimality:

$\displaystyle  \bA \mb {z} = \mb {b} $
$\displaystyle \mb {z} + \mb {v} = \mb {u}  $
$\displaystyle \bA ^{\prime } \mb {t} + \mb {s} - \mb {w} = \mb {c}  $
$\displaystyle \bZ \bS \mb {e} = \mb {0}  $
$\displaystyle \bV \bW \mb {e} = \mb {0}  $
$\displaystyle \mb {z},\mb {s},\mb {v},\mb {w} \geq \mb {0}  $
$\displaystyle \text {where} $
$\displaystyle \bW = \mr {diag}(\mb {w}) \text {(that is, $W_{i,j} = w_ i$ if $i=j$, $W_{i,j} = 0$ otherwise)} $
$\displaystyle \bV =\mr {diag}(\mb {v}), \bZ =\mr {diag}(\mb {z}), \bS =\mr {diag}(\mb {s}) $

These are the conditions for feasibility, with the addition of the complementarity conditions $\bZ \bS \mb {e} = \mb {0}$ and $\bV \bW \mb {e} = \mb {0}$. Complementarity forces the optimal objectives of the primal and dual problems to be equal, $\mb {c}^{\prime } \mb {z}_{\mathit{opt}} = \mb {b}^{\prime } \mb {t}_{\mathit{opt}} - \mb {u}^{\prime } \mb {w}_{\mathit{opt}}$, as the following derivation shows:

\begin{align*}
0 &= \mb {v}^{\prime }_{\mathit{opt}} \mb {w}_{\mathit{opt}} = (\mb {u} - \mb {z}_{\mathit{opt}})^{\prime } \mb {w}_{\mathit{opt}} = \mb {u}^{\prime } \mb {w}_{\mathit{opt}} - \mb {z}^{\prime }_{\mathit{opt}} \mb {w}_{\mathit{opt}} \\
0 &= \mb {z}^{\prime }_{\mathit{opt}} \mb {s}_{\mathit{opt}} = \mb {s}^{\prime }_{\mathit{opt}} \mb {z}_{\mathit{opt}} = (\mb {c} - \bA ^{\prime } \mb {t}_{\mathit{opt}} + \mb {w}_{\mathit{opt}})^{\prime } \mb {z}_{\mathit{opt}} \\
 &= \mb {c}^{\prime } \mb {z}_{\mathit{opt}} - \mb {t}^{\prime }_{\mathit{opt}} (\bA \mb {z}_{\mathit{opt}}) + \mb {w}_{\mathit{opt}}^{\prime } \mb {z}_{\mathit{opt}} = \mb {c}^{\prime } \mb {z}_{\mathit{opt}} - \mb {b}^{\prime } \mb {t}_{\mathit{opt}} + \mb {u}^{\prime } \mb {w}_{\mathit{opt}}
\end{align*}

where the last equality uses $\mb {z}^{\prime }_{\mathit{opt}} \mb {w}_{\mathit{opt}} = \mb {u}^{\prime } \mb {w}_{\mathit{opt}}$ from the first line. Hence

\[ 0 = \mb {c}^{\prime } \mb {z}_{\mathit{opt}} - \mb {b}^{\prime } \mb {t}_{\mathit{opt}} + \mb {u}^{\prime } \mb {w}_{\mathit{opt}}  \]

The duality gap, $\mb {c}^{\prime } \mb {z} - \mb {b}^{\prime } \mb {t} + \mb {u}^{\prime } \mb {w}$, is used to measure the convergence of the algorithm. You can specify a tolerance for this convergence criterion with the TOLERANCE= option in the PROC QUANTREG statement.

Before the optimum is reached, it is possible for a solution $(\mb {z}, \mb {t}, \mb {s}, \mb {v}, \mb {w})$ to violate the KKT conditions in one of several ways:

  • Primal bound constraints can be broken, ${\bdelta }_ b = \mb {u} - \mb {z} - \mb {v} \neq \mb {0}$.

  • Primal constraints can be broken, ${\bdelta }_ c = \mb {b} - \bA \mb {z} \neq \mb {0}$.

  • Dual constraints can be broken, ${\bdelta }_ d = \mb {c} - \bA ^{\prime } \mb {t} - \mb {s} + \mb {w}\neq \mb {0}$.

  • Complementarity conditions are unsatisfied, $\mb {z}^{\prime } \mb {s} \neq 0$ and $\mb {v}^{\prime } \mb {w} \neq 0$.

The interior point algorithm works by using Newton’s method to find a direction $(\bDelta z^ k, \bDelta t^ k, \bDelta s^ k, \bDelta v^ k, \bDelta w^ k)$ to move from the current solution $(\mb {z}^ k, \mb {t}^ k, \mb {s}^ k, \mb {v}^ k, \mb {w}^ k)$ toward a better solution:

\[ (\mb {z}^{k+1}, \mb {t}^{k+1}, \mb {s}^{k+1}, \mb {v}^{k+1}, \mb {w}^{k+1}) = (\mb {z}^ k, \mb {t}^ k, \mb {s}^ k, \mb {v}^ k, \mb {w}^ k) + \kappa (\bDelta z^ k, \bDelta t^ k, \bDelta s^ k, \bDelta v^ k, \bDelta w^ k) \]

$\kappa $ is the step length and is assigned a value as large as possible, but not so large that a $\mb {z}^{k+1}_ i$ or $\mb {s}^{k+1}_ i$ is too close to zero. You can control the step length with the KAPPA= option in the PROC QUANTREG statement.
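The step-length computation is a standard ratio test: only components that move toward zero constrain $\kappa $. The following sketch illustrates it (the damping factor 0.99995 is an assumption for this illustration, not necessarily the value the procedure uses):

```python
import numpy as np

def max_step(x, dx, damping=0.99995):
    """Largest kappa in (0, 1] such that x + kappa*dx stays strictly positive.

    Only components with dx < 0 can drive x toward zero; the damping factor
    keeps the new point in the strict interior of the feasible region.
    """
    neg = dx < 0
    if not np.any(neg):
        return 1.0
    return min(1.0, damping * np.min(x[neg] / -dx[neg]))

kappa = max_step(np.array([1.0, 2.0]), np.array([-2.0, 1.0]))
# kappa is 0.99995 * (1.0 / 2.0) = 0.499975
```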

The QUANTREG procedure implements a predictor-corrector variant of the primal-dual interior point algorithm. First, Newton’s method is used to find a direction $(\bDelta z^ k_{\mathit{aff}}, \bDelta t^ k_{\mathit{aff}}, \bDelta s^ k_{\mathit{aff}}, \bDelta v^ k_{\mathit{aff}}, \bDelta w^ k_{\mathit{aff}})$ in which to move. This is known as the affine step.

In iteration k, the affine step system that must be solved is

$\displaystyle  \bDelta z_{\mathit{aff}} + \bDelta v_{\mathit{aff}} = {\bdelta }_ b  $
$\displaystyle \bA \bDelta z_{\mathit{aff}} = {\bdelta }_ c  $
$\displaystyle \bA ^{\prime } \bDelta t_{\mathit{aff}} + \bDelta s_{\mathit{aff}} - \bDelta w_{\mathit{aff}} = {\bdelta }_ d  $
$\displaystyle \bS \bDelta z_{\mathit{aff}} + \bZ \bDelta s_{\mathit{aff}} = - \bZ \bS \mb {e}  $
$\displaystyle \bV \bDelta w_{\mathit{aff}} + \bW \bDelta v_{\mathit{aff}} = - \bV \bW \mb {e}  $

Therefore, the computations involved in solving the affine step are

$\displaystyle  \bTheta =\bS \bZ ^{-1} + \bW \bV ^{-1}  $
$\displaystyle \brho = \bTheta ^{-1} ({\bdelta }_ d + (\bS -\bW )\mb {e} - \bV ^{-1} \bW {\bdelta }_ b)  $
$\displaystyle \bDelta t_{\mathit{aff}} = (\bA \bTheta ^{-1} \bA ^{\prime })^{-1} ({\bdelta }_ c + \bA \brho )  $
$\displaystyle \bDelta z_{\mathit{aff}} = \bTheta ^{-1} \bA ^{\prime } \bDelta t_{\mathit{aff}} - \brho  $
$\displaystyle \bDelta v_{\mathit{aff}} = {\bdelta }_ b - \bDelta z_{\mathit{aff}}  $
$\displaystyle \bDelta w_{\mathit{aff}} = -\bW \mb {e} - \bV ^{-1} \bW \bDelta v_{\mathit{aff}}  $
$\displaystyle \bDelta s_{\mathit{aff}} = -\bS \mb {e} - \bZ ^{-1} \bS \bDelta z_{\mathit{aff}}  $
$\displaystyle (\mb {z}_{\mathit{aff}}, \mb {t}_{\mathit{aff}}, \mb {s}_{\mathit{aff}}, \mb {v}_{\mathit{aff}}, \mb {w}_{\mathit{aff}}) = (\mb {z}, \mb {t}, \mb {s}, \mb {v}, \mb {w}) + \kappa (\bDelta z_{\mathit{aff}}, \bDelta t_{\mathit{aff}}, \bDelta s_{\mathit{aff}}, \bDelta v_{\mathit{aff}}, \bDelta w_{\mathit{aff}})  $

where $\kappa $ is the step length as before.
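As a sanity check, the affine-step formulas can be verified numerically: starting from a random strictly positive point, the direction computed by the factorized formulas satisfies the five linearized KKT equations. The sketch below (an illustration only, not code from the procedure) does exactly that; note that the update of $\bDelta w_{\mathit{aff}}$ uses $\bDelta v_{\mathit{aff}}$:

```python
# Build a random strictly interior point, compute the affine direction from
# the factorized formulas, and verify the five linearized KKT equations.
import numpy as np

rng = np.random.default_rng(0)
p, n = 2, 6
A = rng.standard_normal((p, n))                  # A is p x n, as in the text
z, s, v, w = (rng.uniform(0.5, 1.5, n) for _ in range(4))
t = rng.standard_normal(p)
u = z + v + rng.uniform(0.1, 0.5, n)             # arbitrary upper bounds
b = A @ z + 0.1 * rng.standard_normal(p)         # arbitrary primal rhs
c = A.T @ t + s - w + 0.1 * rng.standard_normal(n)

db = u - z - v                                   # delta_b
dc = b - A @ z                                   # delta_c
dd = c - A.T @ t - s + w                         # delta_d

Z, S, V, W = map(np.diag, (z, s, v, w))
e = np.ones(n)
Theta = S @ np.linalg.inv(Z) + W @ np.linalg.inv(V)
Ti = np.linalg.inv(Theta)
rho = Ti @ (dd + (S - W) @ e - np.linalg.inv(V) @ W @ db)
dt = np.linalg.solve(A @ Ti @ A.T, dc + A @ rho)
dz = Ti @ A.T @ dt - rho
dv = db - dz
dw = -W @ e - np.linalg.inv(V) @ W @ dv          # uses dv, not dz
ds = -S @ e - np.linalg.inv(Z) @ S @ dz

ok = (np.allclose(dz + dv, db) and np.allclose(A @ dz, dc)
      and np.allclose(A.T @ dt + ds - dw, dd)
      and np.allclose(S @ dz + Z @ ds, -Z @ S @ e)
      and np.allclose(V @ dw + W @ dv, -V @ W @ e))   # ok is True
```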

The success of the affine step is gauged by calculating the complementarity of $\mb {z}^{\prime } \mb {s}$ and $\mb {v}^{\prime }\mb {w}$ at $(\mb {z}^ k_{\mathit{aff}}, \mb {t}^ k_{\mathit{aff}}, \mb {s}^ k_{\mathit{aff}}, \mb {v}^ k_{\mathit{aff}}, \mb {w}^ k_{\mathit{aff}})$ and comparing it with the complementarity at the starting point $(\mb {z}^ k, \mb {t}^ k, \mb {s}^ k, \mb {v}^ k, \mb {w}^ k)$. If the affine step was successful in reducing the complementarity by a substantial amount, the need for centering is not great, and a value close to zero is assigned to $\sigma $ in a second linear system (see following), which is used to determine a centering vector. If, however, the affine step was unsuccessful, then centering is deemed beneficial, and a value close to 1.0 is assigned to $\sigma $. In other words, the value of $\sigma $ is adaptively altered depending on progress made toward the optimum.

The following linear system is solved to determine a centering vector $(\bDelta z_ c, \bDelta t_ c, \bDelta s_ c, \bDelta v_ c, \bDelta w_ c)$ from $(\mb {z}_{\mathit{aff}}, \mb {t}_{\mathit{aff}}, \mb {s}_{\mathit{aff}}, \mb {v}_{\mathit{aff}}, \mb {w}_{\mathit{aff}})$:

$\displaystyle  \bDelta z_ c + \bDelta v_ c = \mb {0}  $
$\displaystyle \bA \bDelta z_ c = \mb {0} $
$\displaystyle \bA ^{\prime } \bDelta t_ c + \bDelta s_ c - \bDelta w_ c = \mb {0} $
$\displaystyle \bS \bDelta z_ c + \bZ \bDelta s_ c = - \bZ _{\mathit{aff}} \bS _{\mathit{aff}} \mb {e} + \sigma \mu \mb {e} $
$\displaystyle \bV \bDelta w_ c + \bW \bDelta v_ c = - \bV _{\mathit{aff}} \bW _{\mathit{aff}} \mb {e} + \sigma \mu \mb {e} $
$\displaystyle \text {where}~ ~ \zeta _{\mathit{start}} = \mb {z}^{\prime } \mb {s} + \mb {v}^{\prime } \mb {w}, ~ \text {complementarity at the start of the iteration} $
$\displaystyle \zeta _{\mathit{aff}} = \mb {z}_{\mathit{aff}}^{\prime } \mb {s}_{\mathit{aff}} + \mb {v}_{\mathit{aff}}^{\prime } \mb {w}_{\mathit{aff}}, ~ \text {the affine complementarity} $
$\displaystyle \mu = \zeta _{\mathit{aff}} / 2n, ~ \text {the average complementarity} $
$\displaystyle \sigma = (\zeta _{\mathit{aff}} / \zeta _{\mathit{start}})^3 $
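The adaptive choice of $\sigma $ can be transcribed directly from these definitions (an illustration only):

```python
import numpy as np

def centering_params(z, s, v, w, z_aff, s_aff, v_aff, w_aff):
    """Average complementarity mu and centering weight sigma, as defined above."""
    zeta_start = z @ s + v @ w                    # complementarity at the start
    zeta_aff = z_aff @ s_aff + v_aff @ w_aff      # affine complementarity
    mu = zeta_aff / (2 * len(z))                  # average complementarity
    sigma = (zeta_aff / zeta_start) ** 3
    return mu, sigma

# A big reduction in complementarity gives sigma near 0 (little centering);
# little reduction gives sigma near 1 (centering deemed beneficial).
z = s = v = w = np.ones(4)
mu, sigma = centering_params(z, s, v, w, 0.1 * z, s, 0.1 * v, w)
# zeta_start = 8, zeta_aff = 0.8, so mu = 0.1 and sigma = 0.001
```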

Therefore, the computations involved in solving the centering step are

$\displaystyle  \brho =\bTheta ^{-1} (\sigma \mu (\bZ ^{-1} - \bV ^{-1}) \mb {e} - \bZ ^{-1} \bZ _{\mathit{aff}} \bS _{\mathit{aff}} \mb {e} + \bV ^{-1} \bV _{\mathit{aff}} \bW _{\mathit{aff}} \mb {e})  $
$\displaystyle \bDelta t_ c = (\bA \bTheta ^{-1} \bA ^{\prime })^{-1} \bA \brho  $
$\displaystyle \bDelta z_ c = \bTheta ^{-1} \bA ^{\prime } \bDelta t_ c - \brho  $
$\displaystyle \bDelta v_ c = -\bDelta z_ c  $
$\displaystyle \bDelta w_ c = \sigma \mu \bV ^{-1} \mb {e} - \bV ^{-1} \bV _{\mathit{aff}} \bW _{\mathit{aff}} \mb {e} - \bV ^{-1} \bW _{\mathit{aff}} \bDelta v_ c  $
$\displaystyle \bDelta s_ c = \sigma \mu \bZ ^{-1} \mb {e} - \bZ ^{-1} \bZ _{\mathit{aff}} \bS _{\mathit{aff}} \mb {e} - \bZ ^{-1} \bS _{\mathit{aff}} \bDelta z_ c  $


$\displaystyle  (\bDelta z, \bDelta t, \bDelta s, \bDelta v, \bDelta w) = (\bDelta z_{\mathit{aff}}, \bDelta t_{\mathit{aff}}, \bDelta s_{\mathit{aff}}, \bDelta v_{\mathit{aff}}, \bDelta w_{\mathit{aff}}) + (\bDelta z_{c}, \bDelta t_{c}, \bDelta s_{c}, \bDelta v_{c}, \bDelta w_{c})  $
$\displaystyle (\mb {z}^{k+1}, \mb {t}^{k+1}, \mb {s}^{k+1}, \mb {v}^{k+1}, \mb {w}^{k+1}) = (\mb {z}^ k, \mb {t}^ k, \mb {s}^ k, \mb {v}^ k, \mb {w}^ k) + \kappa (\bDelta z, \bDelta t, \bDelta s, \bDelta v, \bDelta w)  $

where, as before, $\kappa $ is the step length assigned a value as large as possible, but not so large that a $\mb {z}^{k+1}_ i$, $\mb {s}^{k+1}_ i$, $\mb {v}^{k+1}_ i$, or $\mb {w}^{k+1}_ i$ is too close to zero.

Although the predictor-corrector variant entails solving two linear systems instead of one, fewer iterations are usually required to reach the optimum. The additional overhead of the second linear system is small because the matrix $(\bA \bTheta ^{-1} \bA ^{\prime })$ has already been factorized in order to solve the first linear system.

You can specify the starting point with the INEST= option in the PROC QUANTREG statement. By default, the starting point is the least squares estimate.

Smoothing Algorithm

To minimize the sum of the absolute residuals $D_{\mathit{LAR}}(\bbeta )$, the smoothing algorithm approximates the nondifferentiable function $D_{\mathit{LAR}}$ by the following smooth function, which is referred to as the Huber function:

\[ D_{\gamma }(\bbeta ) = \sum _{i=1}^ n H_{\gamma }(r_ i(\bbeta )) \]


\[  H_{\gamma }(t) = \left\{  \begin{array}{ll} t^2 \slash (2\gamma ) &  {\mbox{if }} |t| \leq \gamma \\ |t| - \gamma \slash 2 &  {\mbox{if }} |t| > \gamma \end{array} \right.  \]

Here $r_ i(\bbeta ) = y_ i - \mb {x}_ i^{\prime }\bbeta $, and the threshold $\gamma $ is a positive real number. The function $D_{\gamma }$ is continuously differentiable, and a minimizer $\bbeta _{\gamma }$ of $D_{\gamma }$ is close to a minimizer $\hat\bbeta _{\mathit{LAR}}$ of $D_{\mathit{LAR}}(\bbeta )$ when $\gamma $ is close to zero.
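The quality of the Huber approximation is easy to check numerically (a sketch for illustration): inside the band $|t| \leq \gamma $ the function is quadratic, outside the band it matches $|t|$ up to the constant $\gamma /2$, so $H_{\gamma }$ converges to $|t|$ uniformly as $\gamma \to 0$:

```python
import numpy as np

def huber(t, gamma):
    """H_gamma(t): quadratic for |t| <= gamma, shifted absolute value otherwise."""
    t = np.asarray(t, dtype=float)
    return np.where(np.abs(t) <= gamma, t * t / (2 * gamma),
                    np.abs(t) - gamma / 2)

t = np.linspace(-3, 3, 601)
# The uniform approximation error |H_gamma(t) - |t|| is at most gamma / 2:
err = np.max(np.abs(huber(t, 0.1) - np.abs(t)))   # err <= 0.05
```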

The advantage of the smoothing algorithm as described in Madsen and Nielsen (1993) is that the $L_1$ solution $\hat\bbeta _{\mathit{LAR}}$ can be detected while $\gamma > 0$ is still small. In other words, it is not necessary to let $\gamma $ converge to zero in order to find a minimizer of $D_{\mathit{LAR}}(\bbeta )$: the algorithm terminates before it goes through the entire sequence of values of $\gamma $ that it generates. Convergence is indicated when the status of the residuals $r_ i(\bbeta )$ stops changing as $\gamma $ goes through this sequence.

The smoothing algorithm extends naturally from $L_1$ regression to general quantile regression; see Chen (2007). The function

\[ D_{\rho _\tau }(\bbeta ) = \sum _{i=1}^ n \rho _{\tau }(y_ i-\mb {x}_ i^{\prime }\bbeta ) \]

can be approximated by the smooth function

\[ D_{\gamma ,\tau }(\bbeta ) = \sum _{i=1}^ n H_{\gamma ,\tau }(r_ i(\bbeta )) \]


\[  H_{\gamma ,\tau }(t) = \left\{  \begin{array}{ll} t(\tau -1)-{\frac12}(\tau -1)^2\gamma &  {\mbox{if }} t\le (\tau -1)\gamma \\ {\frac{t^2}{2\gamma }} &  {\mbox{if }} (\tau -1)\gamma \leq t \leq \tau \gamma \\ t\tau - {\frac12}\tau ^2\gamma &  {\mbox{if }} t\ge \tau \gamma \end{array} \right.  \]
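A quick numerical check of this definition (an illustration only): $H_{\gamma ,\tau }$ is continuous at both breakpoints and agrees with the check function $\rho _\tau (t) = t(\tau - I(t<0))$ away from a shrinking neighborhood of zero:

```python
import numpy as np

def H(t, gamma, tau):
    """Smoothed check function H_{gamma,tau} as defined above."""
    t = np.asarray(t, dtype=float)
    lo, hi = (tau - 1) * gamma, tau * gamma
    return np.where(t <= lo, t * (tau - 1) - 0.5 * (tau - 1) ** 2 * gamma,
           np.where(t >= hi, t * tau - 0.5 * tau ** 2 * gamma,
                    t * t / (2 * gamma)))

gamma, tau = 0.2, 0.3
# Continuity at the two breakpoints (tau - 1)*gamma and tau*gamma:
for bp in ((tau - 1) * gamma, tau * gamma):
    assert abs(H(bp - 1e-9, gamma, tau) - H(bp + 1e-9, gamma, tau)) < 1e-8

# Agreement with rho_tau away from zero: rho_tau(2) = 2 * 0.3 = 0.6
assert abs(H(2.0, gamma, tau) - 0.6) < gamma
```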

The function $H_{\gamma ,\tau }$ is determined by whether $r_ i(\bbeta )\le (\tau -1)\gamma $, $r_ i(\bbeta )\ge \tau \gamma $, or $(\tau -1)\gamma \leq r_ i(\bbeta )\leq \tau \gamma $. These inequalities divide $\mb {R}^ p$ into subregions separated by the parallel hyperplanes $r_ i(\bbeta )=(\tau -1)\gamma $ and $r_ i(\bbeta )=\tau \gamma $. The set of all such hyperplanes is denoted by $B_{\gamma ,\tau }$:

\[  B_{\gamma , \tau } = \{  \bbeta \in \mb {R}^ p | \exists i: r_ i(\bbeta )=(\tau -1)\gamma \mbox{ or } r_ i(\bbeta )=\tau \gamma \}   \]

Define the sign vector $\mb {s}_\gamma (\bbeta ) = (s_1(\bbeta ),\ldots ,s_ n(\bbeta ))^{\prime }$ as

\[  s_ i = s_ i(\bbeta ) = \left\{  \begin{array}{rl} -1 &  {\mbox{if }} r_ i(\bbeta )\le (\tau -1)\gamma \\ 0 &  {\mbox{if }} (\tau -1)\gamma \leq r_ i(\bbeta ) \leq \tau \gamma \\ 1 &  {\mbox{if }} r_ i(\bbeta )\ge \tau \gamma \end{array} \right.  \]

and introduce

\[  w_ i = w_ i(\bbeta ) = 1- s_ i^2(\bbeta )  \]


Then

\[  H_{\gamma ,\tau }(r_ i(\bbeta )) = {\frac1{2\gamma }} w_ i r_ i^2(\bbeta ) + s_ i \left[ {\frac12} r_ i(\bbeta ) + {\frac14}(1-2\tau )\gamma + s_ i \left( r_ i(\bbeta )(\tau -{\frac12}) - {\frac14}( 1 - 2\tau + 2\tau ^2)\gamma \right) \right]  \]


\[  D_{\gamma ,\tau }(\bbeta ) = {\frac1{2\gamma }} \mb {r}^{\prime }\bW _{\gamma ,\tau } \mb {r} + \mb {v}^{\prime }(s) \mb {r} + c(s)  \]

where $\bW _{\gamma ,\tau }$ is the diagonal $n \times n$ matrix with diagonal elements $w_ i(\bbeta )$, $\mb {v}^{\prime }(s) = (s_1((2\tau -1)s_1+1)\slash 2,\ldots ,s_ n((2\tau -1)s_ n +1)\slash 2)$, $c(s) = \sum [ {\frac14} (1-2\tau )\gamma s_ i - {\frac14} s_ i^2 ( 1-2\tau + 2\tau ^2 ) \gamma ]$, and $r(\bbeta ) = (r_1(\bbeta ),\ldots ,r_ n(\bbeta ))^{\prime }$.

The gradient of $D_{\gamma ,\tau }$ is given by

\[  D^{(1)}_{\gamma , \tau }(\bbeta ) = -\bA [ {\frac1\gamma } \bW _{\gamma ,\tau }(\bbeta ) r(\bbeta ) + v(s)]  \]

and for $\bbeta \in \mb {R}^ p \backslash B_{\gamma , \tau }$ the Hessian exists and is given by

\[  D^{(2)}_{\gamma ,\tau }(\bbeta ) = {\frac1\gamma }\bA \bW _{\gamma ,\tau }(\bbeta ) \bA ^{\prime } \]

The gradient is a continuous function in $\mb {R}^ p$, whereas the Hessian is piecewise constant.
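These formulas can be validated with a finite-difference check. In the sketch below (an illustration only), the matrix X plays the role of $\bA ^{\prime }$, so that $r = y - X\beta $, the gradient is $-X^{\prime }[\frac1\gamma W r + v(s)]$, and the Hessian is $\frac1\gamma X^{\prime } W X$:

```python
import numpy as np

def smooth_obj(beta, X, y, gamma, tau):
    """D_{gamma,tau}(beta) = sum_i H_{gamma,tau}(y_i - x_i' beta)."""
    r = y - X @ beta
    lo, hi = (tau - 1) * gamma, tau * gamma
    return np.sum(np.where(r <= lo, r * (tau - 1) - 0.5 * (tau - 1) ** 2 * gamma,
                  np.where(r >= hi, r * tau - 0.5 * tau ** 2 * gamma,
                           r * r / (2 * gamma))))

def smooth_grad(beta, X, y, gamma, tau):
    """Gradient -X'[(1/gamma) W r + v(s)] of D_{gamma,tau}."""
    r = y - X @ beta
    s = np.where(r <= (tau - 1) * gamma, -1, np.where(r >= tau * gamma, 1, 0))
    w = 1 - s ** 2                        # w_i = 1 - s_i^2
    v = s * ((2 * tau - 1) * s + 1) / 2   # v_i(s)
    return -X.T @ (w * r / gamma + v)

X = np.array([[1.0, 0.5], [1.0, -1.0], [1.0, 2.0], [1.0, 0.0]])
y = np.array([0.4, 2.0, 0.5, 3.0])
beta, gamma, tau = np.array([0.2, 0.3]), 0.4, 0.3

grad = smooth_grad(beta, X, y, gamma, tau)
# Central finite differences reproduce the analytic gradient
# (beta is chosen away from the hyperplanes in B_{gamma,tau}):
h = 1e-6
num = np.array([(smooth_obj(beta + h * ei, X, y, gamma, tau)
                 - smooth_obj(beta - h * ei, X, y, gamma, tau)) / (2 * h)
                for ei in np.eye(2)])
```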

Following Madsen and Nielsen (1993), the vector $\mb {s}$ is referred to as a $\gamma $-feasible sign vector if there exists $\bbeta \in \mb {R}^ p \backslash B_{\gamma , \tau }$ with $\mb {s}_\gamma (\bbeta ) = \mb {s}$. If $\mb {s}$ is $\gamma $-feasible, then $Q_ s$ is defined as the quadratic function $Q_ s(\balpha )$ that is derived from $ D_{\gamma ,\tau }(\bbeta )$ by substituting $\mb {s}$ for $\mb {s}_\gamma $. Thus, for any $\bbeta $ with $\mb {s}_\gamma =\mb {s}$,

\[  Q_ s(\balpha ) = {\frac12}(\balpha -\bbeta )^{\prime } D^{(2)}_{\gamma ,\tau }(\bbeta ) (\balpha -\bbeta ) + D^{(1)}_{\gamma ,\tau }(\bbeta )^{\prime } (\balpha -\bbeta ) + D_{\gamma ,\tau }(\bbeta ) \]

In the domain $C_ s=\{ \balpha \mid \mb {s}_\gamma (\balpha ) = \mb {s}\} $,

\[  D_{\gamma ,\tau }(\alpha ) = Q_ s(\balpha ) \]

For each $\gamma > 0$ and $\theta \in \mb {R}^ p$, there can be one or several corresponding quadratics $Q_ s$. If $\theta \notin B_{\gamma , \tau }$, then $Q_ s$ is characterized by $\theta $ and $\gamma $; but for $\theta \in B_{\gamma , \tau }$ the quadratic is not unique. Therefore, a reference $(\gamma , \theta , \mb {s})$ determines the quadratic.

Again following Madsen and Nielsen (1993), let

$(\gamma , \theta , \mb {s})$ be a feasible reference if $\mb {s}$ is a $\gamma $-feasible sign vector with $\theta \in C_ s$, and

$(\gamma , \theta , \mb {s})$ be a solution reference if it is feasible and $\theta $ minimizes $D_{\gamma ,\tau }$.

The smoothing algorithm for minimizing $D_{\rho _\tau }$ is based on minimizing $D_{\gamma ,\tau }$ for a set of decreasing $\gamma $. For each new value of $\gamma $, information from the previous solution is used. Finally, when $\gamma $ is small enough, a solution can be found by the modified Newton-Raphson algorithm as stated by Madsen and Nielsen (1993):

find an initial solution reference $(\gamma , \bbeta _\gamma , \mb {s})$

repeat

decrease $\gamma $

find a solution reference $ (\gamma , \bbeta _\gamma , \mb {s})$

until $\gamma = 0$

$\bbeta _0$ is the solution.

By default, the initial solution reference is found by letting $\bbeta _\gamma $ be the least squares solution. Alternatively, you can specify the initial solution reference with the INEST= option in the PROC QUANTREG statement. Then $\gamma $ and $\mb {s}$ are chosen according to these initial values.

There are several approaches for determining a decreasing sequence of values of $\gamma $. The QUANTREG procedure uses the strategy of Madsen and Nielsen (1993). The computation involved is not significant compared with that of the Newton-Raphson step. You can control the ratio of consecutive decreasing values of $\gamma $ with the RRATIO= suboption of the ALGORITHM= option in the PROC QUANTREG statement. By default,

\[  {\mbox{RRATIO }} = \left\{  \begin{array}{ll} 0.1 &  {\mbox{ if }} n \ge 10000 {\mbox{ and }} p\le 20 \\ 0.9 &  {\mbox{ if }} {\frac pn} \ge 0.1 {\mbox{ or }} \left\{  n \le 5000 {\mbox{ and }} p\ge 300 \right\}  \\ 0.5 &  {\mbox{ otherwise }} \end{array} \right.  \]
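The default can be transcribed directly (a plain transcription of the cases above, for illustration):

```python
def default_rratio(n, p):
    """Default RRATIO= value as a function of n observations and p variables."""
    if n >= 10000 and p <= 20:
        return 0.1
    if p / n >= 0.1 or (n <= 5000 and p >= 300):
        return 0.9
    return 0.5
```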

For $L_1$ and general quantile regression, the smoothing algorithm turns out to be very efficient and competitive, especially for a fat data set (namely, when ${\frac pn} > 0.05$ and $\bA \bA ^{\prime }$ is dense). See Chen (2007) for the complete smoothing algorithm and details.
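Putting the pieces together, the following sketch implements the $\gamma $-continuation idea end to end. It is an illustration under simplifying assumptions, not the finite algorithm of Madsen and Nielsen (1993): it uses a fixed reduction ratio, a damped Newton inner loop with a small ridge term on the Hessian, and a plain smallness threshold on $\gamma $ in place of the sign-vector stopping rule:

```python
import numpy as np

def smooth_quantile_fit(X, y, tau, rratio=0.5, gamma_min=1e-8):
    """Quantile regression by minimizing D_{gamma,tau} over decreasing gamma."""
    n, p = X.shape

    def obj(beta, gamma):
        r = y - X @ beta
        lo, hi = (tau - 1) * gamma, tau * gamma
        return np.sum(np.where(r <= lo, r * (tau - 1) - 0.5 * (tau - 1)**2 * gamma,
                      np.where(r >= hi, r * tau - 0.5 * tau**2 * gamma,
                               r * r / (2 * gamma))))

    beta = np.linalg.lstsq(X, y, rcond=None)[0]        # least squares start
    gamma = np.max(np.abs(y - X @ beta)) + 1.0
    while gamma > gamma_min:
        for _ in range(50):                            # damped Newton steps
            r = y - X @ beta
            s = np.where(r <= (tau - 1) * gamma, -1,
                np.where(r >= tau * gamma, 1, 0))
            w = 1 - s**2
            v = s * ((2 * tau - 1) * s + 1) / 2
            g = -X.T @ (w * r / gamma + v)             # gradient of D_{gamma,tau}
            if np.linalg.norm(g) < 1e-12:
                break
            H = (X.T * (w / gamma)) @ X + 1e-10 * np.eye(p)
            d = np.linalg.solve(H, g)                  # Newton direction is -d
            t, f0 = 1.0, obj(beta, gamma)
            # Backtracking line search (Armijo) on the smoothed objective:
            while t > 1e-12 and obj(beta - t * d, gamma) > f0 - 1e-4 * t * (g @ d):
                t /= 2
            if t <= 1e-12:
                break
            beta = beta - t * d
        gamma *= rratio
    return beta

# Intercept-only example: the tau = 0.5 fit is the sample median.
y = np.array([1.0, 2.0, 3.0, 10.0, 20.0])
X = np.ones((5, 1))
b = smooth_quantile_fit(X, y, tau=0.5)   # b[0] close to 3, the median
```

For each fixed $\gamma $ the inner loop minimizes the convex, continuously differentiable function $D_{\gamma ,\tau }$; warm starts across the $\gamma $ sequence keep the number of Newton steps per stage small.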