The QUANTREG Procedure
This example illustrates and compares the three algorithms for regression estimation available in the QUANTREG procedure. The simplex algorithm is the default because of its stability. Although this algorithm is slower than the interior point and smoothing algorithms for large data sets, the difference is not as significant for data sets with fewer than 5,000 observations and 50 variables. The simplex algorithm can also compute the entire quantile process, which is shown in Example 73.2.
The following statements generate 1,000 random observations. The first 950 observations are from a linear model, and the last 50 observations are significantly biased in the y-direction. In other words, 5% of the observations are contaminated with outliers.
data a (drop=i);
   do i=1 to 1000;
      x1=rannor(1234);
      x2=rannor(1234);
      e=rannor(1234);
      if i > 950 then y=100 + 10*e;
      else y=10 + 5*x1 + 3*x2 + 0.5*e;
      output;
   end;
run;
proc quantreg data=a;
   model y = x1 x2;
run;
Output 73.1.1 displays model information and summary statistics for variables in the model. It indicates that the simplex algorithm is used to compute the optimal solution and the rank method is used to compute confidence intervals of the parameters.
By default, the QUANTREG procedure fits a median regression model. This is indicated by the quantile value 0.5 in Output 73.1.2, which also displays the objective function value and the predicted value of the response at the means of the covariates.
Output 73.1.3 displays parameter estimates and confidence limits. These estimates are reasonable, which indicates that median regression is robust to the 50 outliers.
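The robustness of the median fit can be checked outside SAS as well. The following is a rough Python sketch (not PROC QUANTREG itself): it replicates the contamination scheme of the DATA step and approximates the L1 (median regression) fit by iteratively reweighted least squares, a standard approximation to the simplex solution.

```python
import numpy as np

# Replicate the simulation: 950 clean observations, 50 gross outliers.
rng = np.random.default_rng(1234)
n = 1000
x1, x2, e = rng.standard_normal((3, n))
y = 10 + 5 * x1 + 3 * x2 + 0.5 * e
y[950:] = 100 + 10 * e[950:]          # last 50 observations are outliers

# Approximate median (L1) regression by iteratively reweighted least squares.
X = np.column_stack([np.ones(n), x1, x2])
beta = np.linalg.lstsq(X, y, rcond=None)[0]   # least-squares starting values
for _ in range(100):
    r = y - X @ beta
    w = 1.0 / np.maximum(np.abs(r), 1e-4)     # L1 reweighting
    beta = np.linalg.solve(X.T @ (w[:, None] * X), X.T @ (w * y))

print(beta)   # should be close to the true coefficients (10, 5, 3)
```

Despite the 50 gross outliers, the L1 estimates stay near the true coefficients, whereas an ordinary least-squares fit of the same data would have its intercept pulled noticeably upward.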
Number of Independent Variables    2
Number of Observations             1000
Method for Confidence Limits       Inv_Rank
The following statements refit the model by using the interior point algorithm:
proc quantreg algorithm=interior(tolerance=1e-6) ci=none data=a;
   model y = x1 x2 / itprint nosummary;
run;
The TOLERANCE= option specifies the stopping criterion for convergence of the interior point algorithm, which is controlled by the duality gap. Although the default criterion is 1E-8, the value 1E-6 is often sufficient. The ITPRINT option requests the iteration history for the algorithm. The option CI=NONE suppresses the computation of confidence limits, and the option NOSUMMARY suppresses the table of summary statistics.
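The duality gap that this criterion bounds is the difference between the primal objective (the quantile-regression check loss) and the dual objective; it vanishes at the optimum. A toy Python illustration for an intercept-only median fit (tau = 0.5, odd sample size; not the QUANTREG implementation) shows the gap closing exactly at the solution:

```python
import numpy as np

# Toy duality gap for an intercept-only median fit (tau = 0.5):
# Primal: minimize  sum_i 0.5 * |y_i - m|
# Dual:   maximize  sum_i d_i * y_i   s.t.  sum_i d_i = 0,  |d_i| <= 0.5
y = np.array([1.0, 2.0, 3.0, 4.0, 100.0])

m = np.median(y)                 # primal optimum
d = 0.5 * np.sign(y - m)         # dual-feasible point (sums to zero, odd n)
primal = 0.5 * np.abs(y - m).sum()
dual = (d * y).sum()

gap = primal - dual
print(m, gap)                    # gap is 0 at the optimum
```

An interior point method keeps primal- and dual-feasible iterates and stops once this gap falls below the TOLERANCE= value, since the gap bounds how far the current objective can be from optimal.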
Output 73.1.4 displays model fit information.
Output 73.1.5 displays the iteration history of the interior point algorithm. Note that the duality gap is less than 1E-6 in the final iteration. The table also provides the number of iterations, the number of corrections, the primal step length, the dual step length, and the objective function value at each iteration.
Iteration History of Interior Point Algorithm
Output 73.1.6 displays the parameter estimates obtained with the interior point algorithm, which are identical to those obtained with the simplex algorithm.
The following statements refit the model by using the smoothing algorithm:

proc quantreg algorithm=smooth(rratio=.5) ci=none data=a;
   model y = x1 x2 / itprint nosummary;
run;
The RRATIO= option controls the reduction speed of the threshold. Output 73.1.7 displays the model fit information.
Output 73.1.8 displays the iteration history of the smoothing algorithm. The threshold controls the convergence. Note that the thresholds decrease by a factor of at least 0.5, the value specified with the RRATIO= option. The table also provides the number of iterations, the number of factorizations, the number of full updates, the number of partial updates, and the objective function value in each iteration. For details concerning the smoothing algorithm, refer to Chen (2007).
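The general idea behind threshold reduction can be sketched as follows. This Python fragment illustrates only the concept, not the actual QUANTREG algorithm of Chen (2007): the absolute loss is replaced by a Huber-type smooth approximation with threshold g, the smoothed problem is minimized by gradient descent, and g is then shrunk by the RRATIO= factor before the next pass. An intercept-only median fit keeps the sketch short.

```python
import numpy as np

def smoothed_median(y, rratio=0.5, outer=30, inner=200, lr=0.1):
    """Illustrative smoothing iteration for an intercept-only median fit."""
    m = y.mean()
    g = np.abs(y - m).max() + 1e-12       # initial smoothing threshold
    for _ in range(outer):
        for _ in range(inner):
            r = y - m
            # derivative of the Huber-smoothed |r| is clip(r/g, -1, 1)
            m += lr * np.clip(r / g, -1.0, 1.0).mean()
        g *= rratio                       # threshold shrinks by RRATIO=
    return m

y = np.array([1.0, 2.0, 3.0, 4.0, 100.0])
print(smoothed_median(y))   # approaches the sample median, 3
```

As g shrinks, the smoothed loss approaches the absolute loss, so the minimizer moves from the mean (which the large outlier distorts) toward the median; the rate at which g decreases is exactly what the RRATIO= option controls.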
Iteration History of Smoothing Algorithm
Output 73.1.9 displays the parameter estimates obtained with the smoothing algorithm, which are identical to those obtained with the simplex and interior point algorithms.
The interior point algorithm and the smoothing algorithm offer better performance than the simplex algorithm for large data sets. Refer to Chen (2004) for more details on choosing an appropriate algorithm on the basis of data set size. All three algorithms should have the same parameter estimates, unless the optimization problem has multiple solutions.
Copyright © SAS Institute, Inc. All Rights Reserved.