Language Reference

LAV Call

performs linear least absolute value regression by solving the l_1 norm minimization problem

CALL LAV(rc, xr, a, b <, x0 <, opt> >);

The LAV subroutine returns the following values:



rc
is a scalar return code indicating the reason for optimization termination.

rc        Termination
 0        Successful
 1        Successful, but the approximate covariance matrix and standard errors cannot be computed
-1 or -3  Unsuccessful: error in the input arguments
-2        Unsuccessful: matrix A is rank deficient (rank(A) < n)
-4        Unsuccessful: maximum iteration limit exceeded
-5        Unsuccessful: no solution found for the ill-conditioned problem


xr
specifies a vector or matrix with n columns. If the optimization process is not successfully completed, xr is a row vector with n missing values. If termination is successful and the opt[3] option is not set, xr is the vector with the optimal estimate, x^*. If termination is successful and the opt[3] option is specified, xr is an (n+2) x n matrix that contains the optimal estimate, x^*, in the first row, the asymptotic standard errors in the second row, and the n x n covariance matrix of parameter estimates in the remaining rows.
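For example, if the call terminates successfully with opt[3] specified, the pieces of xr could be separated as in the following sketch (the variable names est, ase, and cov are illustrative, not part of the subroutine):

    /* hypothetical sketch: unpack xr when opt[3] was specified          */
    /* row 1 = estimates, row 2 = ASEs, rows 3 to n+2 = covariance matrix */
    if rc = 0 then do;
       n   = ncol(xr);
       est = xr[1, ];
       ase = xr[2, ];
       cov = xr[3:(n+2), ];
    end;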

The inputs to the LAV subroutine are as follows:



a
specifies an m x n matrix A with m \geq n and full column rank, rank(A) = n. If you want to include an intercept in the model, you must include a column of ones in the matrix A.

b
specifies the m x 1 vector b.

x0
specifies an optional n x 1 vector that specifies the starting point of the optimization.

opt
is an optional vector used to specify options.

opt[1] specifies the maximum number maxi of outer iterations (this corresponds to the number of changes of the Huber parameter \gamma). The default is maxi = \min(100, 10n). (The number of inner iterations is restricted by an internal threshold. If the number of inner iterations exceeds this threshold, a new outer iteration is started with an increased value of \gamma.)

opt[2] specifies the amount of printed output. Higher values request additional output and include the output of lower values.

opt[2]  Printed Output
0       no output is printed
1       error and warning messages are printed
2       the iteration history is printed (this is the default)
3       the n least squares (l_2 norm) estimates are printed if no starting point is specified; the l_1 norm estimates are printed; if opt[3] is set, the estimates are printed together with their asymptotic standard errors
4       the n x n approximate covariance matrix of parameter estimates is printed if opt[3] is set
5       the residual and predicted values for all m rows (equations) of A are printed


opt[3] specifies which estimate of the variance of the median of nonzero residuals is to be used as a factor for the approximate covariance matrix of parameter estimates and for the approximate standard errors (ASE). If opt[3]=0, the McKean-Schrader (1987) estimate is used; if opt[3] > 0, the Cox-Hinkley (1974) estimate, with v = opt[3], is used. The default is opt[3]=-1 or opt[3]=., which means that the covariance matrix is not computed.

opt[4] specifies whether a computationally expensive test for necessary and sufficient optimality of the solution x is executed. The default is opt[4]=0 or opt[4]=., which means that the convergence test is not performed.

Missing values are not permitted in the a or b argument. The x0 argument is ignored if it contains any missing values. Missing values in the opt argument cause the default value to be used.
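For example, the following hypothetical options vector accepts the defaults for the iteration limit and the print level through missing values, requests the McKean-Schrader based covariance matrix, and turns on the optimality test:

    /* illustrative options vector:
       opt[1] = .  default iteration limit
       opt[2] = .  default print level (iteration history)
       opt[3] = 0  McKean-Schrader variance estimate
       opt[4] = 1  perform the optimality test          */
    opt = { . . 0 1 };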



The Least Absolute Values (LAV) subroutine is designed for solving the unconstrained linear l_1 norm minimization problem,
\min_{x} l_1(x) \quad \text{where} \quad l_1(x) = \| Ax - b \|_1 = \sum_{i=1}^m \Big| \sum_{j=1}^n a_{ij} x_j - b_i \Big|
for m equations with n (unknown) parameters x = (x_1, \ldots, x_n). This is equivalent to estimating the unknown parameter vector x by least absolute value regression in the model
b = Ax + \epsilon
where b is the vector of m observations, A is the design matrix, and \epsilon is a vector of random errors.
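As a simple worked special case, suppose the design matrix A is a single column of ones, so that n = 1 and the model contains only an intercept. Then
l_1(x) = \sum_{i=1}^m | x - b_i |,
which is minimized by any median of b_1, \ldots, b_m; this link to the median is one way to motivate the median-based variance estimates offered through opt[3].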

An algorithm by Madsen and Nielsen (1993) is used, which can be faster for large values of m and n than the Barrodale and Roberts (1974) algorithm. The current version of the algorithm assumes that A has full column rank. Also, constraints cannot be imposed on the parameters in this version.

The l_1 norm minimization problem is more difficult to solve than the least squares (l_2 norm) minimization problem because the objective function of the l_1 norm problem is not continuously differentiable (the first derivative has jumps). A function that is continuous but not continuously differentiable is called nonsmooth. Using PROC NLP and the IML nonlinear optimization subroutines, you can obtain the estimates in linear and nonlinear l_1 norm estimation (even subject to linear or nonlinear constraints) as long as the number of parameters, n, is small. With the nonlinear optimization subroutines, there are two ways to solve the nonlinear l_p norm, p \geq 1, problem:

Both approaches are successful only for a small number of parameters and good initial estimates. If you cannot supply good initial estimates, the optimal results of the corresponding nonlinear least squares (l_2 norm) estimation can provide fairly good initial estimates.
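As one concrete, purely illustrative sketch of attacking the problem with the general nonlinear optimization subroutines, the following fragment minimizes the l_1 objective directly with the NLPNMS (Nelder-Mead simplex) subroutine, which is intended for nonsmooth objectives. The module name L1Obj and the starting point x0 are made up for this example, and the data are those of the example later in this section:

    /* hypothetical sketch: minimize the L1 objective directly with NLPNMS */
    a = { 0,  1, -1, -1,  2,  2 };
    a = j(nrow(a), 1, 1) || a;          /* design matrix with intercept column */
    b = { 1,  2,  1, -1,  2,  4 };

    start L1Obj(x) global(a, b);        /* L1 objective: sum of |Ax - b|       */
       return( sum(abs(a * t(x) - b)) );
    finish;

    x0 = {1 1};                         /* illustrative starting point         */
    call nlpnms(rc2, xres, "L1Obj", x0);
    print xres;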

Gonin and Money (1989, pp. 44-45) show that the nonlinear l_1 norm estimation problem
\min_{x} \sum_{i=1}^m | f_i(x) |
can be reformulated as a linear optimization problem with nonlinear constraints in the following ways. For linear functions f_i(x) = \sum_{j=1}^n a_{ij} x_j - b_i, \; i=1,\ldots,m, you obtain linearly constrained linear optimization problems, for which the number of variables and constraints is on the order of the number of observations, m (one such reformulation is sketched after this paragraph). The advantage that the algorithm by Madsen and Nielsen (1993) has over the Barrodale and Roberts (1974) algorithm is that its computational cost increases only linearly with m, and it can be faster for large values of m.
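One standard reformulation of this kind (a sketch; not necessarily the exact form given by Gonin and Money) introduces one auxiliary variable e_i per observation:
\min_{x,\, e} \; \sum_{i=1}^m e_i \quad \text{subject to} \quad -e_i \;\le\; \sum_{j=1}^n a_{ij} x_j - b_i \;\le\; e_i, \qquad i=1,\ldots,m .
At the optimum each e_i equals | \sum_{j} a_{ij} x_j - b_i |, so the objective equals l_1(x); the number of variables (n + m) and of constraints (2m) grows linearly with the number of observations, as noted above.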

In addition to computing an optimal solution x^* that minimizes l_1(x), you can also compute approximate standard errors and the approximate covariance matrix of x^*. The standard errors can be used to compute confidence limits.
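For example, assuming opt[3] has been set so that xr contains the estimates in its first row and the ASEs in its second row, approximate 95% confidence limits could be formed as in this sketch (variable names are illustrative):

    /* hypothetical sketch: approximate 95% confidence limits from LAV output */
    est   = xr[1, ];
    ase   = xr[2, ];
    z     = probit(0.975);       /* 97.5th percentile of the standard normal  */
    lower = est - z # ase;       /* elementwise                               */
    upper = est + z # ase;
    print lower est upper;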

The following example is the same one used for illustrating the LAV procedure by Lee and Gentle (1986). A and b are as follows:
A = \begin{bmatrix} 1 & 0 \\ 1 & 1 \\ 1 & -1 \\ 1 & -1 \\ 1 & 2 \\ 1 & 2 \end{bmatrix} \qquad b = \begin{bmatrix} 1 \\ 2 \\ 1 \\ -1 \\ 2 \\ 4 \end{bmatrix}
The following code specifies the matrix A, the vector B, and the options vector OPT. The options vector specifies that all output is printed (opt[2]=5), that the asymptotic standard errors and covariance matrix are computed based on the McKean-Schrader (1987) estimate \lambda of the variance of the median (opt[3]=0), and that the convergence test should be performed (opt[4]=1).
  
    a = { 0,  1, -1, -1,  2,  2 };    /* regressor values                 */
    m = nrow(a);
    a = j(m,1,1.) || a;               /* prepend a column of ones         */
    b = { 1,  2,  1, -1,  2,  4 };    /* response vector                  */

    opt = { . 5  0 1 };               /* default maxiter, all output,     */
                                      /* McKean-Schrader variance,        */
                                      /* optimality test                  */
    call lav(rc, xr, a, b, , opt);    /* x0 is omitted                    */
 
The first part of the printed output refers to the least squares solution, which is used as the starting point. The estimates of the largest and smallest nonzero eigenvalues of A'A give only an idea of the magnitude of these values, and they can be very crude approximations.
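If you want the exact values rather than these crude approximations, you could compute the eigenvalues of A'A yourself; a small sketch, reusing the matrix a defined in the example code above:

    /* exact eigenvalues of A'A, for comparison with the printed estimates */
    evals = eigval(t(a)*a);
    print evals;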

The second part of the printed output shows the iteration history.

The third part of the printed output shows the l_1 norm solution (first row) together with asymptotic standard errors (second row) and the asymptotic covariance matrix of parameter estimates (the ASEs are the square roots of the diagonal elements of this covariance matrix).

The last part of the printed output shows the predicted values and residuals, as in Lee and Gentle (1986).
