LAV Call
performs linear least absolute value regression by
solving the L1 norm minimization problem
- CALL LAV(rc, xr, a, b <, x0 <, opt>> );
The LAV subroutine returns the following values:
- rc
- is a scalar return code indicating the
reason for optimization termination.
rc | Termination |
0 | Successful |
1 | Successful, but approximate covariance matrix and
standard errors cannot be computed |
-1 or -3 | Unsuccessful: error in the input arguments |
-2 | Unsuccessful: matrix A is rank deficient (rank(A) < m) |
-4 | Unsuccessful: maximum iteration limit exceeded |
-5 | Unsuccessful: no solution found for ill-conditioned problem |
- xr
- specifies a vector or matrix with m columns.
If the optimization process is not successfully completed,
xr is a row vector with m missing values.
If termination is successful and the opt[3] option is not
set, xr is the vector with the optimal estimate, x.
If termination is successful and the opt[3] option is
specified, xr is a (2+m) × m matrix that contains
the optimal estimate, x, in the first row, the asymptotic
standard errors in the second row, and the m × m
covariance matrix of parameter estimates in the remaining rows.
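For example, assuming a prior successful call in which the opt[3] option was set (see the description of opt below), the rows of xr can be unpacked as follows; the names est, ase, and cov are arbitrary:
m   = ncol(xr);          /* number of parameters */
est = xr[1, ];           /* optimal L1 estimate (first row) */
ase = xr[2, ];           /* asymptotic standard errors (second row) */
cov = xr[3:(2+m), ];     /* m x m covariance matrix (remaining rows) */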
The inputs to the LAV subroutine are as follows:
- a
- specifies an n × m matrix A with n ≥ m
and full column rank, rank(A) = m.
If you want to include an intercept in the model,
you must include a column of ones in the matrix A.
- b
- specifies the n vector b.
- x0
- specifies an optional m vector that
specifies the starting point of the optimization.
- opt
- is an optional vector used to specify options.
opt[1] specifies the maximum number maxi
of outer iterations (this corresponds to the number
of changes of the Huber parameter γ).
The default is .
(The number of inner iterations is
restricted by an internal threshold.
If the number of inner iterations exceeds this threshold, a new
outer iteration is started with an increased value of γ.)
opt[2] specifies the amount of printed output.
Higher values request additional output
and include the output of lower values.
opt[2] | Output |
0 | no output is printed |
1 | error and warning messages are printed |
2 | the iteration history is printed (this is the default) |
3 | the least squares (L2 norm) estimates are printed
if no starting point is specified; the L1 norm estimates
are printed; if opt[3] is set, the estimates are
printed together with the asymptotic standard errors |
4 | the approximate covariance matrix of
parameter estimates is printed if opt[3] is set |
5 | the residual and predicted values for all
n rows (equations) of A are printed |
opt[3] specifies which estimate of the variance of
the median of nonzero residuals is to be used as a factor
for the approximate covariance matrix of parameter
estimates and for the approximate standard errors (ASE).
If opt[3]=0, the McKean-Schrader (1987) estimate
is used, and if opt[3]>0, the Cox-Hinkley
(1974) estimate, with parameter value opt[3], is used.
The default is a missing or negative value of opt[3],
which means that the covariance matrix is not computed.
opt[4] specifies whether a computationally
expensive test for necessary and sufficient
optimality of the solution is executed.
The default is opt[4]=0 or a missing value of opt[4],
which means that the convergence test is not performed.
Missing values are not permitted in the
a or b argument.
The x0 argument is ignored if it contains any missing values.
Missing values in the
opt argument
cause the default value to be used.
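For example, the following statements (with a small hypothetical design matrix and response vector) first accept all defaults by omitting x0 and opt, and then pass an opt vector whose missing entries fall back to the defaults while opt[3]=0 requests the McKean-Schrader covariance matrix:
a = {1 0, 1 1, 1 2, 1 3};        /* hypothetical 4 x 2 design with intercept column */
b = {1, 3, 4, 8};                /* hypothetical response vector */
call lav(rc, xr, a, b);          /* all defaults: xr is the 1 x 2 estimate */

opt = {. . 0 .};                 /* opt[3]=0: McKean-Schrader; other entries default */
call lav(rc, xr, a, b, , opt);   /* xr is now a (2+2) x 2 matrix */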
The Least Absolute Values (LAV) subroutine is designed for
solving the unconstrained linear L1 norm minimization problem,
minimize L1(x) = ||b - Ax||_1 = sum_{i=1,...,n} | b_i - sum_{j=1,...,m} a_ij x_j |
for n equations with m (unknown) parameters x = (x_1, ..., x_m).
This is equivalent to estimating the unknown parameter vector,
x, by least absolute value regression in the model
b = Ax + ε
where b is the vector of n observations, A is
the n × m design matrix, and ε is a random error term.
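As a concrete illustration of this objective function, the following statements evaluate the L1 criterion for one candidate parameter vector (the data shown are hypothetical):
a  = {1 0, 1 1, 1 2};        /* hypothetical design matrix A */
b  = {1, 2, 4};              /* hypothetical observation vector b */
x  = {1, 1};                 /* candidate parameter vector */
L1 = sum(abs(b - a*x));      /* sum over i of |b_i - a_i'x| */
print L1;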
An algorithm by Madsen and Nielsen (1993) is used,
which can be faster for large values of m and n
than the Barrodale and Roberts (1974) algorithm.
The current version of the algorithm
assumes that A has full column rank.
Also, constraints cannot be imposed
on the parameters in this version.
The L1 norm minimization problem is more difficult to
solve than the least squares (L2 norm) minimization problem
because the objective function of the L1 norm problem is not
continuously differentiable (the first derivative has jumps).
A function that is continuous but not continuously
differentiable is called nonsmooth.
Using PROC NLP and the IML nonlinear optimization subroutines,
you can obtain the estimates in linear and nonlinear L1 norm
estimation (even subject to linear or nonlinear constraints)
as long as the number of parameters, m, is small.
With the nonlinear optimization subroutines, there are two
ways to solve the nonlinear L1 norm problem,
minimize L1(x) = sum_{i=1,...,n} |f_i(x)|:
- For small values of m, you can implement the Nelder-Mead
simplex algorithm with the NLPNMS subroutine to solve
the minimization problem in its original specification.
The Nelder-Mead simplex algorithm does not assume a
smooth objective function, does not take advantage
of any derivatives, and therefore does not require
continuous differentiability of the objective function.
See the section "NLPNMS Call" for details.
(A small sketch of this approach appears below.)
- Gonin and Money (1989) describe how an original L1
norm estimation problem can be modified to an equivalent
optimization problem with nonlinear constraints that
has a simple differentiable objective function.
You can invoke the NLPQN subroutine, which implements
a quasi-Newton algorithm, to solve the nonlinearly
constrained L1 norm optimization problem.
See the section "NLPQN Call" for details about the NLPQN subroutine.
Both approaches are successful only for a small
number of parameters and good initial estimates.
If you cannot supply good initial estimates, the optimal
results of the corresponding nonlinear least squares (L2
norm) estimation can provide fairly good initial estimates.
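The following statements sketch the first (Nelder-Mead) approach for a small nonlinear model; the module name F_L1, the model y = x1*exp(x2*t), and the data are hypothetical and serve only to illustrate how a nonsmooth L1 objective can be passed to the NLPNMS subroutine:
start f_l1(x) global(t, y);
   /* nonsmooth L1 objective: sum of absolute residuals */
   return( sum(abs(y - x[1]*exp(x[2]*t))) );
finish;

t    = {1, 2, 3, 4, 5};               /* hypothetical regressor */
y    = {2.2, 4.1, 8.3, 16.4, 31.9};   /* hypothetical response */
x0   = {1 0.5};                       /* starting point */
optn = {0 2};                         /* minimize; print the iteration history */
call nlpnms(rc, xres, "f_l1", x0, optn);
print xres;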
Gonin and Money (1989, pp. 44-45) show that
the nonlinear L1 norm estimation problem
can be reformulated as a linear optimization problem
with nonlinear constraints in the following ways.
- minimize sum_{i=1,...,n} u_i subject to
f_i(x) <= u_i and -f_i(x) <= u_i, i=1,...,n,
is a linear optimization problem with nonlinear
inequality constraints in the variables x and u.
- minimize sum_{i=1,...,n} (y_i + z_i) subject to
f_i(x) + z_i - y_i = 0, y_i >= 0, z_i >= 0, i=1,...,n,
is a linear optimization problem with
nonlinear equality constraints in the
variables x, y, and z.
For linear functions
f_i(x) = b_i - sum_{j=1,...,m} a_ij x_j,
i=1,...,n, you obtain linearly constrained linear
optimization problems, for which the number of variables and
constraints is on the order of the number of observations,
n.
The advantage that the algorithm by Madsen and Nielsen
(1993) has over the Barrodale and Roberts (1974) algorithm
is that its computational cost increases only linearly
with n, and it can be faster for large values of n.
In addition to computing an optimal solution x that
minimizes L1(x), you can also compute approximate standard
errors and the approximate covariance matrix of x.
The standard errors can be used to compute confidence limits.
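For example, if the covariance option opt[3] was used so that xr contains the estimates in its first row and the ASEs in its second row, approximate normal-theory 95% confidence limits can be formed as in the following sketch (not part of the subroutine output):
est   = xr[1, ];                      /* optimal L1 estimates */
ase   = xr[2, ];                      /* asymptotic standard errors */
z     = quantile("Normal", 0.975);    /* two-sided 95% normal quantile */
lower = est - z*ase;
upper = est + z*ase;
print lower, est, upper;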
The following example is the same one used for
illustrating the LAV procedure by Lee and Gentle (1986).
The following statements specify the matrix A,
the vector B, and the options vector OPT.
The options vector specifies that all output is printed
(opt[2]=5), that the asymptotic standard
errors and covariance matrix are computed based on
the McKean-Schrader (1987) estimate of the
variance of the median (opt[3]=0), and that the
convergence test should be performed (opt[4]=1).
a = { 0, 1, -1, -1, 2, 2 };     /* explanatory variable */
m = nrow(a);
a = j(m,1,1.) || a;             /* prepend an intercept column of ones */
b = { 1, 2, 1, -1, 2, 4 };      /* observation vector */
opt= { . 5 0 1 };               /* print all output, McKean-Schrader ASE, convergence test */
call lav(rc,xr,a,b,,opt);
The first part of the printed output refers to the
least squares solution, which is used as the starting point.
The estimates of the largest and smallest nonzero eigenvalues
of the crossproduct matrix A'A give only an idea of the magnitude
of these values, and they can be very crude approximations.
The second part of the printed output shows the iteration history.
The third part of the printed output shows the L1 norm
solution (first row) together with asymptotic standard
errors (second row) and the asymptotic covariance matrix
of parameter estimates (the ASEs are the square roots
of the diagonal elements of this covariance matrix).
The last part of the printed output shows the predicted
values and residuals, as in Lee and Gentle (1986).
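Continuing the example, the statements below verify the relationship mentioned above: the ASEs in the second row of xr equal the square roots of the diagonal elements of the covariance block. The VECDIAG function extracts that diagonal:
cov = xr[3:nrow(xr), ];          /* covariance block (rows 3 through 2+m) */
ase = sqrt(vecdiag(cov));        /* should reproduce the ASE row xr[2, ] */
print ase;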