Overview

The IML procedure offers a set of optimization subroutines for minimizing or maximizing a continuous nonlinear function $f = f(x)$ of $n$ parameters, where $x = (x_1, \ldots, x_n)^T$. The parameters can be subject to boundary constraints and linear or nonlinear equality and inequality constraints. The following set of optimization subroutines is available:

  • NLPCG: Conjugate Gradient Method

  • NLPDD: Double Dogleg Method

  • NLPNMS: Nelder-Mead Simplex Method

  • NLPNRA: Newton-Raphson Method

  • NLPNRR: Newton-Raphson Ridge Method

  • NLPQN: (Dual) Quasi-Newton Method

  • NLPQUA: Quadratic Optimization Method

  • NLPTR: Trust-Region Method

The following subroutines are provided for solving nonlinear least squares problems:

  • NLPLM: Levenberg-Marquardt Least Squares Method

  • NLPHQN: Hybrid Quasi-Newton Least Squares Methods

A least squares problem is a special form of minimization problem where the objective function is defined as a sum of squares of other (nonlinear) functions.

\[  f(x) = \frac{1}{2} \{ f_1^2(x) + \cdots + f_m^2(x) \}  \]

Least squares problems can usually be solved more efficiently by the least squares subroutines than by the other optimization subroutines.
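To use the least squares subroutines, the objective module returns the vector of the $m$ functions $f_1(x), \ldots, f_m(x)$ rather than the scalar sum of squares, and the first element of the options vector specifies $m$. A sketch with the Rosenbrock problem written as two residual functions (the module name and values are illustrative):

```
proc iml;
/* return the two residual functions as a vector */
start F_RES(x);
   y = j(1, 2, 0);
   y[1] = 10 * (x[2] - x[1] * x[1]);
   y[2] = 1 - x[1];
   return(y);
finish F_RES;

x0  = {-1.2 1};
opt = {2 2};        /* opt[1]=2: m=2 residual functions; opt[2]=2: print level */
call nlplm(rc, xres, "F_RES", x0, opt);
print xres;
quit;
```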

The following subroutines are provided for the related problems of computing finite difference approximations for first- and second-order derivatives and of determining a feasible point subject to boundary and linear constraints:

  • NLPFDD: Approximate Derivatives by Finite Differences

  • NLPFEA: Feasible Point Subject to Constraints

Each optimization subroutine works iteratively. If the parameters are subject only to linear constraints, all optimization and least squares techniques are feasible-point methods; that is, they move from feasible point $x^{(k)}$ to a better feasible point $x^{(k+1)}$ by a step in the search direction $s^{(k)}$, $k=0,1,2,\ldots$. If you do not provide a feasible starting point $x^{(0)}$, the optimization methods call the algorithm used in the NLPFEA subroutine, which tries to compute a starting point that is feasible with respect to the boundary and linear constraints.
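The NLPFEA subroutine can also be called directly. As a sketch, suppose the starting point violates simple boundary constraints (the bounds and starting values here are invented for illustration):

```
proc iml;
x0  = {-1.2 3};     /* infeasible with respect to the bounds below */
blc = { 0 0,        /* lower bounds: x1 >= 0, x2 >= 0 */
        2 2 };      /* upper bounds: x1 <= 2, x2 <= 2 */
call nlpfea(xr, x0, blc);
print xr;           /* a point satisfying the bounds */
quit;
```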

The NLPNMS and NLPQN subroutines permit nonlinear constraints on parameters. For problems with nonlinear constraints, these subroutines do not use a feasible-point method; instead, the algorithms begin with whatever starting point you specify, whether feasible or infeasible.

Each optimization technique requires a continuous objective function $f = f(x)$, and all optimization subroutines except the NLPNMS subroutine require continuous first-order derivatives of the objective function $f$. If you do not provide the derivatives of $f$, they are approximated by finite-difference formulas. You can use the NLPFDD subroutine to check the correctness of analytical derivative specifications.
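One way to check analytical derivatives is to compare them against the finite-difference approximations returned by the NLPFDD subroutine, which evaluates the function, gradient, and Hessian at a given point. A sketch (the module name and evaluation point are illustrative):

```
proc iml;
start F_ROSEN(x);
   y1 = 10 * (x[2] - x[1] * x[1]);
   y2 = 1 - x[1];
   return(0.5 * (y1 * y1 + y2 * y2));
finish F_ROSEN;

x = {-1.2 1};
call nlpfdd(f, g, h, "F_ROSEN", x);
print f, g, h;      /* function value, gradient, and Hessian at x */
quit;
```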

Most of the results obtained from the IML procedure optimization and least squares subroutines can also be obtained by using the OPTMODEL procedure or the NLP procedure in SAS/OR software.

The advantages of the IML procedure are as follows:

  • You can use matrix algebra to specify the objective function, nonlinear constraints, and their derivatives in IML modules.

  • The IML procedure offers several subroutines that can be used to specify the objective function or nonlinear constraints, many of which would be very difficult to write for the NLP procedure.

  • You can formulate your own termination criteria by using the "ptit" module argument.

The advantages of the NLP procedure are as follows:

  • Although the two procedures use identical optimization algorithms, the NLP procedure can be much faster because the IML procedure is interactive and more general in nature.

  • Analytic first- and second-order derivatives can be computed with a special compiler.

  • Additional optimization methods are available in the NLP procedure that do not fit into the framework of this package.

  • Data set processing is much easier than in the IML procedure. You can save results in output data sets and use them in subsequent runs.

  • The printed output contains more information.