The NLP Procedure


The legacy NLP procedure offers a set of optimization techniques for minimizing or maximizing a continuous nonlinear function f(x) of n decision variables, subject to boundary, general linear, and nonlinear equality and inequality constraints. The NLP procedure supports a number of algorithms for solving this problem that take advantage of special structure in the objective function f(x) and the constraint functions c_i(x). Two algorithms are designed especially for quadratic optimization problems, and two others are provided for the efficient solution of nonlinear least squares problems.
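For orientation, the following is a minimal sketch of a PROC NLP call that poses the classic Rosenbrock problem as a nonlinear least squares problem; the residual definitions and starting values are illustrative choices, not prescriptions. The LSQ statement minimizes half the sum of squares of the listed residual functions, and TECH=LEVMAR selects the Levenberg-Marquardt method, one of the techniques specialized for least squares:

   proc nlp tech=levmar;
      lsq y1 y2;                    /* minimize .5*(y1**2 + y2**2)        */
      parms x1 = -1.2,              /* decision variables with            */
            x2 = 1;                 /* illustrative starting values       */
      y1 = 10 * (x2 - x1 * x1);     /* Rosenbrock residuals               */
      y2 = 1 - x1;
   run;

Replacing the LSQ statement with MIN f and computing f = .5 * (y1**2 + y2**2) in the program statements would solve the same problem with the general-purpose techniques instead.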

Optimizers

All optimization techniques in PROC NLP work iteratively. Only the quasi-Newton methods and the Nelder-Mead simplex method support nonlinear constraints on the decision variables.
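As a sketch of a nonlinearly constrained problem (the objective, constraint, and starting values here are made up for the example), the NLINCON statement declares bounds on a constraint function that is computed in the program statements, and TECH=QUANEW selects a quasi-Newton method that can handle it:

   proc nlp tech=quanew;
      min f;
      parms x1 = .5, x2 = .5;
      nlincon c1 >= 0;              /* nonlinear inequality constraint    */
      f  = (x1 - 2)**2 + (x2 - 1)**2;
      c1 = x1 * x2 - 0.25;          /* feasible region: x1*x2 >= 0.25     */
   run;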

When the decision variables are not subject to nonlinear constraints, the optimization and least squares techniques are feasible-point methods: they move iteratively from a feasible point x^(k) to a better feasible point x^(k+1) by taking a step along the search direction s^(k), k = 1, 2, 3, .... If a feasible starting point x^(0) is not provided, each of the optimization methods begins by calling an algorithm that attempts to compute a starting point that is feasible with respect to the boundary and linear constraints.
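The sketch below shows boundary and general linear constraints declared with the BOUNDS and LINCON statements (again with an illustrative objective and constraint set). The starting point (1, 1) happens to satisfy all of the constraints; if it did not, the procedure would first compute a point that is feasible for these boundary and linear constraints, as described above:

   proc nlp tech=trureg;
      min f;
      parms x1 = 1, x2 = 1;
      bounds 0 <= x1 <= 10,         /* boundary constraints               */
             x2 >= 0;
      lincon x1 + 2 * x2 <= 4;      /* general linear constraint          */
      f = (x1 - 3)**2 + x1 * x2;
   run;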

All of the NLP optimization techniques require a continuous objective function f, and all except the Nelder-Mead simplex method also require its continuous first-order derivatives. Some of the techniques additionally require continuous second-order derivatives. There are three ways to compute the derivatives in PROC NLP: