
The NLPC Nonlinear Optimization Solver

Optimization Techniques and Types of Problems Solved

The algorithms in the NLPC solver take advantage of problem characteristics and automatically select an appropriate variant of an algorithm for each problem. Each of the optimization techniques implemented in the NLPC solver can handle unconstrained, bound constrained, linearly constrained, and nonlinearly constrained problems, without your having to request a particular variant of the algorithm explicitly. The NLPC solver is also designed for backward compatibility with PROC NLP, enabling you to migrate from PROC NLP to the more versatile PROC OPTMODEL modeling language; see Chapter 8, The OPTMODEL Procedure, for details. Through this new interface you can access several of the optimization techniques available in PROC NLP, or modified versions of them.


The NLPC solver implements the following optimization techniques (a usage sketch follows the list):

  • conjugate gradient method

  • Newton-type method with line search

  • trust region method

  • quasi-Newton method (experimental)
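
For example, you can request a particular technique through the SOLVE WITH NLPC statement in PROC OPTMODEL. The following minimal sketch minimizes the Rosenbrock function, assuming that the solver's TECH= option accepts the value TRUREG for the trust region method:

   proc optmodel;
      /* Rosenbrock function: a standard unconstrained test problem */
      var x {1..2} init 0;
      min f = 100*(x[2] - x[1]^2)^2 + (1 - x[1])^2;

      /* request the trust region technique of the NLPC solver */
      solve with nlpc / tech=trureg;

      print x;
   quit;

If you omit the TECH= option, the solver falls back on its default technique.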

These techniques assume that the objective and constraint functions are twice continuously differentiable. The derivatives of the objective and constraint functions, which the PROC OPTMODEL modeling language supplies to the solver, are computed by one of the following two methods:

  • automatic differentiation

  • finite-difference approximation (illustrated below)
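
As an illustration of the second method, a forward-difference scheme approximates each partial derivative of a function $f$ at the point $x$ from two function evaluations:

$$\frac{\partial f(x)}{\partial x_i} \approx \frac{f(x + h e_i) - f(x)}{h}$$

where $e_i$ is the $i$th coordinate vector and $h > 0$ is a small step size. Automatic differentiation instead applies the chain rule to the model expressions, producing derivatives that are exact up to rounding error.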

For details about automatic differentiation and finite-difference approximation, see the section Automatic Differentiation.
