In nonlinear optimization, you try to minimize or maximize an objective function that can be subject to a set of constraints. The objective function is typically nonlinear in terms of the decision variables. If the problem is constrained, it can be subject to bound, linear, or nonlinear constraints. In general, you can classify nonlinear optimization (minimization or maximization) problems into the following four categories:
unconstrained
bound-constrained
linearly constrained
nonlinearly constrained
These categories correspond to the following general form:
   minimize (or maximize)  f(x)
   subject to   c_i(x) {<= | = | >=} b_i,   i = 1, ..., m
                l <= x <= u
where
f(x) is the nonlinear objective function
c_i(x), i = 1, ..., m, are the functions of general nonlinear equality and inequality constraints
b_i, i = 1, ..., m, are the constant terms of the constraints, also referred to as the right-hand side (RHS)
l and u are lower and upper bounds on the decision variable x = (x_1, ..., x_n)
If the constraint functions c_i(x) are not present, you have a bound-constrained problem. If it is also true that l_j = -infinity and u_j = infinity for all j = 1, ..., n, you have an unconstrained problem, in which x can take values in the entire space R^n.
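This classification can be sketched as a small decision rule. The following Python function is a hypothetical illustration of the logic just described, not part of the NLPC solver; the function name and arguments are invented for the example.

```python
import math

def classify(has_nonlinear_cons, has_linear_cons, lower, upper):
    """Classify min f(x) s.t. c(x), l <= x <= u into one of the
    four categories described above (hypothetical helper)."""
    if has_nonlinear_cons:
        return "nonlinearly constrained"
    if has_linear_cons:
        return "linearly constrained"
    # Bounds matter only if some l_j > -infinity or u_j < +infinity.
    if any(l > -math.inf for l in lower) or any(u < math.inf for u in upper):
        return "bound-constrained"
    # No constraints and no finite bounds: x ranges over all of R^n.
    return "unconstrained"
```

For example, a problem with no constraint functions but a finite lower bound on one variable is bound-constrained, while one with all bounds infinite is unconstrained.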
These different problem classes typically call for different types of algorithms to solve them. The algorithms that are devised specifically to solve a particular class of problem might not be suitable for solving problems in a different class. For example, there are algorithms that specifically solve unconstrained and bound-constrained problems. For linearly constrained problems, the fact that the Jacobian of the constraints is constant enables you to design algorithms that are more efficient for that class.
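The point about linearly constrained problems can be verified numerically: for a linear constraint c(x) = Ax - b, the Jacobian is the constant matrix A, the same at every point x. The following Python snippet is an illustrative sketch (the matrix A, vector b, and helper function are made up for the example) that recovers A by finite differences at two different points.

```python
def jacobian_fd(c, x, h=1e-6):
    """Forward-difference Jacobian of c: R^n -> R^m at x (illustrative)."""
    m = len(c(x))
    J = []
    for i in range(m):
        row = []
        for j in range(len(x)):
            xp = list(x)
            xp[j] += h
            row.append((c(xp)[i] - c(x)[i]) / h)
        J.append(row)
    return J

# Linear constraints c(x) = A x - b (example data).
A = [[1.0, 2.0], [3.0, -1.0]]
b = [1.0, 0.0]
c = lambda x: [sum(A[i][j] * x[j] for j in range(2)) - b[i] for i in range(2)]

# The Jacobian is A regardless of where it is evaluated.
J0 = jacobian_fd(c, [0.0, 0.0])
J1 = jacobian_fd(c, [5.0, -7.0])
```

Because the Jacobian never changes, an algorithm for this class can factor or project it once and reuse that work across iterations.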
The algorithms in the NLPC solver take advantage of the problem characteristics and automatically select an appropriate variant of an algorithm for a problem. Each of the optimization techniques implemented in the NLPC solver can handle unconstrained, bound-constrained, linearly constrained, and nonlinearly constrained problems without your explicitly requesting which variant of the algorithm should be used. The NLPC solver is also designed for backward compatibility with PROC NLP, enabling you to migrate from PROC NLP to the more versatile PROC OPTMODEL modeling language.
The NLPC solver provides the following solution techniques:
trust region method
Newton-Raphson method with line search
conjugate gradient method
quasi-Newton method (experimental)
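To make the second technique concrete, here is a textbook sketch of a Newton step combined with a backtracking (Armijo) line search on a one-dimensional test function. This is an assumed illustration of the general method, not the NLPC implementation; the function and its derivatives are chosen for the example.

```python
def newton_line_search(f, df, d2f, x, tol=1e-10, max_iter=50):
    """Minimize f by Newton steps safeguarded with Armijo backtracking."""
    for _ in range(max_iter):
        g = df(x)
        if abs(g) < tol:          # stationary point reached
            break
        h = d2f(x)
        # Use the Newton direction when the curvature is positive;
        # otherwise fall back to steepest descent.
        step = -g / h if h > 0 else -g
        t = 1.0
        # Backtrack until the Armijo sufficient-decrease condition holds.
        while f(x + t * step) > f(x) + 1e-4 * t * g * step:
            t *= 0.5
        x += t * step
    return x

# Example: minimize f(x) = (x - 2)^4 + (x - 2)^2, whose minimizer is x* = 2.
f = lambda x: (x - 2) ** 4 + (x - 2) ** 2
df = lambda x: 4 * (x - 2) ** 3 + 2 * (x - 2)
d2f = lambda x: 12 * (x - 2) ** 2 + 2
x_star = newton_line_search(f, df, d2f, x=10.0)
```

The line search guards the full Newton step: far from the solution the step is shortened until it produces sufficient decrease, while near the solution the full step is accepted and convergence is rapid.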
These techniques assume that the objective and constraint functions are twice continuously differentiable. The derivatives of the objective and constraint functions, which you provide to the solver by using the PROC OPTMODEL modeling language, are computed by one of the following two methods: