Nonlinear Optimization and Related Subroutines
The following list shows the syntax for nonlinear optimization subroutines. Subsequent sections describe each subroutine in detail.
conjugate gradient optimization method:
   CALL NLPCG(rc, xr, "fun", x0 <,opt, blc, tc, par, "ptit", "grd">);
double-dogleg optimization method:
   CALL NLPDD(rc, xr, "fun", x0 <,opt, blc, tc, par, "ptit", "grd">);
Nelder-Mead simplex optimization method:
   CALL NLPNMS(rc, xr, "fun", x0 <,opt, blc, tc, par, "ptit", "nlc">);
Newton-Raphson optimization method:
   CALL NLPNRA(rc, xr, "fun", x0 <,opt, blc, tc, par, "ptit", "grd", "hes">);
Newton-Raphson ridge optimization method:
   CALL NLPNRR(rc, xr, "fun", x0 <,opt, blc, tc, par, "ptit", "grd", "hes">);
(dual) quasi-Newton optimization method:
   CALL NLPQN(rc, xr, "fun", x0 <,opt, blc, tc, par, "ptit", "grd", "nlc", "jacnlc">);
quadratic optimization method:
   CALL NLPQUA(rc, xr, quad, x0 <,opt, blc, tc, par, "ptit", lin>);
trust-region optimization method:
   CALL NLPTR(rc, xr, "fun", x0 <,opt, blc, tc, par, "ptit", "grd", "hes">);
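For example, the following statements sketch a minimal unconstrained call to the trust-region subroutine. The Rosenbrock test function and the module and matrix names are illustrative, not part of the syntax:

   proc iml;
   /* illustrative objective: the Rosenbrock function */
   start fun(x);
      return( 100*(x[2] - x[1]##2)##2 + (1 - x[1])##2 );
   finish fun;

   x0  = {-1.2 1};      /* starting point (n = 2 parameters)       */
   opt = {0 2};         /* minimize; print some iteration history  */
   call nlptr(rc, xr, "fun", x0, opt);
   print rc xr;         /* rc > 0 indicates successful termination */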
The following list shows the syntax for optimization subroutines that use least squares methods. Subsequent sections describe each subroutine in detail.
hybrid quasi-Newton least squares methods:
   CALL NLPHQN(rc, xr, "fun", x0 <,opt, blc, tc, par, "ptit", "jac">);
Levenberg-Marquardt least squares method:
   CALL NLPLM(rc, xr, "fun", x0 <,opt, blc, tc, par, "ptit", "jac">);
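For example, the following statements sketch a call to the Levenberg-Marquardt subroutine. The two residual functions are the least squares form of the Rosenbrock problem, and the module name is illustrative. For the least squares subroutines, the first element of the opt vector specifies m, the number of functions:

   proc iml;
   /* illustrative residuals: f1(x) = 10*(x2 - x1**2), f2(x) = 1 - x1 */
   start fun(x);
      f = j(2, 1, 0);              /* column vector of m = 2 functions */
      f[1] = 10*(x[2] - x[1]##2);
      f[2] = 1 - x[1];
      return(f);
   finish fun;

   x0  = {-1.2 1};
   opt = {2};                      /* opt[1] = m = 2 */
   call nlplm(rc, xr, "fun", x0, opt);
   print rc xr;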
The following list shows the syntax for supplementary subroutines that are often used in conjunction with optimization subroutines. Subsequent sections describe each subroutine in detail.
approximate derivatives by finite differences:
   CALL NLPFDD(f, g, h, "fun", x0 <,par, "grd">);
feasible point subject to constraints:
   CALL NLPFEA(xr, x0, blc <,par>);
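For example, the following statements sketch a call to the finite-difference subroutine, which returns the function value together with finite-difference approximations of the gradient and Hessian at a point (the module is illustrative):

   proc iml;
   start fun(x);
      return( 100*(x[2] - x[1]##2)##2 + (1 - x[1])##2 );
   finish fun;

   x0 = {-1.2 1};
   call nlpfdd(f, g, h, "fun", x0);   /* f: value, g: gradient, h: Hessian */
   print f, g, h;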
Note: The names of the optional arguments can be used as keywords. For example, the following statements are equivalent:
call nlpnrr(rc,xr,"fun",x0,,,ter,,,"grad");
call nlpnrr(rc,xr,"fun",x0) tc=ter grd="grad";
All the optimization subroutines require at least two input arguments:
The NLPQUA subroutine requires the quad matrix argument, which specifies the symmetric matrix of the quadratic problem. The input can be dense or sparse.
Other optimization subroutines require the "fun" argument, which specifies a module that defines the objective function or functions. For least squares subroutines, the FUN module must return a column vector of length m that contains the values of the m functions f1(x), ..., fm(x), each evaluated at the point x. For other subroutines, the FUN module must return the value f(x) of the objective function evaluated at the point x.
The argument x0 specifies a row vector that defines the number of parameters n. If x0 is a feasible point, it represents a starting point for the iterative optimization process. Otherwise, a linear programming algorithm is called at the start of each optimization subroutine to replace the input x0 by a feasible starting point.
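For example, the following statements sketch the two required input arguments for the quadratic subroutine; the matrix values are illustrative. The quad argument supplies the symmetric matrix, and the lin keyword (described in the next list) supplies the linear part:

   proc iml;
   quad = {2 1,
           1 2};            /* symmetric matrix of the quadratic form */
   lin  = {-4 -6};          /* linear part (illustrative)             */
   x0   = {0 0};            /* starting point, n = 2                  */
   call nlpqua(rc, xr, quad, x0) lin=lin;
   print rc xr;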
The other arguments that can be used as input are described in the following list. As indicated in the previous lists, not all input arguments apply to each subroutine.
Note that you can specify optional arguments with the keyword=argument syntax.
The following list describes each argument:
opt
indicates an options vector that specifies details of the optimization process, such as particular updating techniques and whether the objective function is to be maximized instead of minimized. See the section Options Vector for details. (A sketch that combines several of these arguments follows this list.)
blc
specifies a constraint matrix that defines lower and upper bounds for the parameters in addition to general linear equality and inequality constraints. For details, see the section Parameter Constraints.
tc
specifies a vector of thresholds that correspond to the termination criteria tested in each iteration. See the section Termination Criteria for details.
par
specifies a vector of control parameters that can be used to modify the algorithms if the default settings do not complete the optimization process successfully. For details, see the section Control Parameters Vector.
"ptit"
specifies a module that replaces the subroutine used to print the iteration history and test the termination criteria. If the "ptit" module is specified, the matrix specified by the tc argument has no effect. See the section Termination Criteria for details.
"grd"
specifies a module that computes the gradient vector, g(x), at a given input point x. See the section Objective Function and Derivatives for details.
"hes"
specifies a module that computes the Hessian matrix, H(x), at a given input point x. See the section Objective Function and Derivatives for details.
"jac"
specifies a module that computes the Jacobian matrix, J(x) = (∂fi/∂xj), of the least squares functions at a given input point x. See the section Objective Function and Derivatives for details.
"nlc"
specifies a module that computes general equality and inequality constraints. This is the method by which nonlinear constraints must be specified. For details, see the section Parameter Constraints.
"jacnlc"
specifies a module that computes the Jacobian matrix of first-order derivatives of the equality and inequality constraints specified by the "nlc" module. For details, see the section Parameter Constraints.
lin
specifies the linear part of the quadratic optimization problem. See the section NLPQUA Call for details.
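For example, the following statements sketch how several of these arguments can be combined. The bounds, the options, and the module names are illustrative; the gradient module is passed with the keyword syntax described earlier:

   proc iml;
   start fun(x);
      return( 100*(x[2] - x[1]##2)##2 + (1 - x[1])##2 );
   finish fun;

   start grad(x);                 /* analytic gradient of fun */
      g = j(1, 2, 0);
      g[1] = -400*x[1]*(x[2] - x[1]##2) - 2*(1 - x[1]);
      g[2] =  200*(x[2] - x[1]##2);
      return(g);
   finish grad;

   x0  = {0.5 0.5};               /* feasible starting point           */
   opt = {0 2};                   /* minimize; print iteration history */
   blc = {0 0,                    /* lower bounds: x1 >= 0, x2 >= 0    */
          2 2};                   /* upper bounds: x1 <= 2, x2 <= 2    */
   call nlpqn(rc, xr, "fun", x0, opt, blc) grd="grad";
   print rc xr;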
The modules that can be used as input arguments for the subroutines ("fun," "grd," "hes," "jac," "ptit," "nlc," and "jacnlc") accept only a single input parameter, x. You can provide more input parameters for these modules by using the GLOBAL clause. See the section Using the GLOBAL Clause for an example.
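For example, the following statements sketch the GLOBAL clause for a least squares problem; the data and module names are illustrative. The module receives only the parameter vector x and reads the data from the global symbol table:

   proc iml;
   t_data = {1, 2, 3};             /* illustrative data */
   y_data = {1.2, 1.9, 3.1};

   start fun(x) global(t_data, y_data);
      /* residuals of the linear model y = x[1] + x[2]*t */
      return( y_data - (x[1] + x[2]*t_data) );
   finish fun;

   x0  = {0 1};
   opt = {3};                      /* m = 3 least squares functions */
   call nlplm(rc, xr, "fun", x0, opt);
   print rc xr;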
All the optimization subroutines return the following results:
The scalar return code rc indicates the reason for the termination of the optimization process. A return code rc > 0 indicates a successful termination that corresponds to one of the specified termination criteria. A return code rc < 0 indicates unsuccessful termination; that is, the result xr is unreliable. See the section Definition of Return Codes for more details.
The row vector xr, which has length n, contains the optimal point when rc > 0.