Language Reference

NLPCG Call

nonlinear optimization by the conjugate gradient method

CALL NLPCG( rc, xr, "fun", x0 <,opt, blc, tc, par, "ptit", "grd">);

See the section "Nonlinear Optimization and Related Subroutines" for a listing of all NLP subroutines. See Chapter 11 for a description of the inputs to and outputs of all NLP subroutines.

The NLPCG subroutine requires function and gradient calls; it does not need second-order derivatives. The gradient vector contains the first derivatives of the objective function f with respect to the parameters x_1, ... ,x_n, as follows:
g(x) = \nabla f(x) = \left( \frac{\partial f}{\partial x_j} \right)_{j=1,\ldots,n}
If you do not specify an IML module with the "grd" argument, the first-order derivatives are approximated by finite difference formulas using only function calls. The NLPCG algorithm can require many function and gradient calls, but it requires less memory than other subroutines for unconstrained optimization. In general, many iterations are needed to obtain a precise solution, but each iteration is computationally inexpensive. You can specify one of four update formulas for generating the conjugate directions with the fourth element of the opt input argument.

Value of opt[4]   Update Method
---------------   -------------------------------------------------------------
1                 Automatic restart method of Powell (1977) and Beale (1972).
                  This is the default.
2                 Fletcher-Reeves update (Fletcher 1987)
3                 Polak-Ribiere update (Fletcher 1987)
4                 Conjugate-descent update of Fletcher (1987)
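For reference, the Fletcher-Reeves and Polak-Ribiere updates have the following standard forms (these formulas are supplied here as background; they do not appear in this section). With search direction d_k and gradient g_k = g(x_k), each new direction is

\[
d_{k+1} = -g_{k+1} + \beta_k d_k, \qquad
\beta_k^{FR} = \frac{g_{k+1}^T g_{k+1}}{g_k^T g_k}, \qquad
\beta_k^{PR} = \frac{g_{k+1}^T \left( g_{k+1} - g_k \right)}{g_k^T g_k}
\]

The update formulas differ only in how the scalar \beta_k is computed from successive gradients.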

The NLPCG subroutine is useful for optimization problems with large n. For the unconstrained or boundary-constrained case, the NLPCG method needs only order n bytes of working memory, whereas the other optimization methods require order n^2 bytes of working memory. During n successive iterations, uninterrupted by restarts or changes in the working set, the conjugate gradient algorithm computes a cycle of n conjugate search directions. In each iteration, a line search is performed along the search direction to find an approximate optimum of the objective function. The default line-search method uses quadratic interpolation and cubic extrapolation to obtain a step size \alpha that satisfies the Goldstein conditions. One of the Goldstein conditions can be violated if the feasible region defines an upper limit for the step size. You can specify other line-search algorithms with the fifth element of the opt argument.
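A minimal sketch of a call to the NLPCG subroutine follows, based on the signature shown above. The F_ROSEN objective, its gradient module G_ROSEN, the starting point, and the option settings are illustrative assumptions, not part of this section; if the grd module were omitted, the gradient would be approximated by finite differences as described above.

proc iml;
/* Illustrative objective (a Rosenbrock-type function) */
start F_ROSEN(x);
   y1 = 10 * (x[2] - x[1]##2);
   y2 = 1 - x[1];
   return( 0.5 * (y1##2 + y2##2) );
finish F_ROSEN;

/* Optional analytic gradient; avoids finite-difference approximation */
start G_ROSEN(x);
   g = j(1, 2, 0);
   g[1] = -200*x[1]*(x[2] - x[1]##2) - (1 - x[1]);
   g[2] =  100*(x[2] - x[1]##2);
   return( g );
finish G_ROSEN;

x0  = {-1.2 1};     /* starting point */
opt = {0 2 . 3};    /* opt[1]=0 minimize; opt[4]=3 requests the Polak-Ribiere update */
call nlpcg(rc, xr, "F_ROSEN", x0, opt) grd="G_ROSEN";
print rc xr;

On return, rc indicates the termination status and xr contains the approximate optimum.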

For an example of the NLPCG subroutine, see the section "Constrained Betts Function".
