Optimization Subroutines

   Conjugate Gradient Optimization Method
   Double Dogleg Optimization Method
   Nelder-Mead Simplex Optimization Method
   Newton-Raphson Optimization Method
   Newton-Raphson Ridge Optimization Method
   (Dual) Quasi-Newton Optimization Method
   Quadratic Optimization Method
   Trust-Region Optimization Method

Least Squares Subroutines

   Hybrid Quasi-Newton Least Squares Methods
   Levenberg-Marquardt Least Squares Method

Supplementary Subroutines

   Approximate Derivatives by Finite Differences
   Feasible Point Subject to Constraints
Note: The names of the optional arguments can be used as keywords. For example, the following statements are equivalent:
   call nlpnrr(rc,xr,"fun",x0,,,ter,,,"grad");
   call nlpnrr(rc,xr,"fun",x0) tc=ter grd="grad";
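For context, the following is a minimal sketch of a complete call, assuming the standard Rosenbrock test function as the objective. The module names "fun" and "grad" match the example above; the starting point and the opt vector shown here are illustrative choices, not values from this chapter:

   proc iml;
      /* Illustrative objective: the Rosenbrock function */
      start fun(x);
         y = 100*(x[2] - x[1]##2)##2 + (1 - x[1])##2;
         return (y);
      finish fun;

      /* Analytic gradient, returned as a row vector */
      start grad(x);
         g = j(1, 2, 0);
         g[1] = -400*x[1]*(x[2] - x[1]##2) - 2*(1 - x[1]);
         g[2] = 200*(x[2] - x[1]##2);
         return (g);
      finish grad;

      x0  = {-1.2 1};   /* illustrative starting point            */
      opt = {0 2};      /* opt[1]=0: minimize; opt[2]: print level */
      call nlpnrr(rc, xr, "fun", x0) opt=opt grd="grad";
      print rc xr;
   quit;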
All the optimization subroutines require at least two input arguments. The remaining input arguments are described in the following list; as indicated in Table 20.1, not every input argument applies to every subroutine. Optional arguments can also be specified with the keyword=argument syntax.
All the optimization subroutines return the following results:

   The scalar return code rc, which is positive when the subroutine terminates successfully.
   The row vector xr, which contains the optimal point when the return code is positive.
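As a brief illustration, the following sketch (reusing the "fun", "grad", and x0 definitions from the example above) checks the return code before using the result, following the rc > 0 convention for successful termination:

   call nlpnrr(rc, xr, "fun", x0) grd="grad";
   if rc > 0 then
      print "Successful termination at" xr;
   else
      print "Termination with problems, rc =" rc;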