# SAS/IML Software's Nonlinear Optimization Features

SAS/IML software provides a set of optimization subroutines for minimizing or maximizing a continuous nonlinear
function. The parameters of the function can be subject to boundary constraints, linear or nonlinear equality
constraints, and inequality constraints. The following set of optimization subroutines is available:

### Nonlinear Optimization Subroutines

| Subroutine | Purpose |
|------------|---------|
| NLPCG  | performs nonlinear optimization by the conjugate gradient method |
| NLPDD  | performs nonlinear optimization by the double-dogleg method |
| NLPNMS | performs nonlinear optimization by the Nelder-Mead simplex method |
| NLPNRA | performs nonlinear optimization by the Newton-Raphson method |
| NLPNRR | performs nonlinear optimization by the Newton-Raphson ridge method |
| NLPQN  | performs nonlinear optimization by the quasi-Newton method |
| NLPQUA | performs quadratic optimization |
| NLPTR  | performs nonlinear optimization by the trust-region method |
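All of these subroutines share a common calling convention. As a minimal sketch, the following program minimizes Rosenbrock's function with the Newton-Raphson subroutine NLPNRA; the module name `F_ROSEN` and the option settings are illustrative choices, not requirements:

```
proc iml;
/* Rosenbrock's banana function, a standard test problem */
start F_ROSEN(x);
   y1 = 10 * (x[2] - x[1]##2);
   y2 = 1 - x[1];
   f  = 0.5 * (y1##2 + y2##2);
   return (f);
finish F_ROSEN;

x0  = {-1.2 1};   /* starting point                                   */
opt = {0 2};      /* opt[1]=0: minimize; opt[2]=2: print the history  */
call nlpnra(rc, xres, "F_ROSEN", x0, opt);
/* rc > 0 indicates successful termination; xres holds the optimum */
quit;
```

Switching techniques is a matter of changing the subroutine name (for example, `nlptr` or `nlpqn`) while keeping the same argument list.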

SAS/IML also provides subroutines for solving nonlinear least squares problems. Least squares problems can usually be solved
more efficiently by the least squares subroutines than by the other optimization subroutines. The following set of subroutines
is available:

### Nonlinear Least Squares Subroutines

| Subroutine | Purpose |
|------------|---------|
| NLPLM  | computes Levenberg-Marquardt least squares |
| NLPHQN | computes hybrid quasi-Newton least squares |
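For the least squares subroutines, the objective module returns the vector of residual functions rather than a scalar, and the first element of the options vector specifies the number of functions. The following sketch restates Rosenbrock's problem as a two-function least squares problem and solves it with NLPLM; the module name `F_ROSEN` is an illustrative choice:

```
proc iml;
/* Return the two residual functions of Rosenbrock's problem */
start F_ROSEN(x);
   y    = j(1, 2, 0);
   y[1] = 10 * (x[2] - x[1]##2);
   y[2] = 1 - x[1];
   return (y);
finish F_ROSEN;

x0  = {-1.2 1};
opt = {2 2};      /* opt[1]=2: number of least squares functions */
call nlplm(rc, xres, "F_ROSEN", x0, opt);
quit;
```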

Each optimization technique requires a continuous objective function, and all optimization subroutines except the NLPNMS
subroutine require continuous first-order derivatives of the objective function. If you do not provide the derivatives of
the objective function, they are approximated by finite-difference formulas using the
NLPFDD subroutine. You can also use the NLPFDD subroutine to check the correctness of analytical derivative specifications.
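As a minimal sketch of the checking use, the following program approximates the gradient and Hessian of a simple quadratic at a given point with NLPFDD; the module name `F_QUAD` and the test point are illustrative:

```
proc iml;
start F_QUAD(x);
   f = x[1]##2 + 2 * x[2]##2;
   return (f);
finish F_QUAD;

x0 = {1 1};
call nlpfdd(f, g, h, "F_QUAD", x0);
/* g approximates the gradient (2*x[1], 4*x[2]) = (2, 4) at x0,    */
/* and h approximates the Hessian; compare these against your own  */
/* analytic derivative modules to check their correctness.         */
print f, g, h;
quit;
```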

Each optimization subroutine works iteratively. If the parameters are subject only to linear constraints, all optimization and
least squares techniques are feasible-point methods. If you do not provide a feasible starting point, the optimization methods
call the algorithm used in the
NLPFEA subroutine, which tries to compute a starting point that is feasible with respect to the boundary and linear constraints.
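You can also call NLPFEA directly. As a minimal sketch using only boundary constraints (the starting point and bounds are illustrative), the subroutine returns a point that satisfies the constraints:

```
proc iml;
/* Rows 1 and 2 of the constraint matrix give lower and upper bounds */
con = {0  0,
       10 10};
x0 = {-1 -1};               /* infeasible starting point */
call nlpfea(xr, x0, con);
/* xr is feasible with respect to the bounds 0 <= x <= 10 */
print xr;
quit;
```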

### Examples

Worked examples for these subroutines are provided in the online documentation.