ABSCONV=r
ABSTOL=r

    specifies an absolute function convergence criterion: minimization stops when $f(\psi^{(k)}) \le r$, where $\psi$ is the vector of parameters in the optimization and $f(\cdot)$ is the objective function. The default value of r is the negative square root of the largest double-precision value, which serves only as a protection against overflows.
ABSFCONV=r
ABSFTOL=r

    specifies an absolute function difference convergence criterion. For all techniques except NMSIMP, termination requires a small change of the function value in successive iterations,

    $|f(\psi^{(k-1)}) - f(\psi^{(k)})| \le r$

    where $\psi$ denotes the vector of parameters that participate in the optimization and $f(\cdot)$ is the objective function. The same formula is used for the NMSIMP technique, but $\psi^{(k)}$ is defined as the vertex that has the lowest function value and $\psi^{(k-1)}$ is defined as the vertex that has the highest function value in the simplex. By default, ABSFCONV=0.
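The absolute function difference test can be sketched in Python (a sketch rather than SAS code; the helper name and the objective values are hypothetical):

```python
# Hedged sketch of the ABSFCONV test: stop when the absolute change in the
# objective function between successive iterations is at most r.
def absfconv_met(f_prev, f_curr, r):
    """True if |f(psi^(k-1)) - f(psi^(k))| <= r."""
    return abs(f_prev - f_curr) <= r

# Hypothetical objective values from two successive iterations:
print(absfconv_met(12.500001, 12.500000, 1e-4))  # True: change is about 1e-6
print(absfconv_met(12.6, 12.5, 1e-4))            # False: change is 0.1
```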
ABSGCONV=r
ABSGTOL=r

    specifies an absolute gradient convergence criterion. Termination requires the maximum absolute gradient element to be small,

    $\max_j |g_j(\psi^{(k)})| \le r$

    where $\psi$ denotes the vector of parameters that participate in the optimization and $g_j(\cdot)$ is the gradient of the objective function with respect to the jth parameter. This criterion is not used by the NMSIMP technique. The default value is r = 1E–5.
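The maximum-absolute-gradient test is simple to sketch (Python rather than SAS; the gradient vectors below are hypothetical):

```python
# Hedged sketch of the ABSGCONV test: stop when the largest absolute
# gradient element is at most r (default r = 1e-5).
def absgconv_met(gradient, r=1e-5):
    """True if max_j |g_j(psi^(k))| <= r."""
    return max(abs(g) for g in gradient) <= r

# Hypothetical gradient vectors near and far from a stationary point:
print(absgconv_met([3e-6, -8e-6, 1e-7]))  # True: largest element is 8e-6
print(absgconv_met([3e-6, -2e-4, 1e-7]))  # False: 2e-4 exceeds 1e-5
```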
FCONV=r
FTOL=r

    specifies a relative function convergence criterion. For all techniques except NMSIMP, termination requires a small relative change of the function value in successive iterations,

    $\frac{|f(\psi^{(k)}) - f(\psi^{(k-1)})|}{|f(\psi^{(k-1)})|} \le r$

    where $\psi$ denotes the vector of parameters that participate in the optimization and $f(\cdot)$ is the objective function. The same formula is used for the NMSIMP technique, but $\psi^{(k)}$ is defined as the vertex that has the lowest function value and $\psi^{(k-1)}$ is defined as the vertex that has the highest function value in the simplex. The default is $r = 10^{-\mathrm{FDIGITS}}$, where FDIGITS is by default $-\log_{10}(\epsilon)$ and $\epsilon$ is the machine precision.
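The relative function convergence test and its default tolerance can be sketched as follows (a Python sketch, not SAS; the objective values are hypothetical, and the machine precision is taken from the host's double-precision format):

```python
import math
import sys

# Hedged sketch of the FCONV test: stop when the relative change in the
# objective function between successive iterations is at most r.
def fconv_met(f_prev, f_curr, r):
    """True if |f(psi^(k)) - f(psi^(k-1))| / |f(psi^(k-1))| <= r."""
    return abs(f_curr - f_prev) / abs(f_prev) <= r

# Default tolerance: r = 10**(-FDIGITS) with FDIGITS = -log10(epsilon),
# so the default r works out to the machine precision itself.
fdigits = -math.log10(sys.float_info.epsilon)
r_default = 10.0 ** (-fdigits)

print(fconv_met(1000.0, 1000.0001, 1e-6))  # True: relative change is 1e-7
print(fconv_met(1000.0, 1001.0, 1e-6))     # False: relative change is 1e-3
```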
GCONV=r
GTOL=r

    specifies a relative gradient convergence criterion. For all techniques except CONGRA and NMSIMP, termination requires the normalized predicted function reduction to be small,

    $\frac{g(\psi^{(k)})' [H^{(k)}]^{-1} g(\psi^{(k)})}{|f(\psi^{(k)})|} \le r$

    where $\psi$ denotes the vector of parameters that participate in the optimization, $f(\cdot)$ is the objective function, $g(\cdot)$ is the gradient, and $H^{(k)}$ is the Hessian at iteration k. For the CONGRA technique (in which a reliable Hessian estimate $H$ is not available), the following criterion is used:

    $\frac{\| g(\psi^{(k)}) \|_2^2 \; \| s(\psi^{(k)}) \|_2}{\| g(\psi^{(k)}) - g(\psi^{(k-1)}) \|_2 \; |f(\psi^{(k)})|} \le r$

    where $s(\cdot)$ is the search direction. This criterion is not used by the NMSIMP technique. The default value is r = 1E–8.
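For a concrete sense of the normalized predicted reduction, here is a Python sketch for a hypothetical two-parameter problem (the gradient, Hessian, and objective values are made up; the 2x2 inverse is written out explicitly to keep the sketch self-contained):

```python
# Hedged sketch of the GCONV test: the normalized predicted function
# reduction g' H^{-1} g / |f(psi)| must be at most r (default r = 1e-8).
def gconv_met(g, H, f_value, r=1e-8):
    """g: gradient [g1, g2]; H: symmetric 2x2 Hessian [[a, b], [b, c]]."""
    (a, b), (_, c) = H
    det = a * c - b * b
    # Explicit inverse of a symmetric 2x2 matrix applied to g:
    hinv_g = [(c * g[0] - b * g[1]) / det,
              (-b * g[0] + a * g[1]) / det]
    predicted = g[0] * hinv_g[0] + g[1] * hinv_g[1]
    return predicted / abs(f_value) <= r

H = [[2.0, 0.5], [0.5, 1.0]]                      # well-conditioned Hessian
print(gconv_met([1e-6, -2e-6], H, f_value=50.0))  # True: reduction ~1e-13
print(gconv_met([0.1, -0.2], H, f_value=50.0))    # False: reduction ~1e-3
```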
MAXFUNC=n
MAXFU=n

    specifies the maximum number of function calls in the optimization process. The default values are as follows, depending on the optimization technique (which you can specify in the TECHNIQUE= option):

        TRUREG, NRRIDG, NEWRAP    125
        DBLDOG, QUANEW            500
        CONGRA                    1000
        NMSIMP                    3000

    The optimization can terminate only after completing a full iteration. Therefore, the number of function calls that are actually performed can exceed n.
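The reason the count can exceed n is that the limit is checked only at iteration boundaries, while a single iteration (a line search, for example) may make several function calls. A hypothetical Python sketch of this bookkeeping (the call counts are made up):

```python
# Hedged sketch of why MAXFUNC=n can be overshot: the limit is checked only
# between full iterations, and each iteration may make several function
# calls before the next check.
def run(max_func, calls_per_iteration=4, max_iterations=10):
    calls = 0
    for _ in range(max_iterations):
        if calls >= max_func:         # checked only between full iterations
            break
        calls += calls_per_iteration  # the iteration completes regardless
    return calls

print(run(max_func=10))  # 12: the limit of 10 is overshot mid-iteration
```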
MAXITER=n
MAXIT=n

    specifies the maximum number of iterations in the optimization process. The default values are as follows, depending on the optimization technique (which you can specify in the TECHNIQUE= option):

        TRUREG, NRRIDG, NEWRAP    50
        DBLDOG, QUANEW            200
        CONGRA                    400
        NMSIMP                    1000

    These default values also apply when n is specified as a missing value.
MAXTIME=r

    specifies an upper limit of r seconds of CPU time for the optimization process. The time is checked only at the end of each iteration. Therefore, the actual running time can be longer than r. By default, CPU time is not limited.
MINITER=n
MINIT=n

    specifies the minimum number of iterations. If you request more iterations than are actually needed for convergence to a stationary point, the optimization algorithms can behave strangely. For example, the effect of rounding errors can prevent the algorithm from continuing for the required number of iterations. By default, MINITER=0.
TECHNIQUE=keyword

    specifies the optimization technique to obtain maximum likelihood estimates. You can choose from the following techniques:

    - CONGRA performs a conjugate-gradient optimization.
    - DBLDOG performs a version of double-dogleg optimization.
    - NEWRAP performs a Newton-Raphson optimization that combines a line-search algorithm with ridging.
    - NMSIMP performs a Nelder-Mead simplex optimization.
    - NONE performs no optimization.
    - NRRIDG performs a Newton-Raphson optimization with ridging.
    - QUANEW performs a dual quasi-Newton optimization.
    - TRUREG performs a trust-region optimization.

    By default, TECHNIQUE=NEWRAP.

    For more information about these optimization methods, see the section Choosing an Optimization Algorithm in Chapter 19: Shared Concepts and Topics.