The NLP Procedure

Restricting the Step Length

Almost all line-search algorithms use iterative extrapolation techniques that can easily lead them to (feasible) points where the objective function f is no longer defined (e.g., resulting in indefinite matrices for ML estimation) or is difficult to compute (e.g., resulting in floating-point overflows). Therefore, PROC NLP provides options that restrict the step length \alpha or the trust region radius \delta, especially during the first main iterations.
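As a minimal illustration of the problem (a Python sketch with a made-up exponential objective, not PROC NLP code), an unrestricted trial step along the search direction can overflow the function evaluation, while a restricted step length evaluates safely:

```python
import math

def f(x):
    # Hypothetical highly nonlinear objective involving the EXP function
    return math.exp(x)

x, s = 1.0, 1.0           # current point and search direction

try:
    f(x + 1000.0 * s)     # unrestricted trial step: exp(1001) overflows
    overflowed = False
except OverflowError:
    overflowed = True     # the line search would fail at this point

safe = f(x + 0.5 * s)     # a restricted step length evaluates without trouble
```

This is the failure mode that the DAMPSTEP= and INSTEP= options described below are designed to avoid.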

The inner product  g^T s of the gradient  g and the search direction  s is the slope of f(\alpha) = f(x + \alpha s) along the search direction  s. The default starting value \alpha^{(0)} = \alpha^{(k,0)} used by each line-search algorithm (\min_{\alpha \gt 0} f(x + \alpha s)) during main iteration  k is computed in three steps:

  1. The first step uses either the difference  df=| f^{(k)} - f^{(k-1)}| of the function values of the last two consecutive iterations or the final step length \bar{\alpha} of the previous iteration  k-1 to compute a first value \alpha_1^{(0)}.

This value of \alpha_1^{(0)} can be too large and lead to a difficult or impossible function evaluation, especially for highly nonlinear functions such as the EXP function.

  • Using the DAMPSTEP=r option:
    \alpha_1^{(0)} = \min (1,r \bar{\alpha})
    The initial value for the new step length can be no larger than  r times the final step length \bar{\alpha} of the previous iteration. The default value is  r=2.
  2. During the first five iterations, the second step enables you to reduce \alpha_1^{(0)} to a smaller starting value \alpha_2^{(0)} using the INSTEP= r option:
    \alpha_2^{(0)} = \min (\alpha_1^{(0)},r)
    After more than five iterations, \alpha_2^{(0)} is set to \alpha_1^{(0)}.
  3. The third step can further reduce the step length by
    \alpha_3^{(0)} = \min (\alpha_2^{(0)},\min(10,u))
    where  u is the maximum length of a step inside the feasible region.
The INSTEP= r option also lets you specify a smaller or larger radius \delta of the trust region used in the first iteration of the trust region, double dogleg, and Levenberg-Marquardt algorithms. The default initial trust region radius \delta^{(0)} is the length of the scaled gradient (Moré 1978), which corresponds to the default radius factor  r=1. In most practical applications of the TRUREG, DBLDOG, and LEVMAR algorithms, this choice is successful. However, for bad initial values and highly nonlinear objective functions (such as the EXP function), the default start radius can result in arithmetic overflows. If this happens, you can try decreasing values of INSTEP= r,  0 \lt r \lt 1, until the iteration starts successfully. A small factor  r also affects the trust region radius \delta^{(k+1)} of the next steps because the radius is changed in each iteration by a factor  0 \lt c \le 4 depending on the ratio \rho, which expresses the goodness of the quadratic function approximation. Reducing the radius \delta corresponds to increasing the ridge parameter \lambda, producing smaller steps directed more closely toward the (negative) gradient direction.
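The three-step computation of the default starting step length can be sketched as follows (a Python sketch, not PROC NLP's actual implementation; the names `alpha1`, `dampstep_r`, `alpha_prev`, `instep_r`, and `u` are placeholders for the quantities defined in the text):

```python
def start_step_length(alpha1, k, dampstep_r=None, alpha_prev=None,
                      instep_r=None, u=float("inf")):
    """Sketch of the default starting step length alpha^(k,0).

    alpha1     -- first value alpha_1^(0), computed from df or from the
                  previous iteration's final step length (step 1)
    k          -- main iteration number (1-based)
    dampstep_r -- r from the DAMPSTEP= option (None if not specified)
    alpha_prev -- final step length of the previous iteration
    instep_r   -- r from the INSTEP= option (None if not specified)
    u          -- maximum length of a step inside the feasible region
    """
    # Step 1 refinement: DAMPSTEP caps the start value at min(1, r * alpha_prev)
    if dampstep_r is not None and alpha_prev is not None:
        alpha1 = min(1.0, dampstep_r * alpha_prev)

    # Step 2: INSTEP reduces the start value during the first five iterations
    if instep_r is not None and k <= 5:
        alpha2 = min(alpha1, instep_r)
    else:
        alpha2 = alpha1

    # Step 3: stay below 10 and inside the feasible region
    return min(alpha2, min(10.0, u))
```

For example, with INSTEP=0.5 the start value is capped at 0.5 only during the first five iterations, and a large trial value such as 50 is always reduced to the bound min(10, u).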
