The NLP Procedure

Hessian and CRP Jacobian Scaling

The rows and columns of the Hessian and crossproduct Jacobian matrix $ G$ can be scaled when using the trust region, Newton-Raphson, double dogleg, and Levenberg-Marquardt optimization techniques. Each element $ G_{i,j}$, $ i,j=1,\ldots ,n$, is divided by the scaling factor $ d_ i \times d_ j$, where the scaling vector $ d=(d_1,\ldots ,d_ n)$ is iteratively updated in a way specified by the HESCAL=i option, as follows (a code sketch of these rules appears below the list):

  • HESCAL=0: No scaling is done (equivalent to $ d_ i=1$).

  • HESCAL$\neq $0: In the first iteration and in each restart iteration, the scaling vector is initialized as

    \[  d_ i^{(0)} = \sqrt {\max (|G^{(0)}_{i,i}|,\epsilon )}  \]
  • HESCAL=1: In each subsequent iteration (refer to Moré, 1978):

    \[  d_ i^{(k+1)} = \max \left( d_ i^{(k)},\sqrt {\max (|G^{(k)}_{i,i}|,\epsilon )} \right)  \]
  • HESCAL=2: In each subsequent iteration (refer to Dennis, Gay, and Welsch, 1981):

    \[  d_ i^{(k+1)} = \max \left( 0.6 d_ i^{(k)},\sqrt {\max (|G^{(k)}_{i,i}|,\epsilon )} \right)  \]
  • HESCAL=3: $ d_ i$ is reset in each iteration:

    \[  d_ i^{(k+1)} = \sqrt {\max (|G^{(k)}_{i,i}|,\epsilon )}  \]

where $\epsilon $ is the relative machine precision or, equivalently, the largest double-precision value that, when added to 1, results in 1.
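
As an illustration of these update rules, the following minimal NumPy sketch computes the scaling vector $ d$ and applies it to a matrix. The names update_scaling, scale_matrix, hescal, and first are hypothetical names chosen for this example; the sketch mirrors the formulas above under those assumptions and is not the PROC NLP implementation.

```python
import numpy as np

def update_scaling(G, d_prev=None, hescal=1, first=False,
                   eps=np.finfo(float).eps):
    """Sketch of the HESCAL= scaling-vector update rules (hypothetical API).

    G      -- current Hessian or crossproduct Jacobian G^(k)
    d_prev -- previous scaling vector d^(k) (unused when first=True)
    hescal -- analogue of the HESCAL= option value (0, 1, 2, or 3)
    first  -- True in the first iteration and in each restart iteration
    eps    -- relative machine precision (here NumPy's machine epsilon)
    """
    if hescal == 0:
        return np.ones(G.shape[0])             # no scaling: d_i = 1
    diag = np.sqrt(np.maximum(np.abs(np.diag(G)), eps))
    if first or hescal == 3:
        return diag                            # d_i^(0), or reset each iteration
    if hescal == 1:
        return np.maximum(d_prev, diag)        # Moré (1978)
    if hescal == 2:
        return np.maximum(0.6 * d_prev, diag)  # Dennis, Gay, and Welsch (1981)
    raise ValueError("hescal must be 0, 1, 2, or 3")

def scale_matrix(G, d):
    """Divide each element G[i,j] by the scaling factor d[i] * d[j]."""
    return G / np.outer(d, d)
```

Note the difference the formulas imply: with HESCAL=1 the scaling factors can only grow from one iteration to the next, whereas HESCAL=2 allows them to shrink by at most a factor of 0.6 per iteration, and HESCAL=3 discards the previous factors entirely.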