The NLP Procedure

Hessian and CRP Jacobian Scaling

The rows and columns of the Hessian and crossproduct Jacobian matrix can be scaled when you use the trust-region, Newton-Raphson, double-dogleg, or Levenberg-Marquardt optimization technique. Each element  g_{i,j},  i,j=1, ... ,n, is divided by the scaling factor  d_i d_j, where the scaling vector  d=(d_1, ... ,d_n) is updated at each iteration in a way specified by the HESCAL=i option, as follows:

 i = 0
    No scaling is done (equivalent to  d_i=1).
 i \neq 0
    First iteration and each restart iteration:
    d_i^{(0)} = \sqrt{\max(| g^{(0)}_{i,i}|,\epsilon)}
 i = 1
    Refer to Moré (1978):
    d_i^{(k+1)} = \max ( d_i^{(k)}, \sqrt{\max(| g^{(k)}_{i,i}|,\epsilon)} )
 i = 2
    Refer to Dennis, Gay, and Welsch (1981):
    d_i^{(k+1)} = \max ( 0.6 d_i^{(k)}, \sqrt{\max(| g^{(k)}_{i,i}|,\epsilon)} )
 i = 3
     d_i is reset in each iteration:
    d_i^{(k+1)} = \sqrt{\max(| g^{(k)}_{i,i}|,\epsilon)}
where \epsilon is the relative machine precision or, equivalently, the largest double-precision value that, when added to 1, results in 1.
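The update rules above can be sketched in a few lines of NumPy. This is a minimal illustration, not PROC NLP's implementation; the function names `update_scaling` and `scale_matrix` and the `restart` flag are assumptions introduced here, and `G` stands for the current Hessian or crossproduct Jacobian  g^{(k)}.

```python
import numpy as np

def update_scaling(d_prev, G, hescal, eps=np.finfo(float).eps, restart=False):
    """Update the scaling vector d per the HESCAL=i option (illustrative sketch).

    d_prev  : previous scaling vector d^{(k)}, or None on the first iteration
    G       : current Hessian or crossproduct Jacobian matrix g^{(k)}
    hescal  : value of the HESCAL= option (0, 1, 2, or 3)
    eps     : relative machine precision
    restart : True on a restart iteration
    """
    # sqrt(max(|g_{i,i}|, eps)) for each diagonal element
    diag = np.sqrt(np.maximum(np.abs(np.diag(G)), eps))
    if hescal == 0:
        return np.ones(G.shape[0])              # no scaling: d_i = 1
    if d_prev is None or restart:
        return diag                             # d_i^(0) on first/restart iteration
    if hescal == 1:                             # More (1978)
        return np.maximum(d_prev, diag)
    if hescal == 2:                             # Dennis, Gay, and Welsch (1981)
        return np.maximum(0.6 * d_prev, diag)
    if hescal == 3:                             # reset in each iteration
        return diag
    raise ValueError("HESCAL= must be 0, 1, 2, or 3")

def scale_matrix(G, d):
    """Divide each element g_{i,j} by the scaling factor d_i * d_j."""
    return G / np.outer(d, d)
```

For example, with G having diagonal (4, 9), the initial scaling vector is (2, 3), and dividing by d_i d_j makes the scaled diagonal identically 1.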
