The NLMIXED Procedure

Covariance Matrix

The estimated covariance matrix of the parameter estimates is computed as the inverse of the Hessian matrix, and for unconstrained problems it should be positive definite. If the final parameter estimates are subject to $n_{\mathit{act}} > 0$ active linear inequality constraints, the formulas for the covariance matrices are modified in a manner similar to Gallant (1987) and Cramer (1986, p. 38) and are additionally generalized for applications with singular matrices.
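To illustrate this relationship, the following is a minimal numerical sketch in Python, not the procedure's actual implementation; the name `is_positive_definite` and the toy Hessian are hypothetical. The covariance estimate is the inverse of the Hessian, and positive definiteness can be checked with a Cholesky attempt:

```python
import numpy as np

def is_positive_definite(H):
    """Numerical positive-definiteness test via a Cholesky attempt."""
    try:
        np.linalg.cholesky(H)
        return True
    except np.linalg.LinAlgError:
        return False

# Hypothetical Hessian of a negative log-likelihood at the optimum
H = np.array([[4.0, 1.0],
              [1.0, 3.0]])

if is_positive_definite(H):
    # estimated covariance matrix of the parameter estimates
    cov = np.linalg.inv(H)
```

A failed Cholesky factorization is a standard numerical indicator that the matrix is not positive definite.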

The following steps enable you to tune the rank calculations of the covariance matrix.

  1. You can use the ASINGULAR=, MSINGULAR=, and VSINGULAR= options to set three singularity criteria for the inversion of the Hessian matrix $\mb{H}$. The singularity criterion used for the inversion is

    \[ |d_{j,j}| \le \max (\mbox{ASING}, \mbox{VSING} * |H_{j,j}|, \mbox{MSING} * \max (|H_{1,1}|,\ldots ,|H_{n,n}|)) \]

    where $d_{j,j}$ is the diagonal pivot of the matrix $\mb{H}$, and ASING, VSING, and MSING are the specified values of the ASINGULAR=, VSINGULAR=, and MSINGULAR= options, respectively. The default values are as follows:

    • ASING: the square root of the smallest positive double-precision value

    • MSING: 1E-12 if you do not specify the SINGHESS= option and $\max (10 \epsilon ,1\mr{E}{-}4 \times \mbox{SINGHESS})$ otherwise, where $\epsilon $ is the machine precision

    • VSING: 1E-8 if you do not specify the SINGHESS= option and the value of SINGHESS otherwise

    Note that, in many cases, a normalized matrix $\mb{D}^{-1}\mb{A}\mb{D}^{-1}$ is decomposed, and the singularity criteria are modified correspondingly.
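    The pivot test in this step can be sketched as follows. This is an illustrative Python re-implementation under simplifying assumptions (plain Gaussian elimination without pivoting or normalization; the name `singular_pivots` is hypothetical), not the procedure's actual code:

```python
import numpy as np

def singular_pivots(H, asing=None, vsing=1e-8, msing=1e-12):
    """Flag diagonal pivots of a Gaussian elimination on H that fail
    |d_jj| <= max(ASING, VSING*|H_jj|, MSING*max_i |H_ii|)."""
    n = H.shape[0]
    if asing is None:
        # default ASING: square root of the smallest positive double
        asing = np.sqrt(np.finfo(float).tiny)
    max_diag = np.max(np.abs(np.diag(H)))
    A = H.astype(float).copy()
    flags = []
    for j in range(n):
        d = A[j, j]  # current diagonal pivot
        tol = max(asing, vsing * abs(H[j, j]), msing * max_diag)
        if abs(d) <= tol:
            flags.append(j)   # pivot fails the singularity criterion
            continue          # skip elimination on a singular pivot
        for i in range(j + 1, n):
            A[i, j:] -= (A[i, j] / d) * A[j, j:]
    return flags
```

    For a rank-deficient matrix such as a matrix of all ones, the second pivot is eliminated to zero and is flagged; for a well-conditioned matrix, no pivots are flagged.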

  2. If the matrix $\mb{H}$ is found to be singular in the first step, a generalized inverse is computed. Depending on the G4= option, either a generalized inverse that satisfies all four Moore-Penrose conditions (a $g_4$-inverse) or a generalized inverse that satisfies only two Moore-Penrose conditions (a $g_2$-inverse; Pringle and Rayner 1971) is computed. If the number of parameters $n$ of the application is less than or equal to the value of the G4= option, a $g_4$-inverse is computed; otherwise, only a $g_2$-inverse is computed. The $g_4$-inverse is computed by the (computationally expensive but numerically stable) eigenvalue decomposition, and the $g_2$-inverse is computed by Gauss transformation. The $g_4$-inverse uses the eigenvalue decomposition $\mb{H} = \mb{Z} \bLambda \mb{Z}^\prime $, where $\mb{Z}$ is the orthogonal matrix of eigenvectors and $\bLambda = \mr{diag}(\lambda _1,\ldots ,\lambda _ n)$ is the diagonal matrix of eigenvalues. The $g_4$-inverse of $\mb{H}$ is then set to

    \[ \mb{H}^- = \mb{Z} \bLambda ^- \mb{Z}^\prime \]

    where the diagonal matrix $\bLambda ^- = \mr{diag}(\lambda ^-_1,\ldots ,\lambda ^-_ n)$ is defined by using the COVSING= option:

    \[ \lambda ^-_ i = \left\{ \begin{array}{ll} 1 / \lambda _ i & \mr{if} \, \, |\lambda _ i| > \mr{COVSING} \\ 0 & \mr{if} \, \, |\lambda _ i| \le \mr{COVSING} \end{array} \right. \]

    If you do not specify the COVSING= option, the $n_r$ smallest eigenvalues are set to zero, where $n_r$ is the number of rank deficiencies found in the first step.
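    The $g_4$-inverse construction in this step can be sketched as follows. This is an illustrative Python version (the name `g4_inverse` is hypothetical), with COVSING supplied explicitly rather than defaulted to the rank-deficiency count:

```python
import numpy as np

def g4_inverse(H, covsing):
    """Moore-Penrose (g4) inverse of a symmetric matrix via eigenvalue
    decomposition, zeroing the reciprocals of eigenvalues whose
    magnitude does not exceed COVSING (illustrative sketch)."""
    lam, Z = np.linalg.eigh(H)  # H = Z diag(lam) Z'
    lam_inv = np.array([1.0 / l if abs(l) > covsing else 0.0 for l in lam])
    return Z @ np.diag(lam_inv) @ Z.T
```

    When COVSING lies below the smallest nonzero $|\lambda _ i|$, this agrees with the Moore-Penrose pseudoinverse, and for a nonsingular matrix it reduces to the ordinary inverse; NumPy's `np.linalg.pinv` applies a comparable singular-value cutoff.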

For optimization techniques that do not use second-order derivatives, the covariance matrix is computed using finite-difference approximations of the derivatives.
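For example, a central finite-difference approximation of the Hessian of a negative log-likelihood can be sketched as follows (illustrative Python; the actual difference formulas and step sizes that the procedure uses may differ):

```python
import numpy as np

def fd_hessian(f, x, h=1e-4):
    """Central finite-difference approximation of the Hessian of a
    scalar function f at the point x."""
    n = len(x)
    H = np.zeros((n, n))
    for i in range(n):
        for j in range(n):
            ei = np.zeros(n); ei[i] = h
            ej = np.zeros(n); ej[j] = h
            H[i, j] = (f(x + ei + ej) - f(x + ei - ej)
                       - f(x - ei + ej) + f(x - ei - ej)) / (4.0 * h * h)
    return H

# Example: the negative log-likelihood of N(mu, 1) for 4 observations
# has Hessian n = 4 in mu, so the covariance estimate is 1/4.
y = np.array([0.1, -0.2, 0.3, 0.0])
negloglik = lambda theta: 0.5 * np.sum((y - theta[0]) ** 2)
cov = np.linalg.inv(fd_hessian(negloglik, np.array([y.mean()])))
```

Because the example objective is quadratic, the central difference recovers the exact Hessian up to rounding error.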