The NLIN Procedure

Computational Methods

Nonlinear Least Squares

Recall the basic notation for the nonlinear regression model from the section Notation for Nonlinear Regression Models. The parameter vector $\bbeta $ belongs to $\bOmega $, a subset of $R^ p$. Two points of this set are of particular interest: the true value $\widetilde{\bbeta }$ and the least squares estimate $\widehat{\bbeta }$. The general nonlinear model fit with the NLIN procedure is represented by the equation

\[  \mb {Y} = \mb {f}(\widetilde{\beta _1}, \widetilde{\beta _2}, \ldots , \widetilde{\beta _ p}; \mb {z}_1,\mb {z}_2,\ldots ,\mb {z}_ k) + \bepsilon = \mb {f}(\widetilde{\bbeta }; \mb {z}_1,\mb {z}_2,\ldots ,\mb {z}_ k) + \bepsilon  \]

where $\mb {z}_ j$ denotes the $(n \times 1)$ vector of the jth regressor (independent) variable, $\widetilde{\bbeta }$ is the true value of the parameter vector, and $\bepsilon $ is the $(n \times 1)$ vector of homoscedastic and uncorrelated model errors with zero mean.

To write the model for the ith observation, the ith elements of $\mb {z}_1,\cdots ,\mb {z}_ k$ are collected in the row vector $\mb {z}_{i}^\prime $, and the model equation becomes

\[  Y_ i = f(\bbeta ; \mb {z}_{i}^\prime ) + \epsilon _ i  \]

The shorthand $f_ i(\bbeta )$ will also be used throughout to denote the mean of the ith observation.
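
For example, a two-parameter exponential-rise model with a single regressor $\mb {z}$ has the mean function $f_ i(\bbeta ) = \beta _1\left(1 - \exp (-\beta _2 z_ i)\right)$, so the model for the ith observation is

\[  Y_ i = \beta _1\left(1 - \exp (-\beta _2 z_ i)\right) + \epsilon _ i  \]

This hypothetical model is used in the illustrations that follow.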

For any given value $\bbeta $ we can compute the residual sum of squares

\[  \mr {SSE}(\bbeta ) = \sum _{i=1}^{n} \left(y_ i - f\left(\bbeta ;\mb {z}_{i}^\prime \right)\right)^2 = \sum _{i=1}^{n} \left(y_ i - f_ i\left(\bbeta \right)\right)^2 = \mb {r}(\bbeta )^\prime \mb {r}(\bbeta )  \]

The aim of nonlinear least squares estimation is to find the value $\widehat{\bbeta }$ that minimizes $\mr {SSE}(\bbeta )$. Because $\mb {f}$ is a nonlinear function of $\bbeta $, a closed-form solution does not exist for this minimization problem. An iterative process is used instead. The iterative techniques that PROC NLIN uses are similar to a series of linear regressions involving the matrix $\mb {X}$ and the residual vector $\mb {r}=\mb {y} - \mb {f}(\bbeta )$, evaluated at the current values of $\bbeta $.
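
Throughout this section, $\mb {X}$ denotes the $(n \times p)$ matrix of first partial derivatives of the mean function with respect to the parameters (the Jacobian), evaluated at the current value of $\bbeta $; its typical element is

\[  \left[\mb {X}\right]_{ij} = \frac{\partial f_ i(\bbeta )}{\partial \beta _ j}  \]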

It is more insightful, however, to describe the algorithms in terms of their approach to minimizing the residual sum of squares and in terms of their updating formulas. If $\widehat{\bbeta }^{(u)}$ denotes the value of the parameter estimates at the uth iteration, and $\widehat{\bbeta }^{(0)}$ are your starting values, then the NLIN procedure attempts to find a step size $k$ and an update vector $\bDelta $ such that

\[  \widehat{\bbeta }^{(u+1)} = \widehat{\bbeta }^{(u)} + k\bDelta  \]

and

\[  \mr {SSE}\left(\widehat{\bbeta }^{(u+1)}\right) < \mr {SSE}\left(\widehat{\bbeta }^{(u)}\right)  \]

The various methods to fit a nonlinear regression model—which you can select with the METHOD= option in the PROC NLIN statement—differ in the calculation of the update vector $\bDelta $.
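
For example, the following statements fit the hypothetical exponential-rise model from the previous example and select the Newton update; the data set name MyData and the variables y and z are placeholders. METHOD=GAUSS (the default), METHOD=MARQUARDT, METHOD=NEWTON, and METHOD=GRADIENT select the update vectors described in this section.

proc nlin data=MyData method=newton;
   parms b1=50 b2=0.1;                /* starting values for beta */
   model y = b1*(1 - exp(-b2*z));     /* mean function f(beta; z) */
run;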

The gradient and Hessian of the residual sum of squares with respect to individual parameters and pairs of parameters are, respectively,

$\displaystyle  \mb {g}(\beta _ j)  $
$\displaystyle = \frac{\partial \mr {SSE}(\bbeta )}{\partial \beta _ j} = -2 \sum _{i=1}^{n} \left(y_ i - f_ i(\bbeta )\right) \frac{\partial f_ i(\bbeta )}{\partial \beta _ j}  $
$\displaystyle \left[\mb {H}(\bbeta )\right]_{jk}  $
$\displaystyle = \frac{\partial ^2 \mr {SSE}(\bbeta )}{\partial \beta _ j \partial \beta _ k} = 2 \sum _{i=1}^{n} \left( \frac{\partial f_ i(\bbeta )}{\partial \beta _ j} \frac{\partial f_ i(\bbeta )}{\partial \beta _ k} - \left(y_ i - f_ i(\bbeta )\right) \frac{\partial ^2 f_ i(\bbeta )}{\partial \beta _ j\partial \beta _ k} \right)  $

Denote as $\mb {H}^*_ i(\bbeta )$ the Hessian matrix of the mean function,

\[  \left[\mb {H}^*_ i(\bbeta )\right]_{jk} = \left[ \frac{\partial ^2 f_ i(\bbeta )}{\partial \beta _ j\partial \beta _ k} \right]_{jk}  \]

Collecting the derivatives across all parameters leads to the expressions

$\displaystyle  \mb {g}(\bbeta )  $
$\displaystyle = \frac{\partial \mr {SSE}(\bbeta )}{\partial \bbeta } = -2 \mb {X}^\prime \mb {r}(\bbeta )  $
$\displaystyle \mb {H}(\bbeta )  $
$\displaystyle = \frac{\partial ^2\mr {SSE}(\bbeta )}{\partial \bbeta \partial \bbeta ^{\prime }} = 2 \left( \mb {X}^\prime \mb {X} - \sum _{i=1}^ n r_ i(\bbeta ) \mb {H}^*_ i(\bbeta ) \right)  $
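
For instance, for the hypothetical exponential-rise model introduced earlier, $f_ i(\bbeta ) = \beta _1\left(1 - \exp (-\beta _2 z_ i)\right)$, the ith row of $\mb {X}$ and the elements of $\mb {H}^*_ i(\bbeta )$ follow directly by differentiation:

\[  \frac{\partial f_ i(\bbeta )}{\partial \beta _1} = 1 - \exp (-\beta _2 z_ i), \qquad \frac{\partial f_ i(\bbeta )}{\partial \beta _2} = \beta _1 z_ i \exp (-\beta _2 z_ i)  \]

\[  \left[\mb {H}^*_ i(\bbeta )\right]_{11} = 0, \qquad \left[\mb {H}^*_ i(\bbeta )\right]_{12} = \left[\mb {H}^*_ i(\bbeta )\right]_{21} = z_ i \exp (-\beta _2 z_ i), \qquad \left[\mb {H}^*_ i(\bbeta )\right]_{22} = -\beta _1 z_ i^2 \exp (-\beta _2 z_ i)  \]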

The change in the vector of parameter estimates is computed as follows, depending on the estimation method:

$\displaystyle  \mbox{Gauss-Newton: } \bDelta  $
$\displaystyle = \left(-\mr {E}[\mb {H}(\bbeta )]\right)^{-}\mb {g}(\bbeta ) = (\mb {X}^{\prime }\mb {X})^{-}\mb {X}^{\prime }\mb {r}  $
$\displaystyle  \mbox{Marquardt: } \bDelta  $
$\displaystyle = \left(\mb {X}^{\prime }\mb {X} + \lambda \mbox{diag}(\mb {X}^{\prime }\mb {X})\right)^{-}\mb {X}^{\prime }\mb {r}  $
$\displaystyle  \mbox{Newton: } \bDelta  $
$\displaystyle = - \mb {H}(\bbeta )^{-}\mb {g}(\bbeta ) = \left(\mb {X}^{\prime }\mb {X} - \sum _{i=1}^ n r_ i(\bbeta ) \mb {H}^*_ i(\bbeta )\right)^{-}\mb {X}^{\prime }\mb {r}  $
$\displaystyle  \mbox{Steepest descent: } \bDelta  $
$\displaystyle = -\frac12 \frac{\partial \mr {SSE}(\bbeta )}{\partial \bbeta } = \mb {X}^{\prime }\mb {r} $

The Gauss-Newton and Marquardt iterative methods regress the residuals onto the partial derivatives of the model with respect to the parameters until the estimates converge. You can view the Marquardt algorithm as a Gauss-Newton algorithm with a ridging penalty. The Newton iterative method regresses the residuals onto a function of the first and second derivatives of the model with respect to the parameters until the estimates converge. Analytical first- and second-order derivatives are automatically computed as needed.

The default method used to compute $(\mb {X}^{\prime }\mb {X})^{-}$ is the sweep operator (Goodnight, 1979). It produces a reflexive generalized inverse (a $g_2$-inverse; Pringle and Rayner 1971). In some cases it might be preferable to use a Moore-Penrose inverse (a $g_4$-inverse) instead. If you specify the G4 option in the PROC NLIN statement, a $g_4$-inverse is used to calculate $\bDelta $ on each iteration.
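
For example, adding the G4 option to the previous (hypothetical) program requests a Moore-Penrose inverse in the computation of $\bDelta $:

proc nlin data=MyData method=gauss g4;
   parms b1=50 b2=0.1;
   model y = b1*(1 - exp(-b2*z));
run;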

The four algorithms are now described in greater detail.

Algorithmic Details

Gauss-Newton and Newton Methods

From the preceding set of equations you can see that the Marquardt method is a ridged version of the Gauss-Newton method: if the ridge parameter $\lambda $ equals zero, the Marquardt step is identical to the Gauss-Newton step. An important difference between the Newton method and the Gauss-Newton-type algorithms lies in the use of second derivatives. To motivate this distinction, consider first the objective function in nonlinear least squares. To numerically find the minimum of

\[  \mr {SSE}(\bbeta ) = \mb {r}(\bbeta )^\prime \mb {r}(\bbeta )  \]

you can approach the problem by approximating the sum of squares criterion by a criterion for which you can compute a closed-form solution. Following Seber and Wild (1989, Sect. 2.1.3), you can achieve this in one of two ways:

  • approximating the model and substituting the approximation into $\mr {SSE}(\bbeta )$

  • approximating $\mr {SSE}(\bbeta )$ directly

The first method, approximating the nonlinear model with a first-order Taylor series, is the purview of the Gauss-Newton method. Approximating the residual sum of squares directly is the realm of the Newton method.

The first-order Taylor series of the residual $\mb {r}(\bbeta )$ at the point $\widehat{\bbeta }$ is

\[  \mb {r}(\bbeta ) \approx \mb {r}(\widehat{\bbeta }) - \widehat{\mb {X}} \left(\bbeta - \widehat{\bbeta }\right)  \]

Substitution into $\mr {SSE}(\bbeta )$ leads to the objective function for the Gauss-Newton step:

\[  \mr {SSE}\left(\bbeta \right) \approx S_ G(\bbeta ) = \mb {r}(\widehat{\bbeta })^\prime \mb {r}(\widehat{\bbeta }) - 2\mb {r}^\prime (\widehat{\bbeta })\widehat{\mb {X}} \left(\bbeta - \widehat{\bbeta }\right) + \left(\bbeta - \widehat{\bbeta }\right)^\prime \left(\widehat{\mb {X}}^\prime \widehat{\mb {X}}\right) \left(\bbeta - \widehat{\bbeta }\right)  \]

Hat notation is used here to indicate that the quantity in question is evaluated at $\widehat{\bbeta }$.

To motivate the Newton method, take a second-order Taylor series of $\mr {SSE}(\bbeta )$ around the value $\widehat{\bbeta }$:

\[  \mr {SSE}\left(\bbeta \right) \approx S_ N(\bbeta ) = \mr {SSE}\left(\widehat{\bbeta }\right) + \mb {g}(\widehat{\bbeta })^\prime \left(\bbeta - \widehat{\bbeta }\right) +\frac12 \left(\bbeta - \widehat{\bbeta }\right)^\prime \mb {H}(\widehat{\bbeta }) \left(\bbeta - \widehat{\bbeta }\right)  \]

Both $S_ G(\bbeta )$ and $S_ N(\bbeta )$ are quadratic functions in $\bbeta $ and are easily minimized. The minima occur when

$\displaystyle  \mbox{Gauss-Newton: } \bbeta - \widehat{\bbeta }  $
$\displaystyle = \left(\widehat{\mb {X}}^\prime \widehat{\mb {X}}\right)^{-} \widehat{\mb {X}}^\prime \mb {r}(\widehat{\bbeta })  $
$\displaystyle \mbox{Newton: } \bbeta - \widehat{\bbeta }  $
$\displaystyle = - \mb {H}(\widehat{\bbeta })^{-} \mb {g}(\widehat{\bbeta })  $

and these terms define the preceding $\bDelta $ update vectors.

Gauss-Newton Method

Since the Gauss-Newton method is based on an approximation of the model, you can also derive the update vector by first considering the normal equations of the nonlinear model

\[  \mb {X}^\prime \mb {f}(\bbeta ) = \mb {X}^\prime \mb {Y}  \]

and then substituting the Taylor approximation

\[  \mb {f}(\bbeta ) \approx \mb {f}(\widehat{\bbeta }) + \mb {X} \left(\bbeta - \widehat{\bbeta }\right)  \]

for $\mb {f}(\bbeta )$. This leads to

$\displaystyle  \mb {X}^{\prime }\left(\mb {f}(\widehat{\bbeta }) + \mb {X}(\bbeta - \widehat{\bbeta })\right)  $
$\displaystyle = \mb {X}^{\prime }\mb {Y}  $
$\displaystyle  (\mb {X}^{\prime }\mb {X})(\bbeta - \widehat{\bbeta })  $
$\displaystyle = \mb {X}^{\prime }\mb {Y} - \mb {X}^{\prime }\mb {f}(\widehat{\bbeta })  $
$\displaystyle  (\mb {X}^{\prime }\mb {X}) \bDelta  $
$\displaystyle = \mb {X}^{\prime }\mb {r}(\widehat{\bbeta })  $

and the update vector becomes

\[  \bDelta = \left(\mb {X}^\prime \mb {X}\right)^{-} \mb {X}^{\prime }\mb {r}(\widehat{\bbeta })  \]

Note: If $\mb {X}^{\prime }\mb {X}$ is singular or becomes singular, PROC NLIN computes $\bDelta $ by using a generalized inverse for the iterations after singularity occurs. If $\mb {X}^{\prime }\mb {X}$ is still singular for the last iteration, the solution should be examined.
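
The following PROC IML program is a minimal sketch of the Gauss-Newton iteration for the hypothetical exponential-rise model, including a simple step-halving line search (see the section Step-Size Search). The data, starting values, and convergence criterion are illustrative only; this is not the PROC NLIN implementation.

proc iml;
   /* Minimal sketch of a Gauss-Newton iteration with step halving
      for the hypothetical model f(beta;z) = b1*(1 - exp(-b2*z));
      not the PROC NLIN implementation. */
   start sse(beta) global(y, z);                     /* SSE(beta)          */
      return( ssq( y - beta[1] # (1 - exp(-beta[2] # z)) ) );
   finish;

   z    = {1, 2, 4, 6, 8, 10, 15, 20};               /* hypothetical data  */
   y    = {8, 17, 26, 33, 39, 44, 48, 50};
   beta = {50, 0.1};                                 /* starting values    */

   crit = 1;
   iter = 0;
   do while( (iter < 50) & (crit > 1e-8) );
      iter  = iter + 1;
      f     = beta[1] # (1 - exp(-beta[2] # z));     /* mean function      */
      r     = y - f;                                 /* residual vector    */
      X     = (1 - exp(-beta[2] # z)) ||             /* column 1: df/db1   */
              (beta[1] # z # exp(-beta[2] # z));     /* column 2: df/db2   */
      delta = solve(X` * X, X` * r);                 /* Gauss-Newton step  */

      k = 1;                                         /* step halving       */
      do while( (sse(beta + k # delta) >= sse(beta)) & (k > 1e-10) );
         k = k / 2;
      end;

      beta = beta + k # delta;
      crit = max(abs(k # delta));                    /* convergence check  */
   end;
   print beta;
quit;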

Newton Method

The Newton method uses the second derivatives and computes the update vector as

\[  \bDelta = \mb {H}^{-} \mb {X}^{\prime }\mb {r}  \]

If the automatic variables _WEIGHT_, _WGTJPJ_, and _RESID_ are used, then

\[  \bDelta = \mb {H}^{-}\mb {X}^{\prime } \mb {W}^{\mr {SSE}} \mb {r}^*  \]

is the direction, where

\[  \mb {H} = \mb {X}^{\prime } \mb {W}^{\mr {XPX}} \mb {X} - \sum _{i=1}^ n \mb {H}^*_ i(\bbeta ) w_ i^{\mr {XPX}} r^*_ i  \]

and

$\mb {W}^{\mr {SSE}}$

is an $n \times n$ diagonal matrix with elements $w_ i^{\mr {SSE}}$ of weights from the _WEIGHT_ variable. Each element $w_ i^{\mr {SSE}}$ contains the value of _WEIGHT_ for the ith observation.

$\mb {W}^{\mr {XPX}}$

is an $n \times n$ diagonal matrix with elements $w_ i^{\mr {XPX}}$ of weights from the _WGTJPJ_ variable. Each element $w_ i^{\mr {XPX}}$ contains the value of _WGTJPJ_ for the ith observation.

$\mb {r}^*$

is a vector with elements $r^*_ i$ from the _RESID_ variable. Each element $r^*_ i$ contains the value of _RESID_ evaluated for the ith observation.
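
As a usage sketch (the data set, variables, and weights are hypothetical), the following statements fit the example model with METHOD=NEWTON and assign the _WEIGHT_ automatic variable in a programming statement; _WGTJPJ_ and _RESID_ can be assigned in the same way:

proc nlin data=MyData method=newton;
   parms b1=50 b2=0.1;
   model y = b1*(1 - exp(-b2*z));
   _weight_ = 1/z;                    /* hypothetical weights w_i */
run;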

Marquardt Method

The updating formula for the Marquardt method is as follows:

\[  \bDelta = (\mb {X}^{\prime }\mb {X} + \lambda \mbox{diag}(\mb {X}^{\prime }\mb {X}))^{-}\mb {X}^{\prime }\mb {r}  \]

The Marquardt method is a compromise between the Gauss-Newton and steepest descent methods (Marquardt, 1963). As $\lambda \rightarrow 0$, the direction approaches Gauss-Newton. As $\lambda \rightarrow \infty $, the direction approaches steepest descent.

Marquardt’s studies indicate that the average angle between Gauss-Newton and steepest descent directions is about $90^{\circ }$. A choice of $\lambda $ between 0 and infinity produces a compromise direction.

By default, PROC NLIN chooses $\lambda =10^{-7}$ to start and computes $\bDelta $. If $\mr {SSE}(\widehat{\bbeta }^{(u)}+\bDelta ) < \mr {SSE}(\widehat{\bbeta }^{(u)})$, then $\lambda $ is divided by 10 for the next iteration. Each time $\mr {SSE}(\widehat{\bbeta }^{(u)}+\bDelta ) > \mr {SSE}(\widehat{\bbeta }^{(u)})$, $\lambda $ is multiplied by 10 and $\bDelta $ is recomputed.

Note: If the SSE decreases on each iteration, then $\lambda \rightarrow 0$, and you are essentially using the Gauss-Newton method. If SSE does not improve, then $\lambda $ is increased until you are moving in the steepest descent direction.

Marquardt’s method is equivalent to performing a series of ridge regressions, and it is useful when the parameter estimates are highly correlated or the objective function is not well approximated by a quadratic.
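
The following PROC IML fragment sketches the ridging scheme for the same hypothetical model and data as in the earlier sketch: $\lambda $ shrinks after a successful step and grows after a failed one. It illustrates the idea only and is not the PROC NLIN implementation.

proc iml;
   /* Minimal sketch of Marquardt ridging for the hypothetical model
      f(beta;z) = b1*(1 - exp(-b2*z)); not the PROC NLIN implementation. */
   start resid(beta) global(y, z);
      return( y - beta[1] # (1 - exp(-beta[2] # z)) );
   finish;
   start jac(beta) global(z);
      return( (1 - exp(-beta[2] # z)) || (beta[1] # z # exp(-beta[2] # z)) );
   finish;

   z      = {1, 2, 4, 6, 8, 10, 15, 20};     /* hypothetical regressor */
   y      = {8, 17, 26, 33, 39, 44, 48, 50}; /* hypothetical response  */
   beta   = {50, 0.1};                       /* starting values        */
   lambda = 1e-7;                            /* initial ridge value    */

   do iter = 1 to 100;                       /* a real implementation would also test convergence */
      r     = resid(beta);
      X     = jac(beta);
      XpX   = X` * X;
      delta = solve(XpX + lambda # diag(vecdiag(XpX)), X` * r);
      if ssq(resid(beta + delta)) < ssq(r) then do;
         beta   = beta + delta;              /* accept the step          */
         lambda = lambda / 10;               /* move toward Gauss-Newton */
      end;
      else lambda = 10 # lambda;             /* move toward steepest descent */
   end;
   print beta lambda;
quit;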

Steepest Descent (Gradient) Method

The steepest descent method is based directly on the gradient of $0.5\mb {r}(\bbeta )^{\prime }\mb {r}(\bbeta )$:

\[  \frac{1}{2} \frac{\partial \mr {SSE}(\bbeta )}{\partial \bbeta } = -\mb {X}^{\prime }\mb {r}  \]

The quantity $-\mb {X}^{\prime }\mb {r}$ is the gradient direction along which $\mr {SSE}(\bbeta )$ increases. Thus $\bDelta =\mb {X}^{\prime }\mb {r}$ is the direction of steepest descent.

If the automatic variables _WEIGHT_ and _RESID_ are used, then

\[  \bDelta = \mb {X}^{\prime } \mb {W}^{\mr {SSE}} \mb {r}^*  \]

is the direction, where

$\mb {W}^{\mr {SSE}}$

is an $n \times n$ diagonal matrix with elements $w_ i^{\mr {SSE}}$ of weights from the _WEIGHT_ variable. Each element $w_ i^{\mr {SSE}}$ contains the value of _WEIGHT_ for the ith observation.

$\mb {r}^*$

is a vector with elements $r_ i^*$ from _RESID_. Each element $r_ i^*$ contains the value of _RESID_ evaluated for the ith observation.

Using the method of steepest descent, let

\[  \widehat{\bbeta }^{(u+1)} = \widehat{\bbeta }^{(u)} + \alpha \bDelta  \]

where the scalar $\alpha $ is chosen such that

\[  \mr {SSE}\left(\widehat{\bbeta }^{(u)} + \alpha \bDelta \right) < \mr {SSE}\left(\widehat{\bbeta }^{(u)}\right)  \]

Note: The steepest-descent method can converge very slowly and is therefore not generally recommended. It is sometimes useful when the initial values are poor.

Step-Size Search

The default method of finding the step size $k$ is step halving (SMETHOD=HALVE). If $\mr {SSE}(\widehat{\bbeta }^{(u)}+\bDelta ) > \mr {SSE}(\widehat{\bbeta }^{(u)})$, the procedure computes $\mr {SSE}(\widehat{\bbeta }^{(u)}+0.5\bDelta )$, $\mr {SSE}(\widehat{\bbeta }^{(u)}+0.25\bDelta ),\ldots ,$ until a smaller SSE is found.

If you specify SMETHOD=GOLDEN, the step size k is determined by a golden section search. The parameter TAU determines the length of the initial interval to be searched, with the interval having length TAU (or $2 \times \mr {TAU}$), depending on $\mr {SSE}(\widehat{\bbeta }^{(u)}+\bDelta )$. The RHO parameter specifies how fine the search is to be. The SSE at each endpoint of the interval is evaluated, and a new subinterval is chosen. The size of the interval is reduced until its length is less than RHO. One pass through the data is required each time the interval is reduced. Hence, if RHO is very small relative to TAU, a large amount of time can be spent determining a step size. For more information about the golden section search, see Kennedy and Gentle (1980).

If you specify SMETHOD=CUBIC, the NLIN procedure performs a cubic interpolation to estimate the step size. If the estimated step size does not result in a decrease in SSE, step halving is used.
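
For example, the following (hypothetical) statements request the golden section search, with TAU= controlling the initial interval and RHO= the length at which the search stops:

proc nlin data=MyData method=gauss smethod=golden tau=1 rho=0.01;
   parms b1=50 b2=0.1;
   model y = b1*(1 - exp(-b2*z));
run;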