Introduction to Regression Procedures


Nonlinear Regression: The NLIN and NLMIXED Procedures

Recall from Chapter 3: Introduction to Statistical Modeling with SAS/STAT Software, that a nonlinear regression model is a statistical model in which the mean function depends on the model parameters in a nonlinear fashion. The SAS/STAT procedures that can fit general nonlinear models are the NLIN and NLMIXED procedures. These procedures have the following similarities:

  • Nonlinear models are fit by iterative methods.

  • You must provide an expression for the model by using SAS programming statements (see the sketch after this list).

  • Analytic derivatives of the objective function with respect to the parameters are calculated automatically.

  • A grid search is available to select the best starting values for the parameters from a set of starting points that you provide.
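For example, a minimal PROC NLIN sketch such as the following shows how the model expression is written with SAS programming statements and how a grid of starting values is supplied; the data set GROWTH and the variables WEIGHT and DAY are hypothetical placeholders used only for illustration.

   proc nlin data=growth;
      /* grid of candidate starting values; the best grid point starts the iterations */
      parameters b0 = 100 to 200 by 50
                 b1 = 0.01 to 0.10 by 0.03;
      /* SAS programming statement that builds part of the mean function */
      term = 1 - exp(-b1*day);
      model weight = b0*term;
   run;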

The procedures have the following differences:

  • PROC NLIN estimates parameters by using nonlinear least squares; PROC NLMIXED estimates parameters by using maximum likelihood.

  • PROC NLMIXED enables you to construct nonlinear models that contain normally distributed random effects.

  • PROC NLIN requires that you declare all model parameters in the PARAMETERS statement and assign starting values. PROC NLMIXED determines the parameters in your model from the PARMS statement and the other modeling statements. It is not necessary to supply starting values for all parameters in PROC NLMIXED, but it is highly recommended that you do (see the sketch after this list).

  • The residual variance is not a parameter in models that are fit by using PROC NLIN, but it is a parameter in models that are fit by using PROC NLMIXED.

  • The default iterative optimization method in PROC NLIN is the Gauss-Newton method; the default method in PROC NLMIXED is the quasi-Newton method. Other optimization techniques are available in both procedures.
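The following PROC NLMIXED sketch, which again assumes a hypothetical data set GROWTH with variables WEIGHT, DAY, and ANIMAL, illustrates several of these differences: starting values come from the PARMS statement, a normally distributed random effect enters the mean function, and the residual variance (here named s2e) is itself a model parameter.

   proc nlmixed data=growth;
      /* starting values are recommended, though not required for every parameter */
      parms b0=150 b1=0.05 s2u=10 s2e=25;
      /* nonlinear mean function with a subject-specific random intercept u */
      pred = (b0 + u)*(1 - exp(-b1*day));
      /* the residual variance s2e is estimated as a parameter of the model */
      model weight ~ normal(pred, s2e);
      random u ~ normal(0, s2u) subject=animal;
   run;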

Nonlinear models are fit by iterative techniques that begin from starting values and successively improve the parameter estimates. There is no guarantee that the solution achieved when the iterative algorithm converges corresponds to a global optimum.