Nonlinear Regression
The SAS/STAT nonlinear regression procedures include the following:
NLIN Procedure
The NLIN procedure fits nonlinear regression models and estimates the parameters by nonlinear least squares or weighted nonlinear
least squares. You specify the model with programming statements. This gives you great flexibility in modeling the relationship
between the response variable and independent (regressor) variables. It does, however, require additional coding compared to model
specifications in linear modeling procedures such as the REG, GLM, and MIXED procedures. The following are highlights of the NLIN
procedure's features:
 provides a high-quality automatic differentiator so that you do not need to specify first and second derivatives. You can, however, specify the derivatives if you wish.
 solves the nonlinear least squares problem by one of the following four algorithms (methods):
 steepest-descent or gradient method
 Newton method
 modified Gauss-Newton method
 Marquardt method
 enables you to confine the estimation procedure to a certain range of values of the parameters by imposing bounds on the estimates
 computes Hougaard's measure of skewness
 provides bootstrap estimates of confidence intervals for parameters and the covariance/correlation matrices of the parameter estimates
 performs weighted estimation
 creates an output data set that contains statistics that are calculated for each observation
 creates a data set that contains the parameter estimates at each iteration
 performs BY group processing, which enables you to obtain separate analyses on grouped observations
 creates a SAS data set that corresponds to any output table
 automatically creates graphs by using ODS Graphics
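
The features above can be sketched in a minimal PROC NLIN call. The model form, the parameter names (b0, b1), and the starting values below are illustrative assumptions, not taken from this document; SASHELP.CLASS is a sample data set shipped with SAS.

```sas
/* Hypothetical sketch: fit Weight as an exponential function of Height.
   Model form, parameters, and starting values are illustrative only. */
proc nlin data=sashelp.class method=marquardt hougaard;
   parms b0=150 b1=0.01;            /* starting values for the iterative search */
   bounds b1 > 0;                   /* confine the estimate to a range */
   model Weight = b0*(1 - exp(-b1*Height));
   output out=nlinout predicted=pred residual=resid;  /* per-observation statistics */
run;
```

The MODEL statement is a programming statement, so any expression in the parameters and regressors can appear on its right-hand side; derivatives are obtained automatically unless you supply them.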

For further details, see
NLIN Procedure
TRANSREG Procedure
The TRANSREG (transformation regression) procedure fits linear models, optionally with smooth, spline, Box-Cox, and other nonlinear
transformations of the variables. The following are highlights of the TRANSREG procedure's features:
 enables you to fit linear models including:
 ordinary regression and ANOVA
 metric and nonmetric conjoint analysis (Green and Wind 1975; de Leeuw, Young, and Takane 1976)
 linear models with Box-Cox (1964) transformations of the dependent variables
 regression with a smooth (Reinsch 1967), spline (de Boor 1978; van Rijckevorsel 1982),
monotone spline (Winsberg and Ramsay 1980), or penalized B-spline (Eilers and Marx 1996)
fit function
 metric and nonmetric vector and ideal point preference mapping (Carroll 1972)
 simple, multiple, and multivariate regression with variable transformations (Young,
de Leeuw, and Takane 1976; Winsberg and Ramsay 1980; Breiman and Friedman 1985)
 redundancy analysis (Stewart and Love 1968) with variable transformations (Israels 1984)
 canonical correlation analysis with variable transformations (van der Burg and de Leeuw 1983)
 response surface regression (Meyers 1976; Khuri and Cornell 1987) with variable transformations
 enables you to use a data set that can contain variables measured on nominal, ordinal, interval, and ratio scales;
you can specify any mix of these variable types for the dependent and independent variables
 enables you to transform nominal variables by scoring the categories to minimize squared error
(Fisher 1938), or to treat nominal variables as classification variables
 enables you to transform ordinal variables by monotonically scoring the ordered categories so that order is
weakly preserved (adjacent categories can be merged) and squared error is minimized. Ties
can be optimally untied or left tied (Kruskal 1964). Ordinal variables can also be transformed
to ranks.
 enables you to transform interval and ratio scale of measurement variables linearly or nonlinearly with spline
(de Boor 1978; van Rijckevorsel 1982), monotone spline (Winsberg and Ramsay 1980),
penalized B-spline (Eilers and Marx 1996), smooth (Reinsch 1967), or Box-Cox (Box and
Cox 1964) transformations. In addition, logarithmic, exponential, power, logit, and inverse
trigonometric sine transformations are available.
 fits a curve through a scatter plot or fits multiple curves, one for each level of a classification variable
 enables you to constrain the functions to be parallel or monotone or have the same intercept
 enables you to code experimental designs and classification variables prior to their use in other analyses
 performs weighted estimation
 generates output data sets including
 ANOVA results
 regression tables
 conjoint analysis part-worth utilities
 coefficients
 marginal means
 original and transformed variables, predicted values, residuals, scores, and more
 performs BY group processing, which enables you to obtain separate analyses on grouped observations
 automatically creates graphs by using ODS Graphics
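
A minimal PROC TRANSREG sketch illustrating the transformations above. The data set and variable names (WORK.DEMO, y, x1, x2) are placeholders, and the LAMBDA and NKNOTS settings are illustrative assumptions.

```sas
/* Hypothetical sketch: Box-Cox transformation of the response, a spline
   transformation and a classification expansion of the regressors.
   Data set and variable names are placeholders. */
proc transreg data=work.demo;
   model boxcox(y / lambda=-2 to 2 by 0.25) = spline(x1 / nknots=3) class(x2);
   output out=trout predicted residuals;   /* transformed variables, fitted values, residuals */
run;
```

Each variable list on the MODEL statement is wrapped in a transformation name, so dependent and independent variables can carry different transformations in the same model.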

For further details, see
TRANSREG Procedure