The GAM Procedure

Selection of Smoothing Parameters

CV and GCV

The smoothers discussed here have a single smoothing parameter, which can be chosen by cross validation. Cross validation works by leaving out points $(x_ i, y_ i)$ one at a time, estimating the squared residual of the smooth function at $x_ i$ based on the remaining $n-1$ data points, and choosing the smoother that minimizes the sum of those squared residuals. This mimics the use of training and test samples for prediction. The cross validation function is defined as

\[ \mr {CV}(\lambda ) = \frac{1}{n}\sum _{i=1}^ n \left(y_ i - \hat{\eta }_\lambda ^{(-i)}(x_ i)\right)^2  \]

where $\hat{\eta }_\lambda ^{(-i)}(x_ i)$ indicates the fit at $x_ i$, computed by leaving out the ith data point. The quantity $n\mr {CV}(\lambda )$ is sometimes called the prediction sum of squares, or PRESS (Allen, 1974).

All of the smoothers fit by the GAM procedure can be formulated as a linear combination of the sample responses

\[ \hat{\eta }(x) = \mb {A}(\lambda )\mb {y}  \]

for some matrix $\mb {A}(\lambda )$, which depends on $\lambda $. (The matrix $\mb {A}(\lambda )$ depends on $x$ and the sample data as well, but this dependence is suppressed in the preceding equation.) Let $a_{ii}$ be the ith diagonal element of $\mb {A}(\lambda )$. Then the CV function can be expressed as

\[  \mr {CV}(\lambda ) = \frac{1}{n}\sum _{i=1}^ n \left(\frac{(y_ i - \hat{\eta }_\lambda (x_ i))}{1-a_{ii}}\right)^2  \]
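The equivalence between the brute-force leave-one-out definition and the $1/(1-a_{ii})$ shortcut can be checked numerically for a linear smoother. The following sketch is not PROC GAM code; it uses a hypothetical ridge-type polynomial smoother in Python/NumPy, for which the shortcut is exact:

```python
import numpy as np

rng = np.random.default_rng(0)
n, lam = 30, 0.5
x = np.sort(rng.uniform(-1, 1, n))
y = np.sin(2 * x) + 0.1 * rng.standard_normal(n)

# Design matrix for a cubic polynomial fit penalized by lam (a linear smoother).
X = np.vander(x, 4, increasing=True)

def hat_matrix(X, lam):
    # A(lambda) = X (X'X + lambda I)^{-1} X'
    return X @ np.linalg.solve(X.T @ X + lam * np.eye(X.shape[1]), X.T)

A = hat_matrix(X, lam)
resid = y - A @ y

# Shortcut: CV(lambda) = (1/n) * sum_i ((y_i - eta(x_i)) / (1 - a_ii))^2
cv_shortcut = np.mean((resid / (1 - np.diag(A))) ** 2)

# Brute force: refit n times, leaving out each point in turn.
cv_brute = 0.0
for i in range(n):
    mask = np.arange(n) != i
    Xi, yi = X[mask], y[mask]
    beta = np.linalg.solve(Xi.T @ Xi + lam * np.eye(X.shape[1]), Xi.T @ yi)
    cv_brute += (y[i] - X[i] @ beta) ** 2
cv_brute /= n

print(cv_shortcut, cv_brute)  # the two values agree for this smoother
```

For this ridge-type smoother the agreement is exact (a Sherman-Morrison identity); the same shortcut applies to the spline smoothers in PROC GAM.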

In most cases, it is very time-consuming to compute each quantity $a_{ii}$ individually. To address this computational problem, Wahba (1990) proposed the generalized cross validation function (GCV), which can be used to solve a wide variety of problems involving selection of a parameter to minimize the prediction risk.

The GCV function is defined as

\[ \mr {GCV}(\lambda ) = \frac{n \sum _{i=1}^ n (y_ i - \hat{\eta }_\lambda (x_ i))^2}{(n-\mathrm{Trace}(\mb {A}(\lambda )))^2}  \]

The GCV formula simply replaces each $a_{ii}$ with $\mathrm{Trace}(\mb {A}(\lambda ))/n$. Therefore, it can be viewed as a weighted version of CV. In most cases of interest, GCV is closely related to CV but much easier to compute. Specify the METHOD=GCV option in the MODEL statement in order to use the GCV function to choose the smoothing parameters.
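As a numerical illustration of selecting $\lambda$ by GCV, the sketch below evaluates the GCV function on a grid and picks the minimizer. This again uses a hypothetical ridge-type polynomial smoother in Python/NumPy, not PROC GAM's internal algorithm:

```python
import numpy as np

rng = np.random.default_rng(1)
n = 40
x = np.sort(rng.uniform(-1, 1, n))
y = np.sin(3 * x) + 0.2 * rng.standard_normal(n)
X = np.vander(x, 8, increasing=True)

def gcv(lam):
    # Linear smoother A(lambda) = X (X'X + lambda I)^{-1} X'
    A = X @ np.linalg.solve(X.T @ X + lam * np.eye(X.shape[1]), X.T)
    rss = np.sum((y - A @ y) ** 2)
    # GCV(lambda) = n * RSS / (n - Trace(A))^2
    return n * rss / (n - np.trace(A)) ** 2

grid = np.logspace(-6, 2, 50)
best = min(grid, key=gcv)
print(best, gcv(best))
```

Scanning a coarse grid of candidate values, as here, is one simple way to minimize GCV; the minimizing $\lambda$ balances the residual sum of squares against the effective model complexity $\mathrm{Trace}(\mb{A}(\lambda))$.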

Degrees of Freedom

The estimated GAM model can be expressed as

\[  \hat\eta (X) = \hat{s}_0 + \sum _{j=1}^{p} \mb {A}_ j(\lambda _ j) Y  \]

Because the weights are calculated from the previous iteration during the local scoring iteration, the matrices $\mb {A}_ j$ might depend on Y for non-Gaussian data. However, for the final iteration, the $\mb {A}_ j$ matrix for the spline smoothers plays the same role as the projection matrix in linear regression; therefore, nonparametric degrees of freedom (DF) for the jth spline smoother can be defined as

\[  \mr {DF}(j\mr {th~ spline~ smoother}) = \mathrm{Trace}(\mb {A}_ j(\lambda _ j))  \]

For loess smoothers, $\mb {A}_ j$ is not symmetric and so is not a projection matrix. In this case PROC GAM uses

\[  \mr {DF}(j\mr {th~ loess~ smoother}) = \mathrm{Trace}(\mb {A}_ j(\lambda _ j)'\mb {A}_ j(\lambda _ j))  \]
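The two trace formulas can be illustrated with a small numeric check. The matrices below are hypothetical stand-ins (a least-squares projection for the spline case, a one-sided moving average for the nonsymmetric case), not PROC GAM's actual smoother matrices:

```python
import numpy as np

rng = np.random.default_rng(2)
n, p = 20, 3
X = rng.standard_normal((n, p))

# Symmetric projection matrix (the role A_j plays for spline smoothers):
# for a projection, Trace equals the rank, here p.
P = X @ np.linalg.solve(X.T @ X, X.T)
df_spline = np.trace(P)

# A nonsymmetric linear smoother: a trailing moving average of up to
# 3 points, standing in for a loess smoother (also nonsymmetric).
A = np.zeros((n, n))
for i in range(n):
    lo = max(0, i - 2)
    A[i, lo:i + 1] = 1.0 / (i + 1 - lo)
df_loess = np.trace(A.T @ A)  # DF = Trace(A' A)

print(df_spline, df_loess)
```

For the projection matrix the DF recovers the parametric count $p$ exactly, while $\mathrm{Trace}(\mb{A}'\mb{A})$ gives a well-defined effective DF even though $\mb{A}$ is not a projection.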

The GAM procedure gives you the option of specifying the degrees of freedom for each individual smoothing component. If you choose a particular value for the degrees of freedom, then during every local scoring iteration the procedure searches for a corresponding smoothing parameter $\lambda $ that yields the specified value or comes as close as possible. The final estimate for the smoother during this local scoring iteration is based on this $\lambda $. Note that for univariate spline and loess components, an additional degree of freedom is used by default to account for the linear portion of the model, so the value displayed in the Fit Summary and Analysis of Deviance tables will be one less than the value you specify.