# The GAMPL Procedure

### Thin-Plate Regression Splines

The GAMPL procedure uses thin-plate regression splines (Wood 2003) to construct spline basis expansions. The thin-plate regression splines are based on thin-plate smoothing splines (Duchon 1976, 1977). Compared to thin-plate smoothing splines, thin-plate regression splines produce fewer basis expansions and thus make direct fitting of generalized additive models possible.

#### Thin-Plate Smoothing Splines

Consider the problem of estimating a smoothing function $f$ of $d$ covariates from $n$ observations $(\mathbf{x}_i, y_i)$. The model assumes

$$y_i = f(\mathbf{x}_i) + \epsilon_i,\quad i=1,\ldots,n$$

where the $\epsilon_i$ are independent errors.

Then the thin-plate smoothing splines estimate the smoothing function $f$ by minimizing the penalized least squares function:

$$\min_{f}\;\sum_{i=1}^{n}\bigl(y_i - f(\mathbf{x}_i)\bigr)^2 + \lambda J_{md}(f)$$

The penalty term includes the functional $J_{md}(f)$ that measures roughness of the $f$ estimate:

$$J_{md}(f) = \int_{\mathbb{R}^d} \sum_{\nu_1+\cdots+\nu_d=m} \frac{m!}{\nu_1!\cdots\nu_d!} \left(\frac{\partial^m f}{\partial x_1^{\nu_1}\cdots \partial x_d^{\nu_d}}\right)^{\!2} dx_1\cdots dx_d$$

The parameter $m$ (which corresponds to the M= option for a spline effect) specifies how the penalty is applied to the function roughness. Function derivatives whose order is less than $m$ are not penalized. The relation $2m > d$ must be satisfied.

The penalty term also includes the smoothing parameter $\lambda$, which controls the trade-off between the model's fidelity to the data and the smoothness of $f$. When $\lambda \to 0$, the function estimate corresponds to an interpolation. When $\lambda \to \infty$, the function estimate becomes the least squares fit. By using the defined penalized least squares criterion and a fixed $\lambda$ value, you can explicitly express the estimate of the smooth function $f$ in the following form:

$$\hat{f}(\mathbf{x}) = \sum_{i=1}^{n}\delta_i\,\eta_{md}\bigl(\|\mathbf{x}-\mathbf{x}_i\|\bigr) + \sum_{j=1}^{M}\alpha_j\phi_j(\mathbf{x})$$

In the expression of $\hat{f}$, $\delta_i$ and $\alpha_j$ are coefficients to be estimated. The functions $\phi_j$ correspond to unpenalized polynomials of $\mathbf{x}$ with degrees up to $m-1$. The total number of these polynomials is $M = \binom{m+d-1}{d}$. The function $\eta_{md}$ models the extra nonlinearity besides the polynomials and is a function of the Euclidean distance $r = \|\mathbf{x}-\mathbf{x}_i\|$ between any $\mathbf{x}$ value and an observed $\mathbf{x}_i$ value:

$$\eta_{md}(r) = \begin{cases} \dfrac{(-1)^{m+1+d/2}}{2^{2m-1}\pi^{d/2}(m-1)!\,(m-d/2)!}\, r^{2m-d}\log r & \text{for } d \text{ even} \\[1.5ex] \dfrac{\Gamma(d/2-m)}{2^{2m}\pi^{d/2}(m-1)!}\, r^{2m-d} & \text{for } d \text{ odd} \end{cases}$$

Define the penalty matrix $\mathbf{E}$ such that each entry $E_{ij} = \eta_{md}(\|\mathbf{x}_i - \mathbf{x}_j\|)$, let $\mathbf{y}$ be the vector of the response, let $\mathbf{T}$ be the $n \times M$ matrix where each row is formed by $\phi_j(\mathbf{x}_i)$, and let $\boldsymbol{\delta}$ and $\boldsymbol{\alpha}$ be vectors of coefficients $\delta_i$ and $\alpha_j$. Then you can obtain the function estimate from the following minimization problem:

$$\min_{\boldsymbol{\delta},\boldsymbol{\alpha}}\;\|\mathbf{y} - \mathbf{E}\boldsymbol{\delta} - \mathbf{T}\boldsymbol{\alpha}\|^2 + \lambda\,\boldsymbol{\delta}'\mathbf{E}\boldsymbol{\delta} \quad \text{subject to } \mathbf{T}'\boldsymbol{\delta} = \mathbf{0}$$
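As a concrete illustration, the following numpy sketch builds $\mathbf{E}$ and $\mathbf{T}$ for the simplest case $d = 1$, $m = 2$, where $\eta_{md}(r) = r^3/12$ and the unpenalized polynomial basis is $\{1, x\}$ (so $M = 2$). The data and all variable names are illustrative, not part of the GAMPL procedure itself.

```python
import numpy as np

def eta(r):
    """Thin-plate radial basis for d = 1, m = 2: eta(r) = r^3 / 12."""
    return r**3 / 12.0

rng = np.random.default_rng(0)
n = 10
x = np.sort(rng.uniform(0.0, 1.0, size=n))          # covariate values
y = np.sin(2.0 * np.pi * x) + rng.normal(scale=0.1, size=n)

# Penalty matrix E: E[i, j] = eta(|x_i - x_j|), symmetric with zero diagonal
E = eta(np.abs(x[:, None] - x[None, :]))

# Polynomial matrix T: one row per observation, columns for 1 and x (M = 2)
T = np.column_stack([np.ones(n), x])

print(E.shape, T.shape)  # (10, 10) (10, 2)
```

The constrained minimization over $\boldsymbol{\delta}$ and $\boldsymbol{\alpha}$ would then be solved on these matrices; note that $\mathbf{E}$ has one row and column per unique data point, which motivates the low-rank approximation that follows.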

#### Low-Rank Approximation

Given the representation of the thin-plate smoothing spline, the estimate of $f$ involves as many parameters as the number of unique data points. Solving for the coefficients with an optimal $\lambda$ becomes difficult for large problems.

Because the matrix $\mathbf{E}$ is symmetric and nonnegative definite, the eigendecomposition can be taken as $\mathbf{E} = \mathbf{U}\mathbf{D}\mathbf{U}'$, where $\mathbf{D}$ is the diagonal matrix of eigenvalues of $\mathbf{E}$, and $\mathbf{U}$ is the matrix of eigenvectors that corresponds to $\mathbf{D}$. The truncated eigendecomposition forms $\tilde{\mathbf{E}}_k$, which is an approximation to $\mathbf{E}$ such that

$$\tilde{\mathbf{E}}_k = \mathbf{U}_k\mathbf{D}_k\mathbf{U}_k'$$

where $\mathbf{D}_k$ is a $k \times k$ diagonal matrix that contains the $k$ most extreme eigenvalues in descending order of absolute values: $|d_1| \ge |d_2| \ge \cdots \ge |d_k|$. $\mathbf{U}_k$ is the $n \times k$ matrix that is formed by the columns of eigenvectors that correspond to the eigenvalues in $\mathbf{D}_k$.

The approximation not only reduces the dimension of $\boldsymbol{\delta}$ from $n$ to $k$ but also is optimal in two senses. First, $\tilde{\mathbf{E}}_k$ minimizes the spectral norm $\|\mathbf{E} - \mathbf{F}_k\|_2$ among all rank-$k$ matrices $\mathbf{F}_k$. Second, $\tilde{\mathbf{E}}_k$ also minimizes the worst possible change that is introduced by the eigenspace truncation as defined by

$$\max_{\boldsymbol{\delta} \ne \mathbf{0}}\;\frac{\|(\mathbf{E} - \hat{\mathbf{E}}_k)\boldsymbol{\delta}\|}{\|\boldsymbol{\delta}\|}$$

where $\hat{\mathbf{E}}_k$ is formed by any $k$ eigenvalues and corresponding eigenvectors. For more information, see Wood (2003).
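The truncation can be sketched in a few lines of numpy. The matrix `E` below is a stand-in symmetric (possibly indefinite) matrix rather than an actual spline penalty matrix; the check at the end confirms the spectral-norm optimality, since the best rank-$k$ approximation error equals the $(k+1)$-th largest eigenvalue magnitude.

```python
import numpy as np

rng = np.random.default_rng(1)
A = rng.normal(size=(30, 30))
E = (A + A.T) / 2.0           # stand-in symmetric matrix

d, U = np.linalg.eigh(E)      # eigendecomposition E = U diag(d) U'

k = 8
idx = np.argsort(np.abs(d))[::-1][:k]   # k most extreme eigenvalues
U_k = U[:, idx]
D_k = np.diag(d[idx])
E_k = U_k @ D_k @ U_k.T       # truncated eigendecomposition

# Spectral-norm error of the truncation vs. the theoretical optimum
resid = np.linalg.norm(E - E_k, ord=2)
best = np.sort(np.abs(d))[::-1][k]      # (k+1)-th largest |eigenvalue|
print(np.isclose(resid, best))  # True
```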

Now given $\tilde{\mathbf{E}}_k = \mathbf{U}_k\mathbf{D}_k\mathbf{U}_k'$ and $\mathbf{T}$, and letting $\boldsymbol{\delta}_k = \mathbf{U}_k'\boldsymbol{\delta}$, the minimization problem becomes

$$\min_{\boldsymbol{\delta}_k,\boldsymbol{\alpha}}\;\|\mathbf{y} - \mathbf{U}_k\mathbf{D}_k\boldsymbol{\delta}_k - \mathbf{T}\boldsymbol{\alpha}\|^2 + \lambda\,\boldsymbol{\delta}_k'\mathbf{D}_k\boldsymbol{\delta}_k \quad \text{subject to } \mathbf{T}'\mathbf{U}_k\boldsymbol{\delta}_k = \mathbf{0}$$

You can turn the constrained optimization problem into an unconstrained one by using any orthogonal column basis $\mathbf{Z}$ such that $\mathbf{T}'\mathbf{U}_k\mathbf{Z} = \mathbf{0}$. One way to form $\mathbf{Z}$ is via the QR decomposition of $\mathbf{U}_k'\mathbf{T}$:

$$\mathbf{U}_k'\mathbf{T} = [\mathbf{Q}_1 : \mathbf{Q}_2]\begin{bmatrix}\mathbf{R}\\\mathbf{0}\end{bmatrix}$$

Let $\mathbf{Z} = \mathbf{Q}_2$. Then it is verified that

$$\mathbf{T}'\mathbf{U}_k\mathbf{Z} = \mathbf{R}'\mathbf{Q}_1'\mathbf{Q}_2 = \mathbf{0}$$

So for $\boldsymbol{\delta}_k$ such that $\boldsymbol{\delta}_k = \mathbf{Z}\tilde{\boldsymbol{\delta}}$, it is true that $\mathbf{T}'\mathbf{U}_k\boldsymbol{\delta}_k = \mathbf{0}$. Now the problem becomes the unconstrained optimization,

$$\min_{\tilde{\boldsymbol{\delta}},\boldsymbol{\alpha}}\;\|\mathbf{y} - \mathbf{U}_k\mathbf{D}_k\mathbf{Z}\tilde{\boldsymbol{\delta}} - \mathbf{T}\boldsymbol{\alpha}\|^2 + \lambda\,\tilde{\boldsymbol{\delta}}'\mathbf{Z}'\mathbf{D}_k\mathbf{Z}\tilde{\boldsymbol{\delta}}$$
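The QR-based construction of $\mathbf{Z}$ can be sketched with numpy's complete QR decomposition. Here `U_k` and `T` are stand-in matrices with the right shapes ($n \times k$ and $n \times M$), not matrices from an actual spline fit; the final check confirms that any $\boldsymbol{\delta}_k = \mathbf{Z}\tilde{\boldsymbol{\delta}}$ satisfies the constraint.

```python
import numpy as np

rng = np.random.default_rng(2)
n, k, M = 30, 8, 2
U_k = np.linalg.qr(rng.normal(size=(n, k)))[0]   # orthonormal columns
T = rng.normal(size=(n, M))

# Complete QR of U_k'T: the first M columns of Q span the range,
# the remaining k - M columns span the null space of T'U_k.
Q, R = np.linalg.qr(U_k.T @ T, mode="complete")
Z = Q[:, M:]                                     # Z = Q_2, shape (k, k - M)

# T' U_k Z = R' Q_1' Q_2 = 0 because Q has orthonormal columns
print(np.allclose(T.T @ U_k @ Z, 0.0))  # True
```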

Let

$$\mathbf{X} = [\mathbf{U}_k\mathbf{D}_k\mathbf{Z} : \mathbf{T}],\quad \mathbf{S} = \begin{bmatrix}\mathbf{Z}'\mathbf{D}_k\mathbf{Z} & \mathbf{0}\\ \mathbf{0} & \mathbf{0}\end{bmatrix},\quad \boldsymbol{\beta} = \begin{bmatrix}\tilde{\boldsymbol{\delta}}\\ \boldsymbol{\alpha}\end{bmatrix}$$

The optimization is simplified as

$$\min_{\boldsymbol{\beta}}\;\|\mathbf{y} - \mathbf{X}\boldsymbol{\beta}\|^2 + \lambda\,\boldsymbol{\beta}'\mathbf{S}\boldsymbol{\beta}$$
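For a fixed $\lambda$, this final criterion is an ordinary penalized least squares problem whose normal equations are $(\mathbf{X}'\mathbf{X} + \lambda\mathbf{S})\boldsymbol{\beta} = \mathbf{X}'\mathbf{y}$. The sketch below solves it with stand-in matrices of compatible shapes; a symmetric nonnegative definite `S` is used in place of the block matrix built from $\mathbf{Z}'\mathbf{D}_k\mathbf{Z}$.

```python
import numpy as np

rng = np.random.default_rng(3)
n, p = 40, 6
X = rng.normal(size=(n, p))      # stand-in model matrix
y = rng.normal(size=n)           # stand-in response

B = rng.normal(size=(p, p))
S = B @ B.T                      # stand-in symmetric penalty matrix
lam = 0.5

# Normal equations of min ||y - X beta||^2 + lam * beta' S beta
beta = np.linalg.solve(X.T @ X + lam * S, X.T @ y)
print(beta.shape)  # (6,)
```

In practice the smoothing parameter $\lambda$ is not fixed but is itself estimated, which is where criteria such as GCV or REML enter the fitting procedure.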