The MVPMODEL Procedure

Principal Component Analysis

Principal component analysis was originated by Pearson (1901) and later developed by Hotelling (1933). The application of principal components is discussed by Rao (1964), Cooley and Lohnes (1971), Gnanadesikan (1977), and Jackson (1991). Excellent statistical treatments of principal components are found in Kshirsagar (1972), Morrison (1976), and Mardia, Kent, and Bibby (1979).

Principal component modeling focuses on choosing the number of principal components to use in the model. The analysis begins with an eigenvalue decomposition of the sample covariance matrix, $\bS $,

\[ \bS = \frac{1}{n-1} \sum _{i=1}^ n \left(\mb{x}_ i - \bar{\mb{x}}\right) \left(\mb{x}_ i - \bar{\mb{x}}\right)^{\prime } \]

as

\[ \begin{array}{rl} \bS & = \bP \bL \bP ^{\prime } \\ \bP ^{\prime } \bS \bP & = \bL \end{array} \]

where $\bL $ is a diagonal matrix and $\bP $ is an orthogonal matrix (Jackson 1991; Mardia, Kent, and Bibby 1979). The columns of $\bP $ are the eigenvectors, and the diagonal elements of $\bL $ are the eigenvalues. The eigenvectors are customarily scaled so that they have unit length.
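
For illustration, the following Python sketch (using NumPy, which is independent of the MVPMODEL procedure) forms the sample covariance matrix of simulated data and verifies both forms of the decomposition. The data, dimensions, and array names are hypothetical.

    import numpy as np

    rng = np.random.default_rng(0)
    X = rng.normal(size=(100, 4))            # simulated data: n = 100 observations, p = 4 variables

    # Sample covariance matrix S = (1/(n-1)) * sum_i (x_i - xbar)(x_i - xbar)'
    Xc = X - X.mean(axis=0)
    S = Xc.T @ Xc / (len(X) - 1)             # equivalent to np.cov(X, rowvar=False)

    # Eigenvalue decomposition; eigh returns eigenvalues in ascending order,
    # so reorder the eigenvalues and eigenvectors into descending order
    eigvals, P = np.linalg.eigh(S)
    order = np.argsort(eigvals)[::-1]
    eigvals, P = eigvals[order], P[:, order]
    L = np.diag(eigvals)

    print(np.allclose(S, P @ L @ P.T))                     # S = P L P'
    print(np.allclose(P.T @ S @ P, L))                     # P' S P = L
    print(np.allclose(np.linalg.norm(P, axis=0), 1.0))     # eigenvectors have unit length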

Each principal component is a linear combination of the original variables, with coefficients given by the corresponding eigenvector of the covariance matrix. The vector of principal component scores for the ith observation, $\mb{t}_ i$, is computed as

\[ \mb{t}_ i= \bP ^{\prime } \left( \mb{x}_ i- \bar{ \mb{x} } \right) \]

The principal components are sorted in descending order of the eigenvalues, which are equal to the variances of the components.
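
For example, in the same kind of simulated setup (a self-contained NumPy sketch with hypothetical data and names), the scores are obtained by applying $\bP ^{\prime }$ to each centered observation, and the sample variance of each score column reproduces the corresponding eigenvalue.

    import numpy as np

    rng = np.random.default_rng(0)
    X = rng.normal(size=(100, 4))                 # hypothetical data matrix
    Xc = X - X.mean(axis=0)                       # centered observations
    eigvals, P = np.linalg.eigh(np.cov(X, rowvar=False))
    order = np.argsort(eigvals)[::-1]             # descending eigenvalue order
    eigvals, P = eigvals[order], P[:, order]

    # Row i of T holds the scores t_i' = (x_i - xbar)' P for observation i
    T = Xc @ P

    print(np.all(np.diff(eigvals) <= 0))                   # eigenvalues are in descending order
    print(np.allclose(T.var(axis=0, ddof=1), eigvals))     # score variances equal the eigenvalues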

The eigenvectors are the principal component loadings. The eigenvectors are orthogonal, so the principal components represent jointly perpendicular directions through the space of the original variables. The scores on the first j principal components have the highest possible generalized variance of any set of j unit-length linear combinations of the original variables.
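
These properties can be checked numerically. The sketch below (the same simulated setup, with j = 2 chosen arbitrarily) verifies that the loadings are orthonormal, that scores on different components are uncorrelated, and that the generalized variance of the first j scores equals the product of the j largest eigenvalues.

    import numpy as np

    rng = np.random.default_rng(0)
    X = rng.normal(size=(100, 4))
    Xc = X - X.mean(axis=0)
    eigvals, P = np.linalg.eigh(np.cov(X, rowvar=False))
    order = np.argsort(eigvals)[::-1]
    eigvals, P = eigvals[order], P[:, order]
    T = Xc @ P                                    # principal component scores

    print(np.allclose(P.T @ P, np.eye(4)))                          # loadings are orthonormal
    print(np.allclose(np.cov(T, rowvar=False), np.diag(eigvals)))   # scores are uncorrelated

    j = 2                                         # illustrative choice
    gen_var = np.linalg.det(np.cov(T[:, :j], rowvar=False))
    print(np.isclose(gen_var, np.prod(eigvals[:j])))                # generalized variance of first j scores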

The first j principal components provide a least squares solution to the model

\[ \mb{X} = \mb{TP}^{\prime } + \mb{E} \]

where $\mb{X}$ is an $n \times p$ matrix of the centered observed variables, $\mb{T}$ is the $n \times j$ matrix of scores on the first j principal components, $\mb{P}^{\prime }$ is the $j \times p$ matrix of eigenvectors, and $\mb{E}$ is an $n \times p$ matrix of residuals. The first j principal components are the vectors (rows of $\bP ^{\prime }$) that minimize trace$(\mb{E}^{\prime }\mb{E})$, the sum of all the squared elements in $\mb{E}$.
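
The following sketch makes the model concrete on the same simulated data: it forms the rank-j fit $\mb{TP}^{\prime }$ from the first j components (here j = 2, an arbitrary choice) and evaluates trace$(\mb{E}^{\prime }\mb{E})$, which works out to $n-1$ times the sum of the discarded eigenvalues.

    import numpy as np

    rng = np.random.default_rng(0)
    X = rng.normal(size=(100, 4))
    Xc = X - X.mean(axis=0)                       # centered observed variables (the X of the model)
    eigvals, P = np.linalg.eigh(np.cov(X, rowvar=False))
    order = np.argsort(eigvals)[::-1]
    eigvals, P = eigvals[order], P[:, order]

    j = 2                                         # number of components retained (illustrative)
    Tj = Xc @ P[:, :j]                            # n x j matrix of scores on the first j components
    Pj = P[:, :j]                                 # p x j; the model uses its transpose Pj'

    E = Xc - Tj @ Pj.T                            # residual matrix
    sse = np.trace(E.T @ E)                       # sum of all squared elements of E

    # The residual sum of squares equals (n - 1) times the sum of the discarded eigenvalues
    print(np.isclose(sse, (len(X) - 1) * eigvals[j:].sum()))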

The first j principal components are the best linear predictors of the process variables among all possible sets of j variables, although any nonsingular linear transformation of the first j principal components provides equally good prediction. The same result is obtained by minimizing the determinant or the Euclidean norm of $\mb{E}^{\prime }\mb{E}$ rather than the trace.
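
The invariance under nonsingular transformation is easy to check numerically: in the sketch below (same simulated setup), replacing the score matrix by $\mb{TA}$ for a random nonsingular matrix $\mb{A}$ and refitting the loadings by least squares reproduces the same fitted values, and hence the same residual matrix $\mb{E}$. The matrix $\mb{A}$ and the least squares refit are illustrative devices, not part of the procedure.

    import numpy as np

    rng = np.random.default_rng(0)
    X = rng.normal(size=(100, 4))
    Xc = X - X.mean(axis=0)
    eigvals, P = np.linalg.eigh(np.cov(X, rowvar=False))
    P = P[:, np.argsort(eigvals)[::-1]]

    j = 2
    Tj = Xc @ P[:, :j]                            # scores on the first j components
    fit_pca = Tj @ P[:, :j].T                     # least squares fit T P'

    A = rng.normal(size=(j, j))                   # random (almost surely nonsingular) transformation
    Z = Tj @ A                                    # transformed scores
    B, *_ = np.linalg.lstsq(Z, Xc, rcond=None)    # refit the loadings by least squares
    fit_transformed = Z @ B

    # Both score sets span the same column space, so the fits (and residuals) coincide
    print(np.allclose(fit_pca, fit_transformed))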