
Multivariate Analysis: Factor Analysis

Like principal component analysis, common factor analysis is a technique for reducing the complexity of high-dimensional data. (For brevity, this chapter refers to common factor analysis as simply "factor analysis.") However, the two techniques differ in how they construct a subspace of reduced dimensionality. Jackson (1981, 1991) provides an excellent comparison of the two methods.

Principal component analysis chooses a coordinate system for the vector space spanned by the variables. (Recall that the span of a set of vectors is the vector space consisting of all linear combinations of the vectors.) The first principal component points in the direction of maximum variation in the data. Subsequent components account for as much of the remaining variation as possible while being orthogonal to all of the previous principal components. Each principal component is a linear combination of the original variables. Dimensional reduction is achieved by ignoring dimensions that do not explain much variation.
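The construction described above can be sketched numerically. The following is a minimal illustration, not Stat Studio's implementation: principal components are obtained from the eigendecomposition of the covariance matrix, and the data matrix here is simulated purely for demonstration.

```python
import numpy as np

# Simulated data (hypothetical, for illustration only): 200 observations
# of 3 correlated variables.
rng = np.random.default_rng(0)
X = rng.normal(size=(200, 3)) @ np.array([[2.0, 0.5, 0.1],
                                          [0.0, 1.0, 0.3],
                                          [0.0, 0.0, 0.2]])

Xc = X - X.mean(axis=0)                 # center each variable
cov = np.cov(Xc, rowvar=False)
eigvals, eigvecs = np.linalg.eigh(cov)  # eigh returns ascending order
order = np.argsort(eigvals)[::-1]       # reorder: largest variance first
eigvals, eigvecs = eigvals[order], eigvecs[:, order]

# Each principal component is a linear combination of the variables;
# the proportion of variance explained guides how many dimensions to keep.
explained = eigvals / eigvals.sum()
scores = Xc @ eigvecs                   # component scores
```

Because the eigenvectors are orthonormal, the component scores are uncorrelated, and their variances are exactly the eigenvalues.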

While principal component analysis explains variability, factor analysis explains correlation. Suppose two variables, x_1 and x_2, are correlated but not collinear. Factor analysis assumes the existence of an unobserved variable that is linearly related to x_1 and x_2, and explains the correlation between them. The goal of factor analysis is to estimate this unobserved variable from the structure of the original variables. An estimate of the unobserved variable is called a common factor.

The geometry of the relationship between the original variables and the common factor is illustrated in Figure 27.1. (The figure is based on a similar figure in Wickens (1995), as is the following description of the geometry.) The correlated variables x_1 and x_2 are shown schematically in the figure. Each vector is decomposed into a linear combination of a common factor and a unique factor. That is, x_i = c_i f + d_i u_i, i = 1, 2. The unique factors, u_1 and u_2, are uncorrelated with the common factor, f, and with each other. Note that f, u_1, and u_2 are mutually orthogonal in the figure.
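The decomposition x_i = c_i f + d_i u_i can be checked with a small simulation. In the sketch below (values chosen arbitrarily for illustration), f, u_1, and u_2 are independent standard normal variables; with c_i^2 + d_i^2 = 1 each x_i has unit variance, and the population correlation between x_1 and x_2 is exactly c_1 c_2.

```python
import numpy as np

# Hypothetical simulation of the one-factor decomposition in Figure 27.1:
# x_i = c_i*f + d_i*u_i, where f, u1, u2 are uncorrelated standard normals.
rng = np.random.default_rng(1)
n = 100_000
c1, c2 = 0.8, 0.6                          # loadings on the common factor
d1, d2 = np.sqrt(1 - c1**2), np.sqrt(1 - c2**2)  # unique-factor weights

f, u1, u2 = rng.normal(size=(3, n))
x1 = c1 * f + d1 * u1
x2 = c2 * f + d2 * u2

# The common factor induces the correlation: corr(x1, x2) -> c1*c2 = 0.48.
sample_corr = np.corrcoef(x1, x2)[0, 1]
```

The correlation between x_1 and x_2 is carried entirely by the common factor; the unique factors contribute variance but no covariance.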


Figure 27.1: The Geometry of Factor Analysis

In contrast to principal components, a factor is not, in general, a linear combination of the original variables. Furthermore, a principal component analysis depends only on the data, whereas a factor analysis requires fitting the theoretical structure in the previous paragraph to the observed data.

If there are p variables and you postulate the existence of m common factors, then each variable is represented as a linear combination of the m common factors and a single unique factor. Since the unique factors are uncorrelated with the common factors and with each other, factor analysis requires m+p dimensions. (Figure 27.1 illustrates the case p=2 and m=1.) However, the orthogonality of the unique factors means that the geometry is readily understood by projecting the original variables onto the span of the m factors (called the factor space). A graph of this projection is called a pattern plot. In Figure 27.1, the pattern plot is the two points on f obtained by projecting x_1 and x_2 onto f.

The length of the projection of an original variable x onto the factor space indicates the proportion of the variability of x that is shared with the other variables. This proportion is called the communality. Consequently, the variance of each original variable is the sum of the common variance (represented by the communality) and the variance of the unique factor for that variable. In a pattern plot, the communality is the squared distance from the origin to a point.
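The variance partition can be made concrete with a small numeric sketch. The loading matrix below is made up for illustration: for standardized variables, each communality is the row sum of squared loadings, and the uniqueness is whatever variance remains.

```python
import numpy as np

# Hypothetical loading matrix: p = 3 standardized variables on m = 2
# common factors (values chosen for illustration, not estimated from data).
L = np.array([[0.9, 0.1],
              [0.7, 0.5],
              [0.2, 0.8]])

# Communality: squared length of each variable's projection onto the
# factor space (row sum of squared loadings).
communality = (L**2).sum(axis=1)

# Uniqueness: the variance attributed to each variable's unique factor.
# For a standardized variable, communality + uniqueness = 1.
uniqueness = 1.0 - communality
```

For the first variable, the communality is 0.9^2 + 0.1^2 = 0.82, so 82% of its variance is shared with the other variables through the common factors.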

In factor analysis, the common factors are not unique. Typically an initial orthonormal set of common factors is computed, but then these factors are rotated so that the factors are more easily interpreted in terms of the original variables. An orthogonal rotation preserves the orthonormality of the factors; an oblique transformation introduces correlations among one or more factors.
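The non-uniqueness is easy to demonstrate: rotating the factors by any orthogonal matrix R changes the loadings to LR but leaves the fitted common-variance structure LL^T, and hence the communalities, unchanged. The loading matrix and rotation angle below are arbitrary illustrations.

```python
import numpy as np

# Hypothetical two-factor loading matrix (same illustrative values as above).
L = np.array([[0.9, 0.1],
              [0.7, 0.5],
              [0.2, 0.8]])

# An orthogonal rotation of the factor space; the 30-degree angle is
# arbitrary. Methods such as varimax choose the angle to aid interpretation.
theta = np.pi / 6
R = np.array([[np.cos(theta), -np.sin(theta)],
              [np.sin(theta),  np.cos(theta)]])

L_rot = L @ R   # rotated loadings: a different, equally valid solution
```

Since R is orthogonal, L_rot reproduces the same correlation structure as L, which is why rotation can be chosen freely to simplify interpretation.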

You can run a factor analysis in Stat Studio by selecting Analysis ► Multivariate Analysis ► Factor Analysis from the main menu. The analysis is implemented by calling the FACTOR procedure in SAS/STAT. See the FACTOR procedure documentation in the SAS/STAT User's Guide for additional details.

The FACTOR procedure provides several methods of estimating the common factors and the communalities. Since an (m+p)-dimensional model is fit by using the original p variables, you should interpret the results with caution; several special issues can occur during estimation.

These and other issues are described in the section "Heywood Cases and Other Anomalies about Communality Estimates" in the documentation for the FACTOR procedure.

You can use many different methods to perform a factor analysis. Two popular methods are the principal factor method and the maximum likelihood method. The principal factor method is computationally efficient and has similarities to principal component analysis. The maximum likelihood (ML) method is an iterative method that is computationally more demanding and is prone to Heywood cases, nonconvergence, and multiple optimal solutions. However, the ML method also provides statistics such as standard errors and confidence limits that help you to assess how well the model fits the data, and to interpret factors. Consequently, the ML method is often favored by statisticians.
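A rough sketch of one non-iterated pass of the principal factor method follows, assuming a correlation matrix and m = 1 factor. The correlation matrix is hypothetical, and the prior communality estimates use squared multiple correlations (SMC), one common choice; this is an outline of the idea, not PROC FACTOR's implementation.

```python
import numpy as np

# Hypothetical correlation matrix for p = 3 variables.
R = np.array([[1.0, 0.6, 0.5],
              [0.6, 1.0, 0.4],
              [0.5, 0.4, 1.0]])
m = 1

# Prior communality estimates via squared multiple correlations:
# smc_i = 1 - 1 / (R^{-1})_{ii}.
Rinv = np.linalg.inv(R)
smc = 1.0 - 1.0 / np.diag(Rinv)

# Form the reduced correlation matrix: replace the unit diagonal with
# the communality estimates, so only common variance is factored.
R_reduced = R.copy()
np.fill_diagonal(R_reduced, smc)

# Loadings from the top m eigenpairs of the reduced matrix.
# (Eigenvector signs are arbitrary; a factor may come out reflected.)
eigvals, eigvecs = np.linalg.eigh(R_reduced)
order = np.argsort(eigvals)[::-1][:m]
loadings = eigvecs[:, order] * np.sqrt(np.maximum(eigvals[order], 0.0))
```

An iterated variant replaces the diagonal with the communalities implied by the current loadings and repeats until the estimates stabilize; the ML method instead maximizes a likelihood under a normality assumption.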

In addition to these various methods of factor analysis, you can use Stat Studio to compute various component analyses: principal component analysis, Harris component analysis, and image component analysis.


Example

Specifying the Factor Analysis

Analysis of Selected Variables

References
