The HPPLS Procedure

Reduced Rank Regression

As discussed in the preceding sections, partial least squares depends on selecting factors $\mb{t}=\mb{X}\mb{w}$ of the predictors and $\mb{u}=\mb{Y}\mb{q}$ of the responses that have maximum covariance, whereas principal components regression effectively ignores $\mb{u}$ and selects $\mb{t}$ to have maximum variance, subject to orthogonality constraints. In contrast, reduced rank regression selects $\mb{u}$ to account for as much variation in the predicted responses as possible, effectively ignoring the predictors for the purposes of factor extraction. In reduced rank regression, the Y-weights $\mb{q}_i$ are the eigenvectors of the covariance matrix $\hat{\mb{Y}}_{\mr{LS}}'\hat{\mb{Y}}_{\mr{LS}}$ of the responses that are predicted by ordinary least squares regression, and the X-scores are the projections of the Y-scores $\mb{Y}\mb{q}_i$ onto the X space.
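
The following is a minimal numerical sketch of this factor extraction in Python/NumPy, not PROC HPPLS syntax; the data matrices X and Y, their dimensions, and all variable names are illustrative assumptions. It computes the least squares predictions $\hat{\mb{Y}}_{\mr{LS}}$, takes the leading eigenvector of $\hat{\mb{Y}}_{\mr{LS}}'\hat{\mb{Y}}_{\mr{LS}}$ as the first Y-weight, forms the Y-score, and projects it onto the X space to obtain the X-score.

```
import numpy as np

# Illustrative centered data matrices (assumed, not from the procedure)
rng = np.random.default_rng(0)
X = rng.standard_normal((50, 5))
Y = rng.standard_normal((50, 3))
X -= X.mean(axis=0)
Y -= Y.mean(axis=0)

# Responses predicted by ordinary least squares: Y_hat = X (X'X)^{-1} X' Y
B_ls, *_ = np.linalg.lstsq(X, Y, rcond=None)
Y_hat = X @ B_ls

# The Y-weight q_1 is the eigenvector of Y_hat' Y_hat with the
# largest eigenvalue (eigh returns eigenvalues in ascending order)
eigvals, eigvecs = np.linalg.eigh(Y_hat.T @ Y_hat)
q1 = eigvecs[:, -1]

# Y-score for the first factor
u1 = Y @ q1

# X-score: the projection of the Y-score onto the X space
coef, *_ = np.linalg.lstsq(X, u1, rcond=None)
t1 = X @ coef   # equivalently, t1 = Y_hat @ q1
```

Note that because the projection of $\mb{Y}\mb{q}_1$ onto the X space equals $\hat{\mb{Y}}_{\mr{LS}}\mb{q}_1$, the final projection step in the sketch can be replaced by a single matrix product, as the comment indicates.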