The GLMPOWER Procedure

Contrasts in Fixed-Effect Multivariate Models

The multivariate model has the form

\[ \mb{Y} = \mb{X} \bbeta + \bepsilon \]

where $\mb{Y}$ is the N $\times $ p matrix of responses, for p > 1; $\mb{X}$ is the N $\times $ k design matrix; $\bbeta $ is the k $\times $ p matrix of model parameters that correspond to the columns of $\mb{X}$ and $\mb{Y}$; and $\bepsilon $ is an N $\times $ p matrix of errors, whose rows satisfy

\[ \epsilon _1, \ldots , \epsilon _ N \sim \mr{N}(0,\bSigma ) \quad \mr{(iid)} \]

In PROC GLMPOWER, the model parameters $\bbeta $ are not specified directly, but rather indirectly as $\mb{Y^\star }$, which represents either conjectured response means or typical response values for each design profile. The $\mb{Y^\star }$ values are specified as the dependent variables in the MODEL statement. The matrix $\bbeta $ is obtained from $\mb{Y^\star }$ according to the least squares equation,

\[ \bbeta = (\mb{X}’\mb{X})^{-}\mb{X}’ \mb{Y^\star } \]

Note that, in general, there is not a one-to-one mapping between $\mb{Y^\star }$ and $\bbeta $. Many different scenarios for $\mb{Y^\star }$ might lead to the same $\bbeta $. If you specify $\mb{Y^\star }$ with the intention of representing cell means, keep in mind that PROC GLMPOWER allows scenarios that are not valid cell means according to the model that is specified in the MODEL statement. For example, if $\mb{Y^\star }$ exhibits an interaction effect but the corresponding interaction term is left out of the model, then the cell means ($\mb{X} \bbeta $) that are derived from $\bbeta $ differ from $\mb{Y^\star }$. In particular, the cell means that are derived in this way are the projection of $\mb{Y^\star }$ onto the model space.
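This projection can be checked numerically. The following sketch (illustrative numbers only, not PROC GLMPOWER syntax) fits a main-effects-only design to a $\mb{Y^\star }$ that contains an interaction pattern; the derived cell means are the projection of $\mb{Y^\star }$ onto the model space and so differ from $\mb{Y^\star }$:

```python
import numpy as np

# Hypothetical 2x2 factorial, main-effects-only model (effects coding):
# columns = intercept, factor A, factor B
X = np.array([[1.0,  1.0,  1.0],
              [1.0,  1.0, -1.0],
              [1.0, -1.0,  1.0],
              [1.0, -1.0, -1.0]])

# Conjectured cell means that exhibit an interaction pattern
y_star = np.array([[10.0], [12.0], [12.0], [18.0]])

# beta = (X'X)^- X' Y*  (pinv supplies a generalized inverse)
beta = np.linalg.pinv(X.T @ X) @ X.T @ y_star
cell_means = X @ beta  # projection of Y* onto the model space

# cell means are [9, 13, 13, 17], which differ from y_star because
# the interaction term is omitted from the model
print(cell_means.ravel())
```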

It is convenient in power analysis to parameterize the design matrix $\mb{X}$ in three parts, $\{ \ddot{\mb{X}}, \mb{W}, N\} $, defined as follows:

  1. The q $\times $ k essence design matrix $\ddot{\mb{X}}$ is the collection of unique rows of $\mb{X}$. Its rows are sometimes referred to as "design profiles." Here, q $\le $ N is defined simply as the number of unique rows of $\mb{X}$.

  2. The q $\times $ 1 weight vector $\mb{w}$ gives the relative proportions of the design profiles, and $\mb{W} = \mr{diag}(\mb{w})$. Row i of $\ddot{\mb{X}}$ appears in the design $w_ i$ times for every $w_ j$ times that row j appears. The weights are assumed to be standardized (that is, they sum to 1).

  3. The total sample size is N. This is the number of rows in $\mb{X}$. If you gather $N w_ i = n_ i$ copies of the ith row of $\ddot{\mb{X}}$, for i = $1,\ldots ,q$, then you end up with $\mb{X}$.

The preceding quantities are derived from PROC GLMPOWER syntax as follows:

  • Values for $\ddot{\mb{X}}$, $\mb{Y^\star }$, and $\mb{w}$ are specified in the exemplary data set (by using the DATA= option in the PROC GLMPOWER statement), and the corresponding variables are identified in the CLASS, MODEL, and WEIGHT statements.

  • N is specified in the NTOTAL= option in the POWER statement.

It is useful to express the crossproduct matrix $\mb{X}’\mb{X}$ in terms of these three parts,

\[ \mb{X}’\mb{X} = N \ddot{\mb{X}}’ \mb{W} \ddot{\mb{X}} \]

because this expression separates the portion (N) that depends on the sample size from the portion ($\ddot{\mb{X}}’ \mb{W} \ddot{\mb{X}}$) that depends only on the design structure.
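As a quick numeric check of this factorization (made-up essence matrix and weights; note that each $N w_ i$ must be a whole number for the expansion to make sense):

```python
import numpy as np

# Sketch: the crossproduct matrix factors as X'X = N * Xdd' W Xdd,
# where Xdd is the essence design matrix and W = diag(w).
Xdd = np.array([[1.0, 1.0],
                [1.0, 0.0]])        # q = 2 design profiles, k = 2
w = np.array([0.25, 0.75])          # standardized weights (sum to 1)
N = 8                               # total sample size

# Expand the essence matrix into the full N x k design matrix X
counts = (N * w).astype(int)        # n_i = N * w_i copies of row i
X = np.repeat(Xdd, counts, axis=0)

lhs = X.T @ X
rhs = N * Xdd.T @ np.diag(w) @ Xdd
print(np.allclose(lhs, rhs))  # -> True
```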

A general linear hypothesis for the multivariate model has the form

\begin{align*} H_0\colon & \mb{L} \bbeta \mb{M} = \btheta _0 \\ H_ A\colon & \mb{L} \bbeta \mb{M} \ne \btheta _0 \end{align*}

where $\mb{L}$ is an l $\times $ k between-subject contrast matrix with rank $r_ L$, $\mb{M}$ is a p $\times $ m within-subject contrast matrix with rank $r_ M$, and $\btheta _0$ is an l $\times $ m null contrast matrix (usually just a matrix of zeros).

Note that model effect tests are just between-subject contrasts that use special forms of $\mb{L}$, combined with an $\mb{M}$ that is the p $\times $ 1 mean transformation vector of the dependent variables (a vector of values all equal to $1/p$). Thus, this scheme covers both effect tests (which are specified in the MODEL statement and the EFFECTS= option in the POWER statement) and custom between-subject contrasts (which are specified in the CONTRAST statement).

The $\mb{M}$ matrix is often referred to as the dependent variable transformation and is specified in the MANOVA or REPEATED statement.

The model degrees of freedom $\mr{DF_ M}$ are equal to the rank of $\mb{X}$, denoted $r_ X$. The error degrees of freedom $\mr{DF_ E}$ are equal to $N - r_ X$.

The hypothesis sum of squares $\mr{SS_ H}$ in the univariate model generalizes to the hypothesis SSCP matrix in the multivariate model,

\[ \mb{H} = \left( \mb{L} \hat{\bbeta } \mb{M} - \btheta _0 \right)’ \left( \mb{L} \left(\mb{X}’\mb{X}\right)^{-} \mb{L}’ \right)^{-1} \left( \mb{L} \hat{\bbeta } \mb{M} - \btheta _0 \right) \]

The error sum of squares $\hat{\sigma }^2 (N - r_ X)$ in the univariate model generalizes to the error SSCP matrix in the multivariate model,

\[ \mb{E} = (N - r_ X) \mb{M}’ \hat{\bSigma } \mb{M} \]

where

\[ \hat{\bSigma } = \left( \mb{Y} - \mb{X} \hat{\bbeta } \right)’ \left( \mb{Y} - \mb{X} \hat{\bbeta } \right) / (N - r_ X) \]

and

\[ \hat{\bbeta } = (\mb{X}’\mb{X})^{-}\mb{X}’ \mb{Y} \]

The population counterpart of $\mb{H}/N$ is

\[ \mb{H^\star } = \left( \mb{L} \bbeta \mb{M} - \btheta _0 \right)’ \left( \mb{L} \left(\ddot{\mb{X}}’ \mb{W} \ddot{\mb{X}}\right)^{-} \mb{L}’ \right)^{-1} \left( \mb{L} \bbeta \mb{M} - \btheta _0 \right) \]

and the population counterpart of $\mb{E}/N$ is

\[ \mb{E}^\star = \mb{M}’ \bSigma \mb{M} \]

The elements of $\bSigma $ are specified in the MATRIX= and STDDEV= options and identified in the CORRMAT=, CORRS=, COVMAT=, and SQRTVAR= options in the POWER statement.

The power and sample size computations for all the tests that are supported in the MTEST= option in the POWER statement are based on $\mb{H^\star }$ and $\mb{E}^\star $. The following two subsections cover the computational methods and formulas for the multivariate and univariate tests that are supported in the MTEST= and UEPSDEF= options in the POWER statement.

Multivariate Tests

Power computations for multivariate tests are based on O’Brien and Shieh (1992) (for METHOD= OBRIENSHIEH) and Muller and Peterson (1984) (for METHOD= MULLERPETERSON).

Let s = $\min (r_ L, r_ M)$, the smaller of the between-subject and within-subject contrast degrees of freedom. Critical value computations assume that under $H_0$, the test statistic F is distributed as $F(r_ L r_ M, \nu _2)$, where $\nu _2 = (N - r_ X) - r_ M + 1$ if s = 1 but depends on the choice of test if s > 1. Power computations assume that under $H_ A$, F is distributed as $F(r_ L r_ M, \nu _2, \lambda )$, where the noncentrality $\lambda $ depends on $r_ L$, $r_ M$, the choice of test, and the power computation method.

Formulas for the test statistic F, denominator degrees of freedom $\nu _2$, and noncentrality $\lambda $ for all combinations of dimensions, tests, and methods are given in the following subsections.

The power in each case is computed as

\[ \mr{power} = P\left(F(r_ L r_ M, \nu _2, \lambda ) \ge F_{1-\alpha }(r_ L r_ M, \nu _2)\right) \]

Computed power is exact for some cases and approximate for others. Sample size is computed by inverting the power equation.
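A minimal sketch of this power equation using SciPy's central and noncentral F distributions (the degrees of freedom, noncentrality, and alpha below are illustrative, not values from the text):

```python
from scipy import stats

nu1 = 2 * 3          # r_L * r_M numerator degrees of freedom (made up)
nu2 = 40.0           # denominator degrees of freedom (made up)
lam = 12.0           # noncentrality (made up)
alpha = 0.05

crit = stats.f.ppf(1 - alpha, nu1, nu2)     # F_{1-alpha}(nu1, nu2)
power = stats.ncf.sf(crit, nu1, nu2, lam)   # P(F(nu1, nu2, lam) >= crit)
print(round(power, 4))
```

Sample size determination then amounts to searching for the smallest N whose implied $\nu _2$ and $\lambda $ make this probability reach the target power.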

Let $\bDelta = {\mb{E}}^{-1} \mb{H}$, and define $\bphi $ as the $s \times 1$ vector of ordered positive eigenvalues of $\bDelta $, $\bphi = \{  \phi _1, \ldots , \phi _ s \} $, where $\phi _1 \ge \cdots \ge \phi _ s > 0$. The population equivalent is

\begin{equation*} \begin{split} \bDelta ^\star & = {\mb{E}^\star }^{-1} \mb{H^\star } \\ & = \left( \mb{M}’ \bSigma \mb{M} \right)^{-1} \left( \mb{L} \bbeta \mb{M} - \btheta _0 \right)’ \left( \mb{L} \left(\ddot{\mb{X}}’ \mb{W} \ddot{\mb{X}}\right)^{-} \mb{L}’ \right)^{-1} \left( \mb{L} \bbeta \mb{M} - \btheta _0 \right) \end{split}\end{equation*}

and $\bphi ^\star $ is defined analogously as the s $\times $ 1 vector of ordered positive eigenvalues of $\bDelta ^\star $, $\bphi ^\star = \{  \phi _1^\star , \ldots , \phi _ s^\star \} $, where $\phi _1^\star \ge \cdots \ge \phi _ s^\star > 0$.
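A short sketch of extracting $\bphi ^\star $ from made-up $\mb{E}^\star $ and $\mb{H}^\star $ matrices (illustrative values; for positive definite inputs the eigenvalues of $\bDelta ^\star $ are real and positive):

```python
import numpy as np

# Made-up symmetric positive definite matrices for demonstration
E_star = np.array([[2.0, 0.5],
                   [0.5, 1.0]])
H_star = np.array([[1.0, 0.2],
                   [0.2, 0.5]])

Delta_star = np.linalg.inv(E_star) @ H_star
eigvals = np.linalg.eigvals(Delta_star).real
phi_star = np.sort(eigvals[eigvals > 1e-12])[::-1]  # phi_1* >= ... >= phi_s* > 0
print(phi_star)
```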

Case 1: s = 1

When s = 1, all three multivariate tests (MTEST= HLT, MTEST= PT, and MTEST= WILKS) are equivalent. The test statistic is F = $\phi _1 \nu _2 / (r_ L r_ M)$, where $\nu _2 = (N - r_ X) - r_ M + 1$.

When the dependent variable transformation has a single degree of freedom ($r_ M = 1$), METHOD= OBRIENSHIEH and METHOD= MULLERPETERSON are the same, computing exact power by using noncentrality $\lambda = N \bDelta ^\star $. The sample size must satisfy $N \ge r_ X + 1$.

When the dependent variable transformation has more than one degree of freedom but the between-subject contrast has a single degree of freedom ($r_ M > 1, r_ L = 1$), METHOD= OBRIENSHIEH computes exact power by using noncentrality $\lambda = N \phi _1^\star $, and METHOD= MULLERPETERSON computes approximate power by using

\[ \lambda = \frac{(N - r_ X) - r_ M + 1}{(N - r_ X)} N \phi _1^\star \]

The sample size must satisfy $N \ge r_ X + r_ M$.

Case 2: s > 1

When both the dependent variable transformation and the between-subject contrast have more than one degree of freedom (s > 1), METHOD= OBRIENSHIEH computes the noncentrality as $\lambda = N \lambda ^\star $, where $\lambda ^\star $ is the primary noncentrality. The form of $\lambda ^\star $ depends on the choice of test statistic.

METHOD= MULLERPETERSON computes the noncentrality as $\lambda = \nu _2 {\lambda ^{(\mr{MP})}}^\star $, where ${\lambda ^{(\mr{MP})}}^\star $ has the same form as $\lambda ^\star $ except that $\bphi ^\star $ is replaced by

\[ {\bphi ^{(\mr{MP})}}^\star = \frac{N}{(N - r_ X)} \bphi ^\star \]

Computed power is approximate for both methods when s > 1.

Hotelling-Lawley Trace (MTEST= HLT) When s > 1

If $N > r_ X + r_ M + 1$, then the denominator degrees of freedom for the Hotelling-Lawley trace are $\nu _2 = \nu _{2a}$,

\[ \nu _{2a} = 4 + (r_ L r_ M + 2)g \]

where

\[ g = \frac{(N - r_ X)^2 - (N - r_ X)(2r_ M+3) + r_ M(r_ M+3)}{(N - r_ X)(r_ L+r_ M+1) - (r_ L+2r_ M+r_ M^2-1)} \]

which is the same as $\nu _2^{(T_2)}$ in O’Brien and Shieh (1992) and is due to McKeon (1974).

If $N \le r_ X + r_ M + 1$, then $\nu _2 = \nu _{2b}$,

\[ \nu _{2b} = s ((N - r_ X) - r_ M - 1) + 2 \]

which is the same as both $\nu _2^{(T_1)}$ in O’Brien and Shieh (1992) and $\nu _2$ in Muller and Peterson (1984) and is due to Pillai and Samson (1959).

The primary noncentrality is

\[ \lambda ^\star = \sum _{i=1}^ s \phi _ i^\star \]

The sample size must satisfy

\[ N \ge r_ X + r_ M + 1 - 1/s \]

If $N > r_ X + r_ M + 1$, then the test statistic is

\[ F = \frac{U/\nu _1}{c/\nu _{2a}} \]

where

\begin{equation*} \begin{split} U & = \mr{trace}(\mb{E}^{-1} \mb{H}) \\ & = \sum _{i=1}^ s \phi _ i \end{split}\end{equation*}

and

\[ c = \frac{2 + (r_ L r_ M + 2)g}{N-r_ X-r_ M-1} \]

If $N \le r_ X + r_ M + 1$, then the test statistic is

\[ F = \frac{U/\nu _1}{s/\nu _{2b}} \]
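The Hotelling-Lawley degrees of freedom and noncentrality above can be sketched as follows for METHOD=OBRIENSHIEH (the dimensions and eigenvalues are illustrative, not from the text):

```python
import numpy as np

N, r_X, r_L, r_M = 40, 4, 3, 2
phi_star = np.array([0.6, 0.2])   # made-up ordered eigenvalues of Delta*
s = min(r_L, r_M)

ne = N - r_X                      # error degrees of freedom
g = (ne**2 - ne*(2*r_M + 3) + r_M*(r_M + 3)) / \
    (ne*(r_L + r_M + 1) - (r_L + 2*r_M + r_M**2 - 1))
if N > r_X + r_M + 1:
    nu2 = 4 + (r_L*r_M + 2)*g     # McKeon (1974) denominator df
else:
    nu2 = s*(ne - r_M - 1) + 2    # Pillai and Samson (1959) denominator df

lam_star = phi_star.sum()         # primary noncentrality
lam = N * lam_star                # O'Brien-Shieh noncentrality
print(nu2, lam)
```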

Pillai’s Trace (MTEST= PT) When s > 1

The denominator degrees of freedom for Pillai’s trace are

\[ \nu _2 = s ((N - r_ X) + s - r_ M) \]

The primary noncentrality is

\[ \lambda ^\star = s \left( \frac{\sum _{i=1}^ s \frac{\phi _ i^\star }{1+\phi _ i^\star }}{s - \sum _{i=1}^ s \frac{\phi _ i^\star }{1+\phi _ i^\star }} \right) \]

The sample size must satisfy

\[ N \ge r_ X + r_ M + 1/s - s \]

The test statistic is

\[ F = \frac{V/\nu _1}{(s-V)/\nu _2} \]

where

\begin{equation*} \begin{split} V & = \mr{trace}\left(\mb{H}(\mb{H}+\mb{E})^{-1}\right) \\ & = \sum _{i=1}^ s \frac{\phi _ i}{1+\phi _ i} \end{split}\end{equation*}
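The corresponding quantities for Pillai's trace can be sketched the same way (same illustrative inputs as above):

```python
import numpy as np

N, r_X, r_L, r_M = 40, 4, 3, 2
phi_star = np.array([0.6, 0.2])   # made-up ordered eigenvalues of Delta*
s = min(r_L, r_M)

nu2 = s * ((N - r_X) + s - r_M)               # denominator df
v_star = np.sum(phi_star / (1 + phi_star))    # population Pillai trace
lam_star = s * v_star / (s - v_star)          # primary noncentrality
print(nu2, lam_star)
```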

Wilks’ Lambda (MTEST= WILKS) When s > 1

The denominator degrees of freedom for Wilks’ lambda are

\[ \nu _2 = t \left[ (N - r_ X) - 0.5(r_ M-r_ L+1) \right] - 0.5(r_ L r_ M - 2) \]

where

\begin{equation*} t = \begin{cases} 1 & \text {if } r_ L r_ M \le 3 \\ \left[ \frac{(r_ L r_ M)^2 - 4}{r_ L^2 + r_ M^2 - 5} \right]^\frac {1}{2} & \text {if } r_ L r_ M \ge 4 \end{cases}\end{equation*}

The primary noncentrality is

\[ \lambda ^\star = t \left[ \left(\prod _{i=1}^ s\left[ (1+\phi _ i^\star )^{-1} \right] \right)^{-\frac{1}{t}} - 1 \right] \]

The sample size must satisfy

\[ N \ge (1 + 0.5(r_ L r_ M - 2))/t + r_ X + (r_ M - r_ L + 1)/2 \]

The test statistic is

\[ F = \frac{(1-\Lambda ^{1/t})/\nu _1}{\Lambda ^{1/t}/\nu _2} \]

where

\begin{equation*} \begin{split} \Lambda & = \mr{det}(\mb{E})/\mr{det}(\mb{H} + \mb{E}) \\ & = \prod _{i=1}^ s\left[ (1+\phi _ i)^{-1} \right] \end{split}\end{equation*}
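The Wilks' lambda quantities can be sketched as follows (same illustrative inputs as in the previous subsections):

```python
import numpy as np

N, r_X, r_L, r_M = 40, 4, 3, 2
phi_star = np.array([0.6, 0.2])   # made-up ordered eigenvalues of Delta*

if r_L * r_M <= 3:
    t = 1.0
else:
    t = np.sqrt(((r_L*r_M)**2 - 4) / (r_L**2 + r_M**2 - 5))

nu2 = t*((N - r_X) - 0.5*(r_M - r_L + 1)) - 0.5*(r_L*r_M - 2)
Lambda_star = np.prod(1.0 / (1.0 + phi_star))    # population Wilks' lambda
lam_star = t * (Lambda_star**(-1.0/t) - 1.0)     # primary noncentrality
print(t, nu2, lam_star)
```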

Univariate Tests

Power computations for univariate tests are based on Muller et al. (2007) and Muller and Barton (1989).

The test statistic is

\[ F = \frac{\mr{trace}(\mb{H}) / r_ L}{\mr{trace}(\mb{E}) / (N - r_ X)} \]

Critical value computations assume that under $H_0$, F is distributed as $F(\nu _1, \nu _2)$, where $\nu _1$ and $\nu _2$ depend on the choice of test.

The four tests for the univariate approach to repeated measures differ in their assumptions about the sphericity $\varepsilon $ of $\mb{E}^\star $,

\[ \varepsilon = \frac{\mr{trace}^2(\mb{E}^\star )}{r_ M \mr{trace}({\mb{E}^\star }^2)} \]
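The sphericity is a scalar between $1/r_ M$ and 1 and can be computed directly from $\mb{E}^\star $, as in this sketch (made-up matrix):

```python
import numpy as np

# Made-up symmetric positive definite E* for demonstration
E_star = np.array([[2.0, 0.5],
                   [0.5, 1.0]])
r_M = E_star.shape[0]

# epsilon = trace(E*)^2 / (r_M * trace(E*^2))
eps = np.trace(E_star)**2 / (r_M * np.trace(E_star @ E_star))
print(eps)
```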

Power computations assume that under $H_ A$, F is distributed as $F(\nu _1^\star , \nu _2^\star , \lambda )$.

Formulas for $\nu _1$ and $\nu _2$ for each test and formulas for $\nu _1^\star $, $\nu _2^\star $, and $\lambda $ are given in the following subsections.

The power in each case is approximated as

\[ \mr{power} = P\left(F(\nu _1^\star , \nu _2^\star , \lambda ) \ge F_{1-\alpha }(\nu _1, \nu _2)\right) \]

Sample size is computed by inverting the power equation.

The sample size must be large enough to yield $\nu _1 > 0$, $\nu _1^\star > 0$, $\nu _2 \ge 1$, and $\nu _2^\star \ge 1$.

Because these univariate tests are biased, the achieved significance level might differ from the nominal significance level. The actual alpha is computed in the same way as the power, except that the noncentrality parameter $\lambda $ is set to 0.

Define $\bphi ^{(\mr{E})}$ as the vector of ordered eigenvalues of $\mb{E}^\star $, $\bphi ^{(\mr{E})} = \{  \phi _1^{(\mr{E})}, \ldots , \phi _{r_ M}^{(\mr{E})}\} $, where $\phi _1^{(\mr{E})} \ge \cdots \ge \phi _{r_ M}^{(\mr{E})}$, and define $\bgamma _ j^{(\mr{E})}$ as the jth eigenvector of $\mb{E}^\star $. Critical values and power computations are based on the following intermediate parameters:

\[ \omega _{*j} = N \left(\bgamma _ j^{(\mr{E})}\right)’ \mb{H}^\star \bgamma _ j^{(\mr{E})} / \phi _ j^{(\mr{E})} \]
\begin{align*} S_{t1} & = \sum _{j=1}^{r_ M} \phi _ j^{(\mr{E})} \\ S_{t2} & = \sum _{j=1}^{r_ M} \phi _ j^{(\mr{E})} \omega _{*j} \\ S_{t3} & = \sum _{j=1}^{r_ M} \left( \phi _ j^{(\mr{E})} \right)^2 \\ S_{t4} & = \sum _{j=1}^{r_ M} \left( \phi _ j^{(\mr{E})} \right)^2 \omega _{*j} \end{align*}
\begin{align*} R_{*1} & = \frac{r_ L S_{t3} + 2 S_{t4}}{r_ L S_{t1} + 2 S_{t2}} \\ R_{*2} & = \frac{S_{t3}}{S_{t1}} \end{align*}
\begin{align*} \mr{E}(t_1) & = 2 (N - r_ X) S_{t3} + (N - r_ X)^2 S_{t1}^2 \\ \mr{E}(t_2) & = (N - r_ X) ((N - r_ X) + 2) S_{t3} + 2 (N - r_ X) \sum _{j_1=2}^{r_ M} \sum _{j_2=1}^{j_1-1} \phi _{j_1}^{(\mr{E})} \phi _{j_2}^{(\mr{E})} \end{align*}

The degrees of freedom and noncentrality in the noncentral F approximation of the test statistic are computed as follows:

\[ \nu _1^\star = \frac{r_ L S_{t1}}{R_{*1}} \]
\[ \nu _2^\star = \frac{(N - r_ X) S_{t1}}{R_{*2}} \]
\[ \lambda = \frac{S_{t2}}{R_{*1}} \]
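These intermediate parameters can be sketched as follows. The eigenvalues $\phi _ j^{(\mr{E})}$ and the $\omega _{*j}$ values below are made up for illustration rather than derived from specific $\mb{E}^\star $ and $\mb{H}^\star $ matrices:

```python
import numpy as np

N, r_X, r_L, r_M = 40, 4, 3, 2
phi_E = np.array([1.5, 0.5])   # made-up eigenvalues of E*
omega = np.array([8.0, 2.0])   # made-up omega_{*j} values

St1 = phi_E.sum()
St2 = (phi_E * omega).sum()
St3 = (phi_E**2).sum()
St4 = (phi_E**2 * omega).sum()

R1 = (r_L*St3 + 2*St4) / (r_L*St1 + 2*St2)
R2 = St3 / St1

nu1_star = r_L * St1 / R1          # numerator df under H_A
nu2_star = (N - r_X) * St1 / R2    # denominator df under H_A
lam = St2 / R1                     # noncentrality
print(nu1_star, nu2_star, lam)
```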

Uncorrected Test (MTEST= UNCORR)

The uncorrected test assumes sphericity $\varepsilon = 1$, in which case the null F distribution is exact, with the following degrees of freedom:

\begin{align*} \nu _1 & = r_ L r_ M \\ \nu _2 & = r_ M (N - r_ X) \end{align*}

Greenhouse-Geisser Adjustment (MTEST= GG)

The Greenhouse-Geisser adjustment to the uncorrected test scales the degrees of freedom by the maximum likelihood estimate (MLE) $\hat{\varepsilon }$ of the sphericity,

\[ \hat{\varepsilon } = \frac{\mr{trace}^2(\mb{E})}{r_ M \mr{trace}(\mb{E}^2)} \]

An approximation for the expected value of $\hat{\varepsilon }$ is used to compute the degrees of freedom for the null F distribution,

\begin{align*} \nu _1 & = r_ L r_ M \mr{E}(\hat{\varepsilon }) \\ \nu _2 & = r_ M (N - r_ X) \mr{E}(\hat{\varepsilon }) \end{align*}

where

\[ \mr{E}(\hat{\varepsilon }) = \frac{\mr{E}(t_1)}{r_ M \mr{E}(t_2)} \]

Huynh-Feldt Adjustments (MTEST= HF)

The Huynh-Feldt adjustment scales the degrees of freedom by a nearly unbiased estimate $\tilde{\varepsilon }$ of the sphericity,

\begin{equation*} \tilde{\varepsilon } = \begin{cases} \frac{N r_ M \hat{\varepsilon } - 2}{r_ M \left[ (N - r_ X) - r_ M \hat{\varepsilon } \right]} & \text {if UEPSDEF=HF}\\ \frac{(N-r_ X+1) r_ M \hat{\varepsilon } - 2}{r_ M \left[ (N - r_ X) - r_ M \hat{\varepsilon } \right]} & \text {if UEPSDEF=HFL}\\ \left(\frac{(\nu _ a-2)(\nu _ a-4)}{\nu _ a^2}\right) \left(\frac{(N-r_ X+1) r_ M \hat{\varepsilon } - 2}{r_ M \left[ (N - r_ X) - r_ M \hat{\varepsilon } \right]}\right) & \text {if UEPSDEF=CM} \end{cases}\end{equation*}

where

\[ \nu _ a = (N-r_ X-1) + (N-r_ X)(N-r_ X-1)/2 \]

The value of $\tilde{\varepsilon }$ is truncated if necessary to be at least $1/r_ M$ and at most 1.
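The three UEPSDEF variants and the truncation can be sketched as follows (illustrative dimensions and a made-up value for the MLE $\hat{\varepsilon }$):

```python
N, r_X, r_M = 40, 4, 2
eps_hat = 0.8                 # made-up MLE of the sphericity
ne = N - r_X

# The three Huynh-Feldt-type estimates
hf  = (N*r_M*eps_hat - 2) / (r_M*(ne - r_M*eps_hat))
hfl = ((ne + 1)*r_M*eps_hat - 2) / (r_M*(ne - r_M*eps_hat))
nu_a = (ne - 1) + ne*(ne - 1)/2
cm  = ((nu_a - 2)*(nu_a - 4) / nu_a**2) * hfl

# Truncate each estimate to the valid range [1/r_M, 1]
estimates = {k: min(max(v, 1.0/r_M), 1.0)
             for k, v in {"HF": hf, "HFL": hfl, "CM": cm}.items()}
print(estimates)
```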

An approximation for the expected value of $\tilde{\varepsilon }$ is used to compute the degrees of freedom for the null F distribution,

\begin{align*} \nu _1 & = r_ L r_ M \mr{E_ t}(\tilde{\varepsilon }) \\ \nu _2 & = r_ M (N - r_ X) \mr{E_ t}(\tilde{\varepsilon }) \end{align*}

where

\[ \mr{E_ t}(\tilde{\varepsilon }) = \min (\max (\mr{E}(\tilde{\varepsilon }), 1/r_ M), 1) \]

and

\begin{equation*} \mr{E}(\tilde{\varepsilon }) = \begin{cases} \frac{N \mr{E}(t_1) - 2 \mr{E}(t_2)}{r_ M \left[ (N - r_ X) \mr{E}(t_2) - \mr{E}(t_1) \right]} & \text {if UEPSDEF=HF}\\ \frac{(N-r_ X+1) \mr{E}(t_1) - 2 \mr{E}(t_2)}{r_ M \left[ (N - r_ X) \mr{E}(t_2) - \mr{E}(t_1) \right]} & \text {if UEPSDEF=HFL}\\ \left(\frac{(\nu _ a-2)(\nu _ a-4)}{\nu _ a^2}\right) \left(\frac{(N-r_ X+1) \mr{E}(t_1) - 2 \mr{E}(t_2)}{r_ M \left[ (N - r_ X) \mr{E}(t_2) - \mr{E}(t_1) \right]}\right) & \text {if UEPSDEF=CM} \end{cases}\end{equation*}

Box Conservative Test (MTEST= BOX)

The Box conservative test assumes the worst case for sphericity, $\varepsilon = 1/r_ M$, leading to the following degrees of freedom for the null F distribution:

\begin{align*} \nu _1 & = r_ L \\ \nu _2 & = (N - r_ X) \end{align*}