The POWER Procedure

Analyses in the PAIREDMEANS Statement

Paired t Test (TEST=DIFF)

The hypotheses for the paired t test are

\begin{align*}  H_{0}\colon & \mu _\mr {diff}=\mu _0 \\ H_{1}\colon & \left\{  \begin{array}{ll} \mu _\mr {diff} \ne \mu _0, &  \mbox{two-sided} \\ \mu _\mr {diff} > \mu _0, &  \mbox{upper one-sided} \\ \mu _\mr {diff} < \mu _0, &  \mbox{lower one-sided} \\ \end{array} \right. \\ \end{align*}

The test assumes normally distributed data and requires $N \ge 2$. The test statistics are

\begin{align*}  t & = N^\frac {1}{2} \left( \frac{\bar{d}-\mu _0}{s_ d} \right) \quad \thicksim t(N-1, \delta ) \\ t^2 & \thicksim F(1, N-1, \delta ^2) \\ \end{align*}

where $\bar{d}$ and $s_ d$ are the sample mean and standard deviation of the differences and

\[  \delta = N^\frac {1}{2} \left( \frac{\mu _\mr {diff}-\mu _0}{\sigma _\mr {diff}} \right)  \]

and

\[  \sigma _\mr {diff} = \left(\sigma _1^2 + \sigma _2^2 - 2\rho \sigma _1\sigma _2\right)^\frac {1}{2}  \]
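
For example, a short DATA step (with purely illustrative values) evaluates these two formulas directly:

   data _null_;
      N = 30; mu_diff = 7; mu0 = 0;         /* illustrative values */
      sigma1 = 12; sigma2 = 15; rho = 0.4;
      /* standard deviation of the differences */
      sigma_diff = sqrt(sigma1**2 + sigma2**2 - 2*rho*sigma1*sigma2);
      /* noncentrality of the paired t statistic */
      delta = sqrt(N) * (mu_diff - mu0) / sigma_diff;
      put sigma_diff= delta=;
   run;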

The test is

\[  \mbox{Reject} \quad H_0 \quad \mbox{if} \left\{  \begin{array}{ll} t^2 \ge F_{1-\alpha }(1, N-1), &  \mbox{two-sided} \\ t \ge t_{1-\alpha }(N-1), &  \mbox{upper one-sided} \\ t \le t_{\alpha }(N-1), &  \mbox{lower one-sided} \\ \end{array} \right.  \]

Exact power computations for t tests are given in O’Brien and Muller (1993, Section 8.2.2):

\begin{align*}  \mr {power} & = \left\{  \begin{array}{ll} P\left(F(1, N-1, \delta ^2) \ge F_{1-\alpha }(1, N-1)\right), &  \mbox{two-sided} \\ P\left(t(N-1, \delta ) \ge t_{1-\alpha }(N-1)\right), &  \mbox{upper one-sided} \\ P\left(t(N-1, \delta ) \le t_{\alpha }(N-1)\right), &  \mbox{lower one-sided} \\ \end{array} \right. \\ \end{align*}
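
For illustration, the following statements are a minimal sketch (the PAIREDMEANS options and numeric values are chosen arbitrarily and are not discussed in this section) that solves for the power of a two-sided paired t test; the DATA step then evaluates the same exact power expressions through the noncentral t and F distribution functions:

   proc power;
      pairedmeans test=diff
         sides    = 2
         alpha    = 0.05
         meandiff = 7
         nulldiff = 0
         stddev   = 12
         corr     = 0.4
         npairs   = 30
         power    = .;
   run;

   data _null_;
      alpha = 0.05; N = 30; delta = 1.5;   /* illustrative values */
      /* two-sided: P( F(1, N-1, delta^2) >= F_{1-alpha}(1, N-1) ) */
      power_2sided = sdf('F', quantile('F', 1-alpha, 1, N-1), 1, N-1, delta**2);
      /* upper one-sided: P( t(N-1, delta) >= t_{1-alpha}(N-1) ) */
      power_upper = sdf('T', quantile('T', 1-alpha, N-1), N-1, delta);
      put power_2sided= power_upper=;
   run;
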
Paired t Test for Mean Ratio with Lognormal Data (TEST=RATIO)

The lognormal case is handled by reexpressing the analysis equivalently as a normality-based test on the log-transformed data, using properties of the lognormal distribution as discussed in Johnson, Kotz, and Balakrishnan (1994, Chapter 14). The approaches in the section Paired t Test (TEST=DIFF) then apply.

In contrast to the usual t test on normal data, the hypotheses with lognormal data are defined in terms of geometric means rather than arithmetic means.

The hypotheses for the paired t test with lognormal pairs $\{ Y_1, Y_2\} $ are

\begin{align*}  H_{0}\colon & \frac{\gamma _2}{\gamma _1} = \gamma _0 \\ H_{1}\colon & \left\{  \begin{array}{ll} \frac{\gamma _2}{\gamma _1} \ne \gamma _0, &  \mbox{two-sided} \\ \frac{\gamma _2}{\gamma _1} > \gamma _0, &  \mbox{upper one-sided} \\ \frac{\gamma _2}{\gamma _1} < \gamma _0, &  \mbox{lower one-sided} \\ \end{array} \right. \\ \end{align*}

Let $\mu _1^\star $, $\mu _2^\star $, $\sigma _1^\star $, $\sigma _2^\star $, and $\rho ^\star $ be the (arithmetic) means, standard deviations, and correlation of the bivariate normal distribution of the log-transformed data $\{ \log Y_1, \log Y_2\} $. The hypotheses can be rewritten as follows:

\begin{align*}  H_{0}\colon & \mu _2^\star - \mu _1^\star = \log (\gamma _0) \\ H_{1}\colon & \left\{  \begin{array}{ll} \mu _2^\star - \mu _1^\star \ne \log (\gamma _0), &  \mbox{two-sided} \\ \mu _2^\star - \mu _1^\star > \log (\gamma _0), &  \mbox{upper one-sided} \\ \mu _2^\star - \mu _1^\star < \log (\gamma _0), &  \mbox{lower one-sided} \\ \end{array} \right. \\ \end{align*}

where

\begin{align*}  \mu _1^\star & = \log \gamma _1 \\ \mu _2^\star & = \log \gamma _2 \\ \sigma _1^\star & = \left[ \log (\mr {CV}_1^2 + 1) \right]^\frac {1}{2} \\ \sigma _2^\star & = \left[ \log (\mr {CV}_2^2 + 1) \right]^\frac {1}{2} \\ \rho ^\star & = \frac{\log \left\{  \rho \mr {CV}_1 \mr {CV}_2 + 1 \right\} }{\sigma _1^{\star } \sigma _2^{\star }} \\ \end{align*}

and $\mr {CV}_1$, $\mr {CV}_2$, and $\rho $ are the coefficients of variation and the correlation of the original untransformed pairs $\{ Y_1, Y_2\} $. The conversion from $\rho $ to $\rho ^\star $ is given by equation (44.36) on page 27 of Kotz, Balakrishnan, and Johnson (2000) and is due to Jones and Miller (1966).
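
As a sketch of this conversion (CV and correlation values chosen arbitrarily):

   data _null_;
      cv1 = 0.5; cv2 = 0.6; rho = 0.3;     /* illustrative values */
      /* log-scale standard deviations */
      sigma1_star = sqrt(log(cv1**2 + 1));
      sigma2_star = sqrt(log(cv2**2 + 1));
      /* log-scale correlation (Jones and Miller conversion) */
      rho_star = log(rho*cv1*cv2 + 1) / (sigma1_star * sigma2_star);
      put sigma1_star= sigma2_star= rho_star=;
   run;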

The valid range of $\rho $ is restricted to $(\rho _ L, \rho _ U)$, where

\begin{align*}  \rho _ L & = \frac{\exp \left(-\left[ \log (\mr {CV}_1^2+1) \log (\mr {CV}_2^2+1) \right]^\frac {1}{2} \right) - 1}{\mr {CV}_1 \mr {CV}_2} \\ \rho _ U & = \frac{\exp \left(\left[ \log (\mr {CV}_1^2+1) \log (\mr {CV}_2^2+1) \right]^\frac {1}{2}\right) - 1}{\mr {CV}_1 \mr {CV}_2} \end{align*}

These bounds are computed from equation (44.36) on page 27 of Kotz, Balakrishnan, and Johnson (2000) by observing that $\rho $ is a monotonically increasing function of $\rho ^\star $ and plugging in the values $\rho ^\star =-1$ and $\rho ^\star =1$. Note that when the coefficients of variation are equal ($\mr {CV}_1 = \mr {CV}_2 = \mr {CV}$), the bounds simplify to

\begin{align*}  \rho _ L & = \frac{-1}{\mr {CV}^2 + 1} \\ \rho _ U & = 1 \end{align*}
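
As a quick numeric check of these bounds (equal CVs of 0.5 for illustration, so that $\rho _ L = -1/1.25 = -0.8$ and $\rho _ U = 1$):

   data _null_;
      cv1 = 0.5; cv2 = 0.5;                /* equal CVs for illustration */
      s = sqrt(log(cv1**2 + 1) * log(cv2**2 + 1));
      rho_L = (exp(-s) - 1) / (cv1*cv2);   /* expect -0.8 */
      rho_U = (exp( s) - 1) / (cv1*cv2);   /* expect  1.0 */
      put rho_L= rho_U=;
   run;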

The test assumes lognormally distributed data and requires $N \ge 2$. The power is

\[  \mr {power} = \left\{  \begin{array}{ll} P\left(F(1, N-1, \delta ^2) \ge F_{1-\alpha }(1, N-1)\right), &  \mbox{two-sided} \\ P\left(t(N-1, \delta ) \ge t_{1-\alpha }(N-1)\right), &  \mbox{upper one-sided} \\ P\left(t(N-1, \delta ) \le t_{\alpha }(N-1)\right), &  \mbox{lower one-sided} \\ \end{array} \right.  \]

where

\[  \delta = N^\frac {1}{2} \left( \frac{\mu _2^\star -\mu _1^\star -\log (\gamma _0)}{\sigma ^\star } \right)  \]

and

\[  \sigma ^\star = \left(\sigma _1^{\star 2} + \sigma _2^{\star 2} - 2\rho ^\star \sigma _1^\star \sigma _2^\star \right)^\frac {1}{2}  \]
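
For illustration, the following statements are a minimal sketch (option names per the PAIREDMEANS statement syntax, with arbitrary values) that solves for the power of a two-sided test of the mean ratio:

   proc power;
      pairedmeans test=ratio
         sides     = 2
         alpha     = 0.05
         meanratio = 1.3
         nullratio = 1
         cv        = 0.6
         corr      = 0.3
         npairs    = 40
         power     = .;
   run;
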
Additive Equivalence Test for Mean Difference with Normal Data (TEST=EQUIV_DIFF)

The hypotheses for the equivalence test are

\begin{align*}  H_{0}\colon & \mu _\mr {diff} < \theta _ L \quad \mbox{or}\quad \mu _\mr {diff} > \theta _ U\\ H_{1}\colon & \theta _ L \le \mu _\mr {diff} \le \theta _ U \end{align*}

The analysis is the two one-sided tests (TOST) procedure of Schuirmann (1987). The test assumes normally distributed data and requires $N \ge 2$. Phillips (1990) derives an expression for the exact power assuming a two-sample balanced design; the results are easily adapted to a paired design:

\begin{align*}  \mr {power} & = Q_{N-1}\left((-t_{1-\alpha }(N-1)),\frac{\mu _\mr {diff}-\theta _ U}{\sigma _\mr {diff} N^{-\frac{1}{2}}};0,\frac{(N-1)^\frac {1}{2}(\theta _ U-\theta _ L)}{2\sigma _\mr {diff} N^{-\frac{1}{2}}(t_{1-\alpha }(N-1))}\right) - \\ &  \quad Q_{N-1}\left((t_{1-\alpha }(N-1)),\frac{\mu _\mr {diff}-\theta _ L}{\sigma _\mr {diff} N^{-\frac{1}{2}}};0,\frac{(N-1)^\frac {1}{2}(\theta _ U-\theta _ L)}{2\sigma _\mr {diff} N^{-\frac{1}{2}}(t_{1-\alpha }(N-1))}\right) \end{align*}

where

\[  \sigma _\mr {diff} = \left(\sigma _1^2 + \sigma _2^2 - 2\rho \sigma _1\sigma _2\right)^\frac {1}{2}  \]

and $Q_\cdot (\cdot ,\cdot ;\cdot ,\cdot )$ is Owen’s Q function, defined in the section Common Notation.
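
For illustration, the following statements are a minimal sketch (arbitrary equivalence bounds and variability values) that solves for the power of the TOST analysis:

   proc power;
      pairedmeans test=equiv_diff
         alpha    = 0.05
         lower    = -2
         upper    = 2
         meandiff = 0.5
         stddev   = 4
         corr     = 0.5
         npairs   = 50
         power    = .;
   run;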

Multiplicative Equivalence Test for Mean Ratio with Lognormal Data (TEST=EQUIV_RATIO)

The lognormal case is handled by reexpressing the analysis equivalently as a normality-based test on the log-transformed data, using properties of the lognormal distribution as discussed in Johnson, Kotz, and Balakrishnan (1994, Chapter 14). The approaches in the section Additive Equivalence Test for Mean Difference with Normal Data (TEST=EQUIV_DIFF) then apply.

In contrast to the additive equivalence test on normal data, the hypotheses with lognormal data are defined in terms of geometric means rather than arithmetic means.

The hypotheses for the equivalence test are

\begin{align*}  H_{0}\colon & \frac{\gamma _ T}{\gamma _ R} \le \theta _ L \quad \mbox{or}\quad \frac{\gamma _ T}{\gamma _ R} \ge \theta _ U\\ H_{1}\colon & \theta _ L < \frac{\gamma _ T}{\gamma _ R} < \theta _ U \end{align*}
\[  \mbox{where}\quad 0 < \theta _ L < \theta _ U  \]

The analysis is the two one-sided tests (TOST) procedure of Schuirmann (1987) on the log-transformed data. The test assumes lognormally distributed data and requires $N \ge 2$. Diletti, Hauschke, and Steinijans (1991) derive an expression for the exact power assuming a crossover design; the results are easily adapted to a paired design:

\begin{align*}  \mr {power} & = Q_{N-1}\left((-t_{1-\alpha }(N-1)), \frac{\log \left(\frac{\gamma _ T}{\gamma _ R}\right)-\log (\theta _ U)}{\sigma ^\star N^{-\frac{1}{2}}}; 0,\frac{(N-1)^\frac {1}{2}(\log (\theta _ U)-\log (\theta _ L))}{2\sigma ^\star N^{-\frac{1}{2}}(t_{1-\alpha }(N-1))}\right) - \\ &  \quad Q_{N-1}\left((t_{1-\alpha }(N-1)), \frac{\log \left(\frac{\gamma _ T}{\gamma _ R}\right)-\log (\theta _ L)}{\sigma ^\star N^{-\frac{1}{2}}}; 0,\frac{(N-1)^\frac {1}{2}(\log (\theta _ U)-\log (\theta _ L))}{2\sigma ^\star N^{-\frac{1}{2}}(t_{1-\alpha }(N-1))}\right) \end{align*}

where $\sigma ^\star $ is the standard deviation of the differences between the log-transformed pairs (in other words, the standard deviation of $\log (Y_ T) - \log (Y_ R)$, where $Y_ T$ and $Y_ R$ are observations from the treatment and reference, respectively), computed as

\begin{align*}  \sigma ^\star & = \left(\sigma _ R^{\star 2} + \sigma _ T^{\star 2} - 2\rho ^\star \sigma _ R^\star \sigma _ T^\star \right)^\frac {1}{2}\\ \sigma _ R^\star & = \left[ \log (\mr {CV}_ R^2 + 1) \right]^\frac {1}{2} \\ \sigma _ T^\star & = \left[ \log (\mr {CV}_ T^2 + 1) \right]^\frac {1}{2} \\ \rho ^\star & = \frac{\log \left\{  \rho \mr {CV}_ R \mr {CV}_ T + 1 \right\} }{\sigma _ R^{\star } \sigma _ T^{\star }} \\ \end{align*}

where $\mr {CV}_ R$, $\mr {CV}_ T$, and $\rho $ are the coefficients of variation and the correlation of the original untransformed pairs $\{ Y_ T, Y_ R\} $, and $Q_\cdot (\cdot ,\cdot ;\cdot ,\cdot )$ is Owen’s Q function, defined in the section Common Notation. The conversion from $\rho $ to $\rho ^\star $ is given by equation (44.36) on page 27 of Kotz, Balakrishnan, and Johnson (2000) and is due to Jones and Miller (1966).

The valid range of $\rho $ is restricted to $(\rho _ L, \rho _ U)$, where

\begin{align*}  \rho _ L & = \frac{\exp \left(-\left[ \log (\mr {CV}_ R^2+1) \log (\mr {CV}_ T^2+1) \right]^\frac {1}{2} \right) - 1}{\mr {CV}_ R \mr {CV}_ T} \\ \rho _ U & = \frac{\exp \left(\left[ \log (\mr {CV}_ R^2+1) \log (\mr {CV}_ T^2+1) \right]^\frac {1}{2}\right) - 1}{\mr {CV}_ R \mr {CV}_ T} \end{align*}

These bounds are computed from equation (44.36) on page 27 of Kotz, Balakrishnan, and Johnson (2000) by observing that $\rho $ is a monotonically increasing function of $\rho ^\star $ and plugging in the values $\rho ^\star =-1$ and $\rho ^\star =1$. Note that when the coefficients of variation are equal ($\mr {CV}_ R = \mr {CV}_ T = \mr {CV}$), the bounds simplify to

\begin{align*}  \rho _ L & = \frac{-1}{\mr {CV}^2 + 1} \\ \rho _ U & = 1 \end{align*}
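
For illustration, the following statements are a minimal sketch (the conventional 0.8 and 1.25 equivalence bounds, with otherwise arbitrary values) that solves for the power of the multiplicative equivalence test:

   proc power;
      pairedmeans test=equiv_ratio
         alpha     = 0.05
         lower     = 0.8
         upper     = 1.25
         meanratio = 1
         cv        = 0.3
         corr      = 0.4
         npairs    = 30
         power     = .;
   run;
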
Confidence Interval for Mean Difference (CI=DIFF)

This analysis of precision applies to the standard t-based confidence interval:

\[  \begin{array}{ll} \left[ \bar{d} - t_{1-\frac{\alpha }{2}}(N-1) \frac{s_ d}{\sqrt {N}}, \quad \bar{d} + t_{1-\frac{\alpha }{2}}(N-1) \frac{s_ d}{\sqrt {N}} \right], &  \mbox{two-sided} \\ \left[ \bar{d} - t_{1-\alpha }(N-1) \frac{s_ d}{\sqrt {N}}, \quad \infty \right), &  \mbox{upper one-sided} \\ \left( -\infty , \quad \bar{d} + t_{1-\alpha }(N-1) \frac{s_ d}{\sqrt {N}} \right], &  \mbox{lower one-sided} \\ \end{array}  \]

where $\bar{d}$ and $s_ d$ are the sample mean and standard deviation of the differences. The half-width is defined as the distance from the point estimate $\bar{d}$ to a finite endpoint,

\[  \mbox{half-width} = \left\{  \begin{array}{ll} t_{1-\frac{\alpha }{2}}(N-1) \frac{s_ d}{\sqrt {N}}, &  \mbox{two-sided} \\ t_{1-\alpha }(N-1) \frac{s_ d}{\sqrt {N}}, &  \mbox{one-sided} \\ \end{array} \right.  \]

A valid confidence interval captures the true mean difference. The exact probability of obtaining at most the target confidence interval half-width $h$, unconditional or conditional on validity, is given by Beal (1989):

\begin{align*}  \mbox{Pr(half-width $\le h$)} & = \left\{  \begin{array}{ll} P\left( \chi ^2(N-1) \le \frac{h^2 N(N-1)}{\sigma ^2_\mr {diff}(t^2_{1-\frac{\alpha }{2}}(N-1))} \right), &  \mbox{two-sided} \\ P\left( \chi ^2(N-1) \le \frac{h^2 N(N-1)}{\sigma ^2_\mr {diff}(t^2_{1-\alpha }(N-1))} \right), &  \mbox{one-sided} \\ \end{array} \right. \\ \begin{array}{r} \mbox{Pr(half-width $\le h$ |} \\ \mbox{validity)} \end{array}& = \left\{  \begin{array}{ll} \left(\frac{1}{1-\alpha }\right) 2 \left[ Q_{N-1}\left((t_{1-\frac{\alpha }{2}}(N-1)),0; \right. \right. \\ \quad \left. \left. 0,b_1\right) - Q_{N-1}(0,0;0,b_1)\right], &  \mbox{two-sided} \\ \left(\frac{1}{1-\alpha }\right) Q_{N-1}\left((t_{1-\alpha }(N-1)),0;0,b_1\right), &  \mbox{one-sided} \\ \end{array} \right. \\ \end{align*}

where

\begin{align*}  \sigma _\mr {diff} & = \left(\sigma _1^2 + \sigma _2^2 - 2\rho \sigma _1\sigma _2\right)^\frac {1}{2}\\ b_1 & = \frac{h(N-1)^\frac {1}{2}}{\sigma _\mr {diff}(t_{1-\frac{\alpha }{c}}(N-1))N^{-\frac{1}{2}}} \\ c & = \mbox{number of sides} \end{align*}

and $Q_\cdot (\cdot ,\cdot ;\cdot ,\cdot )$ is Owen’s Q function, defined in the section Common Notation.
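
The unconditional probability reduces to a central chi-square probability and can be evaluated directly; a minimal sketch with illustrative values:

   data _null_;
      alpha = 0.05; N = 30; h = 4;         /* illustrative values */
      sigma1 = 8; sigma2 = 8; rho = 0.4;
      sigma_diff = sqrt(sigma1**2 + sigma2**2 - 2*rho*sigma1*sigma2);
      /* two-sided case: chi-square CDF argument from the formula above */
      t_crit = quantile('T', 1 - alpha/2, N - 1);
      x = (h**2 * N * (N - 1)) / (sigma_diff**2 * t_crit**2);
      pr_width = cdf('CHISQ', x, N - 1);
      put pr_width=;
   run;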

A quality confidence interval is both sufficiently narrow (half-width $\le h$) and valid:

\begin{align*}  \mbox{Pr(quality)} & = \mbox{Pr(half-width $\le h$ and validity)} \\ & = \mbox{Pr(half-width $\le h$ | validity)($1-\alpha $)} \end{align*}
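
For illustration, the following statements are a minimal sketch (HALFWIDTH= and PROBWIDTH= per the PAIREDMEANS statement syntax, with arbitrary values) that solves for the probability of achieving a half-width of at most 4:

   proc power;
      pairedmeans ci=diff
         alpha     = 0.05
         halfwidth = 4
         stddev    = 8
         corr      = 0.4
         npairs    = 30
         probwidth = .;
   run;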