The POWER Procedure

Analyses in the TWOSAMPLEWILCOXON Statement

Wilcoxon-Mann-Whitney Test for Comparing Two Distributions (TEST=WMW)

The power approximation in this section applies to the Wilcoxon-Mann-Whitney (WMW) test as invoked with the WILCOXON option in the PROC NPAR1WAY statement of the NPAR1WAY procedure. The approximation is based on O’Brien and Castelloe (2006) and an estimator called $\widehat{\mr {WMW}_\mr {odds}}$. See O’Brien and Castelloe (2006) for the definition of $\widehat{\mr {WMW}_\mr {odds}}$; it need not be derived in detail here in order to explain the power formula.

Let $Y_1$ and $Y_2$ be independent observations from any two distributions that you want to compare using the WMW test. For purposes of deriving the asymptotic distribution of $\widehat{\mr {WMW}_\mr {odds}}$ (and consequently the power computation as well), these distributions must be formulated as ordered categorical (ordinal) distributions.

If a distribution is continuous, it can be discretized by using a large number of categories with negligible loss of accuracy. Each nonordinal distribution is divided into b categories, where b is the value of the NBINS parameter, with breakpoints evenly spaced on the probability scale; that is, each bin contains an equal probability 1/b for that distribution. The breakpoints across both distributions are then pooled to form a collection of C bins (hereafter called categories), and the probabilities of bin membership for each distribution are recalculated. This method of binning avoids degenerate representations of the distributions—that is, a small handful of large probabilities among mostly empty bins—such as can result from an evenly spaced grid across raw values rather than probabilities.
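As an illustration of this binning scheme, the following Python sketch (not SAS code; the function name and the use of SciPy distribution objects are assumptions made for illustration) discretizes two continuous distributions on the probability scale and pools the breakpoints:

```python
import numpy as np
from scipy import stats

def pooled_bins(dist1, dist2, b):
    """Discretize two continuous distributions onto a common ordinal scale.

    For each distribution, breakpoints are placed at its quantiles k/b
    (k = 1, ..., b-1), so that each of its own b bins carries probability
    1/b.  The breakpoints are then pooled, and the probabilities of bin
    membership over the resulting C common bins are recomputed for both
    distributions from their CDFs.
    """
    qs = np.arange(1, b) / b                      # evenly spaced on probability scale
    cuts = np.union1d(dist1.ppf(qs), dist2.ppf(qs))
    edges = np.concatenate(([-np.inf], cuts, [np.inf]))
    p1 = np.diff(dist1.cdf(edges))                # bin probabilities for group 1
    p2 = np.diff(dist2.cdf(edges))                # bin probabilities for group 2
    return p1, p2

# Two shifted normal distributions, coarsely binned (b = 8) for illustration
p1, p2 = pooled_bins(stats.norm(0, 1), stats.norm(0.5, 1), b=8)
```

Each returned vector sums to 1 over the common set of C bins, and because the breakpoints come from each distribution's own quantiles, neither distribution is represented by a few large probabilities among mostly empty bins.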

After the discretization process just described, there are two ordinal distributions, each with a set of probabilities across a common set of C ordered categories. For simplicity of notation, assume (without loss of generality) the response values to be $1, \ldots , C$. Represent the conditional probabilities as

\[  \tilde{p}_{ij} = \mr {Prob}\left(Y_ i = j \mid \mr {group} = i\right), \quad i \in \{ 1, 2\}  \quad \mbox{and} \quad j \in \{ 1, \ldots , C\}   \]

and the group allocation weights as

\[  w_ i = \frac{n_ i}{N} = \mr {Prob}\left(\mr {group} = i\right), \quad i \in \{ 1, 2\}   \]

The joint probabilities can then be calculated simply as

\[  p_{ij} = \mr {Prob}\left(\mr {group} = i, Y_ i = j \right) = w_ i \tilde{p}_{ij}, \quad i \in \{ 1, 2\}  \quad \mbox{and} \quad j \in \{ 1, \ldots , C\}   \]
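For a concrete example, the joint probabilities follow directly from the conditional probabilities and the allocation weights. The 2 × 3 table below is hypothetical, chosen only to make the arithmetic visible:

```python
import numpy as np

# Hypothetical conditional probabilities ptilde_ij over C = 3 ordered categories
ptilde = np.array([[0.2, 0.3, 0.5],    # group 1
                   [0.4, 0.4, 0.2]])   # group 2
w = np.array([0.5, 0.5])               # allocation weights w_i = n_i / N

# Joint probabilities p_ij = w_i * ptilde_ij
p = w[:, None] * ptilde
```

The joint probabilities sum to 1 over the whole $2 \times C$ table, since each row of conditional probabilities sums to 1 and the weights sum to 1.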

The next step in the power computation is to compute the probabilities that a randomly chosen pair of observations from the two groups is concordant, discordant, or tied. These probabilities are conveniently expressed in terms of the quantities $Rs_{ij}$ and $Rd_{ij}$, defined as follows, where Y is a random observation drawn from the joint distribution across groups and categories:

\begin{align*}
Rs_{ij} &= \mr {Prob}\left(Y \mbox{ is concordant with cell } (i,j)\right) + \frac{1}{2} \mr {Prob}\left(Y \mbox{ is tied with cell } (i,j)\right) \\
&= \mr {Prob}\left((\mr {group} < i \mbox{ and } Y < j) \mbox{ or } (\mr {group} > i \mbox{ and } Y > j)\right) + \frac{1}{2} \mr {Prob}\left(\mr {group} \ne i \mbox{ and } Y = j\right) \\
&= \sum _{g=1}^2 \sum _{c=1}^ C w_ g \tilde{p}_{gc} \left[\mr {I}_{(g-i)(c-j) > 0} + \frac{1}{2} \mr {I}_{g \ne i,\, c = j} \right]
\end{align*}

and

\begin{align*}
Rd_{ij} &= \mr {Prob}\left(Y \mbox{ is discordant with cell } (i,j)\right) + \frac{1}{2} \mr {Prob}\left(Y \mbox{ is tied with cell } (i,j)\right) \\
&= \mr {Prob}\left((\mr {group} < i \mbox{ and } Y > j) \mbox{ or } (\mr {group} > i \mbox{ and } Y < j)\right) + \frac{1}{2} \mr {Prob}\left(\mr {group} \ne i \mbox{ and } Y = j\right) \\
&= \sum _{g=1}^2 \sum _{c=1}^ C w_ g \tilde{p}_{gc} \left[\mr {I}_{(g-i)(c-j) < 0} + \frac{1}{2} \mr {I}_{g \ne i,\, c = j} \right]
\end{align*}

For an independent random draw $Y_1, Y_2$ from the two distributions,

\begin{align*}
P_ c &= \mr {Prob}\left(Y_1, Y_2 \mbox{ concordant}\right) + \frac{1}{2} \mr {Prob}\left(Y_1, Y_2 \mbox{ tied}\right) \\
&= \sum _{i=1}^2 \sum _{j=1}^ C w_ i \tilde{p}_{ij} Rs_{ij}
\end{align*}

and

\begin{align*}
P_ d &= \mr {Prob}\left(Y_1, Y_2 \mbox{ discordant}\right) + \frac{1}{2} \mr {Prob}\left(Y_1, Y_2 \mbox{ tied}\right) \\
&= \sum _{i=1}^2 \sum _{j=1}^ C w_ i \tilde{p}_{ij} Rd_{ij}
\end{align*}

Then

\[  \mr {WMW}_\mr {odds} = \frac{P_ c}{P_ d}  \]

Proceeding to compute the theoretical standard error associated with $\mr {WMW}_\mr {odds}$ (that is, the population analogue to the sample standard error),

\[  \mr {SE}(\mr {WMW}_\mr {odds}) = \frac{2}{P_ d} \left[ \sum _{i=1}^2 \sum _{j=1}^ C w_ i \tilde{p}_{ij} \left(\mr {WMW}_\mr {odds} Rd_{ij} - Rs_{ij} \right)^2 /N \right]^{\frac{1}{2}}  \]

Converting to the natural log scale and using the delta method,

\[  \mr {SE}(\log (\mr {WMW}_\mr {odds})) = \frac{\mr {SE}(\mr {WMW}_\mr {odds})}{\mr {WMW}_\mr {odds}}  \]
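The chain of computations from the cell probabilities through $\mr {SE}(\log (\mr {WMW}_\mr {odds}))$ can be sketched numerically in Python. The 2 × 3 table and the sample size below are hypothetical values for illustration only (the null-hypothesis standard error, which requires the smoothed table discussed next, is not computed here):

```python
import numpy as np

# Hypothetical conditional probabilities over C = 3 ordered categories
ptilde = np.array([[0.2, 0.3, 0.5],
                   [0.4, 0.4, 0.2]])
w = np.array([0.5, 0.5])             # equal group allocation
joint = w[:, None] * ptilde          # joint probabilities p_ij
C = ptilde.shape[1]

# Rs_ij and Rd_ij: concordance/discordance (plus half-ties) with cell (i, j)
Rs = np.zeros((2, C))
Rd = np.zeros((2, C))
for i in range(2):
    for j in range(C):
        for g in range(2):
            for c in range(C):
                conc = (g - i) * (c - j) > 0
                disc = (g - i) * (c - j) < 0
                tied = (g != i) and (c == j)
                Rs[i, j] += joint[g, c] * (conc + 0.5 * tied)
                Rd[i, j] += joint[g, c] * (disc + 0.5 * tied)

Pc = (joint * Rs).sum()              # Prob(concordant) + Prob(tied)/2
Pd = (joint * Rd).sum()              # Prob(discordant) + Prob(tied)/2
wmw_odds = Pc / Pd

# Theoretical standard errors for total sample size N (illustrative value)
N = 100
se = (2.0 / Pd) * np.sqrt((joint * (wmw_odds * Rd - Rs) ** 2).sum() / N)
se_log = se / wmw_odds               # delta method, natural log scale
```

As a sanity check on such a computation, $P_c + P_d$ must equal the probability that the two observations come from different groups, $2 w_1 w_2$ (here 0.5), because same-group pairs contribute to neither sum.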

The next step is to produce a smoothed version of the $2 \times C$ cell probabilities that conforms to the null hypothesis of the Wilcoxon-Mann-Whitney test (in other words, independence in the $2 \times C$ contingency table of probabilities). Let $\mr {SE}_{H_0}(\log (\mr {WMW}_\mr {odds}))$ denote the theoretical standard error of $\log (\mr {WMW}_\mr {odds})$ assuming $H_0$.

Finally, we have all of the terms needed to compute the power by using the noncentral chi-square and normal distributions:

\[  \mr {power} = \left\{  \begin{array}{ll} P\left(Z \ge \frac{\mr {SE}_{H_0}(\log (\mr {WMW}_\mr {odds}))}{\mr {SE}(\log (\mr {WMW}_\mr {odds}))} z_{1-\alpha } - \delta ^\star N^\frac {1}{2} \right), & \mbox{upper one-sided} \\ P\left(Z \le \frac{\mr {SE}_{H_0}(\log (\mr {WMW}_\mr {odds}))}{\mr {SE}(\log (\mr {WMW}_\mr {odds}))} z_{\alpha } - \delta ^\star N^\frac {1}{2} \right), & \mbox{lower one-sided} \\ P\left(\chi ^2(1, (\delta ^\star )^2 N) \ge \left[ \frac{\mr {SE}_{H_0}(\log (\mr {WMW}_\mr {odds}))}{\mr {SE}(\log (\mr {WMW}_\mr {odds}))} \right]^2 \chi ^2_{1-\alpha }(1)\right), & \mbox{two-sided} \end{array} \right.  \]

where

\[  \delta ^\star = \frac{\log (\mr {WMW}_\mr {odds})}{N^\frac {1}{2} \mr {SE}(\log (\mr {WMW}_\mr {odds}))}  \]

is the primary noncentrality—that is, the effect size that quantifies how much the two conjectured distributions differ. $Z$ is a standard normal random variable, $\chi ^2(\mi {df}, \mi {nc})$ is a noncentral $\chi ^2$ random variable with degrees of freedom $\mi {df}$ and noncentrality $\mi {nc}$, and $N$ is the total sample size.
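The final formulas can be sketched as a small Python function (the function name, its signature, and the numeric inputs are hypothetical; in practice $\mr {SE}_{H_0}$ comes from the smoothed null table described above):

```python
import numpy as np
from scipy import stats

def wmw_power(wmw_odds, se_log, se0_log, N, alpha=0.05, sides="2"):
    """Approximate power of the WMW test from WMW_odds and its standard errors.

    se_log  : SE(log(WMW_odds)) under the conjectured distributions
    se0_log : SE(log(WMW_odds)) under the smoothed null (H0) table
    """
    delta = np.log(wmw_odds) / (np.sqrt(N) * se_log)   # primary noncentrality
    ratio = se0_log / se_log
    if sides == "u":     # upper one-sided
        return stats.norm.sf(ratio * stats.norm.ppf(1 - alpha) - delta * np.sqrt(N))
    if sides == "l":     # lower one-sided
        return stats.norm.cdf(ratio * stats.norm.ppf(alpha) - delta * np.sqrt(N))
    # two-sided: noncentral chi-square with 1 df and noncentrality (delta*)^2 N
    crit = ratio**2 * stats.chi2.ppf(1 - alpha, df=1)
    return stats.ncx2.sf(crit, df=1, nc=delta**2 * N)

# Illustrative (assumed) inputs only
power = wmw_power(wmw_odds=1.5, se_log=0.2, se0_log=0.21, N=100)
```

Note that because $\mr {SE}(\log (\mr {WMW}_\mr {odds}))$ itself shrinks like $N^{-1/2}$, larger sample sizes enter the power through smaller standard errors rather than through the $N$ argument alone.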