The POWER Procedure

Analyses in the ONESAMPLEFREQ Statement

Exact Test of a Binomial Proportion (TEST=EXACT)

Let X be distributed as $\mr{Bin}(N, p)$. The hypotheses for the test of the proportion p are as follows:

\begin{align*} H_{0}\colon & p=p_0 \\ H_{1}\colon & \left\{ \begin{array}{ll} p \ne p_0, & \mbox{two-sided} \\ p > p_0, & \mbox{upper one-sided} \\ p < p_0, & \mbox{lower one-sided} \\ \end{array} \right. \\ \end{align*}

The exact test assumes binomially distributed data and requires $N \ge 1$ and $0 < p_0 < 1$. The test statistic is

\[ X = \mbox{number of successes} \thicksim \mr{Bin}(N, p) \]

The significance level $\alpha $ is split symmetrically for two-sided tests, in the sense that each tail is allotted as much probability as possible, up to $\alpha / 2$.

Exact power computations are based on the binomial distribution and computing formulas such as the following from Johnson, Kotz, and Kemp (1992, equation 3.20):

\[ P(X \ge C | N, p) = P \left(F_{\nu _1, \nu _2} \le \frac{\nu _2 p}{\nu _1 (1-p)} \right) \quad \mbox{where } \nu _1 = 2C \mbox{ and } \nu _2 = 2(N-C+1) \]
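
As a quick numerical check of this identity, the following minimal sketch compares the binomial tail probability with the F-distribution expression (Python with scipy assumed; the values of N, C, and p are illustrative):

```python
# Numerically verify P(X >= C | N, p) = P(F_{nu1, nu2} <= nu2*p / (nu1*(1-p)))
# with nu1 = 2C and nu2 = 2(N - C + 1); N, C, and p are illustrative values.
from scipy.stats import binom, f

N, C, p = 100, 28, 0.3
nu1, nu2 = 2 * C, 2 * (N - C + 1)

lhs = binom.sf(C - 1, N, p)                       # P(X >= C)
rhs = f.cdf(nu2 * p / (nu1 * (1 - p)), nu1, nu2)  # F-distribution form
print(lhs, rhs)  # the two values agree up to floating-point error
```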

Let $C_ L$ and $C_ U$ denote lower and upper critical values, respectively. Let $\alpha _ a$ denote the achieved (actual) significance level, which for two-sided tests is the sum of the favorable major tail ($\alpha _ M$) and the opposite minor tail ($\alpha _ m$).

For the upper one-sided case,

\begin{align*} C_ U & = \min \{ C: P(X \ge C | p_0) \le \alpha \} \\ \mbox{Reject } H_0 & \mbox{ if } \; X \ge C_ U \\ \alpha _ a & = P(X \ge C_ U | p_0) \\ \mr{power} & = P(X \ge C_ U | p) \end{align*}

For the lower one-sided case,

\begin{align*} C_ L & = \max \{ C: P(X \le C | p_0) \le \alpha \} \\ \mbox{Reject } H_0 & \mbox{ if } \; X \le C_ L \\ \alpha _ a & = P(X \le C_ L | p_0) \\ \mr{power} & = P(X \le C_ L | p) \end{align*}

For the two-sided case,

\begin{align*} C_ L & = \max \{ C: P(X \le C | p_0) \le \frac{\alpha }{2}\} \\ C_ U & = \min \{ C: P(X \ge C | p_0) \le \frac{\alpha }{2}\} \\ \mbox{Reject } H_0 & \mbox{ if } \; X \le C_ L \, \mbox{or} \, X \ge C_ U \\ \alpha _ a & = P(X \le C_ L \, \mbox{or} \, X \ge C_ U | p_0) \\ \mr{power} & = P(X \le C_ L \, \mbox{or} \, X \ge C_ U | p) \end{align*}
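
As a concrete illustration, the critical values, achieved significance level, and exact power can be computed directly from the binomial distribution. The following is a minimal sketch in Python (scipy assumed available; N, $p_0$, p, and $\alpha $ are illustrative values), shown for the two-sided case; the one-sided cases follow the same pattern with $\alpha $ in place of $\alpha /2$:

```python
# Minimal sketch of the TEST=EXACT two-sided computation (illustrative values).
from scipy.stats import binom

N, p0, p, alpha = 100, 0.2, 0.3, 0.05

# P(X >= C) = binom.sf(C - 1, N, .); P(X <= C) = binom.cdf(C, N, .)
C_L = max(C for C in range(N + 1) if binom.cdf(C, N, p0) <= alpha / 2)
C_U = min(C for C in range(N + 1) if binom.sf(C - 1, N, p0) <= alpha / 2)

alpha_a = binom.cdf(C_L, N, p0) + binom.sf(C_U - 1, N, p0)  # achieved level
power   = binom.cdf(C_L, N, p)  + binom.sf(C_U - 1, N, p)   # exact power
print(C_L, C_U, alpha_a, power)
```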

z Test for Binomial Proportion Using Null Variance (TEST=Z VAREST=NULL)

For the normal approximation test, the test statistic is

\[ Z(X) = \frac{X - N p_0}{\left[ N p_0(1-p_0) \right]^\frac {1}{2}} \]

For the METHOD=EXACT option, the computations are the same as described in the section Exact Test of a Binomial Proportion (TEST=EXACT) except for the definitions of the critical values.

For the upper one-sided case,

\begin{align*} C_ U & = \min \{ C: Z(C) \ge z_{1-\alpha }\} \\ \end{align*}

For the lower one-sided case,

\begin{align*} C_ L & = \max \{ C: Z(C) \le z_\alpha \} \\ \end{align*}

For the two-sided case,

\begin{align*} C_ L & = \max \{ C: Z(C) \le z_\frac {\alpha }{2}\} \\ C_ U & = \min \{ C: Z(C) \ge z_{1-\frac{\alpha }{2}}\} \\ \end{align*}

For the METHOD=NORMAL option, the test statistic $Z(X)$ is assumed to have the normal distribution

\[ \mr{N}\left(\frac{N^{\frac{1}{2}}(p - p_0)}{\left[ p_0(1-p_0) \right]^\frac {1}{2}}, \frac{p(1-p)}{p_0(1-p_0)}\right) \]

The approximate power is computed as

\begin{align*} \mr{power} & = \left\{ \begin{array}{ll} \Phi \left( \frac{z_\alpha + \sqrt {N} \frac{p - p_0}{\sqrt {p_0(1-p_0)}} }{\sqrt {\frac{p(1-p)}{p_0(1-p_0)}}} \right), & \mbox{upper one-sided} \\ \Phi \left( \frac{z_\alpha - \sqrt {N} \frac{p - p_0}{\sqrt {p_0(1-p_0)}} }{\sqrt {\frac{p(1-p)}{p_0(1-p_0)}}} \right), & \mbox{lower one-sided} \\ \Phi \left( \frac{z_\frac {\alpha }{2} + \sqrt {N} \frac{p - p_0}{\sqrt {p_0(1-p_0)}} }{\sqrt {\frac{p(1-p)}{p_0(1-p_0)}}} \right) + \Phi \left( \frac{z_\frac {\alpha }{2} - \sqrt {N} \frac{p - p_0}{\sqrt {p_0(1-p_0)}} }{\sqrt {\frac{p(1-p)}{p_0(1-p_0)}}} \right), & \mbox{two-sided} \\ \end{array} \right. \\ \end{align*}

The approximate sample size is computed in closed form for the one-sided cases by inverting the power equation,

\[ N = \left(\frac{z_\mr {power} \sqrt {p(1-p)} + z_{1-\alpha } \sqrt {p_0(1-p_0)}}{p-p_0} \right)^2 \]

and by numerical inversion for the two-sided case.
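
A minimal sketch of the METHOD=NORMAL power and the closed-form one-sided sample size (Python with scipy assumed; all inputs are illustrative):

```python
# Sketch of METHOD=NORMAL power and the closed-form one-sided N for
# TEST=Z VAREST=NULL; inputs are illustrative values.
from scipy.stats import norm

p0, p, alpha, target_power = 0.2, 0.3, 0.05, 0.9
N = 100

# Upper one-sided approximate power
num = norm.ppf(alpha) + N**0.5 * (p - p0) / (p0 * (1 - p0))**0.5
den = (p * (1 - p) / (p0 * (1 - p0)))**0.5
power = norm.cdf(num / den)

# Closed-form sample size for the one-sided case
N_req = ((norm.ppf(target_power) * (p * (1 - p))**0.5
          + norm.ppf(1 - alpha) * (p0 * (1 - p0))**0.5) / (p - p0))**2
print(power, N_req)
```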

z Test for Binomial Proportion Using Sample Variance (TEST=Z VAREST=SAMPLE)

For the normal approximation test using the sample variance, the test statistic is

\[ Z_ s(X) = \frac{X - N p_0}{\left[ N \hat{p}(1-\hat{p}) \right]^\frac {1}{2}} \]

where $\hat{p} = X/N$.

For the METHOD=EXACT option, the computations are the same as described in the section Exact Test of a Binomial Proportion (TEST=EXACT) except for the definitions of the critical values.

For the upper one-sided case,

\begin{align*} C_ U & = \min \{ C: Z_ s(C) \ge z_{1-\alpha }\} \\ \end{align*}

For the lower one-sided case,

\begin{align*} C_ L & = \max \{ C: Z_ s(C) \le z_\alpha \} \\ \end{align*}

For the two-sided case,

\begin{align*} C_ L & = \max \{ C: Z_ s(C) \le z_\frac {\alpha }{2}\} \\ C_ U & = \min \{ C: Z_ s(C) \ge z_{1-\frac{\alpha }{2}}\} \\ \end{align*}

For the METHOD=NORMAL option, the test statistic $Z_ s(X)$ is assumed to have the normal distribution

\[ \mr{N}\left(\frac{N^{\frac{1}{2}}(p - p_0)}{\left[ p(1-p) \right]^\frac {1}{2}}, 1 \right) \]

(see Chow, Shao, and Wang (2003, p. 82)).

The approximate power is computed as

\begin{align*} \mr{power} & = \left\{ \begin{array}{ll} \Phi \left( z_\alpha + \sqrt {N} \frac{p - p_0}{\sqrt {p(1-p)}} \right), & \mbox{upper one-sided} \\ \Phi \left( z_\alpha - \sqrt {N} \frac{p - p_0}{\sqrt {p(1-p)}} \right), & \mbox{lower one-sided} \\ \Phi \left( z_\frac {\alpha }{2} + \sqrt {N} \frac{p - p_0}{\sqrt {p(1-p)}} \right) + \Phi \left( z_\frac {\alpha }{2} - \sqrt {N} \frac{p - p_0}{\sqrt {p(1-p)}} \right), & \mbox{two-sided} \\ \end{array} \right. \\ \end{align*}

The approximate sample size is computed in closed form for the one-sided cases by inverting the power equation,

\[ N = p(1-p)\left(\frac{z_\mr {power} + z_{1-\alpha }}{p-p_0} \right)^2 \]

and by numerical inversion for the two-sided case.
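
A minimal sketch of this closed-form sample size (Python with scipy assumed; inputs illustrative):

```python
# Sketch of the closed-form one-sided N for TEST=Z VAREST=SAMPLE,
# METHOD=NORMAL; inputs are illustrative values.
from scipy.stats import norm

p0, p, alpha, power = 0.2, 0.3, 0.05, 0.9
N = p * (1 - p) * ((norm.ppf(power) + norm.ppf(1 - alpha)) / (p - p0))**2
print(N)  # round up to the next integer in practice
```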

z Test for Binomial Proportion with Continuity Adjustment Using Null Variance (TEST=ADJZ VAREST=NULL)

For the normal approximation test with continuity adjustment, the test statistic is (Pagano and Gauvreau 1993, p. 295):

\[ Z_ c(X) = \frac{X - N p_0 + 0.5(1_{\{ X < N p_0\} }) - 0.5(1_{\{ X > N p_0\} }) }{\left[ N p_0(1-p_0) \right]^\frac {1}{2}} \]

For the METHOD=EXACT option, the computations are the same as described in the section Exact Test of a Binomial Proportion (TEST=EXACT) except for the definitions of the critical values.

For the upper one-sided case,

\begin{align*} C_ U & = \min \{ C: Z_ c(C) \ge z_{1-\alpha }\} \\ \end{align*}

For the lower one-sided case,

\begin{align*} C_ L & = \max \{ C: Z_ c(C) \le z_\alpha \} \\ \end{align*}

For the two-sided case,

\begin{align*} C_ L & = \max \{ C: Z_ c(C) \le z_\frac {\alpha }{2}\} \\ C_ U & = \min \{ C: Z_ c(C) \ge z_{1-\frac{\alpha }{2}}\} \\ \end{align*}

For the METHOD=NORMAL option, the test statistic $Z_ c(X)$ is assumed to have the normal distribution $N(\mu , \sigma ^2)$, where $\mu $ and $\sigma ^2$ are derived as follows.

For convenience of notation, define

\[ k = \frac{1}{2 \sqrt {N p_0 (1-p_0)}} \]

Then

\[ E \left[Z_ c(X)\right] = 2 k N p - 2 k N p_0 + k P(X < N p_0) - k P(X > N p_0) \]

and

\begin{align*} \mr{Var} \left[Z_ c(X)\right] & = 4 k^2 N p (1-p) + k^2 \left[ 1 - P(X = N p_0) \right] - k^2 \left[ P(X<Np_0) - P(X>Np_0) \right]^2 \\ & \quad + 4 k^2 \left[ E\left(X 1_{\{ X<Np_0\} }\right) - E\left(X 1_{\{ X>Np_0\} }\right) \right] - 4 k^2 N p \left[P(X<Np_0) - P(X>Np_0)\right] \\ \end{align*}

The probabilities $P(X=Np_0)$, $P(X<Np_0)$, and $P(X>Np_0)$ and the truncated expectations $E\left(X 1_{\{ X<Np_0\} }\right)$ and $E\left(X 1_{\{ X>Np_0\} }\right)$ are approximated by assuming the normal-approximate distribution of X, $N(Np, Np(1-p))$. Letting $\phi (\cdot )$ and $\Phi (\cdot )$ denote the standard normal PDF and CDF, respectively, and defining d as

\[ d = \frac{N p_0 - N p}{\left[ N p (1-p) \right]^\frac {1}{2}} \]

the terms are computed as follows:

\begin{align*} P(X=Np_0) & = 0 \\ P(X<Np_0) & = \Phi (d) \\ P(X>Np_0) & = 1 - \Phi (d) \\ E\left(X 1_{\{ X<Np_0\} }\right) & = Np\Phi (d) - \left[ N p (1-p) \right]^\frac {1}{2} \phi (d) \\ E\left(X 1_{\{ X>Np_0\} }\right) & = Np\left[ 1 - \Phi (d) \right] + \left[ N p (1-p) \right]^\frac {1}{2} \phi (d) \\ \end{align*}

The mean and variance of $Z_ c(X)$ are thus approximated by

\[ \mu = k\left[ 2 N p - 2 N p_0 + 2 \Phi (d) - 1 \right] \]

and

\[ \sigma ^2 = 4k^2 \left[Np(1-p) + \Phi (d)\left( 1-\Phi (d) \right) - 2 \left( Np(1-p) \right)^\frac {1}{2} \phi (d) \right] \]

The approximate power is computed as

\begin{align*} \mr{power} & = \left\{ \begin{array}{ll} \Phi \left( \frac{z_\alpha + \mu }{\sigma } \right), & \mbox{upper one-sided} \\ \Phi \left( \frac{z_\alpha - \mu }{\sigma } \right), & \mbox{lower one-sided} \\ \Phi \left( \frac{z_\frac {\alpha }{2} + \mu }{\sigma } \right) + \Phi \left( \frac{z_\frac {\alpha }{2} - \mu }{\sigma } \right), & \mbox{two-sided} \\ \end{array} \right. \\ \end{align*}

The approximate sample size is computed by numerical inversion.
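
The following minimal sketch traces the METHOD=NORMAL computation of d, $\mu $, $\sigma ^2$, and the upper one-sided power (Python with scipy assumed; inputs illustrative):

```python
# Sketch of the METHOD=NORMAL approximation for TEST=ADJZ VAREST=NULL
# (upper one-sided case); inputs are illustrative values.
from scipy.stats import norm

N, p0, p, alpha = 100, 0.2, 0.3, 0.05

k = 1 / (2 * (N * p0 * (1 - p0))**0.5)
d = (N * p0 - N * p) / (N * p * (1 - p))**0.5

mu = k * (2 * N * p - 2 * N * p0 + 2 * norm.cdf(d) - 1)
var = 4 * k**2 * (N * p * (1 - p)
                  + norm.cdf(d) * (1 - norm.cdf(d))
                  - 2 * (N * p * (1 - p))**0.5 * norm.pdf(d))
power = norm.cdf((norm.ppf(alpha) + mu) / var**0.5)
print(mu, var, power)
```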

z Test for Binomial Proportion with Continuity Adjustment Using Sample Variance (TEST=ADJZ VAREST=SAMPLE)

For the normal approximation test with continuity adjustment using the sample variance, the test statistic is

\[ Z_{cs}(X) = \frac{X - N p_0 + 0.5(1_{\{ X < N p_0\} }) - 0.5(1_{\{ X > N p_0\} }) }{\left[ N \hat{p}(1-\hat{p}) \right]^\frac {1}{2}} \]

where $\hat{p} = X/N$.

For the METHOD=EXACT option, the computations are the same as described in the section Exact Test of a Binomial Proportion (TEST=EXACT) except for the definitions of the critical values.

For the upper one-sided case,

\begin{align*} C_ U & = \min \{ C: Z_{cs}(C) \ge z_{1-\alpha }\} \\ \end{align*}

For the lower one-sided case,

\begin{align*} C_ L & = \max \{ C: Z_{cs}(C) \le z_\alpha \} \\ \end{align*}

For the two-sided case,

\begin{align*} C_ L & = \max \{ C: Z_{cs}(C) \le z_\frac {\alpha }{2}\} \\ C_ U & = \min \{ C: Z_{cs}(C) \ge z_{1-\frac{\alpha }{2}}\} \\ \end{align*}

For the METHOD=NORMAL option, the test statistic $Z_{cs}(X)$ is assumed to have the normal distribution $N(\mu , \sigma ^2)$, where $\mu $ and $\sigma ^2$ are derived as follows.

For convenience of notation, define

\[ k = \frac{1}{2 \sqrt {N p (1-p)}} \]

Then

\[ E \left[Z_{cs}(X)\right] \approx 2 k N p - 2 k N p_0 + k P(X < N p_0) - k P(X > N p_0) \]

and

\begin{align*} \mr{Var} \left[Z_{cs}(X)\right] & \approx 4 k^2 N p (1-p) + k^2 \left[ 1 - P(X = N p_0) \right] - k^2 \left[ P(X<Np_0) - P(X>Np_0) \right]^2 \\ & \quad + 4 k^2 \left[ E\left(X 1_{\{ X<Np_0\} }\right) - E\left(X 1_{\{ X>Np_0\} }\right) \right] - 4 k^2 N p \left[P(X<Np_0) - P(X>Np_0)\right] \\ \end{align*}

The probabilities $P(X=Np_0)$, $P(X<Np_0)$, and $P(X>Np_0)$ and the truncated expectations $E\left(X 1_{\{ X<Np_0\} }\right)$ and $E\left(X 1_{\{ X>Np_0\} }\right)$ are approximated by assuming the normal-approximate distribution of X, $N(Np, Np(1-p))$. Letting $\phi (\cdot )$ and $\Phi (\cdot )$ denote the standard normal PDF and CDF, respectively, and defining d as

\[ d = \frac{N p_0 - N p}{\left[ N p (1-p) \right]^\frac {1}{2}} \]

the terms are computed as follows:

\begin{align*} P(X=Np_0) & = 0 \\ P(X<Np_0) & = \Phi (d) \\ P(X>Np_0) & = 1 - \Phi (d) \\ E\left(X 1_{\{ X<Np_0\} }\right) & = Np\Phi (d) - \left[ N p (1-p) \right]^\frac {1}{2} \phi (d) \\ E\left(X 1_{\{ X>Np_0\} }\right) & = Np\left[ 1 - \Phi (d) \right] + \left[ N p (1-p) \right]^\frac {1}{2} \phi (d) \\ \end{align*}

The mean and variance of $Z_{cs}(X)$ are thus approximated by

\[ \mu = k\left[ 2 N p - 2 N p_0 + 2 \Phi (d) - 1 \right] \]

and

\[ \sigma ^2 = 4k^2 \left[Np(1-p) + \Phi (d)\left( 1-\Phi (d) \right) - 2 \left( Np(1-p) \right)^\frac {1}{2} \phi (d) \right] \]

The approximate power is computed as

\begin{align*} \mr{power} & = \left\{ \begin{array}{ll} \Phi \left( \frac{z_\alpha + \mu }{\sigma } \right), & \mbox{upper one-sided} \\ \Phi \left( \frac{z_\alpha - \mu }{\sigma } \right), & \mbox{lower one-sided} \\ \Phi \left( \frac{z_\frac {\alpha }{2} + \mu }{\sigma } \right) + \Phi \left( \frac{z_\frac {\alpha }{2} - \mu }{\sigma } \right), & \mbox{two-sided} \\ \end{array} \right. \\ \end{align*}

The approximate sample size is computed by numerical inversion.
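
Relative to the preceding null-variance case, only the scaling constant k changes; a minimal sketch of the difference (Python; values illustrative):

```python
# For TEST=ADJZ VAREST=SAMPLE, the normal approximation differs from the
# VAREST=NULL case only in k: p(1-p) replaces p0(1-p0) in the scaling.
N, p = 100, 0.3  # illustrative values
k = 1 / (2 * (N * p * (1 - p))**0.5)
# mu, sigma^2, and the power then follow from the same formulas as in the
# previous sketch, with this k substituted.
print(k)
```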

Exact Equivalence Test of a Binomial Proportion (TEST=EQUIV_EXACT)

The hypotheses for the equivalence test are

\begin{align*} H_{0}\colon & p < \theta _ L \quad \mbox{or}\quad p > \theta _ U\\ H_{1}\colon & \theta _ L \le p \le \theta _ U \end{align*}

where $\theta _ L$ and $\theta _ U$ are the lower and upper equivalence bounds, respectively.

The analysis is the two one-sided tests (TOST) procedure described in Chow, Shao, and Wang (2003, p. 84), but using exact critical values (as on p. 116) instead of normal-based critical values.

Two different hypothesis tests are carried out:

\begin{align*} H_{a0}\colon & p < \theta _ L \\ H_{a1}\colon & p \ge \theta _ L \end{align*}

and

\begin{align*} H_{b0}\colon & p > \theta _ U \\ H_{b1}\colon & p \le \theta _ U \end{align*}

If $H_{a0}$ is rejected in favor of $H_{a1}$ and $H_{b0}$ is rejected in favor of $H_{b1}$, then $H_{0}$ is rejected in favor of $H_{1}$.

The test statistic for each of the two tests ($H_{a0}$ versus $H_{a1}$ and $H_{b0}$ versus $H_{b1}$) is

\[ X = \mbox{number of successes} \thicksim \mr{Bin}(N, p) \]

Let $C_ U$ denote the critical value of the exact upper one-sided test of $H_{a0}$ versus $H_{a1}$, and let $C_ L$ denote the critical value of the exact lower one-sided test of $H_{b0}$ versus $H_{b1}$. These critical values are computed in the section Exact Test of a Binomial Proportion (TEST=EXACT). Both of these tests are rejected if and only if $C_ U \le X \le C_ L$. Thus, the exact power of the equivalence test is

\begin{align*} \mr{power} & = P\left( C_ U \le X \le C_ L \right) \\ & = P\left( X \ge C_ U \right) - P \left( X \ge C_ L + 1 \right) \end{align*}

The probabilities are computed using Johnson, Kotz, and Kemp (1992, equation 3.20).
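
A minimal sketch of this power computation (Python with scipy assumed; N, the equivalence bounds, p, and $\alpha $ are illustrative values):

```python
# Sketch of the TEST=EQUIV_EXACT power: C_U from the exact upper one-sided
# test at theta_L, C_L from the exact lower one-sided test at theta_U.
from scipy.stats import binom

N, theta_L, theta_U, p, alpha = 200, 0.4, 0.6, 0.5, 0.05

C_U = min(C for C in range(N + 1) if binom.sf(C - 1, N, theta_L) <= alpha)
C_L = max(C for C in range(N + 1) if binom.cdf(C, N, theta_U) <= alpha)

# power = P(C_U <= X <= C_L) under Bin(N, p)
power = binom.cdf(C_L, N, p) - binom.cdf(C_U - 1, N, p)
print(C_U, C_L, power)
```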

z Equivalence Test for Binomial Proportion Using Null Variance (TEST=EQUIV_Z VAREST=NULL)

The hypotheses for the equivalence test are

\begin{align*} H_{0}\colon & p < \theta _ L \quad \mbox{or}\quad p > \theta _ U\\ H_{1}\colon & \theta _ L \le p \le \theta _ U \end{align*}

where $\theta _ L$ and $\theta _ U$ are the lower and upper equivalence bounds, respectively.

The analysis is the two one-sided tests (TOST) procedure described in Chow, Shao, and Wang (2003, p. 84), but using the null variance instead of the sample variance.

Two different hypothesis tests are carried out:

\begin{align*} H_{a0}\colon & p < \theta _ L \\ H_{a1}\colon & p \ge \theta _ L \end{align*}

and

\begin{align*} H_{b0}\colon & p > \theta _ U \\ H_{b1}\colon & p \le \theta _ U \end{align*}

If $H_{a0}$ is rejected in favor of $H_{a1}$ and $H_{b0}$ is rejected in favor of $H_{b1}$, then $H_{0}$ is rejected in favor of $H_{1}$.

The test statistic for the test of $H_{a0}$ versus $H_{a1}$ is

\[ Z_{L}(X) = \frac{X - N \theta _ L}{\left[ N \theta _ L(1-\theta _ L) \right]^\frac {1}{2}} \]

The test statistic for the test of $H_{b0}$ versus $H_{b1}$ is

\[ Z_{U}(X) = \frac{X - N \theta _ U}{\left[ N \theta _ U(1-\theta _ U) \right]^\frac {1}{2}} \]

For the METHOD=EXACT option, let $C_ U$ denote the critical value of the exact upper one-sided test of $H_{a0}$ versus $H_{a1}$ using $Z_{L}(X)$. This critical value is computed in the section z Test for Binomial Proportion Using Null Variance (TEST=Z VAREST=NULL). Similarly, let $C_ L$ denote the critical value of the exact lower one-sided test of $H_{b0}$ versus $H_{b1}$ using $Z_{U}(X)$. Both of these tests are rejected if and only if $C_ U \le X \le C_ L$. Thus, the exact power of the equivalence test is

\begin{align*} \mr{power} & = P\left( C_ U \le X \le C_ L \right) \\ & = P\left( X \ge C_ U \right) - P \left( X \ge C_ L + 1 \right) \end{align*}

The probabilities are computed using Johnson, Kotz, and Kemp (1992, equation 3.20).

For the METHOD=NORMAL option, the test statistic $Z_{L}(X)$ is assumed to have the normal distribution

\[ \mr{N}\left(\frac{N^{\frac{1}{2}}(p - \theta _ L)}{\left[ \theta _ L(1-\theta _ L) \right]^\frac {1}{2}}, \frac{p(1-p)}{\theta _ L(1-\theta _ L)}\right) \]

and the test statistic $Z_{U}(X)$ is assumed to have the normal distribution

\[ \mr{N}\left(\frac{N^{\frac{1}{2}}(p - \theta _ U)}{\left[ \theta _ U(1-\theta _ U) \right]^\frac {1}{2}}, \frac{p(1-p)}{\theta _ U(1-\theta _ U)}\right) \]

(see Chow, Shao, and Wang (2003, p. 84)). The approximate power is computed as

\begin{align*} \mr{power} & = \Phi \left( \frac{z_\alpha - \sqrt {N} \frac{p - \theta _ U}{\sqrt {\theta _ U(1-\theta _ U)}}}{\sqrt {\frac{p(1-p)}{\theta _ U(1-\theta _ U)}}} \right) + \Phi \left( \frac{z_\alpha + \sqrt {N} \frac{p - \theta _ L}{\sqrt {\theta _ L(1-\theta _ L)}}}{\sqrt {\frac{p(1-p)}{\theta _ L(1-\theta _ L)}}} \right) - 1 \end{align*}

The approximate sample size is computed by numerically inverting the power formula, using the sample size estimate $N_0$ of Chow, Shao, and Wang (2003, p. 85) as an initial guess:

\[ N_0 = p(1-p)\left(\frac{z_{1-\alpha } + z_{(1+\mr{power})/2}}{0.5(\theta _ U-\theta _ L) - \mid p - 0.5(\theta _ L+\theta _ U) \mid } \right)^2 \]
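
A minimal sketch of the approximate power and the initial guess $N_0$ (Python with scipy assumed; inputs, including the target power, are illustrative):

```python
# Sketch of the METHOD=NORMAL power for TEST=EQUIV_Z VAREST=NULL and the
# N0 initial guess for sample-size inversion; inputs are illustrative.
from scipy.stats import norm

N, theta_L, theta_U, p, alpha = 200, 0.4, 0.6, 0.5, 0.05
z_a = norm.ppf(alpha)

def term(theta, sign):
    # one TOST arm of the power formula above, at equivalence bound theta
    num = z_a + sign * N**0.5 * (p - theta) / (theta * (1 - theta))**0.5
    den = (p * (1 - p) / (theta * (1 - theta)))**0.5
    return norm.cdf(num / den)

power = term(theta_U, -1) + term(theta_L, +1) - 1

# Initial sample-size guess N0 (Chow, Shao, and Wang 2003, p. 85)
target = 0.9
z_sum = norm.ppf(1 - alpha) + norm.ppf((1 + target) / 2)
denom = 0.5 * (theta_U - theta_L) - abs(p - 0.5 * (theta_L + theta_U))
N0 = p * (1 - p) * (z_sum / denom)**2
print(power, N0)
```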

z Equivalence Test for Binomial Proportion Using Sample Variance (TEST=EQUIV_Z VAREST=SAMPLE)

The hypotheses for the equivalence test are

\begin{align*} H_{0}\colon & p < \theta _ L \quad \mbox{or}\quad p > \theta _ U\\ H_{1}\colon & \theta _ L \le p \le \theta _ U \end{align*}

where $\theta _ L$ and $\theta _ U$ are the lower and upper equivalence bounds, respectively.

The analysis is the two one-sided tests (TOST) procedure described in Chow, Shao, and Wang (2003, p. 84).

Two different hypothesis tests are carried out:

\begin{align*} H_{a0}\colon & p < \theta _ L \\ H_{a1}\colon & p \ge \theta _ L \end{align*}

and

\begin{align*} H_{b0}\colon & p > \theta _ U \\ H_{b1}\colon & p \le \theta _ U \end{align*}

If $H_{a0}$ is rejected in favor of $H_{a1}$ and $H_{b0}$ is rejected in favor of $H_{b1}$, then $H_{0}$ is rejected in favor of $H_{1}$.

The test statistic for the test of $H_{a0}$ versus $H_{a1}$ is

\[ Z_{sL}(X) = \frac{X - N \theta _ L}{\left[ N \hat{p}(1-\hat{p}) \right]^\frac {1}{2}} \]

where $\hat{p} = X/N$.

The test statistic for the test of $H_{b0}$ versus $H_{b1}$ is

\[ Z_{sU}(X) = \frac{X - N \theta _ U}{\left[ N \hat{p}(1-\hat{p}) \right]^\frac {1}{2}} \]

For the METHOD=EXACT option, let $C_ U$ denote the critical value of the exact upper one-sided test of $H_{a0}$ versus $H_{a1}$ using $Z_{sL}(X)$. This critical value is computed in the section z Test for Binomial Proportion Using Sample Variance (TEST=Z VAREST=SAMPLE). Similarly, let $C_ L$ denote the critical value of the exact lower one-sided test of $H_{b0}$ versus $H_{b1}$ using $Z_{sU}(X)$. Both of these tests are rejected if and only if $C_ U \le X \le C_ L$. Thus, the exact power of the equivalence test is

\begin{align*} \mr{power} & = P\left( C_ U \le X \le C_ L \right) \\ & = P\left( X \ge C_ U \right) - P \left( X \ge C_ L + 1 \right) \end{align*}

The probabilities are computed using Johnson, Kotz, and Kemp (1992, equation 3.20).

For the METHOD=NORMAL option, the test statistic $Z_{sL}(X)$ is assumed to have the normal distribution

\[ \mr{N}\left(\frac{N^{\frac{1}{2}}(p - \theta _ L)}{\left[ p(1-p) \right]^\frac {1}{2}}, 1 \right) \]

and the test statistic $Z_{sU}(X)$ is assumed to have the normal distribution

\[ \mr{N}\left(\frac{N^{\frac{1}{2}}(p - \theta _ U)}{\left[ p(1-p) \right]^\frac {1}{2}}, 1 \right) \]

(see Chow, Shao, and Wang (2003, p. 84)).

The approximate power is computed as

\begin{align*} \mr{power} & = \Phi \left( z_\alpha - \sqrt {N} \frac{p - \theta _ U}{\sqrt {p(1-p)}} \right) + \Phi \left( z_\alpha + \sqrt {N} \frac{p - \theta _ L}{\sqrt {p(1-p)}} \right) - 1 \end{align*}

The approximate sample size is computed by numerically inverting the power formula, using the sample size estimate $N_0$ of Chow, Shao, and Wang (2003, p. 85) as an initial guess:

\[ N_0 = p(1-p)\left(\frac{z_{1-\alpha } + z_{(1+\mr{power})/2}}{0.5(\theta _ U-\theta _ L) - \mid p - 0.5(\theta _ L+\theta _ U) \mid } \right)^2 \]

z Equivalence Test for Binomial Proportion with Continuity Adjustment Using Null Variance (TEST=EQUIV_ADJZ VAREST=NULL)

The hypotheses for the equivalence test are

\begin{align*} H_{0}\colon & p < \theta _ L \quad \mbox{or}\quad p > \theta _ U\\ H_{1}\colon & \theta _ L \le p \le \theta _ U \end{align*}

where $\theta _ L$ and $\theta _ U$ are the lower and upper equivalence bounds, respectively.

The analysis is the two one-sided tests (TOST) procedure described in Chow, Shao, and Wang (2003, p. 84), but using the null variance instead of the sample variance.

Two different hypothesis tests are carried out:

\begin{align*} H_{a0}\colon & p < \theta _ L \\ H_{a1}\colon & p \ge \theta _ L \end{align*}

and

\begin{align*} H_{b0}\colon & p > \theta _ U \\ H_{b1}\colon & p \le \theta _ U \end{align*}

If $H_{a0}$ is rejected in favor of $H_{a1}$ and $H_{b0}$ is rejected in favor of $H_{b1}$, then $H_{0}$ is rejected in favor of $H_{1}$.

The test statistic for the test of $H_{a0}$ versus $H_{a1}$ is

\[ Z_{cL}(X) = \frac{X - N \theta _ L + 0.5(1_{\{ X < N \theta _ L\} }) - 0.5(1_{\{ X > N \theta _ L\} }) }{\left[ N \theta _ L(1-\theta _ L) \right]^\frac {1}{2}} \]

The test statistic for the test of $H_{b0}$ versus $H_{b1}$ is

\[ Z_{cU}(X) = \frac{X - N \theta _ U + 0.5(1_{\{ X < N \theta _ U\} }) - 0.5(1_{\{ X > N \theta _ U\} }) }{\left[ N \theta _ U(1-\theta _ U) \right]^\frac {1}{2}} \]

For the METHOD=EXACT option, let $C_ U$ denote the critical value of the exact upper one-sided test of $H_{a0}$ versus $H_{a1}$ using $Z_{cL}(X)$. This critical value is computed in the section z Test for Binomial Proportion with Continuity Adjustment Using Null Variance (TEST=ADJZ VAREST=NULL). Similarly, let $C_ L$ denote the critical value of the exact lower one-sided test of $H_{b0}$ versus $H_{b1}$ using $Z_{cU}(X)$. Both of these tests are rejected if and only if $C_ U \le X \le C_ L$. Thus, the exact power of the equivalence test is

\begin{align*} \mr{power} & = P\left( C_ U \le X \le C_ L \right) \\ & = P\left( X \ge C_ U \right) - P \left( X \ge C_ L + 1 \right) \end{align*}

The probabilities are computed using Johnson, Kotz, and Kemp (1992, equation 3.20).

For the METHOD=NORMAL option, the test statistic $Z_{cL}(X)$ is assumed to have the normal distribution $N(\mu _ L, \sigma _ L^2)$, and $Z_{cU}(X)$ is assumed to have the normal distribution $N(\mu _ U, \sigma _ U^2)$, where $\mu _ L$, $\mu _ U$, $\sigma _ L^2$, and $\sigma _ U^2$ are derived as follows.

For convenience of notation, define

\begin{align*} k_ L & = \frac{1}{2 \sqrt {N \theta _ L (1-\theta _ L)}} \\ k_ U & = \frac{1}{2 \sqrt {N \theta _ U (1-\theta _ U)}} \\ \end{align*}

Then

\begin{align*} E \left[Z_{cL}(X)\right] & \approx 2 k_ L N p - 2 k_ L N \theta _ L + k_ L P(X < N \theta _ L) - k_ L P(X > N \theta _ L) \\ E \left[Z_{cU}(X)\right] & \approx 2 k_ U N p - 2 k_ U N \theta _ U + k_ U P(X < N \theta _ U) - k_ U P(X > N \theta _ U) \end{align*}

and

\begin{align*} \mr{Var} \left[Z_{cL}(X)\right] & \approx 4 k_ L^2 N p (1-p) + k_ L^2 \left[ 1 - P(X = N \theta _ L) \right] - k_ L^2 \left[ P(X<N\theta _ L) - P(X>N\theta _ L) \right]^2 \\ & \quad + 4 k_ L^2 \left[ E\left(X 1_{\{ X<N\theta _ L\} }\right) - E\left(X 1_{\{ X>N\theta _ L\} }\right) \right] - 4 k_ L^2 N p \left[P(X<N\theta _ L) - P(X>N\theta _ L)\right] \\ \mr{Var} \left[Z_{cU}(X)\right] & \approx 4 k_ U^2 N p (1-p) + k_ U^2 \left[ 1 - P(X = N \theta _ U) \right] - k_ U^2 \left[ P(X<N\theta _ U) - P(X>N\theta _ U) \right]^2 \\ & \quad + 4 k_ U^2 \left[ E\left(X 1_{\{ X<N\theta _ U\} }\right) - E\left(X 1_{\{ X>N\theta _ U\} }\right) \right] - 4 k_ U^2 N p \left[P(X<N\theta _ U) - P(X>N\theta _ U)\right] \\ \end{align*}

The probabilities $P(X=N\theta _ L)$, $P(X<N\theta _ L)$, $P(X>N\theta _ L)$, $P(X=N\theta _ U)$, $P(X<N\theta _ U)$, and $P(X>N\theta _ U)$ and the truncated expectations $E\left(X 1_{\{ X<N\theta _ L\} }\right)$, $E\left(X 1_{\{ X>N\theta _ L\} }\right)$, $E\left(X 1_{\{ X<N\theta _ U\} }\right)$, and $E\left(X 1_{\{ X>N\theta _ U\} }\right)$ are approximated by assuming the normal-approximate distribution of X, $N(Np, Np(1-p))$. Letting $\phi (\cdot )$ and $\Phi (\cdot )$ denote the standard normal PDF and CDF, respectively, and defining $d_ L$ and $d_ U$ as

\begin{align*} d_ L & = \frac{N \theta _ L - N p}{\left[ N p (1-p) \right]^\frac {1}{2}} \\ d_ U & = \frac{N \theta _ U - N p}{\left[ N p (1-p) \right]^\frac {1}{2}} \end{align*}

the terms are computed as follows:

\begin{align*} P(X=N\theta _ L) & = 0 \\ P(X=N\theta _ U) & = 0 \\ P(X<N\theta _ L) & = \Phi (d_ L) \\ P(X<N\theta _ U) & = \Phi (d_ U) \\ P(X>N\theta _ L) & = 1 - \Phi (d_ L) \\ P(X>N\theta _ U) & = 1 - \Phi (d_ U) \\ E\left(X 1_{\{ X<N\theta _ L\} }\right) & = Np\Phi (d_ L) - \left[ N p (1-p) \right]^\frac {1}{2} \phi (d_ L) \\ E\left(X 1_{\{ X<N\theta _ U\} }\right) & = Np\Phi (d_ U) - \left[ N p (1-p) \right]^\frac {1}{2} \phi (d_ U) \\ E\left(X 1_{\{ X>N\theta _ L\} }\right) & = Np\left[ 1 - \Phi (d_ L) \right] + \left[ N p (1-p) \right]^\frac {1}{2} \phi (d_ L) \\ E\left(X 1_{\{ X>N\theta _ U\} }\right) & = Np\left[ 1 - \Phi (d_ U) \right] + \left[ N p (1-p) \right]^\frac {1}{2} \phi (d_ U) \\ \end{align*}

The mean and variance of $Z_{cL}(X)$ and $Z_{cU}(X)$ are thus approximated by

\begin{align*} \mu _ L & = k_ L\left[ 2 N p - 2 N \theta _ L + 2 \Phi (d_ L) - 1 \right] \\ \mu _ U & = k_ U\left[ 2 N p - 2 N \theta _ U + 2 \Phi (d_ U) - 1 \right] \\ \end{align*}

and

\begin{align*} \sigma _ L^2 & = 4k_ L^2 \left[Np(1-p) + \Phi (d_ L)\left( 1-\Phi (d_ L) \right) - 2 \left( Np(1-p) \right)^\frac {1}{2} \phi (d_ L) \right] \\ \sigma _ U^2 & = 4k_ U^2 \left[Np(1-p) + \Phi (d_ U)\left( 1-\Phi (d_ U) \right) - 2 \left( Np(1-p) \right)^\frac {1}{2} \phi (d_ U) \right] \\ \end{align*}

The approximate power is computed as

\begin{align*} \mr{power} & = \Phi \left( \frac{z_\alpha - \mu _ U}{\sigma _ U} \right) + \Phi \left( \frac{z_\alpha + \mu _ L}{\sigma _ L} \right) - 1 \end{align*}

The approximate sample size is computed by numerically inverting the power formula.
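
A minimal sketch of this power approximation, computing $\mu _ L$, $\mu _ U$, $\sigma _ L$, and $\sigma _ U$ per the formulas above (Python with scipy assumed; inputs illustrative):

```python
# Sketch of the METHOD=NORMAL power for TEST=EQUIV_ADJZ VAREST=NULL;
# inputs are illustrative values.
from scipy.stats import norm

N, theta_L, theta_U, p, alpha = 200, 0.4, 0.6, 0.5, 0.05

def mu_sigma(theta):
    # k, d, mu, and sigma for one equivalence bound, per the formulas above
    k = 1 / (2 * (N * theta * (1 - theta))**0.5)
    d = (N * theta - N * p) / (N * p * (1 - p))**0.5
    mu = k * (2 * N * p - 2 * N * theta + 2 * norm.cdf(d) - 1)
    var = 4 * k**2 * (N * p * (1 - p) + norm.cdf(d) * (1 - norm.cdf(d))
                      - 2 * (N * p * (1 - p))**0.5 * norm.pdf(d))
    return mu, var**0.5

mu_L, s_L = mu_sigma(theta_L)
mu_U, s_U = mu_sigma(theta_U)
z_a = norm.ppf(alpha)
power = norm.cdf((z_a - mu_U) / s_U) + norm.cdf((z_a + mu_L) / s_L) - 1
print(power)
```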

z Equivalence Test for Binomial Proportion with Continuity Adjustment Using Sample Variance (TEST=EQUIV_ADJZ VAREST=SAMPLE)

The hypotheses for the equivalence test are

\begin{align*} H_{0}\colon & p < \theta _ L \quad \mbox{or}\quad p > \theta _ U\\ H_{1}\colon & \theta _ L \le p \le \theta _ U \end{align*}

where $\theta _ L$ and $\theta _ U$ are the lower and upper equivalence bounds, respectively.

The analysis is the two one-sided tests (TOST) procedure described in Chow, Shao, and Wang (2003, p. 84).

Two different hypothesis tests are carried out:

\begin{align*} H_{a0}\colon & p < \theta _ L \\ H_{a1}\colon & p \ge \theta _ L \end{align*}

and

\begin{align*} H_{b0}\colon & p > \theta _ U \\ H_{b1}\colon & p \le \theta _ U \end{align*}

If $H_{a0}$ is rejected in favor of $H_{a1}$ and $H_{b0}$ is rejected in favor of $H_{b1}$, then $H_{0}$ is rejected in favor of $H_{1}$.

The test statistic for the test of $H_{a0}$ versus $H_{a1}$ is

\[ Z_{csL}(X) = \frac{X - N \theta _ L + 0.5(1_{\{ X < N \theta _ L\} }) - 0.5(1_{\{ X > N \theta _ L\} }) }{\left[ N \hat{p}(1-\hat{p}) \right]^\frac {1}{2}} \]

where $\hat{p} = X/N$.

The test statistic for the test of $H_{b0}$ versus $H_{b1}$ is

\[ Z_{csU}(X) = \frac{X - N \theta _ U + 0.5(1_{\{ X < N \theta _ U\} }) - 0.5(1_{\{ X > N \theta _ U\} }) }{\left[ N \hat{p}(1-\hat{p}) \right]^\frac {1}{2}} \]

For the METHOD=EXACT option, let $C_ U$ denote the critical value of the exact upper one-sided test of $H_{a0}$ versus $H_{a1}$ using $Z_{csL}(X)$. This critical value is computed in the section z Test for Binomial Proportion with Continuity Adjustment Using Sample Variance (TEST=ADJZ VAREST=SAMPLE). Similarly, let $C_ L$ denote the critical value of the exact lower one-sided test of $H_{b0}$ versus $H_{b1}$ using $Z_{csU}(X)$. Both of these tests are rejected if and only if $C_ U \le X \le C_ L$. Thus, the exact power of the equivalence test is

\begin{align*} \mr{power} & = P\left( C_ U \le X \le C_ L \right) \\ & = P\left( X \ge C_ U \right) - P \left( X \ge C_ L + 1 \right) \end{align*}

The probabilities are computed using Johnson, Kotz, and Kemp (1992, equation 3.20).

For the METHOD=NORMAL option, the test statistic $Z_{csL}(X)$ is assumed to have the normal distribution $N(\mu _ L, \sigma _ L^2)$, and $Z_{csU}(X)$ is assumed to have the normal distribution $N(\mu _ U, \sigma _ U^2)$, where $\mu _ L$, $\mu _ U$, $\sigma _ L^2$ and $\sigma _ U^2$ are derived as follows.

For convenience of notation, define

\[ k = \frac{1}{2 \sqrt {N p (1-p)}} \]

Then

\begin{align*} E \left[Z_{csL}(X)\right] & \approx 2 k N p - 2 k N \theta _ L + k P(X < N \theta _ L) - k P(X > N \theta _ L) \\ E \left[Z_{csU}(X)\right] & \approx 2 k N p - 2 k N \theta _ U + k P(X < N \theta _ U) - k P(X > N \theta _ U) \end{align*}

and

\begin{align*} \mr{Var} \left[Z_{csL}(X)\right] & \approx 4 k^2 N p (1-p) + k^2 \left[ 1 - P(X = N \theta _ L) \right] - k^2 \left[ P(X<N\theta _ L) - P(X>N\theta _ L) \right]^2 \\ & \quad + 4 k^2 \left[ E\left(X 1_{\{ X<N\theta _ L\} }\right) - E\left(X 1_{\{ X>N\theta _ L\} }\right) \right] - 4 k^2 N p \left[P(X<N\theta _ L) - P(X>N\theta _ L)\right] \\ \mr{Var} \left[Z_{csU}(X)\right] & \approx 4 k^2 N p (1-p) + k^2 \left[ 1 - P(X = N \theta _ U) \right] - k^2 \left[ P(X<N\theta _ U) - P(X>N\theta _ U) \right]^2 \\ & \quad + 4 k^2 \left[ E\left(X 1_{\{ X<N\theta _ U\} }\right) - E\left(X 1_{\{ X>N\theta _ U\} }\right) \right] - 4 k^2 N p \left[P(X<N\theta _ U) - P(X>N\theta _ U)\right] \\ \end{align*}

The probabilities $P(X=N\theta _ L)$, $P(X<N\theta _ L)$, $P(X>N\theta _ L)$, $P(X=N\theta _ U)$, $P(X<N\theta _ U)$, and $P(X>N\theta _ U)$ and the truncated expectations $E\left(X 1_{\{ X<N\theta _ L\} }\right)$, $E\left(X 1_{\{ X>N\theta _ L\} }\right)$, $E\left(X 1_{\{ X<N\theta _ U\} }\right)$, and $E\left(X 1_{\{ X>N\theta _ U\} }\right)$ are approximated by assuming the normal-approximate distribution of X, $N(Np, Np(1-p))$. Letting $\phi (\cdot )$ and $\Phi (\cdot )$ denote the standard normal PDF and CDF, respectively, and defining $d_ L$ and $d_ U$ as

\begin{align*} d_ L & = \frac{N \theta _ L - N p}{\left[ N p (1-p) \right]^\frac {1}{2}} \\ d_ U & = \frac{N \theta _ U - N p}{\left[ N p (1-p) \right]^\frac {1}{2}} \end{align*}

the terms are computed as follows:

\begin{align*} P(X=N\theta _ L) & = 0 \\ P(X=N\theta _ U) & = 0 \\ P(X<N\theta _ L) & = \Phi (d_ L) \\ P(X<N\theta _ U) & = \Phi (d_ U) \\ P(X>N\theta _ L) & = 1 - \Phi (d_ L) \\ P(X>N\theta _ U) & = 1 - \Phi (d_ U) \\ E\left(X 1_{\{ X<N\theta _ L\} }\right) & = Np\Phi (d_ L) - \left[ N p (1-p) \right]^\frac {1}{2} \phi (d_ L) \\ E\left(X 1_{\{ X<N\theta _ U\} }\right) & = Np\Phi (d_ U) - \left[ N p (1-p) \right]^\frac {1}{2} \phi (d_ U) \\ E\left(X 1_{\{ X>N\theta _ L\} }\right) & = Np\left[ 1 - \Phi (d_ L) \right] + \left[ N p (1-p) \right]^\frac {1}{2} \phi (d_ L) \\ E\left(X 1_{\{ X>N\theta _ U\} }\right) & = Np\left[ 1 - \Phi (d_ U) \right] + \left[ N p (1-p) \right]^\frac {1}{2} \phi (d_ U) \\ \end{align*}

The mean and variance of $Z_{csL}(X)$ and $Z_{csU}(X)$ are thus approximated by

\begin{align*} \mu _ L & = k\left[ 2 N p - 2 N \theta _ L + 2 \Phi (d_ L) - 1 \right] \\ \mu _ U & = k\left[ 2 N p - 2 N \theta _ U + 2 \Phi (d_ U) - 1 \right] \\ \end{align*}

and

\begin{align*} \sigma _ L^2 & = 4k^2 \left[Np(1-p) + \Phi (d_ L)\left( 1-\Phi (d_ L) \right) - 2 \left( Np(1-p) \right)^\frac {1}{2} \phi (d_ L) \right] \\ \sigma _ U^2 & = 4k^2 \left[Np(1-p) + \Phi (d_ U)\left( 1-\Phi (d_ U) \right) - 2 \left( Np(1-p) \right)^\frac {1}{2} \phi (d_ U) \right] \\ \end{align*}

The approximate power is computed as

\begin{align*} \mr{power} & = \Phi \left( \frac{z_\alpha - \mu _ U}{\sigma _ U} \right) + \Phi \left( \frac{z_\alpha + \mu _ L}{\sigma _ L} \right) - 1 \end{align*}

The approximate sample size is computed by numerically inverting the power formula.

Wilson Score Confidence Interval for Binomial Proportion (CI=WILSON)

The two-sided $100(1-\alpha )$% confidence interval for p is

\[ \frac{X + \frac{z^2_{1-\alpha /2}}{2}}{N + z^2_{1-\alpha /2}} \quad \pm \quad \frac{z_{1-\alpha /2} N^\frac {1}{2}}{N + z^2_{1-\alpha /2}} \left(\hat{p}(1-\hat{p}) + \frac{z^2_{1-\alpha /2}}{4N} \right)^\frac {1}{2} \]

So the half-width for the two-sided $100(1-\alpha )$% confidence interval is

\[ \mbox{half-width} = \frac{z_{1-\alpha /2} N^\frac {1}{2}}{N + z^2_{1-\alpha /2}} \left(\hat{p}(1-\hat{p}) + \frac{z^2_{1-\alpha /2}}{4N} \right)^\frac {1}{2} \]

Prob(Width) is calculated exactly by adding up the probabilities of observing each $X \in \{ 0, 1, \ldots , N\} $ that produces a confidence interval whose half-width is at most a target value h:

\[ \mr{Prob(Width)} = \sum _{i=0}^ N P(X=i) 1_{\{ \mbox{half-width} \le h\} } \]

For references and more details about this and all other confidence intervals associated with the CI= option, see Binomial Proportion in Chapter 40: The FREQ Procedure.
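
A minimal sketch of the half-width and Prob(Width) computation for this interval (Python with scipy assumed; N, p, $\alpha $, and the target half-width h are illustrative):

```python
# Sketch of the CI=WILSON half-width and Prob(Width) computation;
# inputs are illustrative values.
from scipy.stats import binom, norm

N, p, alpha, h = 100, 0.3, 0.05, 0.10
z = norm.ppf(1 - alpha / 2)

def half_width(x):
    p_hat = x / N
    return (z * N**0.5 / (N + z**2)) * (p_hat * (1 - p_hat) + z**2 / (4 * N))**0.5

# Prob(Width): total probability of the X values whose half-width is at most h
prob_width = sum(binom.pmf(x, N, p) for x in range(N + 1) if half_width(x) <= h)
print(prob_width)
```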

Agresti-Coull "Add k Successes and Failures" Confidence Interval for Binomial Proportion (CI=AGRESTICOULL)

The two-sided $100(1-\alpha )$% confidence interval for p is

\[ \frac{X + \frac{z^2_{1-\alpha /2}}{2}}{N + z^2_{1-\alpha /2}} \quad \pm \quad z_{1-\alpha /2} \left( \frac{\frac{X + \frac{z^2_{1-\alpha /2}}{2}}{N + z^2_{1-\alpha /2}} \left(1 -\frac{X + \frac{z^2_{1-\alpha /2}}{2}}{N + z^2_{1-\alpha /2}} \right)}{N + z^2_{1-\alpha /2}} \right)^\frac {1}{2} \]

So the half-width for the two-sided $100(1-\alpha )$% confidence interval is

\[ \mbox{half-width} = z_{1-\alpha /2} \left( \frac{\frac{X + \frac{z^2_{1-\alpha /2}}{2}}{N + z^2_{1-\alpha /2}} \left(1 -\frac{X + \frac{z^2_{1-\alpha /2}}{2}}{N + z^2_{1-\alpha /2}} \right)}{N + z^2_{1-\alpha /2}} \right)^\frac {1}{2} \]

Prob(Width) is calculated exactly by adding up the probabilities of observing each $X \in \{ 0, 1, \ldots , N\} $ that produces a confidence interval whose half-width is at most a target value h:

\[ \mr{Prob(Width)} = \sum _{i=0}^ N P(X=i) 1_{\{ \mbox{half-width} \le h\} } \]

Jeffreys Confidence Interval for Binomial Proportion (CI=JEFFREYS)

The two-sided $100(1-\alpha )$% confidence interval for p is

\[ \left[ L_ J(X), U_ J(X) \right] \]

where

\[ L_ J(X) = \left\{ \begin{array}{ll} 0, & X = 0 \\ \mr{Beta}_{\alpha /2; X+1/2, N-X+1/2}, & X > 0 \\ \end{array} \right. \]

and

\[ U_ J(X) = \left\{ \begin{array}{ll} \mr{Beta}_{1-\alpha /2; X+1/2, N-X+1/2}, & X < N \\ 1, & X = N \\ \end{array} \right. \]

The half-width of this two-sided $100(1-\alpha )$% confidence interval is defined as half the width of the full interval:

\[ \mbox{half-width} = \frac{1}{2} \left( U_ J(X) - L_ J(X) \right) \]

Prob(Width) is calculated exactly by adding up the probabilities of observing each $X \in \{ 0, 1, \ldots , N\} $ that produces a confidence interval whose half-width is at most a target value h:

\[ \mr{Prob(Width)} = \sum _{i=0}^ N P(X=i) 1_{\{ \mbox{half-width} \le h\} } \]
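
A minimal sketch of the corresponding computation for the Jeffreys interval, using Beta quantiles for the endpoints defined above (Python with scipy assumed; inputs illustrative):

```python
# Sketch of the CI=JEFFREYS half-width and Prob(Width); beta.ppf supplies
# the Beta quantiles in the endpoint formulas above. Inputs are illustrative.
from scipy.stats import beta, binom

N, p, alpha, h = 100, 0.3, 0.05, 0.10

def jeffreys_half_width(x):
    lo = 0.0 if x == 0 else beta.ppf(alpha / 2, x + 0.5, N - x + 0.5)
    hi = 1.0 if x == N else beta.ppf(1 - alpha / 2, x + 0.5, N - x + 0.5)
    return 0.5 * (hi - lo)

prob_width = sum(binom.pmf(x, N, p) for x in range(N + 1)
                 if jeffreys_half_width(x) <= h)
print(prob_width)
```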

Exact Clopper-Pearson Confidence Interval for Binomial Proportion (CI=EXACT)

The two-sided $100(1-\alpha )$% confidence interval for p is

\[ \left[ L_ E(X), U_ E(X) \right] \]

where

\[ L_ E(X) = \left\{ \begin{array}{ll} 0, & X = 0 \\ \mr{Beta}_{\alpha /2; X, N-X+1}, & X > 0 \\ \end{array} \right. \]

and

\[ U_ E(X) = \left\{ \begin{array}{ll} \mr{Beta}_{1-\alpha /2; X+1, N-X}, & X < N \\ 1, & X = N \\ \end{array} \right. \]

The half-width of this two-sided $100(1-\alpha )$% confidence interval is defined as half the width of the full interval:

\[ \mbox{half-width} = \frac{1}{2} \left( U_ E(X) - L_ E(X) \right) \]

Prob(Width) is calculated exactly by adding up the probabilities of observing each $X \in \{ 0, 1, \ldots , N\} $ that produces a confidence interval whose half-width is at most a target value h:

\[ \mr{Prob(Width)} = \sum _{i=0}^ N P(X=i) 1_{\{ \mbox{half-width} \le h\} } \]

Wald Confidence Interval for Binomial Proportion (CI=WALD)

The two-sided $100(1-\alpha )$% confidence interval for p is

\[ \hat{p} \quad \pm \quad z_{1-\alpha /2} \left( \frac{\hat{p}(1-\hat{p})}{N} \right)^\frac {1}{2} \]

So the half-width for the two-sided $100(1-\alpha )$% confidence interval is

\[ \mbox{half-width} = z_{1-\alpha /2} \left( \frac{\hat{p}(1-\hat{p})}{N} \right)^\frac {1}{2} \]

Prob(Width) is calculated exactly by adding up the probabilities of observing each $X \in \{ 0, 1, \ldots , N\} $ that produces a confidence interval whose half-width is at most a target value h:

\[ \mr{Prob(Width)} = \sum _{i=0}^ N P(X=i) 1_{\{ \mbox{half-width} \le h\} } \]

Continuity-Corrected Wald Confidence Interval for Binomial Proportion (CI=WALD_CORRECT)

The two-sided $100(1-\alpha )$% confidence interval for p is

\[ \hat{p} \quad \pm \quad \left[ z_{1-\alpha /2} \left( \frac{\hat{p}(1-\hat{p})}{N} \right)^\frac {1}{2} + \frac{1}{2N} \right] \]

So the half-width for the two-sided $100(1-\alpha )$% confidence interval is

\[ \mbox{half-width} = z_{1-\alpha /2} \left( \frac{\hat{p}(1-\hat{p})}{N} \right)^\frac {1}{2} + \frac{1}{2N} \]

Prob(Width) is calculated exactly by adding up the probabilities of observing each $X \in \{ 0, 1, \ldots , N\} $ that produces a confidence interval whose half-width is at most a target value h:

\[ \mr{Prob(Width)} = \sum _{i=0}^ N P(X=i) 1_{\{ \mbox{half-width} \le h\} } \]