The hypotheses for the one-sample t test are

\[ H_0\colon \mu = \mu_0 \]

\[ H_1\colon \begin{cases} \mu \ne \mu_0, & \text{two-sided} \\ \mu > \mu_0, & \text{upper one-sided} \\ \mu < \mu_0, & \text{lower one-sided} \end{cases} \]
The test assumes normally distributed data and requires $N \ge 2$. The test statistics are

\[ t = N^{1/2}\,\frac{\bar{x} - \mu_0}{s} \sim t(N-1,\, \delta) \]

\[ t^2 \sim F(1,\, N-1,\, \delta^2) \]

where $\bar{x}$ is the sample mean, $s$ is the sample standard deviation, and

\[ \delta = N^{1/2}\,\frac{\mu - \mu_0}{\sigma} \]
The test is

\[ \text{Reject } H_0 \text{ if } \begin{cases} |t| \ge t_{1-\alpha/2}(N-1), & \text{two-sided} \\ t \ge t_{1-\alpha}(N-1), & \text{upper one-sided} \\ t \le t_{\alpha}(N-1), & \text{lower one-sided} \end{cases} \]
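As a quick sketch of the test statistic and rejection rule (the sample data here are hypothetical, and SciPy/NumPy are assumed to be available):

```python
import numpy as np
from scipy import stats

# hypothetical sample; two-sided test of H0: mu = 5 at alpha = 0.05
x = np.array([4.2, 5.1, 6.3, 5.8, 4.9, 5.5, 6.0, 4.7])
N = len(x)
t_stat = np.sqrt(N) * (x.mean() - 5.0) / x.std(ddof=1)  # t = sqrt(N)(xbar - mu0)/s
t_crit = stats.t.ppf(1 - 0.05 / 2, N - 1)               # two-sided critical value
reject = abs(t_stat) >= t_crit

# cross-check against SciPy's built-in one-sample t test
t_check, p_value = stats.ttest_1samp(x, popmean=5.0)
```

The hand-computed statistic agrees with `scipy.stats.ttest_1samp`, and comparing `|t|` to the critical value is equivalent to comparing the p-value to $\alpha$.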
Exact power computations for t tests are discussed in O’Brien and Muller (1993, Section 8.2), although not specifically for the one-sample case. The power is based on the noncentral t and F distributions:

\[ \text{power} = \begin{cases} P\left(F(1,\, N-1,\, \delta^2) \ge F_{1-\alpha}(1,\, N-1)\right), & \text{two-sided} \\ P\left(t(N-1,\, \delta) \ge t_{1-\alpha}(N-1)\right), & \text{upper one-sided} \\ P\left(t(N-1,\, \delta) \le t_{\alpha}(N-1)\right), & \text{lower one-sided} \end{cases} \]
Solutions for $N$, $\alpha$, and $\delta$ are obtained by numerically inverting the power equation. Closed-form solutions for other parameters, in terms of $\delta$, are as follows:

\[ \mu = \delta \sigma N^{-1/2} + \mu_0 \]

\[ \sigma = (\mu - \mu_0)\, N^{1/2}\, \delta^{-1} \]

\[ \mu_0 = \mu - \delta \sigma N^{-1/2} \]
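The power expression and its numerical inversion for $N$ can be sketched as follows, assuming SciPy is available (the function names are illustrative, not part of any procedure):

```python
from math import sqrt
from scipy.stats import nct, ncf, t as tdist, f as fdist

def power_one_sample_t(mu, mu0, sigma, N, alpha=0.05, sides=2):
    """Exact power via the noncentral t (one-sided) or noncentral F (two-sided)."""
    nu = N - 1
    delta = sqrt(N) * (mu - mu0) / sigma              # noncentrality parameter
    if sides == 2:
        return ncf.sf(fdist.ppf(1 - alpha, 1, nu), 1, nu, delta**2)
    return nct.sf(tdist.ppf(1 - alpha, nu), nu, delta)  # upper one-sided

def n_for_power(mu, mu0, sigma, target=0.9, alpha=0.05, sides=2):
    """Smallest N reaching the target power (simple numerical inversion)."""
    N = 2
    while power_one_sample_t(mu, mu0, sigma, N, alpha, sides) < target:
        N += 1
    return N
```

A linear search suffices here because power is monotone in $N$; a bisection or root-finding step would serve equally well.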
The lognormal case is handled by re-expressing the analysis equivalently as a normality-based test on the log-transformed data, by using properties of the lognormal distribution as discussed in Johnson, Kotz, and Balakrishnan (1994, Chapter 14). The approaches in the section One-Sample t Test (TEST=T) then apply.
In contrast to the usual t test on normal data, the hypotheses with lognormal data are defined in terms of geometric means rather than arithmetic means. This is because the transformation of a null arithmetic mean of lognormal data to the normal scale depends on the unknown coefficient of variation, resulting in an ill-defined hypothesis on the log-transformed data. Geometric means transform cleanly and are more natural for lognormal data.
The hypotheses for the one-sample t test with lognormal data are

\[ H_0\colon \frac{\gamma}{\gamma_0} = 1 \]

\[ H_1\colon \begin{cases} \gamma/\gamma_0 \ne 1, & \text{two-sided} \\ \gamma/\gamma_0 > 1, & \text{upper one-sided} \\ \gamma/\gamma_0 < 1, & \text{lower one-sided} \end{cases} \]

where $\gamma$ is the geometric mean and $\gamma_0$ is the null geometric mean.
Let $\mu^\star$ and $\sigma^\star$ be the (arithmetic) mean and standard deviation of the normal distribution of the log-transformed data. The hypotheses can be rewritten as follows:

\[ H_0\colon \mu^\star = \mu_0^\star \]

\[ H_1\colon \begin{cases} \mu^\star \ne \mu_0^\star, & \text{two-sided} \\ \mu^\star > \mu_0^\star, & \text{upper one-sided} \\ \mu^\star < \mu_0^\star, & \text{lower one-sided} \end{cases} \]

where $\mu_0^\star = \log(\gamma_0)$.
The test assumes lognormally distributed data and requires $N \ge 2$.

The power is

\[ \text{power} = \begin{cases} P\left(F(1,\, N-1,\, \delta^2) \ge F_{1-\alpha}(1,\, N-1)\right), & \text{two-sided} \\ P\left(t(N-1,\, \delta) \ge t_{1-\alpha}(N-1)\right), & \text{upper one-sided} \\ P\left(t(N-1,\, \delta) \le t_{\alpha}(N-1)\right), & \text{lower one-sided} \end{cases} \]

where

\[ \delta = N^{1/2}\,\frac{\mu^\star - \mu_0^\star}{\sigma^\star} \]

\[ \sigma^\star = \left[\log(\mathrm{CV}^2 + 1)\right]^{1/2} \]

and $\mathrm{CV}$ is the coefficient of variation.
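A minimal sketch of the log-scale computation, assuming SciPy (the function name and CV parameterization are illustrative); it uses the lognormal property that the standard deviation of the log-data is $[\log(\mathrm{CV}^2+1)]^{1/2}$:

```python
from math import log, sqrt
from scipy.stats import nct, t as tdist

def power_lognormal_mean_ratio(gm, gm0, cv, N, alpha=0.05):
    """Upper one-sided power for the geometric-mean test, computed on the log scale."""
    mu_star = log(gm)                    # mean of the log-transformed data
    sigma_star = sqrt(log(cv**2 + 1))    # sd of the log-transformed data
    delta = sqrt(N) * (mu_star - log(gm0)) / sigma_star
    return nct.sf(tdist.ppf(1 - alpha, N - 1), N - 1, delta)
```

When the geometric mean equals its null value, $\delta = 0$ and the power reduces to the significance level $\alpha$, as expected.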
The hypotheses for the equivalence test are

\[ H_0\colon \mu < \theta_L \quad\text{or}\quad \mu > \theta_U \]

\[ H_1\colon \theta_L \le \mu \le \theta_U \]

where $\theta_L$ and $\theta_U$ are the lower and upper equivalence bounds.
The analysis is the two one-sided tests (TOST) procedure of Schuirmann (1987). The test assumes normally distributed data and requires $N \ge 2$. Phillips (1990) derives an expression for the exact power assuming a two-sample balanced design; the results are easily adapted to a one-sample design:

\[ \text{power} = Q_{N-1}\!\left( -t_{1-\alpha}(N-1),\; \frac{\mu - \theta_U}{\sigma N^{-1/2}};\; 0,\; \frac{(N-1)^{1/2}\,(\theta_U - \theta_L)}{2\,\sigma N^{-1/2}\, t_{1-\alpha}(N-1)} \right) - Q_{N-1}\!\left( t_{1-\alpha}(N-1),\; \frac{\mu - \theta_L}{\sigma N^{-1/2}};\; 0,\; \frac{(N-1)^{1/2}\,(\theta_U - \theta_L)}{2\,\sigma N^{-1/2}\, t_{1-\alpha}(N-1)} \right) \]

where $Q_{\cdot}(\cdot,\cdot;\cdot,\cdot)$ is Owen’s Q function, defined in the section Common Notation.
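Owen’s Q function can be evaluated by direct quadrature over the chi density, which makes the TOST power formula computable. The sketch below assumes SciPy, and the function names are illustrative:

```python
from math import gamma, sqrt, pi
from scipy import integrate
from scipy.stats import norm, t as tdist

def owens_q(nu, t, delta, a, b):
    """Owen's Q: integral of Phi(t*x/sqrt(nu) - delta) against the chi(nu) density on [a, b]."""
    const = sqrt(2 * pi) / (gamma(nu / 2) * 2 ** ((nu - 2) / 2))
    integrand = lambda x: norm.cdf(t * x / sqrt(nu) - delta) * x ** (nu - 1) * norm.pdf(x)
    val, _ = integrate.quad(integrand, a, b)
    return const * val

def tost_power(mu, sigma, theta_L, theta_U, N, alpha=0.05):
    """Exact TOST power as the difference of two Owen's Q evaluations."""
    nu = N - 1
    tcrit = tdist.ppf(1 - alpha, nu)
    se = sigma / sqrt(N)
    b = sqrt(nu) * (theta_U - theta_L) / (2 * se * tcrit)
    return (owens_q(nu, -tcrit, (mu - theta_U) / se, 0, b)
            - owens_q(nu, tcrit, (mu - theta_L) / se, 0, b))
```

With $\delta = 0$ and an effectively infinite upper limit, `owens_q` reduces to the central t CDF, which provides a convenient correctness check.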
The lognormal case is handled by re-expressing the analysis equivalently as a normality-based test on the log-transformed data, by using properties of the lognormal distribution as discussed in Johnson, Kotz, and Balakrishnan (1994, Chapter 14). The approaches in the section Equivalence Test for Mean of Normal Data (TEST=EQUIV DIST=NORMAL) then apply.
In contrast to the additive equivalence test on normal data, the hypotheses with lognormal data are defined in terms of geometric means rather than arithmetic means. This is because the transformation of an arithmetic mean of lognormal data to the normal scale depends on the unknown coefficient of variation, resulting in an ill-defined hypothesis on the log-transformed data. Geometric means transform cleanly and are more natural for lognormal data.
The hypotheses for the equivalence test are

\[ H_0\colon \gamma \le \theta_L \quad\text{or}\quad \gamma \ge \theta_U \]

\[ H_1\colon \theta_L < \gamma < \theta_U \]

where $\gamma$ is the geometric mean and $\theta_L$ and $\theta_U$ are the lower and upper equivalence bounds.
The analysis is the two one-sided tests (TOST) procedure of Schuirmann (1987) on the log-transformed data. The test assumes lognormally distributed data and requires $N \ge 2$. Diletti, Hauschke, and Steinijans (1991) derive an expression for the exact power assuming a crossover design; the results are easily adapted to a one-sample design:

\[ \text{power} = Q_{N-1}\!\left( -t_{1-\alpha}(N-1),\; \frac{\mu^\star - \log\theta_U}{\sigma^\star N^{-1/2}};\; 0,\; \frac{(N-1)^{1/2}\,(\log\theta_U - \log\theta_L)}{2\,\sigma^\star N^{-1/2}\, t_{1-\alpha}(N-1)} \right) - Q_{N-1}\!\left( t_{1-\alpha}(N-1),\; \frac{\mu^\star - \log\theta_L}{\sigma^\star N^{-1/2}};\; 0,\; \frac{(N-1)^{1/2}\,(\log\theta_U - \log\theta_L)}{2\,\sigma^\star N^{-1/2}\, t_{1-\alpha}(N-1)} \right) \]

where

\[ \mu^\star = \log(\gamma) \]

$\sigma^\star$ is the standard deviation of the log-transformed data, and $Q_{\cdot}(\cdot,\cdot;\cdot,\cdot)$ is Owen’s Q function, defined in the section Common Notation.
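A Monte Carlo sketch of the log-scale reduction (NumPy and SciPy assumed; names illustrative): geometric-mean bounds map to additive bounds on the log scale, where the ordinary TOST applies:

```python
import numpy as np
from math import log, sqrt
from scipy.stats import t as tdist

def lognormal_tost_power_mc(gm, cv, theta_L, theta_U, N, alpha=0.05,
                            nsim=100_000, seed=1):
    """Monte Carlo power of TOST applied to the log-transformed lognormal data."""
    rng = np.random.default_rng(seed)
    mu_star = log(gm)                    # mean of the log-data
    sigma_star = sqrt(log(cv**2 + 1))    # sd of the log-data (lognormal property)
    tcrit = tdist.ppf(1 - alpha, N - 1)
    x = rng.normal(mu_star, sigma_star, size=(nsim, N))  # simulated log-scale samples
    xbar = x.mean(axis=1)
    se = x.std(axis=1, ddof=1) / sqrt(N)
    reject = (((xbar - log(theta_L)) / se >= tcrit)
              & ((xbar - log(theta_U)) / se <= -tcrit))
    return reject.mean()
```

Simulation here serves only as a cross-check; the exact power comes from the Owen’s Q expression above.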
This analysis of precision applies to the standard t-based confidence interval:

\[ \begin{cases} \left[\, \bar{x} - t_{1-\alpha/2}(N-1)\, s N^{-1/2},\;\; \bar{x} + t_{1-\alpha/2}(N-1)\, s N^{-1/2} \,\right], & \text{two-sided} \\ \left( -\infty,\;\; \bar{x} + t_{1-\alpha}(N-1)\, s N^{-1/2} \,\right], & \text{upper one-sided} \\ \left[\, \bar{x} - t_{1-\alpha}(N-1)\, s N^{-1/2},\;\; \infty \right), & \text{lower one-sided} \end{cases} \]

where $\bar{x}$ is the sample mean and $s$ is the sample standard deviation. The “half-width” is defined as the distance from the point estimate $\bar{x}$ to a finite endpoint,

\[ \text{half-width} = \begin{cases} t_{1-\alpha/2}(N-1)\, s\, N^{-1/2}, & \text{two-sided} \\ t_{1-\alpha}(N-1)\, s\, N^{-1/2}, & \text{one-sided} \end{cases} \]
A “valid” confidence interval captures the true mean. The exact probability of obtaining at most the target confidence interval half-width $h$, unconditional or conditional on validity, is given by Beal (1989):

\[ \Pr(\text{half-width} \le h) = \begin{cases} P\left(\chi^2(N-1) \le \dfrac{h^2\, N(N-1)}{\sigma^2\, t^2_{1-\alpha/2}(N-1)}\right), & \text{two-sided} \\[1ex] P\left(\chi^2(N-1) \le \dfrac{h^2\, N(N-1)}{\sigma^2\, t^2_{1-\alpha}(N-1)}\right), & \text{one-sided} \end{cases} \]

\[ \Pr(\text{half-width} \le h \mid \text{validity}) = \begin{cases} \dfrac{1}{1-\alpha}\, 2\left[ Q_{N-1}\left(z_{1-\alpha/2}, 0;\, 0, b_1\right) - Q_{N-1}\left(0, 0;\, 0, b_1\right) \right], & \text{two-sided} \\[1ex] \dfrac{1}{1-\alpha}\, Q_{N-1}\left(z_{1-\alpha}, 0;\, 0, b_2\right), & \text{one-sided} \end{cases} \]

where

\[ b_1 = \frac{h\, \left[N(N-1)\right]^{1/2}}{t_{1-\alpha/2}(N-1)\, \sigma} \]

\[ b_2 = \frac{h\, \left[N(N-1)\right]^{1/2}}{t_{1-\alpha}(N-1)\, \sigma} \]

and $Q_{\cdot}(\cdot,\cdot;\cdot,\cdot)$ is Owen’s Q function, defined in the section Common Notation.
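The unconditional probability reduces to a single chi-square CDF evaluation, since half-width $\le h$ is equivalent to a bound on $s^2$. A sketch, assuming SciPy (the function name is hypothetical):

```python
from scipy.stats import chi2, t as tdist

def prob_halfwidth_at_most(h, sigma, N, alpha=0.05, sides=2):
    """Beal's unconditional probability that the CI half-width is at most h.

    half-width <= h  <=>  s**2 <= h**2 * N / tcrit**2,
    and (N-1) * s**2 / sigma**2 has a chi-square(N-1) distribution.
    """
    q = 1 - alpha / 2 if sides == 2 else 1 - alpha
    tcrit = tdist.ppf(q, N - 1)
    return chi2.cdf(h**2 * N * (N - 1) / (sigma**2 * tcrit**2), N - 1)
```

The conditional probability additionally requires Owen’s Q function, as shown in the formula above.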
A “quality” confidence interval is both sufficiently narrow (half-width $\le h$) and valid:

\[ \Pr(\text{quality}) = \Pr(\text{half-width} \le h \text{ and validity}) = \Pr(\text{half-width} \le h \mid \text{validity})\,(1 - \alpha) \]