The SPP Procedure

EDF Goodness-of-Fit Tests

You use goodness-of-fit tests to examine the fit of a parametric distribution. In the SPP procedure, this task arises when you test your data for dependence on a covariate. You can examine the goodness of fit by using tests that are based on the empirical distribution function (EDF). These tests offer advantages over traditional chi-square goodness-of-fit tests, as discussed in D’Agostino and Stephens (1986). The empirical distribution function is defined for a set of n independent observations, $X_1,\ldots ,X_n$, that have a common distribution function $F(x)$, as follows. Denote the observations ordered from smallest to largest as $X_{(1)},\ldots ,X_{(n)}$. Then the empirical distribution function, $F_n(x)$, is

\[ F_n(x) = \left\{ \begin{array}{llr} 0, & x < X_{(1)} & \\[0.02in] \frac{i}{n}, & X_{(i)} \leq x < X_{(i+1)}, & i=1,\ldots ,n-1 \\[0.02in] 1, & X_{(n)} \leq x & \end{array} \right. \]

$F_n(x)$ is a step function that takes a step of height $\frac{1}{n}$ at each observation. This function estimates the distribution function $F(x)$. At any value $x$, $F_n(x)$ is the proportion of observations that are less than or equal to $x$, whereas $F(x)$ is the probability of an observation being less than or equal to $x$. EDF statistics measure the discrepancy between $F_n(x)$ and $F(x)$.
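To make the definition concrete, the following Python sketch (an illustration only, not part of PROC SPP; the function name `edf` is mine) evaluates $F_n(x)$ for a small sample:

```python
def edf(sample, x):
    """Empirical distribution function: the proportion of observations <= x."""
    return sum(1 for v in sample if v <= x) / len(sample)

obs = [3.1, 1.4, 2.2, 5.0]
# Below the smallest observation the EDF is 0; at or above the largest it is 1.
print(edf(obs, 1.0))  # 0.0
print(edf(obs, 2.2))  # 0.5  (two of the four observations are <= 2.2)
print(edf(obs, 6.0))  # 1.0
```

The step of height $\frac{1}{n}$ at each observation is visible in the middle call: moving $x$ past one more observation raises the value by $1/4$.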

The computational formulas for the EDF statistics make use of the probability integral transformation $Z=F(X)$. If $F(x)$ is the true distribution function of $X$, then the random variable $Z$ is uniformly distributed between 0 and 1. For example, assume that you believe $X \sim N(\mu ,\sigma ^2)$. In this case, the probability integral transform $Z=F(X)$ is obtained by applying the standard normal cumulative distribution function (CDF) to the standardized value $(X-\mu )~ /~ \sigma $. To test the fit of the sample EDF $F_n(x)$ to the hypothesized $F(x)$, you can equivalently test the fit of $F_n(z)$, the EDF of the transformed values, to the distribution function of $Z$. Because $Z \sim U(0,1)$, its CDF is simply $F(z)=z$ for $0 \leq z \leq 1$. Moreover, $F_n(x)=F_n(z)$ whenever $z=F(x)$. Consequently, the probability integral transform reduces the original fit task to the simpler comparison of $F_n(z)$ with the $U(0,1)$ CDF.
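The transform can be checked empirically. The sketch below (my own illustration, not SAS code; the simulation setup is an assumption) draws from $N(10, 2^2)$, applies the matching normal CDF, and verifies that the transformed values behave like $U(0,1)$, whose mean is $1/2$ and variance is $1/12$:

```python
import math
import random

def norm_cdf(x, mu=0.0, sigma=1.0):
    """CDF of N(mu, sigma^2), written via the error function."""
    return 0.5 * (1.0 + math.erf((x - mu) / (sigma * math.sqrt(2.0))))

random.seed(1)
sample = [random.gauss(10.0, 2.0) for _ in range(10000)]
z = [norm_cdf(x, mu=10.0, sigma=2.0) for x in sample]

# Under the true model, Z ~ U(0,1): sample mean near 1/2, variance near 1/12.
mean = sum(z) / len(z)
var = sum((v - mean) ** 2 for v in z) / len(z)
print(mean, var)
```

If the assumed distribution were wrong (say, the wrong $\mu$), the $Z$ values would visibly depart from uniformity, which is exactly what the EDF statistics quantify.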

There are two main classes of EDF statistics: the supremum class and the quadratic class. The supremum class is based on the largest vertical difference between $F(x)$ and $F_n(x)$. The quadratic class is based on the squared difference $(F_n(x)- F(x))^2$. Quadratic statistics have the following general form:

\[ Q = n \int _{-\infty }^{+\infty } (F_n(x)-F(x))^2 \psi (x) \, dF(x) \]

The function $\psi (x)$ weights the squared difference $(F_n(x)-F(x))^2$.
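For instance, with $\psi (x) \equiv 1$ the statistic $Q$ becomes the Cramér–von Mises statistic $W^2$, one of the two statistics that PROC SPP computes. The following Python sketch (my own numerical illustration, not SAS code) approximates the integral for a fully specified $U(0,1)$ model, where $dF(x)=dx$ on $[0,1]$:

```python
import bisect

def cvm_by_integration(sample, grid=100000):
    """Approximate Q = n * integral of (F_n(x) - F(x))^2 dF(x) with psi = 1,
    for the fully specified model F = U(0,1), so that dF(x) = dx on [0, 1]."""
    s = sorted(sample)
    n = len(s)
    total = 0.0
    for k in range(grid):
        x = (k + 0.5) / grid                   # midpoint rule on [0, 1]
        fn = bisect.bisect_right(s, x) / n     # F_n(x) for the sorted sample
        total += (fn - x) ** 2
    return n * total / grid

sample = [0.1, 0.4, 0.55, 0.8]
print(cvm_by_integration(sample))  # about 0.03333 for this sample
```

Direct numerical integration like this is only for intuition; in practice $W^2$ is computed from a closed-form expression in the ordered $Z_{(i)}$ values.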

As previously discussed, the SPP procedure considers the ordered observations $X_{(1)},\ldots ,X_{(n)}$ and computes the values $Z_{(i)}=F(X_{(i)})$ by applying the probability integral transform. PROC SPP examines the goodness of fit by computing the following two EDF statistics:

  • Kolmogorov-Smirnov two-sided D from the supremum class

  • Cramér-von Mises $W^2$ from the quadratic class

Of these two classes, quadratic EDF statistics are generally more powerful than supremum statistics. The details of the statistics that PROC SPP uses are discussed in the following subsection.
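In terms of the ordered transformed values $Z_{(i)}$, the standard computational formulas (D’Agostino and Stephens 1986) are $D = \max (D^+, D^-)$, where $D^+ = \max _ i \left( i/n - Z_{(i)} \right)$ and $D^- = \max _ i \left( Z_{(i)} - (i-1)/n \right)$, and $W^2 = \sum _{i=1}^{n} \left( Z_{(i)} - (2i-1)/(2n) \right)^2 + 1/(12n)$. The following Python sketch implements these standard formulas (an illustration only; whether PROC SPP's internals match it exactly is an assumption):

```python
import math

def norm_cdf(x, mu, sigma):
    """CDF of N(mu, sigma^2) via the error function."""
    return 0.5 * (1.0 + math.erf((x - mu) / (sigma * math.sqrt(2.0))))

def edf_statistics(sample, cdf):
    """Kolmogorov-Smirnov D and Cramer-von Mises W^2 for a fully
    specified distribution function `cdf` (standard formulas)."""
    z = sorted(cdf(x) for x in sample)       # Z_(i) = F(X_(i))
    n = len(z)
    d_plus = max((i + 1) / n - z[i] for i in range(n))
    d_minus = max(z[i] - i / n for i in range(n))
    d = max(d_plus, d_minus)
    w2 = sum((z[i] - (2 * i + 1) / (2 * n)) ** 2 for i in range(n)) + 1 / (12 * n)
    return d, w2

# Example: test a small sample against a hypothesized N(10, 1) model.
d, w2 = edf_statistics([9.2, 10.5, 8.7, 11.1, 10.0],
                       lambda x: norm_cdf(x, 10.0, 1.0))
print(d, w2)
```

Note that both statistics depend on the data only through the ordered $Z_{(i)}$, which is what makes the probability integral transform the natural first step.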

After the EDF test statistics are computed, the SPP procedure computes the associated significance values. In the PROC SPP analysis, the true distribution function, $F(x)$, is a completely specified distribution. For this scenario, PROC SPP applies slightly modified D and $W^2$ statistics, as described by D’Agostino and Stephens (1986).
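As an illustration of such modifications, the sketch below uses the constants tabulated by D’Agostino and Stephens (1986) for the completely specified case; it is an assumption on my part that these particular constants are the ones PROC SPP applies:

```python
import math

def modified_edf_statistics(d, w2, n):
    """Modified D and W^2 for a completely specified F, using the constants
    given by D'Agostino and Stephens (1986) for this case. Assumption: the
    exact modification that PROC SPP applies may differ from these."""
    d_mod = d * (math.sqrt(n) + 0.12 + 0.11 / math.sqrt(n))
    w2_mod = (w2 - 0.4 / n + 0.6 / n ** 2) * (1.0 + 1.0 / n)
    return d_mod, w2_mod

print(modified_edf_statistics(0.2, 0.0333, 4))
```

The modified values, rather than the raw $D$ and $W^2$, are then compared against the published critical points to obtain significance values.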