The LOGISTIC Procedure

Overdispersion

For a correctly specified model, the Pearson chi-square statistic and the deviance, divided by their degrees of freedom, should be approximately equal to one. When their values are much larger than one, the assumption of binomial variability might not be valid and the data are said to exhibit overdispersion. Underdispersion, which results in the ratios being less than one, occurs less often in practice.

When fitting a model, there are several problems that can cause the goodness-of-fit statistics to exceed their degrees of freedom. Among these are such problems as outliers in the data, using the wrong link function, omitting important terms from the model, and needing to transform some predictors. These problems should be eliminated before proceeding to use the following methods to correct for overdispersion.

Rescaling the Covariance Matrix

One way of correcting overdispersion is to multiply the covariance matrix by a dispersion parameter. This method assumes that the sample sizes in each subpopulation are approximately equal. You can supply the value of the dispersion parameter directly, or you can estimate the dispersion parameter based on either the Pearson chi-square statistic or the deviance for the fitted model.

The Pearson chi-square statistic $\chi _ P^2$ and the deviance $\chi _ D^2$ are given by

\begin{eqnarray*}  \chi _ P^2 & =&  \sum _{i=1}^ m \sum _{j=1}^{k+1} \frac{(r_{ij} - n_ i{\widehat{\pi }}_{ij})^2}{n_ i{\widehat{\pi }}_{ij}} \\ \chi _ D^2 & =&  2 \sum _{i=1}^ m \sum _{j=1}^{k+1} r_{ij} \log \left(\frac{r_{ij}}{n_ i{\widehat{\pi }}_{ij}}\right) \end{eqnarray*}

where m is the number of subpopulation profiles, $k+1$ is the number of response levels, $r_{ij}$ is the total weight (sum of the product of the frequencies and the weights) associated with jth level responses in the ith profile, $n_ i = \sum _{j=1}^{k+1}r_{ij}$, and ${\widehat{\pi }}_{ij}$ is the fitted probability for the jth level at the ith profile. Each of these chi-square statistics has $mk - p$ degrees of freedom, where p is the number of parameters estimated. The dispersion parameter is estimated by

\[  \widehat{\sigma ^2} = \left\{  \begin{array}{ll} \chi _ P^2/(mk-p) &  \mbox{ SCALE=PEARSON} \\ \chi _ D^2/(mk-p) &  \mbox{ SCALE=DEVIANCE} \\ (\mathit{constant})^2 &  \mbox{ SCALE=}\mathit{constant} \end{array} \right.  \]
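As a concrete sketch, the following Python fragment computes both goodness-of-fit statistics and the corresponding SCALE=PEARSON and SCALE=DEVIANCE estimates. The counts, fitted probabilities, and parameter count below are invented for illustration, not output from an actual fit:

```python
import math

# Hypothetical example: m = 3 subpopulation profiles, a binary response
# (k + 1 = 2 levels), and p = 1 fitted parameter. The counts r[i][j] and
# fitted probabilities pi_hat[i][j] are made up for illustration.
r = [[8, 12], [15, 5], [9, 11]]                      # r_ij
pi_hat = [[0.45, 0.55], [0.70, 0.30], [0.50, 0.50]]  # fitted probabilities
m, k, p = 3, 1, 1

chi2_p = 0.0   # Pearson chi-square
chi2_d = 0.0   # deviance
for i in range(m):
    n_i = sum(r[i])                                  # n_i = sum_j r_ij
    for j in range(k + 1):
        expected = n_i * pi_hat[i][j]
        chi2_p += (r[i][j] - expected) ** 2 / expected
        chi2_d += 2 * r[i][j] * math.log(r[i][j] / expected)

df = m * k - p                                       # mk - p degrees of freedom
sigma2_pearson = chi2_p / df                         # SCALE=PEARSON estimate
sigma2_deviance = chi2_d / df                        # SCALE=DEVIANCE estimate
```

Values of `sigma2_pearson` or `sigma2_deviance` well above one would suggest overdispersion for these data, once the model-specification problems listed earlier have been ruled out.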

In order for the Pearson statistic and the deviance to be distributed as chi-square, there must be sufficient replication within the subpopulations. When this is not true, the data are sparse, and the p-values for these statistics are not valid and should be ignored. Similarly, these statistics, divided by their degrees of freedom, cannot serve as indicators of overdispersion. A large difference between the Pearson statistic and the deviance provides some evidence that the data are too sparse to use either statistic.

You can use the AGGREGATE (or AGGREGATE=) option to define the subpopulation profiles. If you do not specify this option, each observation is regarded as coming from a separate subpopulation. For events/trials syntax, each observation represents n Bernoulli trials, where n is the value of the trials variable; for single-trial syntax, each observation represents a single trial. Without the AGGREGATE (or AGGREGATE=) option, the Pearson chi-square statistic and the deviance are calculated only for events/trials syntax.

Note that the parameter estimates are not changed by this method. However, their standard errors are adjusted for overdispersion, affecting their significance tests.

Williams’ Method

Suppose that the data consist of n binomial observations. For the ith observation, let $r_ i/n_ i$ be the observed proportion and let $\mb {x}_ i$ be the associated vector of explanatory variables. Suppose that the response probability for the ith observation is a random variable $P_ i$ with mean and variance

\[  E(P_ i) = \pi _ i \quad \mbox{and} \quad V(P_ i) = \phi \pi _ i (1-\pi _ i)  \]

where $\pi _ i$ is the probability of the event, and $\phi $ is a nonnegative but otherwise unknown scale parameter. Then the mean and variance of $r_ i$ are

\[  E(r_ i) = n_ i \pi _ i \quad \mbox{and} \quad V(r_ i) = n_ i \pi _ i (1-\pi _ i) [1 + (n_ i - 1) \phi ]  \]
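The factor $1 + (n_ i - 1)\phi $ shows how the binomial variance is inflated. A small numeric sketch, with hypothetical values $n_ i = 20$, $\pi _ i = 0.3$, and $\phi = 0.05$:

```python
# Hypothetical values: n_i = 20 trials, pi_i = 0.3, scale parameter phi = 0.05.
n_i, pi_i, phi = 20, 0.3, 0.05

mean_r = n_i * pi_i                           # E(r_i) = 6.0
var_binomial = n_i * pi_i * (1 - pi_i)        # 4.2 under pure binomial sampling
var_r = var_binomial * (1 + (n_i - 1) * phi)  # inflated by 1 + 19 * 0.05 = 1.95
```

Note that when $n_ i = 1$ the inflation factor is one, so overdispersion of this form cannot be detected from ungrouped binary data.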

Williams (1982) estimates the unknown parameter $\phi $ by equating the value of Pearson’s chi-square statistic for the full model to its approximate expected value. Suppose $w_ i^*$ is the weight associated with the ith observation. The Pearson chi-square statistic is given by

\[  \chi ^2 = \sum _{i=1}^ n \frac{w_ i^*(r_ i - n_ i {\widehat{\pi }}_ i)^2}{n_ i {\widehat{\pi }}_ i (1 - {\widehat{\pi }}_ i)}  \]

Let $g'(\cdot )$ be the first derivative of the link function $g(\cdot )$. The approximate expected value of $\chi ^2$ is

\[  E_{\chi ^2} = \sum _{i=1}^ n w_ i^* ( 1 - w_ i^* v_ i d_ i)[1 + \phi (n_ i - 1)]  \]

where $v_ i=n_ i/(\pi _ i(1-\pi _ i)[g'(\pi _ i)]^2)$ and $d_ i$ is the variance of the linear predictor ${\widehat{\alpha }_{i}} + \mb {x}_ i'{\widehat{\bbeta }}$. The scale parameter $\phi $ is estimated by the following iterative procedure.

At the start, let $w_ i^*=1$ and let $\pi _ i$ be approximated by $r_ i/n_ i$, $i=1,2,\ldots ,n$. If you apply these weights and approximated probabilities to $\chi ^2$ and $E_{\chi ^2}$ and then equate them, an initial estimate of $\phi $ is

\[  \hat{\phi }_0 = \frac{\chi ^2 - (n - p)}{\sum _ i (n_ i - 1)(1-v_ id_ i)}  \]

where p is the total number of parameters. The initial estimates of the weights become $\hat{w}^*_{i0} = [1 + (n_ i - 1)\hat{\phi }_0]^{-1}$. After a weighted fit of the model, the $\widehat{\alpha }_{i}$ and ${\widehat{\bbeta }}$ are recalculated, and so is $\chi ^2$. Then a revised estimate of $\phi $ is given by

\[  \hat{\phi }_1 = \frac{\chi ^2 - \sum _ i w_ i^*(1-w_ i^*v_ id_ i)}{\sum _ i w_ i^*(n_ i-1)(1-w_ i^*v_ id_ i)}  \]

The iterative procedure is repeated until $\chi ^2$ is very close to its degrees of freedom.
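The steps above can be sketched in Python for the special case of an intercept-only logit model, where the weighted ML fit has a closed form (the weighted event rate) and $d_ i$ is the reciprocal of the Fisher information for every observation. The events/trials data are hypothetical, chosen to exhibit overdispersion:

```python
# Hypothetical overdispersed events/trials data. An intercept-only logit
# model is assumed so that the weighted fit is simply the weighted event
# rate; for the logit link g'(pi) = 1/(pi(1-pi)), hence v_i = n_i pi(1-pi).
r = [3, 9, 2, 10, 1, 8]          # observed events r_i
n = [10, 12, 10, 12, 10, 12]     # trials n_i
N, p = len(r), 1                 # p = 1 parameter (the intercept)

w = [1.0] * N                    # start with w_i* = 1
phi = 0.0
for _ in range(50):
    # Weighted fit: fitted probability = weighted event rate.
    pi = sum(wi * ri for wi, ri in zip(w, r)) / sum(wi * ni for wi, ni in zip(w, n))
    info = sum(wi * ni * pi * (1 - pi) for wi, ni in zip(w, n))  # Fisher information
    # v_i d_i = n_i pi(1-pi) / info for every observation in this model.
    vd = [ni * pi * (1 - pi) / info for ni in n]
    chi2 = sum(wi * (ri - ni * pi) ** 2 / (ni * pi * (1 - pi))
               for wi, ri, ni in zip(w, r, n))
    # Revised estimate of phi from the weighted chi-square.
    num = chi2 - sum(wi * (1 - wi * vdi) for wi, vdi in zip(w, vd))
    den = sum(wi * (ni - 1) * (1 - wi * vdi) for wi, ni, vdi in zip(w, n, vd))
    phi = max(num / den, 0.0)
    w = [1.0 / (1.0 + (ni - 1) * phi) for ni in n]   # Williams weights

# At convergence the weighted Pearson chi-square sits at its degrees
# of freedom, N - p = 5 for these data.
pi = sum(wi * ri for wi, ri in zip(w, r)) / sum(wi * ni for wi, ni in zip(w, n))
chi2 = sum(wi * (ri - ni * pi) ** 2 / (ni * pi * (1 - pi))
           for wi, ri, ni in zip(w, r, n))
```

For a model with covariates, each pass would instead refit a weighted logistic regression and take $d_ i$ from the estimated covariance matrix of the parameter estimates, as the procedure does.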

Once $\phi $ has been estimated by $\hat{\phi }$ under the full model, weights of $(1 + (n_ i-1)\hat{\phi })^{-1}$ can be used to fit models that have fewer terms than the full model. See Example 58.10 for an illustration.

Note: If the WEIGHT statement is specified with the NORMALIZE option, then the initial $w_ i^*$ values are set to the normalized weights, and the weights resulting from Williams’ method will not add up to the actual sample size. However, the estimated covariance matrix of the parameter estimates remains invariant to the scale of the WEIGHT variable.