The LOGISTIC Procedure

Exact Conditional Logistic Regression

The theory of exact logistic regression, also known as exact conditional logistic regression, was originally laid out by Cox (1970), and the computational methods employed in PROC LOGISTIC are described in Hirji, Mehta, and Patel (1987); Hirji (1992); Mehta, Patel, and Senchaudhuri (1992). Other useful references for the derivations include Cox and Snell (1989); Agresti (1990); Mehta and Patel (1995).

Exact conditional inference is based on generating the conditional distribution for the sufficient statistics of the parameters of interest. This distribution is called the permutation or exact conditional distribution. Using the notation in the section Computational Details, follow Mehta and Patel (1995) and first note that the sufficient statistics $\bT = (T_1,\ldots ,T_ p)$ for the parameter vector of intercepts and slopes, $\bbeta $, are

\[ T_ j = \sum _{i=1}^ n y_ ix_{ij}, \quad j=1,\ldots ,p \]

Denote a vector of observable sufficient statistics as $\mb{t} = (t_1,\ldots ,t_ p)’$.

The probability density function (PDF) for $\bT $ can be created by summing over all binary sequences $\mb{y}$ that generate an observable ${\mb{t}}$ and letting $C({\mb{t}})= ||\{ \mb{y} : \mb{y}’\bX ={\mb{t}}’\} ||$ denote the number of sequences $\mb{y}$ that generate ${\mb{t}}$

\[ \Pr (\bT =\mb{t}) = \frac{C({\mb{t}})\exp ({\mb{t}}'\bbeta )}{\prod _{i=1}^ n[1+\exp (\mb{x}_ i'\bbeta )]} \]

In order to condition out the nuisance parameters, partition the parameter vector $\bbeta =(\bbeta _\mr {N}’,\bbeta _\mr {I}’)’$, where $\bbeta _\mr {N}$ is a $p_\mr {N}\times 1$ vector of the nuisance parameters, and $\bbeta _\mr {I}$ is the parameter vector for the remaining $p_\mr {I} = p - p_\mr {N}$ parameters of interest. Likewise, partition $\bX $ into $\bX _\mr {N}$ and $\bX _\mr {I}$, $\bT $ into $\bT _\mr {N}$ and $\bT _\mr {I}$, and $\mb{t}$ into $\mb{t}_\mr {N}$ and $\mb{t}_\mr {I}$. The nuisance parameters can be removed from the analysis by conditioning on their sufficient statistics to create the conditional likelihood of $\bT _\mr {I}$ given $\bT _\mr {N} = \mb{t}_\mr {N}$,

\begin{eqnarray*} & & \Pr (\bT _\mr {I}={\mb{t}}_\mr {I}|\bT _\mr {N}={\mb{t}}_\mr {N}) = \frac{\Pr (\bT ={\mb{t}})}{\Pr (\bT _\mr {N}={\mb{t}}_\mr {N})}\\ & & =f_{{\bbeta }_\mr {I}}(\mb{t}_\mr {I}|\mb{t}_\mr {N}) =\frac{C(\mb{t}_\mr {N},\mb{t}_\mr {I})\exp (\mb{t}_\mr {I}'\bbeta _\mr {I})}{\sum _{u}C(\mb{t}_\mr {N},\mb{u})\exp (\mb{u}'\bbeta _\mr {I})} \end{eqnarray*}

where $C(\mb{t}_\mr {N},\mb{u})$ is the number of vectors $\mb{y}$ such that $\mb{y}’\bX _\mr {N}=\mb{t}_\mr {N}$ and $\mb{y}’\bX _\mr {I}=\mb{u}$. Note that the nuisance parameters have factored out of this equation, and that $C(\mb{t}_\mr {N},\mb{t}_\mr {I})$ is a constant.

The goal of the exact conditional analysis is to determine how likely the observed response $\mb{y}_{0}$ is with respect to all $2^ n$ possible responses $\mb{y}= (y_1,\ldots ,y_ n)’$. One way to proceed is to generate every $\mb{y}$ vector for which $\mb{y}’\bX _\mr {N} = \mb{t}_\mr {N}$, and count the number of vectors $\mb{y}$ for which $\mb{y}’\bX _\mr {I}$ is equal to each unique $\mb{t}_\mr {I}$. Generating the conditional distribution from complete enumeration of the joint distribution is conceptually simple; however, this method becomes computationally infeasible very quickly. For example, if you had only 30 observations, you would have to scan through $2^{30}$ different $\mb{y}$ vectors.
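
To make the enumeration concrete, the following DATA step sketches complete enumeration for a hypothetical example with $n=5$ observations, an intercept treated as the single nuisance parameter, and one covariate of interest; the covariate values and the observed value of the nuisance sufficient statistic are made up for illustration. The step visits all $2^5$ response vectors, keeps those whose intercept statistic matches the observed value, and PROC FREQ then tabulates how often each value of the statistic of interest occurs, which gives the counts $C(\mb{t}_\mr {N},\mb{u})$ that define the conditional distribution.

data enumerate;
   /* hypothetical example: n=5, the intercept is the nuisance parameter */
   array x[5] _temporary_ (1 3 4 6 8);   /* made-up covariate values      */
   tN_obs = 2;                           /* made-up observed sum of y     */
   do k = 0 to 2**5 - 1;                 /* all 2**n response vectors     */
      tN = 0; tI = 0;
      do i = 1 to 5;
         y  = (band(k, 2**(i-1)) > 0);   /* bit i of k is observation i   */
         tN = tN + y;                    /* intercept statistic: sum of y */
         tI = tI + y*x[i];               /* statistic for the covariate   */
      end;
      if tN = tN_obs then output;        /* keep vectors that match t_N   */
   end;
   keep tI;
run;

proc freq data=enumerate;
   tables tI;                            /* tabulates the counts C(t_N,u) */
run;

Even this toy example examines 32 vectors, and each additional observation doubles the work, which is why the algorithms described next avoid full enumeration.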

Several algorithms are available in PROC LOGISTIC to generate the exact distribution. All of the algorithms are based on the following observation. Given any $\mb{y}=(y_1,\ldots ,y_{n})’$ and a design $\bX =(\mb{x}_1,\ldots ,\mb{x}_{n})’$, let $\mb{y}_{(i)}=(y_1,\ldots ,y_ i)’$ and $\bX _{(i)}=(\mb{x}_{1},\ldots ,\mb{x}_{i})’$ be the first i rows of each matrix. Write the sufficient statistic based on these i rows as $\mb{t}’_{(i)}=\mb{y}_{(i)}’\bX _{(i)}$. A recursion relation results: $\mb{t}_{(i+1)} = \mb{t}_{(i)} + y_{i+1}\mb{x}_{i+1}$.

The following methods are available:

  • The multivariate shift algorithm developed by Hirji, Mehta, and Patel (1987), which steps through the recursion relation by adding one observation at a time and building an intermediate distribution at each step. If it determines that $\mb{t}_{(i)}$ for the nuisance parameters could eventually equal $\mb{t}$, then $\mb{t}_{(i)}$ is added to the intermediate distribution.

  • An extension of the multivariate shift algorithm to generalized logit models by Hirji (1992). Because the generalized logit model fits a new set of parameters to each logit, the number of parameters in the model can easily get too large for this algorithm to handle. Note for these models that the hypothesis tests for each effect are computed across the logit functions, while individual parameters are estimated for each logit function.

  • A network algorithm described in Mehta, Patel, and Senchaudhuri (1992), which builds a network for each parameter that you are conditioning out in order to identify feasible $y_{i}$ for the $\mb{y}$ vector. These networks are combined and the set of feasible $y_{i}$ is further reduced, and then the multivariate shift algorithm uses this knowledge to build the exact distribution without adding as many intermediate $\mb{t}_{(i+1)}$ as the multivariate shift algorithm does.

  • A hybrid Monte Carlo and network algorithm described by Mehta, Patel, and Senchaudhuri (2000), which extends their 1992 algorithm by sampling from the combined network to build the exact distribution.

  • A Markov chain Monte Carlo algorithm described by Forster, McDonald, and Smith (2003), which generates the exact distribution by repeatedly perturbing the response vector to obtain a new response vector while maintaining the sufficient statistics for the nuisance parameters, and the resulting $\mb{t}$ are added to the exact distribution.

The bulk of the computation time and memory for these algorithms is consumed by the creation of the networks and the exact joint distribution. After the joint distribution for a set of effects is created, the computational effort required to produce hypothesis tests and parameter estimates for any subset of the effects is (relatively) trivial. See the section Computational Resources for Exact Logistic Regression for more computational notes about exact analyses.
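
As an illustration (the data set and variable names are hypothetical), the following statements request exact individual and joint tests for two effects and save the exact joint distribution; in this sketch the algorithm is selected with the METHOD= suboption of the EXACTOPTIONS option in the PROC LOGISTIC statement (METHOD=NETWORKMC and METHOD=MCMC, for example, are mentioned later in this section).

proc logistic data=mydata exactonly
              exactoptions(method=networkmc seed=271828);
   class trt / param=ref;
   model y(event='1') = trt x1;
   exact 'Joint' trt x1 / joint outdist=dist;
run;

As the preceding paragraph notes, once the joint distribution for a set of effects is available, tests and estimates for subsets of those effects are comparatively inexpensive.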

Note: An alternative to using these exact conditional methods is to perform Firth’s bias-reducing penalized likelihood method (see the FIRTH option in the MODEL statement); this method has the advantage of being much faster and less memory intensive than exact algorithms, but it might not converge to a solution.
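
A minimal sketch of this alternative, with a hypothetical data set and variable names:

proc logistic data=mydata;
   model y(event='1') = trt x1 / firth;
run;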

Hypothesis Tests

Consider testing the null hypothesis $H_0\colon \bbeta _\mr {I}=\bm {0}$ against the alternative $H_ A\colon \bbeta _\mr {I}\neq \bm {0}$, conditional on $\bT _\mr {N}=\mb{t}_\mr {N}$. Under the null hypothesis, the test statistic for the exact probability test is just $f_{\bbeta _\mr {I}=\bm {0}}(\mb{t}_\mr {I}|\mb{t}_\mr {N})$, while the corresponding p-value is the probability of observing a statistic that is no more likely than (at least as extreme as) the observed one,

\[ p(\mb{t}_\mr {I}|\mb{t}_\mr {N}) = \sum _{u\in \Omega _ p} f_{0}(\mb{u}|\mb{t}_\mr {N}) \]

where $\Omega _ p= \{ \mb{u}\colon $ there exist $\mb{y}$ with $\mb{y}’\bX _\mr {I} = \mb{u}$, $\mb{y}’\bX _\mr {N}=\mb{t}_\mr {N}$, and $f_{0}(\mb{u}|\mb{t}_\mr {N}) \le f_{0}(\mb{t}_\mr {I}|\mb{t}_\mr {N})\} $.
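
For example, suppose the null conditional distribution places probabilities 0.1, 0.2, 0.3, and 0.4 on the four attainable values of $\mb{t}_\mr {I}$ (purely illustrative numbers), and the observed value is the one with probability 0.2. Only values whose null probability is at most 0.2 contribute to the sum, so

\[ p(\mb{t}_\mr {I}|\mb{t}_\mr {N}) = 0.1 + 0.2 = 0.3 \]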

The exact probability test is not necessarily a sum of tail areas and can be inflated if the distribution is skewed. The more robust exact conditional scores test is a sum of tail areas and is usually preferred to the exact probability test. To compute the exact conditional scores test, the conditional mean $\bmu _\mr {I}$ and variance matrix $\bSigma _\mr {I}$ of the $\bT _\mr {I}$ (conditional on $\bT _\mr {N}=\mb{t}_\mr {N}$) are calculated, and the score statistic for the observed value,

\[ s = (\mb{t}_\mr {I}-\bmu _\mr {I})'\bSigma _\mr {I}^{-1}(\mb{t}_\mr {I}-\bmu _\mr {I}) \]

is compared to the score for each member of the distribution

\[ S(\bT _\mr {I}) = (\bT _\mr {I}-\bmu _\mr {I})'\bSigma _\mr {I}^{-1}(\bT _\mr {I}-\bmu _\mr {I}) \]

The resulting p-value is

\[ p(\mb{t}_\mr {I}|\mb{t}_\mr {N}) = \Pr (S\ge s) = \sum _{u\in \Omega _ s} f_{0}(\mb{u}|\mb{t}_\mr {N}) \]

where $\Omega _ s= \{ \mb{u}\colon $ there exist $\mb{y}$ with $\mb{y}’\bX _\mr {I} = \mb{u}$, $\mb{y}’\bX _\mr {N}=\mb{t}_\mr {N}$, and $S(\mb{u}) \ge s\} $.
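
Continuing the illustrative distribution above, suppose the four attainable values of a scalar statistic of interest are $u=0,1,2,3$ with null probabilities 0.1, 0.2, 0.3, and 0.4, and the observed value is $u=1$. Then $\bmu _\mr {I}=2$ and $\bSigma _\mr {I}=1$, the members of the distribution have scores $S(u)=4,1,0,1$, and the observed score is $s=1$, so

\[ p(\mb{t}_\mr {I}|\mb{t}_\mr {N}) = \Pr (S\ge 1) = 0.1 + 0.2 + 0.4 = 0.7 \]

which differs from the exact probability p-value of 0.3 computed for the same distribution.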

The mid-p statistic, defined as

\[ p(\mb{t}_\mr {I}|\mb{t}_\mr {N})- \frac{1}{2}f_{0}(\mb{t}_\mr {I}|\mb{t}_\mr {N}) \]

was proposed by Lancaster (1961) to compensate for the discreteness of a distribution. See Agresti (1992) for more information. However, to allow for more flexibility in handling ties, you can write the mid-p statistic as (based on a suggestion by LaMotte (2002) and generalizing Vollset, Hirji, and Afifi (1991))

\[ \sum _{u\in \Omega _<} f_{0}(\mb{u}|\mb{t}_\mr {N}) + \delta _1 f_{0}(\mb{t}_\mr {I}|\mb{t}_\mr {N}) + \delta _2 \sum _{u\in \Omega _=} f_{0}(\mb{u}|\mb{t}_\mr {N}) \]

where, for $i\in \{ p, s\} $, $\Omega _<$ is $\Omega _ i$ using strict inequalities, and $\Omega _=$ is $\Omega _ i$ using equalities with the added restriction that $\mb{u} \ne \mb{t}_\mr {I}$. Letting $(\delta _1,\delta _2)=(0.5,1.0)$ yields Lancaster’s mid-p.
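
In the illustrative distribution above, the exact probability p-value is 0.3 and the observed value has null probability 0.2, so Lancaster’s mid-p is $0.3-\frac{1}{2}(0.2)=0.2$. The generalized form gives the same value: $\Omega _<$ contributes 0.1, $\delta _1 f_{0}(\mb{t}_\mr {I}|\mb{t}_\mr {N})=0.5(0.2)=0.1$, and $\Omega _=$ is empty because no other value is tied with the observed probability.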

Caution: When the exact distribution has ties and you specify METHOD=NETWORKMC or METHOD=MCMC, the sampling algorithm estimates $p(\mb{t}_\mr {I}|\mb{t}_\mr {N})$ with error, and hence it cannot determine precisely which values contribute to the reported p-values. For example, if the exact distribution has densities $\{ 0.2, 0.2, 0.2, 0.4\} $ and if the observed statistic has probability 0.2, then the exact probability p-value is exactly 0.6. Under Monte Carlo sampling, if the densities after N samples are $\{ 0.18, 0.21, 0.23, 0.38\} $ and the observed probability is 0.21, then the resulting p-value is 0.39. Therefore, the exact probability test p-value for this example fluctuates among approximately 0.2, 0.4, and 0.6, and the reported p-values are actually lower bounds for the true p-values. If you need more precise values, you can specify the OUTDIST= option, determine appropriate cutoff values for the observed probability and score, and then construct p-value estimates from the OUTDIST= data set and display them in the SAS log by using the following statements:

data _null_;
   set outdist end=end;
   retain pvalueProb 0 pvalueScore 0;
   /* ProbCutOff and ScoreCutOff are placeholders: replace them with the */
   /* cutoff values that you determine for the observed probability and  */
   /* score from the OUTDIST= data set                                   */
   if prob < ProbCutOff then pvalueProb+prob;
   if score > ScoreCutOff then pvalueScore+prob;
   if end then put pvalueProb= pvalueScore=;
run;

Because the METHOD=MCMC samples are correlated, the covariance that is computed for the exact conditional scores test is biased. Specifying the NTHIN= option might reduce this bias.
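
For example (the data set, variable names, and option values are illustrative), the following statements select the Markov chain Monte Carlo algorithm and thin the chain; in this sketch the NTHIN= option is assumed to be specified alongside METHOD=MCMC in the EXACTOPTIONS option:

proc logistic data=mydata exactonly
              exactoptions(method=mcmc nthin=5 seed=271828);
   model y(event='1') = trt x1;
   exact trt / outdist=dist;
run;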

Inference for a Single Parameter

Exact parameter estimates are derived for a single parameter $\beta _ i$ by regarding all the other parameters $\bbeta _\mr {N} = (\beta _1,\ldots ,\beta _{i-1},\beta _{i+1},\ldots ,\beta _{p_\mr {N}+p_\mr {I}})’$ as nuisance parameters. The appropriate sufficient statistics are $\bT _\mr {I}=T_ i$ and $\bT _\mr {N}=(T_1,\ldots ,T_{i-1},T_{i+1},\ldots ,T_{p_\mr {N}+p_\mr {I}})’$, with their observed values denoted by the lowercase t. Hence, the conditional PDF used to create the parameter estimate for $\beta _ i$ is

\[ f_{\beta _ i}(t_ i|\mb{t}_\mr {N}) = \frac{C(\mb{t}_\mr {N},t_ i)\exp (t_ i\beta _ i)}{\sum _{u\in \Omega }C(\mb{t}_\mr {N},u)\exp (u\beta _ i)} \]

for $\Omega = \{ u\colon \mbox{ there exist }\mb{y} \mbox{ with } T_ i=u \mbox{ and }\bT _\mr {N}=\mb{t}_\mr {N}\} $.

The maximum exact conditional likelihood estimate is the quantity $\widehat\beta _ i$, which maximizes the conditional PDF. A Newton-Raphson algorithm is used to perform this search. However, if the observed $t_ i$ attains either its maximum or minimum value in the exact distribution (that is, either $t_ i=\min \{ u:u\in \Omega \} $ or $t_ i=\max \{ u:u\in \Omega \} $), then the conditional PDF is monotone in $\beta _ i$ and cannot be maximized. In this case, a median unbiased estimate (Hirji, Tsiatis, and Mehta 1989) $\widehat\beta _ i$ is produced that satisfies $f_{\widehat\beta _ i}(t_ i|\mb{t}_\mr {N})=0.5$, and a Newton-Raphson algorithm is used to perform the search.
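
The following DATA step sketches this Newton-Raphson search for a single parameter, using an exact conditional distribution that has been saved with the OUTDIST= option. It is an illustration only: the data set name dist, the sufficient-statistic variable x1, the observed value t_obs, and the maximum of 500 distinct values are assumptions made for the example; the Prob variable, which is proportional to $C(\mb{t}_\mr {N},u)$, supplies the weights.

data _null_;
   /* load the exact distribution for one parameter into temporary arrays */
   array uu[500] _temporary_;            /* sufficient statistic values u     */
   array cc[500] _temporary_;            /* weights proportional to C(t_N,u)  */
   do i = 1 to nobs;
      set dist nobs=nobs point=i;        /* dist and x1 are assumed names     */
      uu[i] = x1;
      cc[i] = prob;
   end;
   t_obs = 4;                            /* observed value of T_i (made up)   */
   beta  = 0;
   do iter = 1 to 25;                    /* Newton-Raphson iterations         */
      s0 = 0; s1 = 0; s2 = 0;
      do i = 1 to nobs;
         w  = cc[i]*exp(uu[i]*beta);
         s0 = s0 + w;
         s1 = s1 + uu[i]*w;
         s2 = s2 + uu[i]**2*w;
      end;
      mu   = s1/s0;                      /* conditional mean of T_i           */
      var  = s2/s0 - mu**2;              /* conditional variance of T_i       */
      beta = beta + (t_obs - mu)/var;    /* score over information update     */
   end;
   stderr = sqrt(1/var);                 /* from the observed information     */
   put beta= stderr=;
   stop;
run;

If t_obs equals the smallest or largest value in the distribution, the update diverges, which corresponds to the median unbiased case described above.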

The standard error of the exact conditional likelihood estimate is the square root of the negative inverse of the second derivative of the exact conditional log likelihood, evaluated at $\widehat\beta _ i$ (Agresti 2002).

Likelihood ratio tests based on the conditional PDF are used to test the null $H_0\colon \beta _ i = 0$ against the alternative $H_ A\colon \beta _ i > 0$. The critical region for this UMP test consists of the upper tail of values for $T_ i$ in the exact distribution. Thus, the p-value $p_+(t_ i;0)$ for a one-tailed test is

\[ p_+(t_ i;0) = \sum _{u\ge t_ i}f_{0}(u|\mb{t}_\mr {N}) \]

Similarly, the p-value $p_-(t_ i;0)$ for the one-tailed test of $H_0$ against $H_ A\colon \beta _ i < 0$ is

\[ p_-(t_ i;0) = \sum _{u\le t_ i}f_{0}(u|\mb{t}_\mr {N}) \]

The p-value $p(t_ i;0)$ for a two-tailed test of $H_0$ against $H_ A\colon \beta _ i \ne 0$ is

\[ p(t_ i;0) = 2\min [p_-(t_ i;0),p_+(t_ i;0)] \]

The p-value that is reported for a single parameter in the "Exact Parameter Estimates" table is for the two-tailed test.
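
For the illustrative single-parameter distribution used earlier ($u=0,1,2,3$ with null probabilities 0.1, 0.2, 0.3, 0.4 and observed value $u=1$), $p_+(1;0)=0.2+0.3+0.4=0.9$, $p_-(1;0)=0.1+0.2=0.3$, and the two-tailed p-value is $2(0.3)=0.6$.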

An upper $100(1-2\epsilon )$% exact confidence limit for $\widehat\beta _ i$ corresponding to the observed $t_ i$ is the solution $\beta _ U(t_ i)$ of $\epsilon = p_-(t_ i,\beta _ U(t_ i))$, while the lower exact confidence limit is the solution $\beta _ L(t_ i)$ of $\epsilon = p_+(t_ i,\beta _ L(t_ i))$. Again, a Newton-Raphson procedure is used to search for the solutions. Note that one of the confidence limits for a median unbiased estimate is set to infinity, but the other is still computed at $\epsilon $. This results in the display of a one-sided $100(1-\epsilon )$% confidence interval; if you want the $2\epsilon $ limit instead, you can specify the ONESIDED option.

Specifying the ONESIDED option displays only one p-value and one confidence interval, because small values of $p_+(t_ i;0)$ and $p_-(t_ i;0)$ support different alternative hypotheses and only one of these p-values can be less than 0.50.

The mid-p confidence limits are the solutions to $\min \{ p_-(t_ i,\beta (t_ i)),p_+(t_ i,\beta (t_ i))\}  - (1-\delta _1)f_{\beta (t_ i)}(t_ i|\mb{t}_\mr {N}) = \epsilon $ for $\epsilon = \alpha /2, 1-\alpha /2$ (Vollset, Hirji, and Afifi 1991). $\delta _1=1$ produces the usual exact (or max-p) confidence interval, $\delta _1=0.5$ yields the mid-p interval, and $\delta _1=0$ gives the min-p interval. The mean of the endpoints of the max-p and min-p intervals provides the mean-p interval as defined by Hirji, Mehta, and Patel (1988).

Estimates and confidence intervals for the odds ratios are produced by exponentiating the estimates and interval endpoints for the parameters.
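
For example (the data set and variable names are hypothetical), the following statements request exact parameter and odds-ratio estimates along with their confidence limits; adding the ONESIDED option to the EXACT statement displays the one-sided p-value and confidence interval described above.

proc logistic data=mydata exactonly;
   model y(event='1') = trt x1;
   exact trt / estimate=both;
run;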