The LOGISTIC Procedure

Receiver Operating Characteristic Curves

ROC curves are used to evaluate and compare the performance of diagnostic tests; they can also be used to evaluate model fit. An ROC curve is a plot of the proportion of true positives (events predicted to be events) against the proportion of false positives (nonevents predicted to be events).

In a sample of n individuals, suppose $n_1$ individuals are observed to have a certain condition or event. Let this group be denoted by ${\mc{C}}_1$, and let the group of the remaining $n_2=n-n_1$ individuals who do not have the condition be denoted by ${\mc{C}}_2$. Risk factors are identified for the sample, and a logistic regression model is fitted to the data. For the jth individual, an estimated probability ${\widehat{\pi }}_ j$ of the event of interest is calculated. Note that the ${\widehat{\pi }}_ j$ are computed as shown in the section Linear Predictor, Predicted Probability, and Confidence Limits and are not the cross-validated estimates discussed in the section Classification Table.

Suppose the n individuals undergo a test for predicting the event and the test is based on the estimated probability of the event. Higher values of this estimated probability are assumed to be associated with the event. A receiver operating characteristic (ROC) curve can be constructed by varying the cutpoint that determines which estimated event probabilities are considered to predict the event. For each cutpoint z, the following measures can be output to a data set by specifying the OUTROC= option in the MODEL statement or the OUTROC= option in the SCORE statement:

\begin{eqnarray*}
\_ \textrm{POS}\_ (z) &  = &  \sum _{i \in {\mc{C}}_1} I({\widehat{\pi }}_ i \geq z) \\
\_ \textrm{NEG}\_ (z) &  = &  \sum _{i \in {\mc{C}}_2} I({\widehat{\pi }}_ i < z) \\
\_ \textrm{FALPOS}\_ (z) &  = &  \sum _{i \in {\mc{C}}_2} I({\widehat{\pi }}_ i \geq z) \\
\_ \textrm{FALNEG}\_ (z) &  = &  \sum _{i \in {\mc{C}}_1} I({\widehat{\pi }}_ i < z) \\
\_ \textrm{SENSIT}\_ (z) & =&  \frac{\_ \textrm{POS}\_ (z)}{n_1} \\
\_ \textrm{1MSPEC}\_ (z) & =&  \frac{\_ \textrm{FALPOS}\_ (z)}{n_2}
\end{eqnarray*}

where $ I(\cdot )$ is the indicator function.

Note that _POS_(z) is the number of correctly predicted event responses, _NEG_(z) is the number of correctly predicted nonevent responses, _FALPOS_(z) is the number of falsely predicted event responses, _FALNEG_(z) is the number of falsely predicted nonevent responses, _SENSIT_(z) is the sensitivity of the test, and _1MSPEC_(z) is one minus the specificity of the test.
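These counts can be sketched in a few lines of Python. This is an illustrative re-implementation, not PROC LOGISTIC's internals; the lists `c1` and `c2` hold hypothetical estimated probabilities for the event group ${\mc{C}}_1$ and the nonevent group ${\mc{C}}_2$.

```python
# Illustrative sketch of the OUTROC= measures at one cutpoint z
# (not PROC LOGISTIC's implementation; data are hypothetical).

def outroc_measures(c1, c2, z):
    """Return the six OUTROC= measures for cutpoint z."""
    pos = sum(1 for p in c1 if p >= z)      # _POS_: correctly predicted events
    neg = sum(1 for p in c2 if p < z)       # _NEG_: correctly predicted nonevents
    falpos = sum(1 for p in c2 if p >= z)   # _FALPOS_: nonevents predicted as events
    falneg = sum(1 for p in c1 if p < z)    # _FALNEG_: events predicted as nonevents
    return {
        "_POS_": pos, "_NEG_": neg,
        "_FALPOS_": falpos, "_FALNEG_": falneg,
        "_SENSIT_": pos / len(c1),          # sensitivity = _POS_(z) / n1
        "_1MSPEC_": falpos / len(c2),       # 1 - specificity = _FALPOS_(z) / n2
    }
```

For example, with `c1 = [0.9, 0.7, 0.4]`, `c2 = [0.6, 0.3, 0.2, 0.1]`, and `z = 0.5`, the sketch gives _SENSIT_ = 2/3 and _1MSPEC_ = 1/4.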

The ROC curve is a plot of sensitivity (_SENSIT_) against 1–specificity (_1MSPEC_). The plot can be produced by using the PLOTS option or by using the GPLOT or SGPLOT procedure with the OUTROC= data set. See Example 60.7 for an illustration. The area under the ROC curve, as determined by the trapezoidal rule, is estimated by the concordance index, c, in the "Association of Predicted Probabilities and Observed Responses" table.

Comparing ROC Curves

ROC curves can be created from each model fit in a selection routine, from the specified model in the MODEL statement, from specified models in ROC statements, or from input variables that act as ${\widehat{\pi }}$ in the preceding discussion. Association statistics are computed for these models, and the models are compared when the ROCCONTRAST statement is specified. The ROC comparisons are performed by using a contrast matrix to take differences of the areas under the empirical ROC curves (DeLong, DeLong, and Clarke-Pearson, 1988). For example, if you have three curves and the second curve is the reference, the contrast used for the overall test is

\[  {\bL _1} = \left(\begin{array}{r} \bm {l}_1'\\ \bm {l}_2'\end{array}\right) = \left[\begin{array}{rrr} 1 &  -1 &  0\\ 0 &  -1 &  1\\ \end{array}\right]  \]

and you can optionally estimate and test each row of this contrast, in order to test the difference between the reference curve and each of the other curves. If you do not want to use a reference curve, the global test optionally uses the following contrast:

\[  {\bL _2} = \left(\begin{array}{r} \bm {l}_1'\\ \bm {l}_2'\end{array}\right) = \left[\begin{array}{rrr} 1 &  -1 &  0\\ 0 &  1 &  -1\\ \end{array}\right]  \]

You can also specify your own contrast matrix. Instead of estimating the rows of these contrasts, you can request that the difference between every pair of ROC curves be estimated and tested. Demler, Pencina, and D’Agostino (2012) caution that testing the difference in the AUC between two nested models is not a valid approach if the added predictor is not significantly associated with the response; in any case, if you use this approach, you are more likely to fail to reject the null.
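As a small illustration of how these contrasts act on the estimated areas, the following Python sketch applies ${\bL _1}$ (curve 2 as reference) and ${\bL _2}$ (adjacent pairwise differences) to a vector of three AUC estimates. The numbers are made up for the example.

```python
# Hypothetical AUC estimates for three ROC curves (illustration only).
c = [0.75, 0.70, 0.80]

# L1: curve 2 is the reference, so the rows estimate c1 - c2 and c3 - c2.
L1 = [[1, -1, 0],
      [0, -1, 1]]

# L2: adjacent pairwise differences, c1 - c2 and c2 - c3.
L2 = [[1, -1, 0],
      [0, 1, -1]]

def apply_contrast(L, c):
    """Return L @ c, the vector of contrast-row estimates."""
    return [sum(l * cj for l, cj in zip(row, c)) for row in L]
```

Here `apply_contrast(L1, c)` returns the two reference differences (0.05 and 0.10), while `apply_contrast(L2, c)` returns the adjacent differences (0.05 and -0.10).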

By default for the reference contrast, the specified or selected model is used as the reference unless the NOFIT option is specified in the MODEL statement, in which case the first ROC model is the reference.

In order to label the contrasts, a name is attached to every model. The name for the specified or selected model is the MODEL statement label, or "Model" if the MODEL label is not present. The ROC statement models are named with their labels, or as "ROCi" for the ith ROC statement if a label is not specified. The contrast ${\bL _1}$ is labeled as "Reference = ModelName", where ModelName is the reference model name, while ${\bL _2}$ is labeled "Adjacent Pairwise Differences". The estimated rows of the contrast matrix are labeled "ModelName1 – ModelName2". In particular, for the rows of ${\bL _1}$, ModelName2 is the reference model name. If you specify your own contrast matrix, then the contrast is labeled "Specified" and the ith contrast row estimates are labeled "Rowi".

If ODS Graphics is enabled, then all ROC curves are displayed individually and are also overlaid in a final display. If a selection method is specified, then the curves produced in each step of the model selection process are overlaid onto a single plot and are labeled "Stepi", and the selected model is displayed on a separate plot and on a plot with curves from specified ROC statements. See Example 60.8 for an example.

ROC Computations

The trapezoidal area under an empirical ROC curve is equal to the Mann-Whitney two-sample rank measure of association statistic (a generalized U-statistic) applied to two samples, $\{ X_ i\} , i=1,\ldots ,n_1$, in ${\mc{C}}_1$ and $\{ Y_ i\} , i=1,\ldots ,n_2$, in ${\mc{C}}_2$. PROC LOGISTIC uses the predicted probabilities in place of $\bX $ and $\bY $; however, in general any criterion could be used. Denote the frequency of observation i in ${\mc{C}}_ k$ as $f_{ki}$, and denote the total frequency in ${\mc{C}}_ k$ as $F_ k$. The WEIGHTED option replaces $f_{ki}$ with $f_{ki}w_{ki}$, where $w_{ki}$ is the weight of observation i in group ${\mc{C}}_ k$. The trapezoidal area under the curve is computed as

\begin{eqnarray*}
\hat{c} & =&  \frac{1}{F_1F_2}\sum _{i=1}^{n_1}\sum _{j=1}^{n_2}\psi (X_ i,Y_ j)f_{1i}f_{2j} \\
\psi (X,Y) & =&  \left\{  \begin{array}{ll} 1 &  Y < X \\ \frac{1}{2} &  Y=X \\ 0 &  Y > X \end{array} \right.
\end{eqnarray*}

so that $E(\hat{c}) = \Pr (Y<X) + \frac{1}{2}\Pr (Y=X)$. Note that the concordance index, c, in the "Association of Predicted Probabilities and Observed Responses" table does not use weights unless both the WEIGHTED and BINWIDTH=0 options are specified. Also, in this table, c is computed by creating 500 bins and binning the $X_ i$ and $Y_ j$; this results in more ties than the preceding method (unless the BINWIDTH=0 or ROCEPS=0 option is specified), so c is not necessarily equal to $E(\hat{c})$.
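The equality between the Mann-Whitney statistic and the trapezoidal area can be checked numerically. The following Python sketch uses hypothetical data with unit frequencies and no weights (so $f_{ki}=1$ and $F_ k=n_ k$) and computes $\hat{c}$ both ways.

```python
# Numerical check (hypothetical data, unit frequencies, no weights) that
# the Mann-Whitney statistic equals the trapezoidal area under the
# empirical ROC curve. x holds criterion values for C1, y for C2.

def mann_whitney_auc(x, y):
    """c-hat = (1/(n1*n2)) * sum_i sum_j psi(X_i, Y_j)."""
    def psi(xi, yj):
        return 1.0 if yj < xi else (0.5 if yj == xi else 0.0)
    return sum(psi(xi, yj) for xi in x for yj in y) / (len(x) * len(y))

def trapezoidal_auc(x, y):
    """Trapezoidal area under the ROC curve over all distinct cutpoints."""
    points = [(0.0, 0.0)]
    for z in sorted(set(x) | set(y), reverse=True):
        sens = sum(1 for xi in x if xi >= z) / len(x)   # _SENSIT_(z)
        fpr = sum(1 for yj in y if yj >= z) / len(y)    # _1MSPEC_(z)
        points.append((fpr, sens))
    points.append((1.0, 1.0))
    return sum((f2 - f1) * (s1 + s2) / 2
               for (f1, s1), (f2, s2) in zip(points, points[1:]))
```

With `x = [0.9, 0.7, 0.7, 0.4]` and `y = [0.7, 0.3, 0.2]` (one cross-group tie at 0.7), both functions return 5/6.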

To compare K empirical ROC curves, first compute the trapezoidal areas. Asymptotic normality of the estimated area follows from U-statistic theory, and a covariance matrix $\bS $ can be computed; for more information, see DeLong, DeLong, and Clarke-Pearson (1988). A Wald confidence interval for the rth area, $1\le r\le K$, can be constructed as

\[  \hat{c}_ r \pm z_{1-\frac{\alpha }{2}}s_{r,r}^{1/2}  \]

where $s_{r,r}$ is the rth diagonal element of $\bS $.
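A minimal Python sketch of this interval, assuming the estimated area and its variance $s_{r,r}$ (so the standard error is its square root) are already available:

```python
# Wald confidence interval for a single ROC area (sketch; inputs are
# assumed given). c_r is the estimated area, s_rr its variance, i.e.
# the rth diagonal of S.
from statistics import NormalDist

def auc_wald_ci(c_r, s_rr, alpha=0.05):
    z = NormalDist().inv_cdf(1 - alpha / 2)   # z_{1 - alpha/2}
    half_width = z * s_rr ** 0.5              # z * standard error
    return c_r - half_width, c_r + half_width
```

For example, with `c_r = 0.80` and `s_rr = 0.0004` (standard error 0.02), the 95% interval is roughly (0.761, 0.839).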

For a contrast of ROC curve areas, $\bL \mb{c}$, the statistic

\[  (\hat{\mb{c}}-\mb{c})'\bL '\left[ \bL \bS \bL ' \right]^{-1}\bL (\hat{\mb{c}}-\mb{c})  \]

has a chi-square distribution with df=rank($\bL \bS \bL '$). For a row of the contrast, $\bm {l}'\mb{c}$,

\[  \frac{\bm {l}'\hat{\mb{c}}-\bm {l}'\mb{c}}{\left[\bm {l}'\bS \bm {l}\right]^{1/2}}  \]

has a standard normal distribution. The corresponding confidence interval is

\[  \bm {l}'\hat{\mb{c}} \pm z_{1-\frac{\alpha }{2}}{\left[\bm {l}'\bS \bm {l}\right]^{1/2}}  \]
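Putting the pieces together, the following pure-Python sketch evaluates the chi-square statistic for a two-row contrast and the z statistic for a single row. The contrast, covariance matrix, and area estimates are hypothetical, and the $2\times 2$ matrix $\bL \bS \bL '$ is inverted directly.

```python
# Sketch of the ROC contrast test (hypothetical inputs, not PROC LOGISTIC
# internals). d plays the role of c-hat - c; under the null hypothesis
# L c = 0, L d equals L c-hat, so d can simply hold the estimated areas.

def matmul(A, B):
    return [[sum(a * b for a, b in zip(row, col)) for col in zip(*B)]
            for row in A]

def contrast_chi_square(L, S, d):
    """(d' L') [L S L']^{-1} (L d) for a two-row contrast L."""
    u = [sum(l * di for l, di in zip(row, d)) for row in L]   # L d
    A = matmul(matmul(L, S), [list(col) for col in zip(*L)])  # L S L'
    det = A[0][0] * A[1][1] - A[0][1] * A[1][0]               # 2x2 inverse
    Ainv = [[A[1][1] / det, -A[0][1] / det],
            [-A[1][0] / det, A[0][0] / det]]
    Au = [sum(a * ui for a, ui in zip(row, u)) for row in Ainv]
    return sum(ui * aui for ui, aui in zip(u, Au))

def contrast_row_z(l, S, d):
    """(l' d) / sqrt(l' S l) for a single contrast row l."""
    num = sum(li * di for li, di in zip(l, d))
    Sl = [sum(sij * lj for sij, lj in zip(row, l)) for row in S]
    return num / sum(li * sli for li, sli in zip(l, Sl)) ** 0.5
```

The chi-square statistic is then referred to a chi-square distribution with df = rank($\bL \bS \bL '$) (2 in this sketch), and the row statistic to a standard normal.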