The GENMOD Procedure

Generalized Estimating Equations

Let $y_{ij}$, $j=1,\ldots ,n_ i$, $i=1,\ldots ,K$, represent the jth measurement on the ith subject. There are $n_ i$ measurements on subject i and $\sum _{i=1}^ K n_ i$ total measurements.

Correlated data are modeled using the same link function and linear predictor setup (systematic component) as the independence case. The random component is described by the same variance functions as in the independence case, but the covariance structure of the correlated measurements must also be modeled. Let the vector of measurements on the ith subject be $\mb {Y}_{i}=[y_{i1}, \ldots , y_{i n_ i}]^{\prime }$ with corresponding vector of means $\bmu _{i}=[\mu _{i1}, \ldots , \mu _{i n_ i}]^{\prime }$, and let $\mb {V}_{i}$ be the covariance matrix of $\mb {Y}_{i}$. Let the vector of independent, or explanatory, variables for the jth measurement on the ith subject be

\[  \mb {x}_{ij} = [x_{ij1}, \ldots , x_{ijp}]^{\prime }  \]

The generalized estimating equation of Liang and Zeger (1986) for estimating the $p \times 1$ vector of regression parameters $\bbeta $ is an extension of the independence estimating equation to correlated data and is given by

\[  \mb {S}(\bbeta ) = \sum _{i=1}^ K \mb {D}^\prime _ i \mb {V}_{i}^{-1} (\mb {Y}_{i}-\bmu _{i}(\bbeta )) = \mb {0}  \]


where

\[ \mb {D}_ i = \frac{\partial \bmu _ i}{\partial \bbeta }  \]


Since

\[  g(\mu _{ij}) = {\mb {x}_{ij}}^\prime {\bbeta }  \]

where g is the link function, the $p \times n_ i$ matrix of partial derivatives of the mean with respect to the regression parameters for the ith subject is given by

\[  \mb {D}^\prime _ i = \frac{\partial \bmu _{i}^{\prime }}{\partial \bbeta } = \left[\begin{array}{ccc} \displaystyle \frac{x_{i11}}{g^{\prime }(\mu _{i1})} &  \ldots &  \displaystyle \frac{x_{in_ i1}}{g^{\prime }(\mu _{in_ i})} \\ \vdots & &  \vdots \\ \displaystyle \frac{x_{i1p}}{g^{\prime }(\mu _{i1})} &  \ldots &  \displaystyle \frac{x_{in_ ip}}{g^{\prime }(\mu _{in_ i})} \\ \end{array} \right]  \]
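As a concrete illustration, $\mb {D}^\prime _ i$ can be assembled columnwise by dividing each regressor vector by $g^{\prime }(\mu _{ij})$. The following Python sketch (hypothetical data, NumPy assumed) does this for the logit link, for which $1/g^{\prime }(\mu ) = \mu (1-\mu )$:

```python
import numpy as np

def d_transpose(X, mu, g_prime):
    """p x n_i matrix D_i' with entry (l, j) = x_{ijl} / g'(mu_{ij}).

    X: n_i x p design matrix for subject i; mu: length-n_i mean vector;
    g_prime: derivative of the link function, evaluated elementwise at mu."""
    return X.T / g_prime(mu)  # divide column j by the scalar g'(mu_j)

# Logit link: g(mu) = log(mu/(1-mu)), so g'(mu) = 1/(mu(1-mu))
X = np.array([[1.0, 0.5], [1.0, 1.5], [1.0, 2.5]])  # n_i = 3, p = 2
beta = np.array([0.2, -0.1])
mu = 1.0 / (1.0 + np.exp(-X @ beta))
Dt = d_transpose(X, mu, lambda m: 1.0 / (m * (1.0 - m)))
# For the logit link, x_{ij}/g'(mu_{ij}) = x_{ij} * mu_{ij} * (1 - mu_{ij})
```

For the logit link this reduces to multiplying each row of X by $\mu _{ij}(1-\mu _{ij})$ before transposing.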

Working Correlation Matrix

Let $\mb {R}_ i(\balpha )$ be an $n_ i \times n_ i$ working correlation matrix that is fully specified by the vector of parameters $\balpha $. The covariance matrix of $\mb {Y}_{i}$ is modeled as

\[  \mb {V}_{i} = \phi \mb {A}_ i^{\frac{1}{2}} \mb {W}_ i^{-\frac{1}{2}} \mb {R}(\balpha ) \mb {W}_ i^{-\frac{1}{2}} \mb {A}_ i^{\frac{1}{2}}  \]

where $\mb {A}_ i$ is an $n_ i \times n_ i$ diagonal matrix with $v(\mu _{ij})$ as the jth diagonal element and $\mb {W}_ i$ is an $n_ i \times n_ i$ diagonal matrix with $w_{ij}$ as the jth diagonal element, where $w_{ij}$ is a weight specified with the WEIGHT statement. If there is no WEIGHT statement, $w_{ij} = 1$ for all i and j. If $\mb {R}_ i(\balpha )$ is the true correlation matrix of $\mb {Y}_{i}$, then $\mb {V}_{i}$ is the true covariance matrix of $\mb {Y}_{i}$.

The working correlation matrix is usually unknown and must be estimated. It is estimated in the iterative fitting process by using the current value of the parameter vector $\bbeta $ to compute appropriate functions of the Pearson residual

\[  e_{ij}=\frac{y_{ij}-\mu _{ij}}{\sqrt {v(\mu _{ij})/w_{ij}}}  \]
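For example, with the Poisson variance function $v(\mu )=\mu $ and unit weights, the Pearson residuals can be computed as follows (a minimal sketch with hypothetical data):

```python
import numpy as np

def pearson_residuals(y, mu, v, w=None):
    """e_ij = (y_ij - mu_ij) / sqrt(v(mu_ij) / w_ij)."""
    w = np.ones_like(y) if w is None else w
    return (y - mu) / np.sqrt(v(mu) / w)

# Poisson variance function v(mu) = mu (hypothetical data)
y = np.array([2.0, 0.0, 3.0])
mu = np.array([1.5, 1.0, 2.0])
e = pearson_residuals(y, mu, lambda m: m)
```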

If you specify the working correlation as $\mb {R}_{0} = \mb {I}$, which is the identity matrix, the GEE reduces to the independence estimating equation.

The working correlation structures supported by the GENMOD procedure, and the estimators used to estimate the working correlation parameters, are as follows.

Working Correlation Structure




Fixed

$\mr {Corr}(Y_{ij},Y_{ik}) = r_{jk} $ where $r_{jk}$ is the jkth element of a constant, user-specified correlation matrix $\mb {R}_{0}$.

The working correlation is not estimated in this case.



Independent

$ \mr {Corr}(Y_{ij},Y_{ik})= \left\{ \begin{array}{ll} 1 &  j = k \\ 0 &  j \ne k \end{array} \right. $

The working correlation is not estimated in this case.

$m$-dependent


$ \mr {Corr}(Y_{ij},Y_{i,j+t})= \left\{ \begin{array}{ll} 1 &  t = 0 \\ \alpha _{t} &  t=1,2,\ldots ,m \\ 0 &  t > m \end{array} \right. $

$ \hat{\alpha }_{t} = \frac{1}{(K_ t-p)\phi }\sum _{i=1}^{K} \sum _{j\leq n_ i-t}e_{ij}e_{i,j+t} $


where

$ K_ t = \sum _{i=1}^{K} (n_ i - t) $



Exchangeable

$ \mr {Corr}(Y_{ij},Y_{ik})=\left\{ \begin{array}{ll} 1 &  j = k \\ \alpha &  j \neq k \\ \end{array}\right. $

$ \hat{\alpha } = \frac{1}{(N^{*}-p)\phi }\sum _{i=1}^{K} \sum _{j < k}e_{ij}e_{ik} $


where

$ N^{*}=0.5\sum _{i=1}^{K} n_ i(n_ i-1) $
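The exchangeable estimator can be transcribed directly, using the identity $\sum _{j<k} e_{ij}e_{ik} = \frac{1}{2}[(\sum _ j e_{ij})^2 - \sum _ j e_{ij}^2]$ per subject (a sketch; `residuals` holds one Pearson residual vector per subject):

```python
import numpy as np

def exchangeable_alpha(residuals, p, phi=1.0):
    """Moment estimator of the exchangeable correlation alpha.

    Uses sum_{j<k} e_j e_k = ((sum_j e_j)^2 - sum_j e_j^2) / 2 per subject."""
    cross, n_star = 0.0, 0
    for e in residuals:
        cross += (e.sum() ** 2 - (e ** 2).sum()) / 2.0
        n_star += len(e) * (len(e) - 1) // 2   # number of pairs in subject i
    return cross / ((n_star - p) * phi)
```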



Unstructured

$ \mr {Corr}(Y_{ij},Y_{ik})= \left\{  \begin{array}{ll} 1 &  j = k \\ \alpha _{jk} &  j \neq k \\ \end{array}\right. $

$ \hat{\alpha }_{jk} = \frac{1}{(K-p)\phi }\sum _{i=1}^{K}e_{ij}e_{ik} $

Autoregressive AR(1)


$\mr {Corr}(Y_{ij},Y_{i,j+t})= \alpha ^{t}$ for $t=0,1,2,\ldots ,n_ i-j $

$ \hat{\alpha } = \frac{1}{(K_1-p)\phi }\sum _{i=1}^ K \sum _{j\leq n_ i-1}e_{ij}e_{i,j+1} $


where

$ K_1 = \sum _{i=1}^{K} (n_ i - 1) $
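The AR(1) estimator is a lag-1 cross-product of residuals; a minimal sketch:

```python
import numpy as np

def ar1_alpha(residuals, p, phi=1.0):
    """Lag-1 moment estimator for the AR(1) working correlation."""
    cross = sum(float((e[:-1] * e[1:]).sum()) for e in residuals)
    k1 = sum(len(e) - 1 for e in residuals)   # K_1 = sum (n_i - 1)
    return cross / ((k1 - p) * phi)
```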

Dispersion Parameter

The dispersion parameter $\phi $ is estimated by

\[  \hat{\phi } = \frac{1}{N-p} \sum _{i=1}^ K \sum _{j=1}^{n_ i} e_{ij}^2  \]

where $N = \sum _{i=1}^ K n_ i$ is the total number of measurements and p is the number of regression parameters.
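In code, $\hat{\phi }$ is just the sum of squared Pearson residuals with a degrees-of-freedom correction (sketch):

```python
import numpy as np

def dispersion(residuals, p):
    """phi-hat: sum of squared Pearson residuals over (N - p)."""
    sq = sum(float((e ** 2).sum()) for e in residuals)
    N = sum(len(e) for e in residuals)   # total number of measurements
    return sq / (N - p)
```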

The square root of $\hat{\phi }$ is reported by PROC GENMOD as the scale parameter in the "Analysis of GEE Parameter Estimates Model-Based Standard Error Estimates" output table. If a fixed scale parameter is specified with the NOSCALE option in the MODEL statement, then the fixed value is used in estimating the model-based covariance matrix and standard errors.

Fitting Algorithm

The following is an algorithm for fitting the specified model by using GEEs. Note that this is not in general a likelihood-based method of estimation, so that inferences based on likelihoods are not possible for GEE methods.

  1. Compute an initial estimate of $\bbeta $ with an ordinary generalized linear model assuming independence.

  2. Compute the working correlations $\mb {R}$ based on the standardized residuals, the current $\bbeta $, and the assumed structure of $\mb {R}$.

  3. Compute an estimate of the covariance:

    \[  \mb {V}_{i} = \phi \mb {A}_ i^{\frac{1}{2}} \mb {W}_ i^{-\frac{1}{2}} \hat{\mb {R}}(\balpha ) \mb {W}_ i^{-\frac{1}{2}} \mb {A}_ i^{\frac{1}{2}}  \]
  4. Update $\bbeta $:

    \[  \bbeta _{r+1} = \bbeta _{r} + \left[\sum _{i=1}^{K} \frac{\partial \bmu _ i}{\partial \bbeta }^{\prime } \mb {V}_{i}^{-1} \frac{\partial \bmu _ i}{\partial \bbeta } \right]^{-1} \left[ \sum _{i=1}^ K \frac{\partial \bmu _ i}{\partial \bbeta }^{\prime }\mb {V}_{i}^{-1} (\mb {Y}_{i}-\bmu _ i) \right]  \]
  5. Repeat steps 2-4 until convergence.
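The steps above can be sketched for the special case of a Poisson model with log link and the independence working correlation, in which case $\mb {V}_ i = \phi \, \mr {diag}(\mu _{ij})$, step 2 is trivial, and $\phi $ cancels in the update (with $R = I$ the update coincides with ordinary Fisher scoring for Poisson regression). Everything here is an illustrative assumption, not PROC GENMOD's implementation:

```python
import numpy as np

def fit_gee_independence(X_list, y_list, max_iter=50, tol=1e-8):
    """GEE fit for a Poisson model with log link under the independence
    working correlation (R = I), so V_i = phi * diag(mu_i); phi cancels
    in the update and is set to 1 here.  Illustrative sketch only."""
    p = X_list[0].shape[1]
    beta = np.zeros(p)                       # step 1: initial estimate
    for _ in range(max_iter):
        H = np.zeros((p, p))
        s = np.zeros(p)
        for X, y in zip(X_list, y_list):
            mu = np.exp(X @ beta)            # inverse log link
            D = X * mu[:, None]              # d mu_i / d beta for the log link
            Vinv = np.diag(1.0 / mu)         # steps 2-3: V_i^{-1} with R = I
            H += D.T @ Vinv @ D              # sum of D' V^{-1} D
            s += D.T @ Vinv @ (y - mu)       # sum of D' V^{-1} (Y - mu)
        step = np.linalg.solve(H, s)         # step 4: update beta
        beta = beta + step
        if np.max(np.abs(step)) < tol:       # step 5: iterate to convergence
            break
    return beta
```

For an intercept-only cluster the fit recovers the log of the sample mean, as expected for Poisson regression.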

Missing Data

See Diggle, Liang, and Zeger (1994, Chapter 11) for a discussion of missing values in longitudinal data. Suppose that you intend to take measurements $Y_{i1}, \ldots , Y_{i n_ i}$ for the ith unit. Missing values are called dropouts if $Y_{ij}$ is missing whenever $Y_{ik}$ is missing for all $j \ge k$; that is, once a measurement is missing, all subsequent measurements on that unit are also missing. Missing values that occur intermixed with nonmissing values are called intermittent missing values. The GENMOD procedure can estimate the working correlation from data containing both types of missing values by using the all available pairs method, in which all nonmissing pairs of data are used in the moment estimators of the working correlation parameters defined previously. The resulting covariances and standard errors are valid under the missing completely at random (MCAR) assumption.

For example, for the unstructured working correlation model,

\[  \hat{\alpha }_{jk} = \frac{1}{(K^{\prime }-p) \phi } \sum e_{ij} e_{ik}  \]

where the sum is over the units that have nonmissing measurements at times j and k, and $K^{\prime }$ is the number of units with nonmissing measurements at j and k. Estimates of the parameters for other working correlation types are computed in a similar manner, using available nonmissing pairs in the appropriate moment estimators.

The contribution of the ith unit to the parameter update equation is computed by omitting the elements of $(\mb {Y}_{i} - \bmu _{i})$, the columns of $\mb {D}_ i^{\prime }=\frac{\partial \bmu _ i}{\partial \bbeta }^{\prime }$, and the rows and columns of $\mb {V}_{i}$ that correspond to missing measurements.

Parameter Estimate Covariances

The model-based estimator of $\mbox{Cov}(\hat{\bbeta })$ is given by

\[  \bSigma _{m}(\hat{\bbeta }) = \mb {I}_0^{-1}  \]


where

\[  \mb {I}_0 = \sum _{i=1}^ K \frac{\partial \bmu _{i}}{\partial \bbeta }^{\prime }\mb {V}_{i}^{-1} \frac{\partial \bmu _{i}}{\partial \bbeta }  \]

This is the GEE equivalent of the inverse of the Fisher information matrix that is often used in generalized linear models as an estimator of the covariance matrix of the maximum likelihood estimator of $\bbeta $. It is a consistent estimator of the covariance matrix of $\hat{\bbeta }$ if the mean model and the working correlation matrix are correctly specified.

The estimator

\[  \bSigma _{e} = \mb {I}_0^{-1} \mb {I}_1 \mb {I}_0^{-1}  \]

is called the empirical, or robust, estimator of the covariance matrix of $\hat{\bbeta }$, where

\[  \mb {I}_{1} = \sum _{i=1}^ K \frac{\partial \bmu _ i}{\partial \bbeta }^{\prime }\mb {V}_{i}^{-1} \mr {Cov}(\mb {Y}_{i}) \mb {V}_{i}^{-1} \frac{\partial \bmu _ i}{\partial \bbeta }  \]

It has the property of being a consistent estimator of the covariance matrix of $\hat{\bbeta }$, even if the working correlation matrix is misspecified—that is, if $\mr {Cov}(\mb {Y}_{i}) \neq \mb {V}_{i}$. For further information about the robust variance estimate, see Zeger, Liang, and Albert (1988); Royall (1986); White (1982). In computing $\bSigma _ e$, $\bbeta $ and $\phi $ are replaced by estimates, and $\mr {Cov}(\mb {Y}_{i})$ is replaced by the estimate

\[  (\mb {Y}_{i} - \bmu _{i}(\hat{\bbeta }))(\mb {Y}_{i} - \bmu _{i}(\hat{\bbeta }))^{\prime }  \]
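Given per-cluster derivative matrices, working-covariance inverses, and raw residuals $\mb {r}_ i = \mb {Y}_{i} - \bmu _{i}(\hat{\bbeta })$, both covariance estimates can be assembled as follows (illustrative sketch with externally supplied pieces):

```python
import numpy as np

def gee_covariances(D_list, Vinv_list, resid_list):
    """Model-based estimate I0^{-1} and empirical (sandwich) estimate
    I0^{-1} I1 I0^{-1}, with Cov(Y_i) replaced by r_i r_i'."""
    p = D_list[0].shape[1]
    I0 = np.zeros((p, p))
    I1 = np.zeros((p, p))
    for D, Vinv, r in zip(D_list, Vinv_list, resid_list):
        I0 += D.T @ Vinv @ D
        u = D.T @ Vinv @ r        # cluster contribution D' V^{-1} r
        I1 += np.outer(u, u)      # D' V^{-1} r r' V^{-1} D
    model_based = np.linalg.inv(I0)
    empirical = model_based @ I1 @ model_based
    return model_based, empirical
```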

Multinomial GEEs

Lipsitz, Kim, and Zhao (1994) and Miller, Davis, and Landis (1993) describe how to extend GEEs to multinomial data. Currently, only the independent working correlation is available for multinomial models in PROC GENMOD.

Alternating Logistic Regressions

If the responses are binary (that is, they take only two values), then there is an alternative method to account for the association among the measurements. The alternating logistic regressions (ALR) algorithm of Carey, Zeger, and Diggle (1993) models the association between pairs of responses with log odds ratios, instead of with correlations, as ordinary GEEs do.

For binary data, the correlation between the jth and kth response is, by definition,

\[  \mr {Corr}(Y_{ij},Y_{ik}) = \frac{\Pr (Y_{ij}=1,Y_{ik}=1)-\mu _{ij}\mu _{ik}}{\sqrt {\mu _{ij}(1-\mu _{ij})\mu _{ik}(1-\mu _{ik})}}  \]

The joint probability in the numerator satisfies the following bounds, by elementary properties of probability, since $\mu _{ij}=\Pr (Y_{ij}=1)$:

\[  \max (0,\mu _{ij}+\mu _{ik}-1) \leq \Pr (Y_{ij}=1,Y_{ik}=1) \leq \min ( \mu _{ij},\mu _{ik})  \]

The correlation, therefore, is constrained to be within limits that depend in a complicated way on the means of the data.

The odds ratio, defined as

\[  \mr {OR}(Y_{ij},Y_{ik}) = \frac{\Pr (Y_{ij}=1,Y_{ik}=1)\Pr (Y_{ij}=0,Y_{ik}=0)}{\Pr (Y_{ij}=1,Y_{ik}=0)\Pr (Y_{ij}=0,Y_{ik}=1)}  \]

is not constrained by the means and is preferred, in some cases, to correlations for binary data.
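A small numeric check makes the contrast concrete. With $\mu _{ij}=0.9$ and $\mu _{ik}=0.2$, the bounds above confine the correlation to roughly $(-0.67,\, 0.17)$, while the log odds ratio can take any real value (hypothetical sketch):

```python
import numpy as np

def binary_corr_bounds(mu_j, mu_k):
    """Correlation limits implied by the bounds on Pr(Y_j=1, Y_k=1)."""
    lo = max(0.0, mu_j + mu_k - 1.0)
    hi = min(mu_j, mu_k)
    denom = np.sqrt(mu_j * (1 - mu_j) * mu_k * (1 - mu_k))
    return (lo - mu_j * mu_k) / denom, (hi - mu_j * mu_k) / denom

def log_odds_ratio(p11, mu_j, mu_k):
    """log OR from the joint success probability p11 and the two margins."""
    p10, p01 = mu_j - p11, mu_k - p11
    p00 = 1.0 - mu_j - mu_k + p11
    return np.log(p11 * p00 / (p10 * p01))
```

Independence ($p_{11} = \mu _{ij}\mu _{ik}$) gives a log odds ratio of zero, matching $\gamma _{ijk}=0$ as the no-association value.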

The ALR algorithm seeks to model the logarithm of the odds ratio, $\gamma _{ijk} = \log (\mr {OR}(Y_{ij},Y_{ik}))$, as

\[  \gamma _{ijk} = \mb {z}_{ijk}^{\prime }\balpha  \]

where $\balpha $ is a $q \times 1$ vector of regression parameters and $\mb {z}_{ijk}$ is a fixed, specified vector of coefficients.

The parameter $\gamma _{ijk}$ can take any value in $(-\infty , \infty )$ with $\gamma _{ijk} = 0$ corresponding to no association.

The log odds ratio, when modeled in this way with a regression model, can take different values in subgroups defined by $\mb {z}_{ijk}$. For example, $\mb {z}_{ijk}$ can define subgroups within clusters, or it can define block effects between clusters.

You specify a GEE model for binary data that uses log odds ratios by specifying a model for the mean, as in ordinary GEEs, and a model for the log odds ratios. You can use any of the link functions appropriate for binary data in the model for the mean, such as logistic, probit, or complementary log-log. The ALR algorithm alternates between a GEE step to update the model for the mean and a logistic regression step to update the log odds ratio model. Upon convergence, the ALR algorithm provides estimates of the regression parameters for the mean, $\bbeta $, the regression parameters for the log odds ratios, $\balpha $, their standard errors, and their covariances.

Specifying Log Odds Ratio Models

Specifying a regression model for the log odds ratio requires you to specify the rows $\mb {z}_{ijk}$ of the $\mb {z}$ matrix for each cluster i and each unique within-cluster pair $(j, k)$. The GENMOD procedure provides several methods of specifying $\mb {z}_{ijk}$. These are controlled by the LOGOR= option and associated options in the REPEATED statement. The supported keywords and the resulting log odds ratio models are described as follows.


EXCH

specifies exchangeable log odds ratios. In this model, the log odds ratio is a constant for all clusters i and pairs $(j,k)$. The parameter $\alpha $ is the common log odds ratio.

\[ \mb {z}_{ijk} = 1 \; \;  \mbox{ for all } \; \;  i, j, k  \]

FULLCLUST

specifies fully parameterized clusters. Each cluster is parameterized in the same way, and there is a parameter for each unique pair within clusters. If a complete cluster is of size n, then there are $\frac{n(n-1)}{2}$ parameters in the vector $\balpha $. For example, if a full cluster is of size 4, then there are $\frac{4 \times 3}{2}=6$ parameters, and the $\mb {z}$ matrix is of the form

\[  \bZ = \left[ \begin{array}{cccccc} 1 &  0 &  0 &  0 &  0 &  0 \\ 0 &  1 &  0 &  0 &  0 &  0 \\ 0 &  0 &  1 &  0 &  0 &  0 \\ 0 &  0 &  0 &  1 &  0 &  0 \\ 0 &  0 &  0 &  0 &  1 &  0 \\ 0 &  0 &  0 &  0 &  0 &  1 \\ \end{array} \right]  \]

The elements of $\balpha $ correspond to log odds ratios for cluster pairs in the following order: $(1,2), (1,3), (1,4), (2,3), (2,4), (3,4)$.

LOGORVAR(variable)

specifies log odds ratios by cluster. The argument variable is a variable name that defines the block effects between clusters. The log odds ratios are constant within clusters, but they take a different value for each different value of the variable. For example, if Center is a variable in the input data set taking a different value for k treatment centers, then specifying LOGOR=LOGORVAR(Center) requests a model with different log odds ratios for each of the k centers, constant within center.


NESTK

specifies k-nested log odds ratios. You must also specify the SUBCLUST=variable option to define subclusters within clusters. Within each cluster, PROC GENMOD computes a log odds ratio parameter for pairs having the same value of variable for both members of the pair and one log odds ratio parameter for each unique combination of different values of variable.


NEST1

specifies 1-nested log odds ratios. You must also specify the SUBCLUST=variable option to define subclusters within clusters. There are two log odds ratio parameters for this model. Pairs having the same value of variable correspond to one parameter; pairs having different values of variable correspond to the other parameter. For example, if clusters are hospitals and subclusters are wards within hospitals, then patients within the same ward have one log odds ratio parameter, and patients from different wards have the other parameter.


ZFULL

specifies the full $\mb {z}$ matrix. You must also specify a SAS data set containing the $\mb {z}$ matrix with the ZDATA=data-set-name option. Each observation in the data set corresponds to one row of the $\mb {z}$ matrix. You must specify the ZDATA data set as if all clusters are complete—that is, as if all clusters are the same size and there are no missing observations. The ZDATA data set has $K [n_{\mathit{max}}(n_{\mathit{max}}-1)/2]$ observations, where K is the number of clusters and $n_{\mathit{max}}$ is the maximum cluster size. If the members of cluster i are ordered as $1,2,\cdots ,n$, then the rows of the $\mb {z}$ matrix must be specified for pairs in the order $(1,2),(1,3),\cdots ,(1,n),(2,3),\cdots ,(2,n),\cdots ,(n-1,n)$. The variables specified in the REPEATED statement for the SUBJECT effect must also be present in the ZDATA= data set to identify clusters. You must specify variables in the data set that define the columns of the $\mb {z}$ matrix by the ZROW=variable-list option. If there are q columns (q variables in variable-list), then there are q log odds ratio parameters. You can optionally specify variables indicating the cluster pairs corresponding to each row of the $\mb {z}$ matrix with the YPAIR=(variable1, variable2) option. If you specify this option, the data from the ZDATA data set are sorted within each cluster by variable1 and variable2. See Example 42.6 for an example of specifying a full $\mb {z}$ matrix.


ZREP

specifies a replicated $\mb {z}$ matrix. You specify $\mb {z}$ matrix data exactly as you do for the ZFULL case, except that you specify only one complete cluster. The $\mb {z}$ matrix for the one cluster is replicated for each cluster. The number of observations in the ZDATA data set is $\frac{n_{\mathit{max}}(n_{\mathit{max}}-1)}{2}$, where $n_{\mathit{max}}$ is the size of a complete cluster (a cluster with no missing observations).


ZREP(matrix)

specifies direct input of the replicated $\mb {z}$ matrix. You specify the $\mb {z}$ matrix for one cluster with the syntax LOGOR=ZREP ( $(y_1\; \;  y_2) z_1\; \;  z_2 \cdots z_ q, \cdots $ ), where $y_1$ and $y_2$ are numbers representing a pair of observations and the values $z_1, z_2, \cdots , z_ q$ make up the corresponding row of the $\mb {z}$ matrix. The number of rows specified is $\frac{n_{\mathit{max}}(n_{\mathit{max}}-1)}{2}$, where $n_{\mathit{max}}$ is the size of a complete cluster (a cluster with no missing observations). For example,

logor =  zrep((1 2) 1 0,
              (1 3) 1 0,
              (1 4) 1 0,
              (2 3) 1 1,
              (2 4) 1 1,
              (3 4) 1 1)

specifies the $\frac{4 \times 3}{2}=6$ rows of the $\mb {z}$ matrix for a cluster of size 4 with q = 2 log odds ratio parameters. The log odds ratio for the pairs (1 2), (1 3), (1 4) is $\alpha _1$, and the log odds ratio for the pairs (2 3), (2 4), (3 4) is $\alpha _1 + \alpha _2$.
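The within-cluster pair ordering used for the $\mb {z}$ matrix rows, $(1,2),(1,3),\cdots ,(n-1,n)$, can be generated directly (a sketch):

```python
from itertools import combinations

def pair_order(n):
    """Pair order for z-matrix rows: (1,2),(1,3),...,(1,n),(2,3),...,(n-1,n)."""
    return list(combinations(range(1, n + 1), 2))
```

For a cluster of size 4 this yields the six pairs in the order shown above.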

Quasi-likelihood Information Criterion

The quasi-likelihood information criterion (QIC) was developed by Pan (2001) as a modification of the Akaike information criterion (AIC) to apply to models fit by GEEs.

Define the quasi-likelihood under the independence working correlation assumption, evaluated with the parameter estimates under the working correlation of interest as

\[  Q(\hat{\bbeta }(R), \phi ) = \sum _{i=1}^ K\sum _{j=1}^{n_ i} Q(\hat{\bbeta }(R), \phi ; (Y_{ij}, \mb {X}_{ij})) \]

where the quasi-likelihood contribution of the jth observation in the ith cluster is defined in the section Quasi-likelihood Functions and $\hat{\bbeta }(R)$ are the parameter estimates obtained from GEEs with the working correlation of interest R.

QIC is defined as

\[  \mr {QIC}(R) = -2Q(\hat{\bbeta }(R), \phi ) + 2\mr {trace}(\hat{\Omega }_ I\hat{V}_ R)  \]

where $\hat{V}_ R$ is the robust covariance estimate and $\hat{\Omega }_ I$ is the inverse of the model-based covariance estimate under the independent working correlation assumption, evaluated at $\hat{\bbeta }(R)$, the parameter estimates obtained from GEEs with the working correlation of interest R.

PROC GENMOD also computes an approximation to $\mr {QIC}(R)$ defined by Pan (2001) as

\[  \mr {QIC}_ u(R) = -2Q(\hat{\bbeta }(R), \phi ) + 2p  \]

where p is the number of regression parameters.
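Both criteria are cheap to compute once the covariance estimates are available (sketch; when the model-based covariance under independence approximates the robust covariance, $\mr {trace}(\hat{\Omega }_ I\hat{V}_ R)$ is close to p and the two criteria nearly agree):

```python
import numpy as np

def qic(q_hat, omega_I, v_R, p):
    """QIC(R) and QIC_u(R).  q_hat: quasi-likelihood Q(beta-hat(R), phi);
    omega_I: inverse model-based covariance under independence;
    v_R: robust covariance estimate; p: number of regression parameters."""
    full = -2.0 * q_hat + 2.0 * np.trace(omega_I @ v_R)
    approx = -2.0 * q_hat + 2.0 * p
    return full, approx
```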

Pan (2001) notes that QIC is appropriate for selecting regression models and working correlations, whereas $\mr {QIC}_ u$ is appropriate only for selecting regression models.

Quasi-likelihood Functions

See McCullagh and Nelder (1989) and Hardin and Hilbe (2003) for discussions of quasi-likelihood functions. The contribution of observation j in cluster i to the quasi-likelihood function evaluated at the regression parameters $\bbeta $ is given by $Q(\bbeta , \phi ; (Y_{ij}, \mb {X}_{ij})) = \frac{Q_{ij}}{\phi }$, where $Q_{ij}$ is defined in the following list. These are used in the computation of the quasi-likelihood information criteria (QIC) for goodness of fit of models fit with GEEs. The $w_{ij}$ are prior weights, if any, specified with the WEIGHT or FREQ statements. Note that the definition of the quasi-likelihood for the negative binomial differs from that given in McCullagh and Nelder (1989). The definition used here allows the negative binomial quasi-likelihood to approach the Poisson as $k\rightarrow 0$.

  • Normal:

    \[  Q_{ij} = -\frac{1}{2} w_{ij}(y_{ij}-\mu _{ij})^2  \]
  • Inverse Gaussian:

    \[  Q_{ij} = \frac{w_{ij}(\mu _{ij} - .5y_{ij})}{\mu _{ij}^2 }  \]
  • Gamma:

    \[  Q_{ij} = -w_{ij}\left[ \frac{y_{ij}}{\mu _{ij}} + \log (\mu _{ij}) \right]  \]
  • Negative binomial:

    \[  Q_{ij} = w_{ij}\left[ \log \Gamma \left( y_{ij} + \frac{1}{k}\right) - \log \Gamma \left(\frac{1}{k}\right) + y_{ij}\log \left(\frac{k\mu _{ij}}{1+k\mu _{ij}}\right) + \frac{1}{k}\log \left(\frac{1}{1+k\mu _{ij}}\right)\right]  \]
  • Poisson:

    \[  Q_{ij} = w_{ij}(y_{ij} \log (\mu _{ij}) - \mu _{ij} )  \]
  • Binomial:

    \[  Q_{ij} = w_{ij}[r_{ij} \log (p_{ij}) + (n_{ij}-r_{ij}) \log (1-p_{ij})]  \]
  • Multinomial (s categories):

    \[  Q_{ij} = w_{ij}\sum _{k=1}^ s y_{ijk}\log (\mu _{ijk})  \]
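The limiting behavior noted above for the negative binomial can be checked numerically: as $k\rightarrow 0$, its $Q_{ij}$ approaches the Poisson $Q_{ij}$ (sketch):

```python
from math import lgamma, log

def q_poisson(y, mu, w=1.0):
    """Poisson quasi-likelihood contribution Q_ij."""
    return w * (y * log(mu) - mu)

def q_negbin(y, mu, k, w=1.0):
    """Negative binomial Q_ij as defined above; -> Poisson Q_ij as k -> 0."""
    return w * (lgamma(y + 1.0 / k) - lgamma(1.0 / k)
                + y * log(k * mu / (1.0 + k * mu))
                + (1.0 / k) * log(1.0 / (1.0 + k * mu)))
```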

Generalized Score Statistics

Boos (1992) and Rotnitzky and Jewell (1990) describe score tests applicable to testing $\mb {L}^\prime \bbeta = \mb {0}$ in GEEs, where $\mb {L}^\prime $ is a user-specified $r \times p$ contrast matrix or a contrast for a Type 3 test of hypothesis.

Let $\tilde{\bbeta }$ be the regression parameters resulting from solving the GEE under the restricted model $\mb {L}^\prime \bbeta = \mb {0}$, and let $\mb {S}(\tilde{\bbeta })$ be the generalized estimating equation values at $\tilde{\bbeta }$.

The generalized score statistic is

\[  T = \mb {S}(\tilde{\bbeta })^{\prime }\bSigma _{m}\mb {L} (\mb {L}^\prime \bSigma _{e}\mb {L})^{-1} \mb {L}^\prime \bSigma _{m}\mb {S}(\tilde{\bbeta })  \]

where $\bSigma _{m}$ is the model-based covariance estimate and $\bSigma _{e}$ is the empirical covariance estimate. The p-values for T are computed based on the chi-square distribution with r degrees of freedom.
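Assembled from the pieces defined in this section, the statistic is a few matrix products (sketch; `L` is the $p \times r$ contrast matrix):

```python
import numpy as np

def generalized_score(S, sigma_m, sigma_e, L):
    """T = S' Sigma_m L (L' Sigma_e L)^{-1} L' Sigma_m S for a p x r
    contrast matrix L; compare T to chi-square with r degrees of freedom."""
    a = L.T @ sigma_m @ S                  # r-vector L' Sigma_m S
    return float(a @ np.linalg.inv(L.T @ sigma_e @ L) @ a)
```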