The MIXED Procedure

Residuals and Influence Diagnostics

Residual Diagnostics

Consider a residual vector of the form $\widetilde{\mb {e}} = \mb {PY}$, where $\mb {P}$ is a projection matrix, possibly an oblique projector. A typical element $\widetilde{e}_ i$ with variance $v_ i$ and estimated variance $\widehat{v}_ i$ is said to be standardized as

\[  \frac{\widetilde{e}_ i}{\sqrt {\mr {Var}[\widetilde{e}_ i]}} = \frac{\widetilde{e}_ i}{\sqrt {v_ i}}  \]

and studentized as

\[  \frac{\widetilde{e}_ i}{\sqrt {\widehat{v}_ i}}  \]

External studentization uses an estimate of $\mr {Var}[\widetilde{e}_ i]$ that does not involve the ith observation. Externally studentized residuals are often preferred over internally studentized residuals because they have well-known distributional properties in standard linear models for independent data.

Residuals that are scaled by the estimated variance of the response, i.e., $\widetilde{e}_ i/\sqrt {\widehat{\mr {Var}}[Y_ i]}$, are referred to as Pearson-type residuals.
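These three scalings can be illustrated outside PROC MIXED with a small numpy sketch for ordinary least squares, where $\mr {Var}[Y_ i] = \sigma ^2$ and the estimated variance of a fitted residual is $\widehat{\sigma }^2(1 - h_{ii})$; all data and names below are invented for the example:

```python
import numpy as np

# Hypothetical toy fit: ordinary least squares, V = sigma^2 * I.
rng = np.random.default_rng(1)
n = 20
x = np.linspace(0.0, 1.0, n)
X = np.column_stack([np.ones(n), x])
y = 1.0 + 2.0 * x + rng.normal(0.0, 0.5, n)

H = X @ np.linalg.solve(X.T @ X, X.T)            # hat matrix
e = y - H @ y                                    # raw residuals
s2 = e @ e / (n - 2)                             # estimated residual variance
# Internally studentized: scale by the estimated variance of e_i,
# which is s2 * (1 - h_ii) in this model.
student = e / np.sqrt(s2 * (1.0 - np.diag(H)))
# Pearson-type: scale by the estimated variance of Y_i, here simply s2.
pearson = e / np.sqrt(s2)
```

External studentization would replace `s2` with a variance estimate computed without the ith observation.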

Marginal and Conditional Residuals

The marginal and conditional means in the linear mixed model are $\textup{E}[\mb {Y}] = \mb {X}\bbeta $ and $\textup{E}[\mb {Y} | \bgamma ] = \mb {X}\bbeta + \mb {Z}\bgamma $, respectively. Accordingly, the vector $\mb {r}_ m$ of marginal residuals is defined as

\[  \mb {r}_ m = \mb {Y} - \mb {X}\widehat{\bbeta }  \]

and the vector $\mb {r}_ c$ of conditional residuals is

\[  \mb {r}_ c = \mb {Y} - \mb {X}\widehat{\bbeta } - \mb {Z}\widehat{\bgamma } = \mb {r}_ m - \mb {Z}\widehat{\bgamma }  \]

Following Gregoire, Schabenberger, and Barrett (1995), let $\mb {Q} = \mb {X}(\mb {X}’\widehat{\mb {V}}^{-1}\mb {X})^{-}\mb {X}’$ and $\mb {K} = \mb {I} - \mb {Z}\widehat{\mb {G}}\mb {Z}’\widehat{\mb {V}}^{-1}$. Then

$\displaystyle  \widehat{\mr {Var}}[\mb {r}_ m] = \widehat{\mb {V}} - \mb {Q}  $
$\displaystyle  \widehat{\mr {Var}}[\mb {r}_ c] = \mb {K}(\widehat{\mb {V}} - \mb {Q})\mb {K}’  $
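As a hedged illustration outside PROC MIXED, the following numpy sketch fits a small random-intercept model with the variance components assumed known, computes the marginal and conditional residuals, and forms the matrix K defined above; the data and parameter values are invented:

```python
import numpy as np

# Invented random-intercept example with known variance components:
# y = X beta + Z gamma + e, G = tau2 * I, R = sigma2 * I.
rng = np.random.default_rng(0)
n_subj, n_per = 4, 5
n = n_subj * n_per
X = np.column_stack([np.ones(n), rng.normal(size=n)])
Z = np.kron(np.eye(n_subj), np.ones((n_per, 1)))      # random-intercept design
tau2, sigma2 = 1.0, 0.25
gamma = rng.normal(0.0, np.sqrt(tau2), n_subj)
y = X @ np.array([1.0, 2.0]) + Z @ gamma + rng.normal(0.0, np.sqrt(sigma2), n)

G = tau2 * np.eye(n_subj)
V = Z @ G @ Z.T + sigma2 * np.eye(n)
Vinv = np.linalg.inv(V)
beta_hat = np.linalg.solve(X.T @ Vinv @ X, X.T @ Vinv @ y)  # GLS estimate
r_m = y - X @ beta_hat                                      # marginal residuals
gamma_hat = G @ Z.T @ Vinv @ r_m                            # BLUP of gamma
r_c = r_m - Z @ gamma_hat                                   # conditional residuals
K = np.eye(n) - Z @ G @ Z.T @ Vinv                          # r_c = K r_m
```

Because the BLUP satisfies Z gamma_hat = Z G Z' V^{-1} r_m, the identity r_c = K r_m holds exactly here.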

For an individual observation the raw, studentized, and Pearson-type residuals computed by the MIXED procedure are given in Table 59.25.

Table 59.25: Residual Types Computed by the MIXED Procedure

Type of Residual | Marginal | Conditional

Raw | $r_{mi} = Y_ i - \mb {x}’_ i\widehat{\bbeta }$ | $r_{ci} = r_{mi} - \mb {z}’_ i\widehat{\bgamma }$

Studentized | $r_{mi}^{\mathit{student}} = \frac{r_{mi}}{\sqrt {\widehat{\mr {Var}}[r_{mi}]}}$ | $r_{ci}^{\mathit{student}} = \frac{r_{ci}}{\sqrt {\widehat{\mr {Var}}[r_{ci}]}}$

Pearson-type | $r_{mi}^{\mathit{pearson}} = \frac{r_{mi}}{\sqrt {\widehat{\mr {Var}}[Y_ i]}}$ | $r_{ci}^{\mathit{pearson}} = \frac{r_{ci}}{\sqrt {\widehat{\mr {Var}}[Y_ i|\bgamma ]}}$

When the OUTPM= option is specified in addition to the RESIDUAL option in the MODEL statement, $r_{mi}$, $r_{mi}^{\mathit{student}}$, and $r_{mi}^{\mathit{pearson}}$ are added to the data set as variables Resid, StudentResid, and PearsonResid, respectively. When the OUTP= option is specified, $r_{ci}$, $r_{ci}^{\mathit{student}}$, and $r_{ci}^{\mathit{pearson}}$ are added to the data set. Raw residuals are part of the OUTPM= and OUTP= data sets even without the RESIDUAL option.

Scaled Residuals

For correlated data, a set of scaled quantities can be defined through the Cholesky decomposition of the variance-covariance matrix. Since fitted residuals in linear models are rank-deficient, it is customary to draw on the variance-covariance matrix of the data. If $\mr {Var}[\mb {Y}] = \bV $ and $\bC ’\bC =\bV $, then $\mb {C}’^{-1}\mb {Y}$ has uniform dispersion and its elements are uncorrelated.

Scaled residuals in a mixed model are meaningful for quantities based on the marginal distribution of the data. Let $\widehat{\mb {C}}$ denote the Cholesky root of $\widehat{\mb {V}}$, so that $\widehat{\mb {C}}’\widehat{\mb {C}} = \widehat{\mb {V}}$, and define

$\displaystyle  \mb {Y}_{c} = \widehat{\mb {C}}’^{-1}\mb {Y}  $
$\displaystyle  \mb {r}_{m(c)} = \widehat{\mb {C}}’^{-1}\mb {r}_ m  $

By analogy with other scalings, the inverse Cholesky decomposition can also be applied to the residual vector, $\widehat{\mb {C}}’^{-1}\mb {r}_ m$, although $\mb {V}$ is not the variance-covariance matrix of $\mb {r}_ m$.

Based on $\mb {Y}_{c}$, it can be difficult to diagnose whether the covariance structure of the model has been specified correctly, because the inverse Cholesky transformation also affects the expected value of $\mb {Y}_{c}$. You can instead draw on $\mb {r}_{m(c)}$ as a vector of (approximately) uncorrelated data with constant mean.
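A minimal numpy sketch of this scaling, with an arbitrary positive definite matrix standing in for the estimated covariance matrix and all data invented:

```python
import numpy as np

# Arbitrary positive definite V standing in for V-hat; toy X and y.
rng = np.random.default_rng(2)
n = 6
A = rng.normal(size=(n, n))
V = A @ A.T + n * np.eye(n)
C = np.linalg.cholesky(V).T          # upper-triangular root with C' C = V
X = np.column_stack([np.ones(n), np.arange(n, dtype=float)])
y = rng.normal(size=n)

Vinv = np.linalg.inv(V)
beta = np.linalg.solve(X.T @ Vinv @ X, X.T @ Vinv @ y)
r_m = y - X @ beta                   # marginal residuals
y_c = np.linalg.solve(C.T, y)        # scaled data:      C'^{-1} y
r_mc = np.linalg.solve(C.T, r_m)     # scaled residuals: C'^{-1} r_m
```

If Var[Y] were exactly V, the transformed vector C'^{-1} Y would have identity covariance, which is the sense in which the scaled quantities have uniform dispersion and uncorrelated elements.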

When the OUTPM= option in the MODEL statement is specified in addition to the VCIRY option, $\mb {Y}_{c}$ is added as variable ScaledDep and $\mb {r}_{m(c)}$ is added as ScaledResid to the data set.

Influence Diagnostics

Basic Idea and Statistics

The general idea of quantifying the influence of one or more observations relies on computing parameter estimates based on all data points, removing the cases in question from the data, refitting the model, and computing statistics based on the change between full-data and reduced-data estimation. Influence statistics can be coarsely grouped by the aspect of estimation that is their primary target:

  • overall measures compare changes in objective functions: (restricted) likelihood distance (Cook and Weisberg, 1982, Ch. 5.2)

  • influence on parameter estimates: Cook’s D (Cook, 1977, 1979), MDFFITS (Belsley, Kuh, and Welsch, 1980, p. 32)

  • influence on precision of estimates: CovRatio and CovTrace

  • influence on fitted and predicted values: PRESS residual, PRESS statistic (Allen, 1974), DFFITS (Belsley, Kuh, and Welsch, 1980, p. 15)

  • outlier properties: internally and externally studentized residuals, leverage

For linear models for uncorrelated data, it is not necessary to refit the model after removing a data point in order to measure the impact of an observation on the model. The change in fixed effect estimates, residuals, residual sums of squares, and the variance-covariance matrix of the fixed effects can be computed based on the fit to the full data alone. By contrast, in mixed models several important complications arise. Data points can affect not only the fixed effects but also the covariance parameter estimates on which the fixed-effects estimates depend. Furthermore, closed-form expressions for computing the change in important model quantities might not be available.

This section provides background material for the various influence diagnostics available with the MIXED procedure. See the section Mixed Models Theory for relevant expressions and definitions. The parameter vector $\btheta $ denotes all unknown parameters in the $\mb {R}$ and $\mb {G}$ matrices.

The k observations whose influence is being ascertained are represented by the set U and referred to simply as the observations in U. The estimate of a parameter vector, such as $\bbeta $, obtained from all observations except those in the set U is denoted $\widehat{\bbeta }_{(U)}$. In the case of a matrix $\mb {A}$, the notation $\mb {A}_{(U)}$ represents the matrix with the rows in U removed; these rows are collected in $\mb {A}_ U$. If $\mb {A}$ is symmetric, then the notation $\mb {A}_{(U)}$ implies removal of rows and columns. The vector $\mb {Y}_ U$ comprises the responses of the data points being removed, and $\mb {V}_{(U)}$ is the variance-covariance matrix of the remaining observations. When k = 1, lowercase notation emphasizes that single points are removed, such as $\mb {A}_{(u)}$.

Managing the Covariance Parameters

An important component of influence diagnostics in the mixed model is the estimated variance-covariance matrix $\mb {V} = \mb {Z}\mb {G}\mb {Z}’ + \mb {R}$. To make the dependence on the vector of covariance parameters explicit, write it as $\mb {V}(\btheta )$. If one parameter, $\sigma ^2$, is profiled or factored out of $\mb {V}$, the remaining parameters are denoted as $\btheta ^*$. Notice that in a model where $\mb {G}$ is diagonal and $\mb {R} = \sigma ^2\mb {I}$, the parameter vector $\btheta ^*$ contains the ratios of the variance components to $\sigma ^2$ (see Wolfinger, Tobias, and Sall 1994). When ITER=0, two scenarios are distinguished:

  1. If the residual variance is not profiled, either because the model does not contain a residual variance or because it is part of the Newton-Raphson iterations, then $\widehat{\btheta }_{(U)} \equiv \widehat{\btheta }$.

  2. If the residual variance is profiled, then $\widehat{\btheta }^*_{(U)} \equiv \widehat{\btheta }^*$ and $\widehat{\sigma }^2_{(U)} \not= \widehat{\sigma }^2$. Influence statistics such as Cook’s D and internally studentized residuals are based on $\mb {V}({\widehat{\btheta }})$, whereas externally studentized residuals and the DFFITS statistic are based on $\mb {V}(\widehat{\btheta }_{(U)}) = \widehat{\sigma }^2_{(U)}\mb {V}(\widehat{\btheta }^*)$. In a random components model with uncorrelated errors, for example, the computation of $\mb {V}(\widehat{\btheta }_{(U)})$ involves scaling of $\widehat{\mb {G}}$ and $\widehat{\mb {R}}$ by the full-data estimate $\widehat{\sigma }^2$ and multiplying the result with the reduced-data estimate $\widehat{\sigma }^2_{(U)}$.

Certain statistics, such as MDFFITS, CovRatio, and CovTrace, require an estimate of the variance of the fixed effects that is based on the reduced number of observations. For example, $\mb {V}(\widehat{\btheta }_{(U)})$ is evaluated at the reduced-data parameter estimates but computed for the entire data set. The matrix $\mb {V}_{(U)}(\widehat{\btheta }_{(U)})$, on the other hand, has the rows and columns corresponding to the points in U removed, and it is evaluated at the delete-case estimates.

When influence analysis is iterative, the entire vector $\btheta $ is updated, whether the residual variance is profiled or not. The matrices to be distinguished here are $\mb {V}(\widehat{\btheta })$, $\mb {V}(\widehat{\btheta }_{(U)})$, and $\mb {V}_{(U)}(\widehat{\btheta }_{(U)})$, with unambiguous notation.

Predicted Values, PRESS Residual, and PRESS Statistic

An unconditional predicted value is $\widehat{y}_ i = \mb {x}_ i’\widehat{\bbeta }$, where the vector $\mb {x}_ i$ is the ith row of $\mb {X}$. The (raw) residual is given as $\widehat{\epsilon }_ i = y_ i - \widehat{y}_ i$, and the PRESS residual is

\[  \widehat{\epsilon }_{i(U)} = y_ i - \mb {x}_ i’\widehat{\bbeta }_{(U)}  \]

The PRESS statistic is the sum of the squared PRESS residuals,

\[  \mathit{PRESS} = \sum _{i \in U} \widehat{\epsilon }_{i(U)}^{\, 2}  \]

where the sum is over the observations in U.
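In the ordinary least squares special case ($\mb {V} = \sigma ^2\mb {I}$), the PRESS residuals can be computed by brute-force case deletion, and they satisfy the classical shortcut $\widehat{\epsilon }_{i(i)} = \widehat{\epsilon }_ i/(1-h_{ii})$. The sketch below (invented data, not the PROC MIXED implementation) illustrates this:

```python
import numpy as np

# OLS illustration: PRESS residuals by explicit case deletion.
rng = np.random.default_rng(3)
n = 15
X = np.column_stack([np.ones(n), rng.normal(size=n)])
y = X @ np.array([1.0, -0.5]) + rng.normal(size=n)

beta = np.linalg.lstsq(X, y, rcond=None)[0]
e = y - X @ beta                                             # raw residuals
h = np.einsum('ij,ji->i', X @ np.linalg.inv(X.T @ X), X.T)   # leverages

press_res = np.empty(n)
for i in range(n):
    keep = np.arange(n) != i
    b_i = np.linalg.lstsq(X[keep], y[keep], rcond=None)[0]   # delete-case fit
    press_res[i] = y[i] - X[i] @ b_i                         # PRESS residual

PRESS = np.sum(press_res ** 2)                               # PRESS statistic
```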

If the EFFECT=, SIZE=, or KEEP= modifier of the INFLUENCE option is not specified, PROC MIXED computes the PRESS residual for each observation selected through SELECT= (or for all observations if SELECT= is not given). If the EFFECT=, SIZE=, or KEEP= modifier is specified, the procedure computes the PRESS statistic.


Leverage

For the general mixed model, leverage can be defined through the projection matrix that results from a transformation of the model with the inverse of the Cholesky decomposition of $\mb {V}$, or through an oblique projector. The MIXED procedure follows the latter path in the computation of influence diagnostics. The leverage value reported for the ith observation is the ith diagonal entry of the matrix

\[  \bH = \mb {X}(\mb {X}’\mb {V}(\widehat{\btheta })^{-1}\mb {X})^{-} \mb {X}’\mb {V}(\widehat{\btheta })^{-1}  \]

which is the weight of the observation in contributing to its own predicted value, $\mb {H} = d \widehat{\mb {Y}} / d \mb {Y}$.

While $\mb {H}$ is idempotent, it is generally not symmetric and thus not a projection matrix in the narrow sense.

The properties of these leverages are generalizations of the properties in models with diagonal variance-covariance matrices. For example, $\widehat{\mb {Y}} = \mb {H}\mb {Y}$, and in a model with intercept and $\mb {V} = \sigma ^2 \mb {I}$, the leverage values

\[  h_{ii} = \mb {x}_ i’(\mb {X}’\mb {X})^{-}\mb {x}_ i  \]

satisfy $h^ l_{ii} = 1/n \leq h_{ii} \leq 1 = h^ u_{ii}$ and $\sum _{i=1}^{n} h_{ii} = \mr {rank}(\mb {X})$. The lower bound for $h_{ii}$ is achieved in an intercept-only model, and the upper bound is achieved in a saturated model. The trace of $\mb {H}$ equals the rank of $\mb {X}$.

If $\nu _{ij}$ denotes the element in row i, column j of $\mb {V}^{-1}$, then for a model containing only an intercept the diagonal elements of $\mb {H}$ are

\[  h_{ii} = \frac{\sum _{j=1}^{n} \nu _{ij}}{\sum _{i=1}^{n}\, \sum _{j=1}^{n} \nu _{ij}}  \]

Because $\sum _{j=1}^ n \nu _{ij}$ is a sum of elements in the ith row of the inverse variance-covariance matrix, $h_{ii}$ can be negative, even if the correlations among data points are nonnegative. In case of a saturated model with $\mb {X} = \mb {I}$, $h_{ii} = 1.0$.
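The oblique projector and its properties can be checked numerically; the sketch below uses an arbitrary positive definite matrix in place of the estimated covariance matrix and an invented three-column design:

```python
import numpy as np

# Oblique "hat" matrix H = X (X' V^{-1} X)^{-} X' V^{-1} for a known,
# nondiagonal V (invented example, not a PROC MIXED computation).
rng = np.random.default_rng(4)
n = 8
A = rng.normal(size=(n, n))
V = A @ A.T + n * np.eye(n)
X = np.column_stack([np.ones(n), rng.normal(size=n), rng.normal(size=n)])

Vinv = np.linalg.inv(V)
H = X @ np.linalg.solve(X.T @ Vinv @ X, X.T @ Vinv)
leverage = np.diag(H)                # reported leverage values
```

H is idempotent and its trace equals rank(X) = 3, but for nondiagonal V it is generally not symmetric.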

Internally and Externally Studentized Residuals

See the section Residual Diagnostics for the distinction between standardization, studentization, and scaling of residuals. Internally studentized marginal and conditional residuals are computed with the RESIDUAL option of the MODEL statement. The INFLUENCE option computes internally and externally studentized marginal residuals.

The computation of internally studentized residuals relies on the diagonal entries of $\mb {V}(\widehat{\btheta }) - \mb {Q}(\widehat{\btheta })$, where $\mb {Q}(\widehat{\btheta }) = \mb {X}(\mb {X}’\mb {V}(\widehat{\btheta })^{-1}\mb {X})^{-}\mb {X}’$. Externally studentized residuals require iterative influence analysis or a profiled residual variance. In the former case the studentization is based on $\mb {V}(\widehat{\btheta }_{(U)})$; in the latter case it is based on $\widehat{\sigma }^2_{(U)}\mb {V}(\widehat{\btheta }^*)$.

Cook’s D

Cook’s D statistic is an invariant norm that measures the influence of observations in U on a vector of parameter estimates (Cook, 1977). In case of the fixed-effects coefficients, let

\[  \bdelta _{(U)} = \widehat{\bbeta } - \widehat{\bbeta }_{(U)}  \]

Then the MIXED procedure computes

\[  D(\bbeta ) = \bdelta _{(U)}’ \widehat{\mr {Var}}[\widehat{\bbeta }]^{-} \bdelta _{(U)} / \mr {rank}(\mb {X})  \]

where $\widehat{\mr {Var}}[\widehat{\bbeta }]^{-}$ is the matrix that results from sweeping $(\mb {X}’\mb {V}(\widehat{\btheta })^{-1}\mb {X})^{-}$.

If $\mb {V}$ is known, Cook’s D can be calibrated according to a chi-square distribution with degrees of freedom equal to the rank of $\mb {X}$ (Christensen, Pearson, and Johnson, 1992). For estimated $\mb {V}$ the calibration can be carried out according to an $F(\mr {rank}(\mb {X}),n-\mr {rank}(\mb {X}))$ distribution. To interpret D on a familiar scale, Cook (1979) and Cook and Weisberg (1982, p. 116) refer to the 50th percentile of the reference distribution. If D is equal to that percentile, then removing the points in U moves the fixed-effects coefficient vector from the center of the confidence region to the 50% confidence ellipsoid (Myers, 1990, p. 262).
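In the OLS special case, Cook's D computed from the definition (refitting without each observation) reduces to the familiar closed form $D_ i = e_ i^2 h_{ii} / (p\, s^2 (1-h_{ii})^2)$. A hedged sketch with invented data:

```python
import numpy as np

# Cook's D by explicit deletion in OLS (V = sigma^2 I, p = rank(X)).
rng = np.random.default_rng(5)
n, p = 25, 2
X = np.column_stack([np.ones(n), rng.normal(size=n)])
y = X @ np.array([0.5, 1.5]) + rng.normal(size=n)

XtX_inv = np.linalg.inv(X.T @ X)
beta = XtX_inv @ X.T @ y
e = y - X @ beta
h = np.einsum('ij,ji->i', X @ XtX_inv, X.T)      # leverages
s2 = e @ e / (n - p)                             # full-data variance estimate

D = np.empty(n)
for i in range(n):
    keep = np.arange(n) != i
    b_i = np.linalg.lstsq(X[keep], y[keep], rcond=None)[0]
    d = beta - b_i                               # delta_(u)
    # Var[beta]^{-1} = X'X / s2; divide by rank(X) as in the definition.
    D[i] = d @ (X.T @ X) @ d / (p * s2)
```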

In the case of iterative influence analysis, the MIXED procedure also computes a D-type statistic for the covariance parameters. If $\bGamma $ is the asymptotic variance-covariance matrix of $\widehat{\btheta }$, then MIXED computes

\[  D_{\btheta } = (\widehat{\btheta } - \widehat{\btheta }_{(U)})’ \widehat{\bGamma }^{-1} (\widehat{\btheta } - \widehat{\btheta }_{(U)})  \]

DFFITS and MDFFITS

A DFFIT measures the change in predicted values due to removal of data points. If this change is standardized by the externally estimated standard error of the predicted value in the full data, the DFFITS statistic of Belsley, Kuh, and Welsch (1980, p. 15) results:

\[  \mr {DFFITS}_ i = (\widehat{y}_ i - \widehat{y}_{i(u)} ) / \mr {ese}(\widehat{y}_ i)  \]

The MIXED procedure computes DFFITS when the EFFECT= or SIZE= modifier of the INFLUENCE option is not in effect. In general, the standard error in the denominator is based on an external estimate of the variance. When ITER > 0, the estimate is

\[  \mr {ese}(\widehat{y}_ i) = \sqrt { \mb {x}_ i’(\mb {X}’ \mb {V}(\widehat{\btheta }_{(u)})^{-1}\mb {X})^{-}\mb {x}_ i }  \]

When ITER=0 and $\sigma ^2$ is profiled, then

\[  \mr {ese}(\widehat{y}_ i) = \widehat{\sigma }_{(u)} \sqrt { \mb {x}_ i’(\mb {X}’ \mb {V}(\widehat{\btheta }^*)^{-1}\mb {X})^{-}\mb {x}_ i}  \]
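For the OLS special case, DFFITS can be computed by explicit deletion, with the external variance estimate taken from the reduced-data fit. This is an illustration with invented data, not the PROC MIXED implementation:

```python
import numpy as np

# DFFITS_i = (yhat_i - yhat_{i(i)}) / ese(yhat_i), with external s_{(i)}^2
# and ese(yhat_i) = s_{(i)} * sqrt(h_ii) in the OLS special case.
rng = np.random.default_rng(6)
n, p = 20, 2
X = np.column_stack([np.ones(n), rng.normal(size=n)])
y = X @ np.array([1.0, 1.0]) + rng.normal(size=n)

XtX_inv = np.linalg.inv(X.T @ X)
beta = XtX_inv @ X.T @ y
h = np.einsum('ij,ji->i', X @ XtX_inv, X.T)

dffits = np.empty(n)
for i in range(n):
    keep = np.arange(n) != i
    b_i = np.linalg.lstsq(X[keep], y[keep], rcond=None)[0]
    r_i = y[keep] - X[keep] @ b_i
    s2_i = r_i @ r_i / (n - 1 - p)                 # external variance estimate
    dffits[i] = (X[i] @ beta - X[i] @ b_i) / np.sqrt(s2_i * h[i])
```

For single-case deletion in OLS this agrees with the identity $\mr {DFFITS}_ i = t_ i \sqrt {h_{ii}/(1-h_{ii})}$, where $t_ i$ is the externally studentized residual.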

When the EFFECT=, SIZE=, or KEEP= modifier is specified, the MIXED procedure computes a multivariate version suitable for the deletion of multiple data points. The statistic, termed MDFFITS after the MDFFIT statistic of Belsley, Kuh, and Welsch (1980, p. 32), is closely related to Cook’s D. Consider the case $\mb {V} = \sigma ^2\mb {V}(\btheta ^*)$ so that

\[  \mr {Var}[\widehat{\bbeta }] = \sigma ^2 (\mb {X}’ \mb {V}(\btheta ^*)^{-1}\mb {X})^{-}  \]

and let $\widetilde{\mr {Var}}[\widehat{\bbeta }_{(U)}]$ be an estimate of $\mr {Var}[\widehat{\bbeta }_{(U)}]$ that does not use the observations in U. The MDFFITS statistic is then computed as

\[  \mr {MDFFITS}(\bbeta ) = \bdelta _{(U)}’\widetilde{\mr {Var}}[\widehat{\bbeta }_{(U)}]^{-} \bdelta _{(U)} / \mr {rank}(\mb {X})  \]

If ITER=0 and $\sigma ^2$ is profiled, then $\widetilde{\mr {Var}}[\widehat{\bbeta }_{(U)}]^{-}$ is obtained by sweeping

\[  \widehat{\sigma }^2_{(U)} (\mb {X}_{(U)}’\mb {V}_{(U)}(\widehat{\btheta }^*)^{-1}\mb {X}_{(U)})^{-}  \]

The underlying idea is that if $\btheta ^*$ were known, then

\[  (\mb {X}_{(U)}’{\mb {V}_{(U)}(\btheta ^*)}^{-1}\mb {X}_{(U)})^{-}  \]

would be $\mr {Var}[\widehat{\bbeta }_{(U)}]/\sigma ^2$ in a generalized least squares regression with all but the data in U.

In the case of iterative influence analysis, $\widetilde{\mr {Var}}[\widehat{\bbeta }_{(U)}]$ is evaluated at $\widehat{\btheta }_{(U)}$. Furthermore, a MDFFITS-type statistic is then computed for the covariance parameters:

\[  \mr {MDFFITS}(\btheta ) = (\widehat{\btheta } - \widehat{\btheta }_{(U)})’ \widehat{\mr {Var}}[\widehat{\btheta }_{(U)}]^{-1} (\widehat{\btheta } - \widehat{\btheta }_{(U)})  \]
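A hedged OLS sketch of $\mr {MDFFITS}(\bbeta )$ for single-case deletion, where $\widetilde{\mr {Var}}[\widehat{\bbeta }_{(U)}]$ is taken as the deletion-based estimate $\widehat{\sigma }^2_{(u)}(\mb {X}_{(u)}’\mb {X}_{(u)})^{-1}$; the data are invented:

```python
import numpy as np

# MDFFITS(beta) = delta' [s2_(u) (X_(u)' X_(u))^{-1}]^{-1} delta / rank(X)
# in the OLS special case, single deletion.
rng = np.random.default_rng(11)
n, p = 16, 2
X = np.column_stack([np.ones(n), rng.normal(size=n)])
y = X @ np.array([1.0, 2.0]) + rng.normal(size=n)

beta = np.linalg.lstsq(X, y, rcond=None)[0]
u = 2                                     # delete observation u
keep = np.arange(n) != u
Xu, yu = X[keep], y[keep]
b_u = np.linalg.lstsq(Xu, yu, rcond=None)[0]
r_u = yu - Xu @ b_u
s2_u = r_u @ r_u / (n - 1 - p)            # external variance estimate
delta = beta - b_u
mdffits = delta @ (Xu.T @ Xu) @ delta / (p * s2_u)
```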
Covariance Ratio and Trace

These statistics depend on the availability of an external estimate of $\mb {V}$, or at least of $\sigma ^2$. Whereas Cook’s D and MDFFITS measure the impact of data points on a vector of parameter estimates, the covariance-based statistics measure impact on their precision. Following Christensen, Pearson, and Johnson (1992), the MIXED procedure computes

$\displaystyle  \mr {CovTrace}(\bbeta ) = | \textup{trace} ( \widehat{\mr {Var}}[\widehat{\bbeta }]^{-} \, \,  \widetilde{\mr {Var}}[\widehat{\bbeta }_{(U)}]) - \mr {rank}(\mb {X}) |  $
$\displaystyle  \mr {CovRatio}(\bbeta ) = \frac{\mr {det}_{ns} (\widetilde{\mr {Var}}[\widehat{\bbeta }_{(U)}]) }{\mr {det}_{ns} (\widehat{\mr {Var}}[\widehat{\bbeta }]) }  $

where $\mr {det}_{ns}(\mb {M})$ denotes the determinant of the nonsingular part of matrix $\mb {M}$.

In the case of iterative influence analysis these statistics are also computed for the covariance parameter estimates. If q denotes the rank of $\mr {Var}[\widehat{\btheta }]$, then

$\displaystyle  \mr {CovTrace}(\btheta ) = | \textup{trace}( \widehat{\mr {Var}}[\widehat{\btheta }]^{-} \, \,  \widehat{\mr {Var}}[\widehat{\btheta }_{(U)}]) - q |  $
$\displaystyle  \mr {CovRatio}(\btheta ) = \frac{\mr {det}_{ns} (\widehat{\mr {Var}}[\widehat{\btheta }_{(U)}]) }{\mr {det}_{ns} (\widehat{\mr {Var}}[\widehat{\btheta }]) }  $
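An OLS single-deletion sketch of CovTrace and CovRatio, with the deletion-based variance estimate $\widehat{\sigma }^2_{(i)}(\mb {X}_{(i)}’\mb {X}_{(i)})^{-1}$ playing the role of $\widetilde{\mr {Var}}[\widehat{\bbeta }_{(U)}]$; data are invented:

```python
import numpy as np

# Full-data Var[beta] = s2 (X'X)^{-1}; deletion-based analog for one case.
rng = np.random.default_rng(7)
n, p = 18, 2
X = np.column_stack([np.ones(n), rng.normal(size=n)])
y = X @ np.array([2.0, -1.0]) + rng.normal(size=n)

beta = np.linalg.lstsq(X, y, rcond=None)[0]
e = y - X @ beta
s2 = e @ e / (n - p)
var_full = s2 * np.linalg.inv(X.T @ X)

i = 0                                     # examine the first observation
keep = np.arange(n) != i
b_i = np.linalg.lstsq(X[keep], y[keep], rcond=None)[0]
r_i = y[keep] - X[keep] @ b_i
s2_i = r_i @ r_i / (n - 1 - p)
var_del = s2_i * np.linalg.inv(X[keep].T @ X[keep])

cov_trace = abs(np.trace(np.linalg.solve(var_full, var_del)) - p)
cov_ratio = np.linalg.det(var_del) / np.linalg.det(var_full)
```

Values of cov_ratio near 1 and cov_trace near 0 indicate that deleting the case barely changes the precision of the fixed-effects estimates.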
Likelihood Distances

The log-likelihood function l and restricted log-likelihood function $l_ R$ of the linear mixed model are given in the section Estimating Covariance Parameters in the Mixed Model. Denote as $\bpsi $ the collection of all parameters, i.e., the fixed effects $\bbeta $ and the covariance parameters $\btheta $. Twice the difference between the (restricted) log-likelihood evaluated at the full-data estimates $\widehat{\bpsi }$ and at the reduced-data estimates $\widehat{\bpsi }_{(U)}$ is known as the (restricted) likelihood distance:

$\displaystyle  \mr {RLD}_{(U)} = 2\{  l_ R(\widehat{\bpsi }) - l_ R(\widehat{\bpsi }_{(U)}) \}   $
$\displaystyle  \mr {LD}_{(U)} = 2\{  l(\widehat{\bpsi }) - l(\widehat{\bpsi }_{(U)}) \}   $

Cook and Weisberg (1982, Ch. 5.2) refer to these differences as likelihood distances, while Beckman, Nachtsheim, and Cook (1987) call the measures likelihood displacements. If the number of elements in $\bpsi $ that are subject to updating following point removal is q, then likelihood displacements can be compared against cutoffs from a chi-square distribution with q degrees of freedom. Notice that this reference distribution does not depend on the number of observations removed from the analysis, but rather on the number of model parameters that are updated. The likelihood displacement gives twice the amount by which the log likelihood of the full data changes if one were to use an estimate based on fewer data points. It is thus a global, summary measure of the influence of the observations in U jointly on all parameters.

Unless METHOD=ML, the MIXED procedure computes the likelihood displacement based on the residual (restricted) log likelihood, even when METHOD=MIVQUE0, METHOD=TYPE1, METHOD=TYPE2, or METHOD=TYPE3.

Noniterative Update Formulas

Update formulas that do not require refitting of the model are available for the cases where $\mb {V} = \sigma ^2\mb {I}$, $\mb {V}$ is known, or $\mb {V}^*$ is known. When ITER=0 and these update formulas can be invoked, the MIXED procedure uses the computational devices that are outlined in the following paragraphs. It is then assumed that the variance-covariance matrix of the fixed effects has the form $(\mb {X}’\mb {V}^{-1}\mb {X})^{-}$. When DDFM=KENWARDROGER, this is not the case; the estimated variance-covariance matrix is then inflated to better represent the uncertainty in the estimated covariance parameters. Influence statistics when DDFM=KENWARDROGER should iteratively update the covariance parameters (ITER > 0). The dependence of $\mb {V}$ on $\btheta $ is suppressed in the sequel for brevity.

Updating the Fixed Effects

Denote by $\mb {U}$ the $(n \times k)$ matrix that is assembled from k columns of the identity matrix. Each column of $\mb {U}$ corresponds to the removal of one data point. The point being targeted by the ith column of $\mb {U}$ corresponds to the row in which a 1 appears. Furthermore, define

$\displaystyle  \bOmega = (\mb {X}’\mb {V}^{-1}\mb {X})^{-}  $
$\displaystyle  \mb {Q} = \mb {X}\bOmega \mb {X}’  $
$\displaystyle  \mb {P} = \mb {V}^{-1}(\mb {V} - \mb {Q})\mb {V}^{-1}  $

The change in the fixed-effects estimates following removal of the observations in U is

\[  \widehat{\bbeta } - \widehat{\bbeta }_{(U)} = \bOmega \mb {X}’\mb {V}^{-1}\mb {U} (\mb {U}’\mb {P}\mb {U})^{-1}\mb {U}’\mb {V}^{-1} (\mb {y}-\mb {X}\widehat{\bbeta })  \]

Using results in Cook and Weisberg (1982, A2) you can further compute

\[  \widetilde{\bOmega } = (\mb {X}_{(U)}’\mb {V}_{(U)}^{-1}\mb {X}_{(U)})^{-} = \bOmega + \bOmega \mb {X}’\mb {V}^{-1}\mb {U} (\mb {U}’\mb {P}\mb {U})^{-1} \mb {U}’\mb {V}^{-1}\mb {X}\bOmega  \]

If $\mb {X}$ is $(n \times p)$ of rank $m < p$, then $\bOmega $ is deficient in rank and the MIXED procedure computes needed quantities in $\widetilde{\bOmega }$ by sweeping (Goodnight, 1979). If the rank of the $(k \times k)$ matrix $\mb {U}’\bP \mb {U}$ is less than k, the removal of the observations introduces a new singularity, whether $\mb {X}$ is of full rank or not. The solution vectors $\widehat{\bbeta }$ and $\widehat{\bbeta }_{(U)}$ then do not have the same expected values and should not be compared. When the MIXED procedure encounters this situation, influence diagnostics that depend on the choice of generalized inverse are not computed. The procedure also monitors the singularity criteria when sweeping the rows of $(\mb {X}’ \mb {V}^{-1} \mb {X})^{-}$ and of $(\mb {X}_{(U)}’ \mb {V}_{(U)} ^{-1}\mb {X}_{(U)})^{-}$. If a new singularity is encountered or a former singularity disappears, no influence statistics are computed.
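The update formulas can be verified numerically for a known, nondiagonal V by comparing the noniterative update against a direct refit without the rows in U; data below are invented, with two observations deleted:

```python
import numpy as np

# Check of the noniterative fixed-effects and Omega updates for known V.
rng = np.random.default_rng(9)
n = 10
A = rng.normal(size=(n, n))
V = A @ A.T + n * np.eye(n)
X = np.column_stack([np.ones(n), rng.normal(size=n)])
y = rng.normal(size=n)
Vinv = np.linalg.inv(V)

Omega = np.linalg.inv(X.T @ Vinv @ X)
Q = X @ Omega @ X.T
P = Vinv @ (V - Q) @ Vinv
beta = Omega @ X.T @ Vinv @ y                    # full-data GLS estimate

idx = [1, 4]                                     # the set U (k = 2)
U = np.eye(n)[:, idx]
M = np.linalg.inv(U.T @ P @ U)
delta = Omega @ X.T @ Vinv @ U @ M @ U.T @ Vinv @ (y - X @ beta)
Omega_tilde = Omega + Omega @ X.T @ Vinv @ U @ M @ U.T @ Vinv @ X @ Omega

# Direct refit with the rows and columns in U removed from V
keep = np.setdiff1d(np.arange(n), idx)
Vk_inv = np.linalg.inv(V[np.ix_(keep, keep)])
beta_del = np.linalg.solve(X[keep].T @ Vk_inv @ X[keep],
                           X[keep].T @ Vk_inv @ y[keep])
```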

Residual Variance

When $\sigma ^2$ is profiled out of the marginal variance-covariance matrix, a closed-form estimate of $\sigma ^2$ that is based on only the remaining observations can be computed provided $\mb {V}^* = \mb {V}(\widehat{\btheta }^*)$ is known. Hurtado (1993, Thm. 5.2) shows that

\[  (n-q-r)\widehat{\sigma }^2_{(U)} = (n-q)\widehat{\sigma }^2 - \widehat{\bepsilon }_ U’(\widehat{\sigma }^2 \mb {U}’\bP \mb {U})^{-1}\widehat{\bepsilon }_ U  \]

and $\widehat{\bepsilon }_ U = \mb {U}’{\mb {V}^*}^{-1}(\mb {y} - \mb {X}\widehat{\bbeta })$. In the case of maximum likelihood estimation q = 0 and for REML estimation $q=\mr {rank}(\mb {X})$. The constant r equals the rank of $(\mb {U}’\mb {PU})$ for REML estimation and the number of effective observations that are removed if METHOD=ML.
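In the simplest special case ($\mb {V}^* = \mb {I}$, ML estimation so q = 0, single deletion so r = 1), the update reduces to the familiar residual-sum-of-squares identity $(n-1)\widehat{\sigma }^2_{(u)} = n\widehat{\sigma }^2 - \widehat{\epsilon }_ u^2/(1-h_{uu})$. A sketch with invented data:

```python
import numpy as np

# Closed-form variance update in the iid ML special case.
rng = np.random.default_rng(10)
n = 14
X = np.column_stack([np.ones(n), rng.normal(size=n)])
y = X @ np.array([0.0, 1.0]) + rng.normal(size=n)

beta = np.linalg.lstsq(X, y, rcond=None)[0]
e = y - X @ beta
s2_ml = e @ e / n                                    # full-data ML estimate
h = np.einsum('ij,ji->i', X @ np.linalg.inv(X.T @ X), X.T)

u = 3                                                # delete observation u
keep = np.arange(n) != u
b_u = np.linalg.lstsq(X[keep], y[keep], rcond=None)[0]
r_u = y[keep] - X[keep] @ b_u
s2_del = r_u @ r_u / (n - 1)                         # reduced-data ML estimate
```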

Likelihood Distances

For noniterative methods the following computational devices are used to compute (restricted) likelihood distances provided that the residual variance $\sigma ^2$ is profiled.

The log-likelihood function l evaluated at the full-data estimates $\widehat{\bpsi }$ and at the reduced-data estimates $\widehat{\bpsi }_{(U)}$ can be written as

$\displaystyle  l(\widehat{\bpsi }) = -\frac{n}{2}\log (\widehat{\sigma }^2) - \frac12\log |\mb {V}^*| - \frac12 (\mb {y}-\mb {X}\widehat{\bbeta })’{\mb {V}^*}^{-1} (\mb {y}-\mb {X}\widehat{\bbeta }) / \widehat{\sigma }^2 - \frac{n}{2}\log (2\pi )  $
$\displaystyle  l(\widehat{\bpsi }_{(U)}) = -\frac{n}{2}\log (\widehat{\sigma }^2_{(U)}) -\frac12 \log |\mb {V}^*| - \frac12 (\mb {y}-\mb {X}\widehat{\bbeta }_{(U)})’{\mb {V}^*}^{-1} (\mb {y}-\mb {X}\widehat{\bbeta }_{(U)}) / \widehat{\sigma }^2_{(U)} - \frac{n}{2}\log (2\pi )  $

Notice that $l(\widehat{\bpsi }_{(U)})$ evaluates the log likelihood for n data points at the reduced-data estimates. It is not the log likelihood obtained by fitting the model to the reduced data. The likelihood distance is then

\[  \mr {LD}_{(U)} = n\log \left\{  \frac{\widehat{\sigma }^2_{(U)}}{\widehat{\sigma }^2} \right\}  - n + \left(\mb {y}-\mb {X}\widehat{\bbeta }_{(U)}\right)’{\mb {V}^*}^{-1} \left(\mb {y}-\mb {X}\widehat{\bbeta }_{(U)}\right) / \widehat{\sigma }^2_{(U)}  \]

Expressions for $\mr {RLD}_{(U)}$ in noniterative influence analysis are derived along the same lines.
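For the iid ML special case ($\mb {V}^* = \mb {I}$), the closed form for $\mr {LD}_{(U)}$ can be verified directly against the definition; the sketch below uses invented data and deletes one observation:

```python
import numpy as np

# LD_(U) = n log(s2_(U)/s2) - n + ||y - X b_(U)||^2 / s2_(U) when V* = I
# and sigma^2 is the ML estimate (so the full-data quadratic term equals n).
rng = np.random.default_rng(8)
n = 12
X = np.column_stack([np.ones(n), rng.normal(size=n)])
y = X @ np.array([1.0, 0.5]) + rng.normal(size=n)

def ml_fit(Xm, ym):
    b = np.linalg.lstsq(Xm, ym, rcond=None)[0]
    r = ym - Xm @ b
    return b, r @ r / len(ym)              # ML (not REML) variance estimate

def loglik(b, s2):                         # full-data Gaussian log likelihood
    r = y - X @ b
    return -0.5 * (n * np.log(2.0 * np.pi * s2) + r @ r / s2)

b_full, s2_full = ml_fit(X, y)
keep = np.arange(n) != 0                   # delete the first observation
b_del, s2_del = ml_fit(X[keep], y[keep])

# Twice the drop in the full-data log likelihood at the delete-case estimates
LD = 2.0 * (loglik(b_full, s2_full) - loglik(b_del, s2_del))
```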