The HPLMIXED Procedure

Estimating Covariance Parameters in the Mixed Model

Estimation is more difficult in the mixed model than in the general linear model. Not only do you have $\bbeta $ as in the general linear model, but you also have unknown parameters in $\bgamma $, $\mb{G}$, and $\mb{R}$. Least squares is no longer the best method. Generalized least squares (GLS) is more appropriate, minimizing

\[  (\mb{y} - \mb{X}\bbeta )'\mb{V}^{-1}(\mb{y} - \mb{X}\bbeta )  \]

However, GLS requires knowledge of $\mb{V}$ and therefore knowledge of $\mb{G}$ and $\mb{R}$. Lacking such information, one approach is to use an estimated GLS, in which you insert some reasonable estimate for $\mb{V}$ into the minimization problem. The goal thus becomes finding a reasonable estimate of $\mb{G}$ and $\mb{R}$.
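
To make the estimated GLS step concrete, the following sketch (in numpy, with illustrative argument names; it is not the computation that PROC HPLMIXED performs internally) solves the GLS normal equations $\mb{X}'\mb{V}^{-1}\mb{X}\bbeta = \mb{X}'\mb{V}^{-1}\mb{y}$ after plugging in a working estimate of $\mb{V}$:

```python
import numpy as np

def estimated_gls(y, X, V_hat):
    """Estimated GLS: plug a working covariance estimate V_hat into the
    criterion (y - X beta)' V^{-1} (y - X beta) and solve the resulting
    normal equations X'V^{-1}X beta = X'V^{-1}y."""
    Vinv_X = np.linalg.solve(V_hat, X)   # V^{-1} X
    Vinv_y = np.linalg.solve(V_hat, y)   # V^{-1} y
    # pinv serves as a generalized inverse in case X is not of full rank
    return np.linalg.pinv(X.T @ Vinv_X) @ (X.T @ Vinv_y)
```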

In many situations, the best approach is to use likelihood-based methods, exploiting the assumption that $\bgamma $ and $\bepsilon $ are normally distributed (Hartley and Rao, 1967; Patterson and Thompson, 1971; Harville, 1977; Laird and Ware, 1982; Jennrich and Schluchter, 1986). PROC HPLMIXED implements two likelihood-based methods: maximum likelihood (ML) and restricted (residual) maximum likelihood (REML). A favorable theoretical property of ML and REML is that they accommodate data that are missing at random (Rubin, 1976; Little, 1995).

PROC HPLMIXED constructs an objective function associated with ML or REML and maximizes it over all unknown parameters. Using calculus, it is possible to reduce this maximization problem to one over only the parameters in $\mb{G}$ and $\mb{R}$. The corresponding log-likelihood functions are as follows:

\begin{align*}  \mr{ML:} \; \; \; \; \; \;  l(\mb{G},\mb{R}) & = -\frac{1}{2} \log |\mb{V}| - \frac{1}{2} \mb{r}'\mb{V}^{-1}\mb{r} - \frac{n}{2} \log (2 \pi ) \\ \mr{REML:} \; \; \;  l_R(\mb{G},\mb{R}) & = -\frac{1}{2} \log |\mb{V}| - \frac{1}{2} \log |\mb{X}'\mb{V}^{-1}\mb{X}| - \frac{1}{2} \mb{r}'\mb{V}^{-1}\mb{r} - \frac{n-p}{2} \log (2 \pi ) \end{align*}

where $\mb{r} = \mb{y} - \mb{X}(\bX '\bV ^{-1}\bX )^{-}\bX '\bV ^{-1}\mb{y}$ and $p$ is the rank of $\bX $. PROC HPLMIXED actually minimizes a normalized form of $-2$ times these functions by using a ridge-stabilized Newton-Raphson algorithm by default. Lindstrom and Bates (1988) provide reasons for preferring Newton-Raphson to the expectation-maximization (EM) algorithm described in Dempster, Laird, and Rubin (1977) and Laird, Lange, and Stram (1987), in addition to analytical details for implementing a QR-decomposition approach to the problem. Wolfinger, Tobias, and Sall (1994) present the sweep-based algorithms that are implemented in PROC HPLMIXED. You can change the optimization technique with the TECHNIQUE= option in the PROC HPLMIXED statement.
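
As a purely illustrative reading of these formulas, the following numpy sketch evaluates both log likelihoods for a candidate $\mb{V} = \mb{Z}\mb{G}\mb{Z}' + \mb{R}$, assuming that $\mb{X}$ has full column rank $p$; PROC HPLMIXED itself minimizes a normalized $-2$ log likelihood by using the sweep-based computations cited above rather than forming $\mb{V}^{-1}$ explicitly.

```python
import numpy as np

def mixed_model_loglikelihoods(y, X, V):
    """Evaluate the ML and REML log likelihoods shown above for a given V.
    Assumes X has full column rank p (so X'V^{-1}X is nonsingular)."""
    n, p = X.shape
    Vinv = np.linalg.inv(V)
    XtVinvX = X.T @ Vinv @ X
    beta = np.linalg.solve(XtVinvX, X.T @ Vinv @ y)   # (X'V^{-1}X)^{-1} X'V^{-1} y
    r = y - X @ beta                                  # r in the formulas above
    quad = r @ Vinv @ r                               # r'V^{-1}r
    _, logdetV = np.linalg.slogdet(V)
    _, logdetXVX = np.linalg.slogdet(XtVinvX)
    l_ml = -0.5 * (logdetV + quad + n * np.log(2 * np.pi))
    l_reml = -0.5 * (logdetV + logdetXVX + quad + (n - p) * np.log(2 * np.pi))
    return l_ml, l_reml
```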

One advantage of using the Newton-Raphson algorithm is that the second derivative matrix of the objective function evaluated at the optimum is available upon completion. Denoting this matrix $\bH $, the asymptotic theory of maximum likelihood (Serfling, 1980) shows that $2\bH ^{-1}$ is an asymptotic variance-covariance matrix of the estimated parameters in $\bG $ and $\bR $. Thus, tests and confidence intervals based on asymptotic normality can be obtained. However, these can be unreliable in small samples, especially for parameters such as variance components, whose sampling distributions tend to be skewed to the right.
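
As an illustration of how $\bH $ yields approximate standard errors, the sketch below builds a finite-difference approximation to the second derivative matrix of a user-supplied $-2$ log likelihood and returns $2\bH ^{-1}$. The function and argument names here are hypothetical, and PROC HPLMIXED obtains $\bH $ from the Newton-Raphson iterations rather than by finite differences.

```python
import numpy as np

def asymptotic_covariance(negtwo_loglik, theta_hat, h=1e-4):
    """Approximate 2*H^{-1}, where H is the second derivative matrix of the
    -2 log likelihood (negtwo_loglik) evaluated at the estimates theta_hat."""
    theta_hat = np.asarray(theta_hat, dtype=float)
    k = theta_hat.size
    H = np.zeros((k, k))
    f0 = negtwo_loglik(theta_hat)
    for i in range(k):
        for j in range(k):
            ei = np.zeros(k); ei[i] = h      # step in parameter i
            ej = np.zeros(k); ej[j] = h      # step in parameter j
            H[i, j] = (negtwo_loglik(theta_hat + ei + ej)
                       - negtwo_loglik(theta_hat + ei)
                       - negtwo_loglik(theta_hat + ej)
                       + f0) / h**2
    return 2.0 * np.linalg.inv(H)            # asymptotic variance-covariance matrix
```

Square roots of the diagonal elements give approximate standard errors for the covariance parameters, subject to the small-sample caveats noted above.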

If a residual variance $\sigma ^2$ is a part of your mixed model, it can usually be profiled out of the likelihood. This means solving analytically for the optimal $\sigma ^2$ and plugging this expression back into the likelihood formula (Wolfinger, Tobias, and Sall, 1994). This reduces the number of optimization parameters by 1 and can improve convergence properties. PROC HPLMIXED profiles the residual variance out of the log likelihood.
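
For example, writing $\mb{V} = \sigma ^2 \mb{V}^*$, where $\mb{V}^*$ holds the remaining covariance parameters, the ML log likelihood becomes a simple function of $\sigma ^2$ that can be maximized analytically (the REML case is analogous, with divisor $n-p$):

\begin{align*}  l & = -\frac{n}{2} \log \sigma ^2 - \frac{1}{2} \log |\mb{V}^*| - \frac{1}{2\sigma ^2} \mb{r}'{\mb{V}^*}^{-1}\mb{r} - \frac{n}{2} \log (2 \pi ) \\ \frac{\partial l}{\partial \sigma ^2} & = -\frac{n}{2\sigma ^2} + \frac{1}{2\sigma ^4} \mb{r}'{\mb{V}^*}^{-1}\mb{r} = 0 \quad \Rightarrow \quad \widehat{\sigma }^2 = \frac{\mb{r}'{\mb{V}^*}^{-1}\mb{r}}{n} \end{align*}

Note that $\mb{r}$ does not depend on $\sigma ^2$, because the scale factor cancels in the projection that defines it; substituting $\widehat{\sigma }^2$ back into the log likelihood yields the profiled objective that is optimized over the remaining parameters.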