The MI Procedure

Monotone and FCS Predictive Mean Matching Methods

The predictive mean matching method is another imputation method available for continuous variables. It is similar to the regression method except that, for each missing value, it imputes a value drawn randomly from a set of observed values whose predicted values are closest to the predicted value for the missing value from the simulated regression model (Heitjan and Little, 1991; Schenker and Taylor, 1996).

Following the description of the model in the section Monotone and FCS Regression Methods, the following steps are used to generate imputed values:

  1. New parameters $\bbeta _{*} = ({\beta }_{*0}, {\beta }_{*1}, \ldots , {\beta }_{*(k)})$ and ${\sigma }_{*j}^2$ are drawn from the posterior predictive distribution of the parameters. That is, they are simulated from $(\hat{\beta }_{0}, \hat{\beta }_{1}, \ldots , \hat{\beta }_{k})$, $\hat{\sigma }_{j}^2$, and $\mb {V}_{j}$. The variance is drawn as

    \[  {\sigma }_{*j}^2 = \hat{\sigma }_{j}^2 (n_{j}-k-1) / g  \]

    where g is a ${\chi }_{n_{j}-k-1}^{2}$ random variate and $n_{j}$ is the number of nonmissing observations for $Y_{j}$. The regression coefficients are drawn as

    \[  \bbeta _{*} = \hat{\bbeta } + {\sigma }_{*j} \mb {V}_{hj}' \mb {Z}  \]

    where $\mb {V}_{hj}$ is the upper triangular matrix in the Cholesky decomposition, $\mb {V}_{j} = \mb {V}_{hj}' \mb {V}_{hj}$, and $\mb {Z}$ is a vector of $k+1$ independent random normal variates.

  2. For each missing value, a predicted value

    \[  y_{i*} = {\beta }_{*0} + {\beta }_{*1} \,  x_{1} + {\beta }_{*2} \,  x_{2} + \ldots + {\beta }_{*(k)} \,  x_{k}  \]

    is computed with the covariate values $x_{1}, x_{2}, \ldots , x_{k}$.

  3. A set of $k_{0}$ observations whose corresponding predicted values are closest to $y_{i*}$ is generated. You can specify $k_{0}$ with the K= option.

  4. The missing value is then replaced by a value drawn randomly from these $k_{0}$ observed values.
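The four steps above can be sketched in NumPy. This is a minimal illustration of the method, not PROC MI's implementation; the function name `pmm_impute` and all variable names are hypothetical, and here the predicted values for both the observed and the missing cases are computed from the simulated coefficients $\bbeta _{*}$.

```python
import numpy as np

def pmm_impute(X_obs, y_obs, X_mis, k0=5, rng=None):
    """One predictive mean matching draw (illustrative sketch).

    X_obs : (n_j, k) covariates for the n_j nonmissing observations of Y_j
    y_obs : (n_j,)   observed values of Y_j
    X_mis : (m, k)   covariates for the observations with Y_j missing
    k0    : number of closest observed values to draw from (the K= option)
    """
    rng = np.random.default_rng(rng)
    n_j, k = X_obs.shape

    # Least squares fit: beta_hat and sigma_hat^2 on n_j - k - 1 df
    A = np.column_stack([np.ones(n_j), X_obs])          # intercept column
    beta_hat, *_ = np.linalg.lstsq(A, y_obs, rcond=None)
    resid = y_obs - A @ beta_hat
    sigma2_hat = resid @ resid / (n_j - k - 1)
    V_j = np.linalg.inv(A.T @ A)                        # V_j = (A'A)^{-1}

    # Step 1: draw sigma_{*j}^2 = sigma_hat^2 (n_j - k - 1) / g, g ~ chi^2,
    # then beta_* = beta_hat + sigma_{*j} L Z with L L' = V_j
    g = rng.chisquare(n_j - k - 1)
    sigma2_star = sigma2_hat * (n_j - k - 1) / g
    L = np.linalg.cholesky(V_j)      # lower triangular; plays the role of V_hj'
    beta_star = beta_hat + np.sqrt(sigma2_star) * (L @ rng.standard_normal(k + 1))

    # Step 2: predicted values from the simulated regression model
    y_star_mis = np.column_stack([np.ones(len(X_mis)), X_mis]) @ beta_star
    y_star_obs = A @ beta_star

    # Steps 3-4: for each missing value, find the k0 observed cases with the
    # closest predicted values and draw one of their observed y's at random
    imputed = np.empty(len(X_mis))
    for i, y_i in enumerate(y_star_mis):
        donors = np.argsort(np.abs(y_star_obs - y_i))[:k0]
        imputed[i] = y_obs[rng.choice(donors)]
    return imputed
```

Because the imputed values are drawn from the observed values of $Y_j$ rather than from the regression prediction plus noise, every imputation is a value that actually occurs in the data.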

The predictive mean matching method requires that the number of closest observations be specified. A smaller $k_{0}$ tends to increase the correlation among the multiple imputations for a missing observation and results in higher variability of the point estimators in repeated sampling. On the other hand, a larger $k_{0}$ tends to lessen the effect of the imputation model and results in biased estimators (Schenker and Taylor, 1996, p. 430).
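The role of $k_{0}$ can be seen in the donor-selection step alone. The following sketch uses small made-up vectors of predicted and observed values (all numbers are illustrative): with $k_{0}=1$ the donor is always the single nearest observed case, so repeated imputations of the same missing value coincide, while a larger $k_{0}$ widens the donor pool and lets the imputed value range further from the model's prediction.

```python
import numpy as np

# Hypothetical predicted values for six observed cases, their observed y's,
# and the predicted value y_i* for one missing case
y_star_obs = np.array([1.0, 1.2, 1.5, 2.0, 3.0, 4.5])
y_obs      = np.array([0.9, 1.3, 1.4, 2.2, 2.8, 4.7])
y_i_star   = 1.45

def donor_pool(k0):
    # Step 3: the k0 observed cases whose predicted values are closest to y_i*
    donors = np.argsort(np.abs(y_star_obs - y_i_star))[:k0]
    return sorted(y_obs[donors].tolist())

# k0 = 1 leaves a single possible imputed value (the nearest donor);
# k0 = 3 admits three candidates, so repeated draws can differ
print(donor_pool(1))
print(donor_pool(3))
```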

The predictive mean matching method ensures that imputed values are plausible; it might be more appropriate than the regression method if the normality assumption is violated (Horton and Lipsitz, 2001, p. 246).