The HPCOUNTREG Procedure

Conway-Maxwell-Poisson Regression

The Conway-Maxwell-Poisson (CMP) distribution is a generalization of the Poisson distribution that enables you to model both underdispersed and overdispersed data. It was originally proposed by Conway and Maxwell (1962), but its use for modeling underdispersed and overdispersed count data is attributed to Shmueli et al. (2005).

Recall that $y_{i}$, given the vector of covariates $\mathbf{x}_{i}$, is independently Poisson-distributed as

\[ P(Y_{i}=y_{i}|\mathbf{x}_{i}) = \frac{e^{-\lambda _{i}} \lambda _{i}^{y_{i}}}{y_{i}!}, \quad y_{i} = 0,1,2,\ldots \]

The Conway-Maxwell-Poisson distribution is defined as

\[ P(Y_{i}=y_{i}|\mathbf{x}_{i},\mathbf{g}_{i}) = \frac{1}{Z(\lambda _{i},\nu _{i})} \frac{\lambda _{i}^{y_{i}}}{(y_{i}!)^{\nu _{i}}}, \quad y_ i = 0,1,2,\ldots \]

where the normalization factor is

\[ Z(\lambda _ i,\nu _ i) = \sum _{n=0}^{\infty }\frac{\lambda _{i}^{n}}{(n!)^{\nu _{i}}} \]

and

\[ \lambda _{i} = \exp (\mathbf{x}_{i}^{\prime } \bbeta ) \]
\[ \nu _{i}=\exp (-\mathbf{g}_{i}^{\prime } \delta ) \]

The $\bbeta $ vector is a $(k+1) \times 1$ parameter vector. (The intercept is $\beta _0$, and the coefficients for the $k$ regressors are $\beta _1, \ldots , \beta _ k$.) The $\delta $ vector is an $(m+1) \times 1$ parameter vector. (The intercept is $\delta _0$, and the coefficients for the $m$ regressors are $\delta _1, \ldots , \delta _ m$.) The covariates are represented by the $\mathbf{x}_{i}$ and $\mathbf{g}_{i}$ vectors.
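To make the definition concrete, the pmf and its normalization factor can be evaluated numerically by truncating the infinite series. The sketch below (plain Python with illustrative function names, not PROC HPCOUNTREG syntax) sums the series on the log scale for numerical stability:

```python
import math

def cmp_log_z(lam, nu, max_terms=1000, tol=1e-12):
    """Truncated series for log Z(lambda, nu) = log sum_n lambda^n / (n!)^nu."""
    log_terms = []
    for n in range(max_terms):
        t = n * math.log(lam) - nu * math.lgamma(n + 1)
        log_terms.append(t)
        # Stop once new terms are negligible relative to the largest term so far.
        if n > 10 and t < max(log_terms) + math.log(tol):
            break
    m = max(log_terms)
    return m + math.log(sum(math.exp(t - m) for t in log_terms))

def cmp_pmf(y, lam, nu):
    """P(Y = y) for the Conway-Maxwell-Poisson distribution."""
    return math.exp(y * math.log(lam) - nu * math.lgamma(y + 1)
                    - cmp_log_z(lam, nu))
```

At $\nu =1$ this reduces to the Poisson pmf, and the probabilities sum to 1 up to truncation error.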

One of the restrictive properties of the Poisson model is that the conditional mean and variance must be equal:

\[ E(y_{i}|\mathbf{x}_{i}) = V(y_{i}|\mathbf{x}_{i}) = \lambda _{i} = \exp (\mathbf{x}_{i}^{\prime } \bbeta ) \]

The CMP distribution overcomes this restriction by defining an additional parameter, $\nu $, which governs the rate of decay of successive ratios of probabilities such that

\[ P(Y_ i=y_ i-1)/P(Y_ i=y_ i)=\frac{(y_ i)^{\nu _ i}}{\lambda _ i} \]

The introduction of the additional parameter, $\nu $, allows for flexibility in modeling the tail behavior of the distribution. If $\nu =1$, the ratio is equal to the rate of decay of the Poisson distribution. If $\nu <1$, the rate of decay decreases, enabling you to model processes that have longer tails than the Poisson distribution (overdispersed data). If $\nu >1$, the rate of decay increases in a nonlinear fashion, thus shortening the tail of the distribution (underdispersed data).
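As a quick numerical illustration of this decay behavior (an illustrative sketch, not procedure syntax), the successive ratio $P(Y_ i=y_ i-1)/P(Y_ i=y_ i)=y^{\nu }/\lambda $ can be compared across values of $\nu $:

```python
# Successive-probability ratio P(Y = y-1) / P(Y = y) = y**nu / lam.
# A larger ratio means faster decay (a shorter tail).
def decay_ratio(y, lam, nu):
    return y ** nu / lam

lam = 2.0
for y in (2, 5, 10):
    shorter = decay_ratio(y, lam, 1.5)  # nu > 1: faster decay, underdispersion
    poisson = decay_ratio(y, lam, 1.0)  # nu = 1: Poisson decay, y / lam
    longer = decay_ratio(y, lam, 0.5)   # nu < 1: slower decay, overdispersion
```

For any fixed $y>1$, the ratio is ordered in $\nu $, which is exactly the tail-shortening effect described above.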

There are several special cases of the Conway-Maxwell-Poisson distribution. If $\lambda <1$ and $\nu \rightarrow \infty $, the Conway-Maxwell-Poisson distribution approaches the Bernoulli distribution. In this case, the data can take only the values 0 and 1, which represents extreme underdispersion. If $\nu =1$, the Poisson distribution is recovered, with its equidispersion property. When $\nu =0$ and $\lambda <1$, the normalization factor is a convergent geometric series,

\[ Z(\lambda _ i,0)=\frac{1}{1-\lambda _ i} \]

and the probability mass function becomes

\[ P(Y=y_{i};\lambda _ i,\nu _ i=0)={(1-\lambda _ i)}{\lambda _ i^{y_{i}}} \]

The geometric distribution represents a case of severe overdispersion.
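This limiting case is easy to verify numerically. The following sketch (illustrative code) checks that truncating the series for $Z(\lambda ,0)$ recovers the geometric normalization and pmf:

```python
lam = 0.6  # requires lam < 1 for the geometric series to converge

# Z(lam, 0) = sum_n lam**n is a geometric series equal to 1 / (1 - lam)
Z = sum(lam ** n for n in range(500))

# CMP pmf at nu = 0 versus the closed-form geometric pmf (1 - lam) * lam**y
for y in range(10):
    cmp_p = lam ** y / Z
    geo_p = (1 - lam) * lam ** y
```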

Mean, Variance, and Dispersion for the Conway-Maxwell-Poisson Model

The mean and the variance of the Conway-Maxwell-Poisson distribution are defined as

\[ E[Y]=\frac{\partial \ln Z}{\partial \ln \lambda } \]
\[ V[Y]=\frac{\partial ^2 \ln Z}{\partial (\ln \lambda )^2} \]

The Conway-Maxwell-Poisson distribution does not have closed-form expressions for its moments in terms of its parameters $\lambda $ and $\nu $. However, the moments can be approximated. Shmueli et al. (2005) use asymptotic expressions for Z to derive $E(Y)$ and $V(Y)$ as

\[ E[Y] \approx \lambda ^{1/{\nu }}+\frac{1}{2\nu }-\frac{1}{2} \]
\[ V[Y] \approx \frac{1}{\nu }\lambda ^{1/{\nu }} \]

In the Conway-Maxwell-Poisson model, the summation of infinite series is evaluated using a logarithmic expansion. The mean and variance are calculated as follows for the Shmueli et al. (2005) model:

\[ E(Y)=\frac{1}{Z(\lambda ,\nu )} \sum _{j=0}^{\infty }\frac{j \lambda ^{j}}{(j!)^{\nu }} \]
\[ V(Y)=\frac{1}{Z(\lambda ,\nu )} \sum _{j=0}^{\infty }\frac{j^{2} \lambda ^{j}}{(j!)^{\nu }}-E(Y)^2 \]

The dispersion is defined as

\[ D(Y)= \frac{V(Y)}{E(Y)} \]
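The series-based moments above can be evaluated directly by truncating the sums. The sketch below (illustrative function name) returns the mean, variance, and dispersion:

```python
import math

def cmp_moments(lam, nu, max_terms=500):
    """Mean, variance, and dispersion D = V/E of CMP by direct summation."""
    log_w = [n * math.log(lam) - nu * math.lgamma(n + 1) for n in range(max_terms)]
    m = max(log_w)
    w = [math.exp(t - m) for t in log_w]   # unnormalized probabilities
    z = sum(w)
    mean = sum(n * wn for n, wn in enumerate(w)) / z
    var = sum(n * n * wn for n, wn in enumerate(w)) / z - mean ** 2
    return mean, var, var / mean
```

At $\nu =1$ the mean and variance both equal $\lambda $ (dispersion 1); $\nu <1$ gives a dispersion above 1, and $\nu >1$ gives a dispersion below 1.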

Likelihood Function for the Conway-Maxwell-Poisson Model

The likelihood for a set of n independently and identically distributed variables $y_1, y_2, \ldots , y_ n$ is written as

\begin{eqnarray*} L(y_1, y_2, \ldots , y_ n|\lambda , \nu ) & = & \frac{{\prod ^ n_{i=1}}{\lambda }^{y_ i}}{(\prod ^ n_{i=1}{y_ i!})^\nu } Z(\lambda ,\nu )^{-n} \\ & = & \lambda ^{\sum ^ n_{i=1} {y_ i}}\exp {({-\nu \sum ^ n_{i=1}\ln (y_ i!)})}Z(\lambda ,\nu )^{-n} \\ & = & \lambda ^{S_1}\exp {(-\nu S_2)} Z(\lambda ,\nu )^{-n} \end{eqnarray*}

where $S_1$ and $S_2$ are sufficient statistics for $y_1, y_2, \ldots , y_ n$. You can see from the preceding equation that the Conway-Maxwell-Poisson distribution is a member of the exponential family. The log-likelihood function can be written as

\[ \mathcal{L}=-n\ln (Z(\lambda ,\nu ))+\sum _{i=1}^{n}\left(y_{i}\ln (\lambda )-\nu \ln (y_ i!)\right) \]

The gradients can be written as

\[ \mathcal{L}_{\beta } = \left(\sum _{i=1}^{n}y_{i} - n\frac{\lambda {Z(\lambda ,\nu )}_{\lambda }}{Z(\lambda ,\nu )}\right) \mathbf{x} \]
\[ \mathcal{L}_{\delta } = \left(\sum _{i=1}^{n}\ln (y_{i}!) - n\frac{{Z(\lambda ,\nu )}_{\nu }}{Z(\lambda ,\nu )}\right) \nu \mathbf{g} \]
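The log-likelihood translates directly into code through the sufficient statistics $S_1=\sum y_ i$ and $S_2=\sum \ln (y_ i!)$. The sketch below (illustrative names; $\lambda $ and $\nu $ are held constant across observations for simplicity) evaluates it with a truncated normalization factor:

```python
import math

def log_z(lam, nu, max_terms=500):
    """Truncated log of the normalization factor Z(lambda, nu)."""
    logs = [n * math.log(lam) - nu * math.lgamma(n + 1) for n in range(max_terms)]
    m = max(logs)
    return m + math.log(sum(math.exp(t - m) for t in logs))

def cmp_loglik(ys, lam, nu):
    """log L = S1*log(lam) - nu*S2 - n*log Z, via the sufficient statistics."""
    s1 = sum(ys)
    s2 = sum(math.lgamma(y + 1) for y in ys)
    return s1 * math.log(lam) - nu * s2 - len(ys) * log_z(lam, nu)
```

At $\nu =1$, $\log Z(\lambda ,1)=\lambda $, so the function recovers the Poisson log-likelihood.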

Conway-Maxwell-Poisson Regression: Guikema and Coffelt (2008) Reparameterization

Guikema and Coffelt (2008) propose a reparameterization of the Shmueli et al. (2005) Conway-Maxwell-Poisson model to provide a measure of central tendency that can be interpreted in the context of the generalized linear model. By substituting $\lambda =\mu ^{\nu }$, the Guikema and Coffelt (2008) formulation is written as

\[ P(Y=y_{i}; \mu ,\nu )=\frac{1}{S(\mu ,\nu )}\left(\frac{\mu ^{y_{i}}}{y_{i}!}\right)^{\nu } \]

where the new normalization factor is defined as

\[ S(\mu ,\nu )= \sum _{j=0}^{\infty }\left(\frac{\mu ^{j}}{j!}\right)^{\nu } \]
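Because the substitution is $\lambda =\mu ^{\nu }$, the two parameterizations define the same distribution. The sketch below (illustrative code) checks this numerically:

```python
import math

def pmf_shmueli(y, lam, nu, n_terms=400):
    """CMP pmf in the original (lambda, nu) parameterization."""
    z = sum(math.exp(n * math.log(lam) - nu * math.lgamma(n + 1))
            for n in range(n_terms))
    return math.exp(y * math.log(lam) - nu * math.lgamma(y + 1)) / z

def pmf_guikema(y, mu, nu, n_terms=400):
    """CMP pmf in the Guikema-Coffelt (mu, nu) parameterization."""
    s = sum(math.exp(nu * (n * math.log(mu) - math.lgamma(n + 1)))
            for n in range(n_terms))
    return math.exp(nu * (y * math.log(mu) - math.lgamma(y + 1))) / s

mu, nu = 3.0, 0.8  # with lam = mu**nu, the two pmfs agree
```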

In terms of their new formulations, the mean and variance of Y are given as

\[ E[Y]=\frac{1}{\nu }\frac{\partial \ln S}{\partial \ln \mu } \]
\[ V[Y]=\frac{1}{\nu ^2}\frac{\partial ^2 \ln S}{\partial (\ln \mu )^2} \]

They can be approximated as

\[ E[Y] \approx \mu +\frac{1}{2\nu }-\frac{1}{2} \]
\[ V[Y] \approx \frac{\mu }{\nu } \]
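These approximations can be checked against the series-based moments. The sketch below (illustrative code) uses the forms $E[Y]\approx \mu +1/(2\nu )-1/2$ and $V[Y]\approx \mu /\nu $ that follow from substituting $\lambda =\mu ^{\nu }$ into the Shmueli et al. (2005) expansions:

```python
import math

def gc_moments(mu, nu, n_terms=500):
    """Series-based mean and variance in the Guikema-Coffelt parameterization."""
    w = [math.exp(nu * (n * math.log(mu) - math.lgamma(n + 1)))
         for n in range(n_terms)]
    s = sum(w)
    mean = sum(n * wn for n, wn in enumerate(w)) / s
    var = sum(n * n * wn for n, wn in enumerate(w)) / s - mean ** 2
    return mean, var

mu, nu = 10.0, 1.5
mean, var = gc_moments(mu, nu)
approx_mean = mu + 1 / (2 * nu) - 0.5   # asymptotic approximation to E[Y]
approx_var = mu / nu                    # asymptotic approximation to V[Y]
```

The approximations are accurate when $\mu $ is large relative to $\nu $; for small counts the series-based values should be preferred.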

In the HPCOUNTREG procedure, the mean and variance are calculated according to the following formulas, respectively, for the Guikema and Coffelt (2008) model:

\[ E(Y)=\frac{1}{S(\mu ,\nu )} \sum _{j=0}^{\infty }\frac{j \mu ^{{\nu }j}}{(j!)^{\nu }} \]
\[ V(Y)=\frac{1}{S(\mu ,\nu )} \sum _{j=0}^{\infty }\frac{j^{2} \mu ^{{\nu }j}}{(j!)^{\nu }}-E(Y)^2 \]

In terms of the new parameter $\mu $, the log-likelihood function is specified as

\[ \mathcal{L}=-N\ln (S(\mu ,\nu ))+\nu \sum _{i=1}^{N}\left(y_{i}\ln (\mu )-\ln (y_ i!)\right) \]

and the gradients are calculated as

\[ \mathcal{L}_{\beta } = \left(\nu \sum _{i=1}^{N}y_{i} - N\frac{\mu {S(\mu ,\nu )}_{\mu }}{S(\mu ,\nu )}\right)\mathbf{x} \]
\[ \mathcal{L}_{\delta } = \left(\sum _{i=1}^{N}\left(y_{i} \ln (\mu ) - \ln (y_{i}!)\right) - N\frac{{S(\mu ,\nu )}_{\nu }}{S(\mu ,\nu )}\right) \nu \mathbf{g} \]

By default, the HPCOUNTREG procedure uses the Guikema and Coffelt (2008) specification. The Shmueli et al. (2005) model can be estimated by specifying the PARAMETER=LAMBDA option. If you specify DISP=CMPOISSON in the MODEL statement and you omit the DISPMODEL statement, the model is estimated according to the Lord, Guikema, and Geedipally (2008) specification, where $\nu $ represents a single parameter that does not depend on any covariates. The Lord, Guikema, and Geedipally (2008) specification makes the model comparable to the negative binomial model because it has only one parameter.

The dispersion is defined as

\[ D(Y)= \frac{V(Y)}{E(Y)} \]

Under the Guikema and Coffelt (2008) specification, the integer part of $\mu $ is the mode of the distribution, which makes $\mu $ a reasonable approximation for the mean. The dispersion can be written as

\[ D(Y)= \frac{V(Y)}{E(Y)} \approx \frac{\mu /\nu }{\mu +\frac{1}{2\nu }-\frac{1}{2}} \approx \frac{1}{\nu } \]

When $\nu <1$, the variance is greater than the mean and the dispersion is greater than 1, indicating overdispersed data. When $\nu =1$, the mean and variance are equal and the dispersion is equal to 1 (the Poisson model). When $\nu >1$, the variance is smaller than the mean and the dispersion is less than 1, indicating underdispersed data.

All Conway-Maxwell-Poisson models in the HPCOUNTREG procedure are parameterized in terms of dispersion, where

\[ -\ln (\nu )=\delta _{0}+\sum _{n=1}^{q}\delta _{n} g_{n} \]

Negative values of $\ln (\nu )$ indicate that the data are approximately overdispersed, and positive values of $\ln (\nu )$ indicate that the data are approximately underdispersed.
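A minimal sketch of this link function (illustrative name; `delta[0]` is $\delta _0$ and `g` holds the dispersion covariates $g_1,\ldots ,g_ q$):

```python
import math

def nu_from_dispmodel(delta, g):
    """-ln(nu) = delta_0 + sum_n delta_n * g_n, so nu = exp(-(linear predictor))."""
    eta = delta[0] + sum(d * x for d, x in zip(delta[1:], g))
    return math.exp(-eta)
```

A positive linear predictor gives $\nu <1$ ($\ln (\nu )<0$, overdispersion); a negative one gives $\nu >1$ (underdispersion); a zero predictor recovers $\nu =1$, the Poisson case.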