The HPCOUNTREG Procedure

Poisson Regression

The most widely used model for count data analysis is Poisson regression. Poisson regression assumes that $y_{i}$, given the vector of covariates $\mathbf{x}_{i}$, is independently Poisson distributed with

\[  P(Y_{i}=y_{i}|\mathbf{x}_{i}) = \frac{e^{-\mu _{i}}\mu _{i}^{y_{i}}}{y_{i}!}, \quad y_{i} = 0,1,2,\ldots  \]
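The probability mass function above can be evaluated directly. The following is a minimal Python sketch (not part of the HPCOUNTREG procedure; the function name `poisson_pmf` is illustrative); it checks that the probabilities sum to one and that the distribution's mean equals $\mu$:

```python
import math

def poisson_pmf(y, mu):
    """P(Y = y) for a Poisson random variable with mean mu."""
    return math.exp(-mu) * mu**y / math.factorial(y)

# Over a large support the probabilities sum to (approximately) 1,
# and the implied mean equals mu.
mu = 2.5
support = range(50)
probs = [poisson_pmf(y, mu) for y in support]
mean = sum(y * p for y, p in zip(support, probs))
```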

and the mean parameter—that is, the mean number of events per period—is given by

\[  \mu _{i} = \exp (\mathbf{x}_{i}^{\prime } \bbeta )  \]

where $\bbeta $ is a $(k+1) \times 1$ parameter vector. (The intercept is $\beta _0$; the coefficients for the $k$ regressors are $\beta _1, \ldots , \beta _ k$.) Taking the exponential of $\mathbf{x}_{i}^{\prime }\bbeta $ ensures that the mean parameter $\mu _{i}$ is positive. It can be shown that the conditional mean is given by

\[  E(y_{i}|\mathbf{x}_{i}) = \mu _{i} = \exp (\mathbf{x}_{i}^{\prime } \bbeta )  \]

The name log-linear model is also used for the Poisson regression model because the logarithm of the conditional mean is linear in the parameters:

\[  \ln [E(y_{i}|\mathbf{x}_{i})] = \ln (\mu _{i}) = \mathbf{x}_{i}^{\prime } \bbeta  \]

Note that the conditional variance of the count random variable is equal to the conditional mean in the Poisson regression model:

\[  V(y_{i}|\mathbf{x}_{i}) = E(y_{i}|\mathbf{x}_{i}) = \mu _{i}  \]

The equality of the conditional mean and variance of $y_{i}$ is known as equidispersion.
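Equidispersion is easy to verify by simulation. The following Python sketch (illustrative only, using NumPy's random generator) draws a large Poisson sample and compares the sample mean and variance, both of which should be close to $\mu$:

```python
import numpy as np

rng = np.random.default_rng(42)
mu = 4.0
draws = rng.poisson(lam=mu, size=200_000)

# Under equidispersion, the sample mean and the sample variance
# should both be close to mu.
sample_mean = draws.mean()
sample_var = draws.var()
```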

The marginal effect of a regressor is given by

\[  \frac{\partial E(y_{i}|\mathbf{x}_{i})}{\partial x_{ji}} = \exp (\mathbf{x}_{i}^{\prime } \bbeta ) \beta _{j} = E(y_{i}|\mathbf{x}_{i}) \beta _{j}  \]

Thus, a one-unit change in the $j$th regressor changes the conditional mean $E(y_{i}|\mathbf{x}_{i})$ by an amount proportional to the conditional mean itself, with proportionality factor $\beta _{j}$.
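The analytic marginal effect can be checked against a finite-difference approximation. A minimal Python sketch (the coefficient values and the index $j$ are illustrative assumptions):

```python
import numpy as np

beta = np.array([0.5, 0.2, -0.3])   # illustrative [intercept, beta_1, beta_2]
x = np.array([1.0, 1.5, 2.0])       # first entry multiplies the intercept

cond_mean = np.exp(x @ beta)

# Analytic marginal effect of the j-th regressor: E(y|x) * beta_j
j = 1
analytic = cond_mean * beta[j]

# Finite-difference approximation of d E(y|x) / d x_j for comparison
h = 1e-6
x_h = x.copy()
x_h[j] += h
numeric = (np.exp(x_h @ beta) - cond_mean) / h
```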

The standard estimator for the Poisson model is the maximum likelihood estimator (MLE). Because the observations are independent, the log-likelihood function is written as

\[  \mathcal{L} = \sum _{i=1}^{N}(-\mu _{i} + y_{i} \ln \mu _{i} - \ln y_{i}!) = \sum _{i=1}^{N}(-e^{\mathbf{x}_{i}^{\prime } \bbeta } + y_{i}\mathbf{x}_{i}^{\prime } \bbeta - \ln y_{i}!)  \]

The gradient and the Hessian, respectively, are as follows:

\[  \frac{\partial \mathcal{L}}{\partial \bbeta } = \sum _{i=1}^{N}(y_{i}-\mu _{i})\mathbf{x}_{i} = \sum _{i=1}^{N}(y_{i}-e^{\mathbf{x}_{i}^{\prime }\bbeta })\mathbf{x}_{i}  \]
\[  \frac{\partial ^2 \mathcal{L}}{\partial \bbeta \partial \bbeta ^{\prime }} = - \sum _{i=1}^{N}\mu _{i}\mathbf{x}_{i}{\mathbf{x}_{i}}^{\prime } = - \sum _{i=1}^{N} e^{\mathbf{x}_{i}^{\prime } \bbeta } \mathbf{x}_{i} \mathbf{x}_{i}^{\prime }  \]
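Because the gradient and Hessian are available in closed form (and the log-likelihood is globally concave), the MLE can be computed by Newton-Raphson iteration. The following Python sketch is an illustration of that computation, not the HPCOUNTREG procedure's actual implementation; the simulated data and the function name `poisson_newton` are assumptions for the example:

```python
import numpy as np

def poisson_newton(X, y, tol=1e-10, max_iter=50):
    """Newton-Raphson MLE for Poisson regression.

    X includes a leading column of ones for the intercept.
    Uses the gradient  X'(y - mu)  and the Hessian  -X' diag(mu) X
    given above.
    """
    beta = np.zeros(X.shape[1])
    for _ in range(max_iter):
        mu = np.exp(X @ beta)
        grad = X.T @ (y - mu)
        hess = -(X * mu[:, None]).T @ X
        step = np.linalg.solve(hess, grad)
        beta = beta - step          # Newton update: beta - H^{-1} g
        if np.max(np.abs(step)) < tol:
            break
    return beta

# Simulated data: the estimator should recover the true coefficients
rng = np.random.default_rng(0)
n = 50_000
X = np.column_stack([np.ones(n), rng.normal(size=n)])
true_beta = np.array([0.3, 0.7])
y = rng.poisson(np.exp(X @ true_beta))
beta_hat = poisson_newton(X, y)
```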

The Poisson model has been criticized for its restrictive property that the conditional variance equals the conditional mean. Real-life data are often characterized by overdispersion—that is, the variance exceeds the mean. Allowing for overdispersion can improve model predictions because the Poisson restriction of equal mean and variance results in the underprediction of zeros when overdispersion exists. The most commonly used model that accounts for overdispersion is the negative binomial model.
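Overdispersion is straightforward to exhibit by simulation. In the sketch below (illustrative Python, with an assumed dispersion parameter $\alpha$), a gamma mixture of Poissons, which yields a negative binomial distribution with $V(y) = \mu + \alpha \mu ^2 > \mu$, produces a sample whose variance clearly exceeds its mean:

```python
import numpy as np

rng = np.random.default_rng(1)
mu, alpha = 3.0, 0.5   # illustrative mean and dispersion parameter

# Drawing the Poisson mean from a gamma distribution with mean mu
# gives negative binomial counts: E(y) = mu, V(y) = mu + alpha*mu^2.
lam = rng.gamma(shape=1.0 / alpha, scale=alpha * mu, size=200_000)
y = rng.poisson(lam)

sample_mean = y.mean()   # close to mu
sample_var = y.var()     # close to mu + alpha*mu^2, i.e., well above mu
```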