The HPFMM Procedure

Prior Distributions

The following list displays the parameterizations of the prior distributions for situations in which the HPFMM procedure uses a conjugate sampler: mixture models that have no model effects and whose components have certain basic distributions (binary, binomial, exponential, Poisson, normal, and t). In these models, you specify the parameters a and b that appear in the following formulas through the MUPRIORPARMS= and PHIPRIORPARMS= options in the BAYES statement.

Beta$(a,b)$
\[ f(y) = \frac{\Gamma (a+b)}{\Gamma (a)\Gamma (b)} \, y^{a-1} \, (1-y)^{b-1} \]

where $a > 0$, $b > 0$. In this parameterization, the mean and variance of the distribution are $\mu = a/(a+b)$ and $\mu (1-\mu )/(a+b+1)$, respectively. The beta distribution is the prior distribution for the success probability in binary and binomial distributions when conjugate sampling is used.
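This conjugacy can be illustrated with a standard result: if $y_1,\ldots,y_n$ are independent binary observations with success probability $\pi$ and the prior is $\pi \sim$ beta$(a,b)$, then the posterior distribution of $\pi$ is again a beta distribution,

\[ \pi \mid y_1,\ldots,y_n \sim \mbox{beta}\left(a + \sum_{i=1}^n y_i,\; b + n - \sum_{i=1}^n y_i\right) \]

so the success probability can be drawn directly rather than by a Metropolis step.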

Dirichlet$(a_1,\cdots ,a_ k)$
\[ f(\mb{y}) = \frac{\Gamma \left(\sum _{i=1}^ k a_ i\right)}{\prod _{i=1}^ k \, \Gamma (a_ i)} y_1^{a_1-1} \, \cdots \, y_ k^{a_ k-1} \]

where $\sum_{i=1}^k y_i = 1$ and the parameters $a_i > 0$. If any $a_i$ were zero, an improper density would result. The Dirichlet density is the prior distribution for the mixture probabilities. You can affect the choice of the $a_i$ through the MIXPRIORPARMS option in the BAYES statement. If $k=2$, the Dirichlet distribution is the same as the beta$(a_1,a_2)$ distribution.
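In this parameterization, the mean and variance of $Y_i$ are

\[ \mathrm{E}[Y_i] = \frac{a_i}{a_0}, \qquad \mathrm{Var}[Y_i] = \frac{a_i(a_0-a_i)}{a_0^2(a_0+1)}, \qquad a_0 = \sum_{j=1}^k a_j \]

Larger values of the $a_i$ therefore concentrate the prior more tightly around the prior means $a_i/a_0$, and choosing $a_1 = \cdots = a_k = 1$ yields a uniform distribution over the probability simplex.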

Gamma$(a,b)$
\[ f(y) = \frac{b^ a}{\Gamma (a)} \, y^{a-1} \, \exp \{ -by\} \]

where $a > 0$, $b > 0$. In this parameterization, the mean and variance of the distribution are $\mu = a/b$ and $\mu /b$, respectively. The gamma distribution is the prior distribution for the mean parameter of the Poisson distribution when conjugate sampling is used.
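This conjugacy can be seen directly: if $y_1,\ldots,y_n$ are independent Poisson counts with mean $\mu$ and the prior is $\mu \sim$ gamma$(a,b)$ in the parameterization above, then the posterior distribution of $\mu$ is again a gamma distribution,

\[ \mu \mid y_1,\ldots,y_n \sim \mbox{gamma}\left(a + \sum_{i=1}^n y_i,\; b + n\right) \]

so the Poisson mean can be drawn directly in the sampler.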

Inverse gamma$(a,b)$
\[ f(y) = \frac{b^ a}{\Gamma (a)}\, y^{-a-1} \, \exp \{ -b/y\} \]

where $a > 0$, $b > 0$. In this parameterization, the mean and variance of the distribution are $\mu = b/(a-1)$ if $a > 1$ and $\mu ^2/(a-2)$ if $a > 2$, respectively. The inverse gamma distribution is the prior distribution for the mean parameter of the exponential distribution when conjugate sampling is used. It is also the prior distribution for the scale parameter $\phi $ in all models.
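For example, if $y_1,\ldots,y_n$ are independent exponential observations with mean $\mu$, so that $f(y_i) = \mu^{-1}\exp\{-y_i/\mu\}$, and the prior is $\mu \sim$ inverse gamma$(a,b)$, then the posterior distribution of $\mu$ is again an inverse gamma distribution,

\[ \mu \mid y_1,\ldots,y_n \sim \mbox{inverse gamma}\left(a + n,\; b + \sum_{i=1}^n y_i\right) \]

which illustrates the conjugacy for the exponential case.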

Multinomial$(1,\pi _1,\cdots ,\pi _ k)$
\[ f(\mb{y}) = \frac{1}{y_1!\cdots y_ k!} \pi _1^{y_1} \, \cdots \, \pi _ k^{y_ k} \]

where each $y_j$ is either 0 or 1, $\sum_{j=1}^k y_j = 1$, and $\sum_{j=1}^k \pi_j = 1$. The multinomial distribution with a single trial describes the component membership of an individual observation; its parameters $\pi_j$ are the mixture proportions. The mean and variance of $Y_j$ are $\mu_j = \pi_j$ and $\mu_j(1-\mu_j)$, respectively.
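The Dirichlet and multinomial distributions are conjugate: if $\mb{y}_1,\ldots,\mb{y}_n$ are independent draws from this multinomial distribution (the membership indicators of n observations) and $(\pi_1,\ldots,\pi_k)$ has a Dirichlet$(a_1,\ldots,a_k)$ prior, then the posterior distribution of the proportions is

\[ (\pi_1,\ldots,\pi_k) \mid \mb{y}_1,\ldots,\mb{y}_n \sim \mbox{Dirichlet}(a_1 + n_1,\; \ldots,\; a_k + n_k) \]

where $n_j = \sum_{i=1}^n y_{ij}$ is the number of observations assigned to component j.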

Normal$(a,b)$
\[ f(y) = \frac{1}{\sqrt {2\pi b}} \, \exp \left\{ -\frac12 \frac{(y-a)^2}{b}\right\} \]

where $b > 0$. The mean and variance of the distribution are $\mu = a$ and $b$, respectively. The normal distribution is the prior distribution for the mean parameter of the normal and t distributions when conjugate sampling is used.
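To illustrate the conjugacy, suppose $y_1,\ldots,y_n$ are independent normal observations with mean $\mu$ and variance $\phi$ (treated as fixed for this step), and the prior is $\mu \sim$ normal$(a,b)$. The posterior distribution of $\mu$ is again normal,

\[ \mu \mid y_1,\ldots,y_n \sim \mathrm{N}\left( \frac{a/b + \sum_{i=1}^n y_i/\phi}{1/b + n/\phi},\; \frac{1}{1/b + n/\phi} \right) \]

so the posterior mean is a precision-weighted average of the prior mean $a$ and the sample mean.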

When a MODEL statement contains effects or when you specify the METROPOLIS option, the prior distribution for the regression parameters is multivariate normal, and you can specify the means and variances of the parameters in the BETAPRIORPARMS= option in the BAYES statement.
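If the means and variances specified in the BETAPRIORPARMS= option define independent normal priors for the individual coefficients (that is, a diagonal prior covariance matrix, which is assumed here only for illustration), the multivariate normal prior can be written as

\[ (\beta_1,\ldots,\beta_p)' \sim \mathrm{N}\left(\mb{a},\; \mathrm{diag}(b_1,\ldots,b_p)\right) \]

where $a_j$ and $b_j$ denote the prior mean and variance specified for the jth regression parameter and p is the number of regression parameters.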