The HPSEVERITY Procedure

Predefined Distributions

PROC HPSEVERITY assumes the following model for the response variable $Y$

\[  Y \sim \mathcal{F}(\Theta )  \]

where $\mathcal{F}$ is a continuous probability distribution with parameters $\Theta $. The model hypothesizes that the observed response is generated from a stochastic process that is governed by the distribution $\mathcal{F}$. This model is usually referred to as the error model. Given a representative input sample of response variable values, PROC HPSEVERITY estimates the model parameters for any distribution $\mathcal{F}$ and computes the statistics of fit for each model. This enables you to find the distribution that is most likely to generate the observed sample.

A set of predefined distributions is provided with the HPSEVERITY procedure. A summary of the distributions is provided in Table 5.2. For each distribution, the table lists the name of the distribution that should be used in the DIST statement, the parameters of the distribution along with their bounds, and the mathematical expressions for the probability density function (PDF) and cumulative distribution function (CDF) of the distribution.

All the predefined distributions, except LOGN and TWEEDIE, are parameterized such that their first parameter is the scale parameter. For LOGN, the first parameter $\mu $ is a log-transformed scale parameter. TWEEDIE does not have a scale parameter. The presence of a scale parameter or a log-transformed scale parameter enables you to use all of the predefined distributions, except TWEEDIE, as candidates for estimating regression effects.

A distribution model is associated with each predefined distribution. You can also define your own distribution model, which is a set of functions and subroutines that you define by using the FCMP procedure. For more information, see the section Defining a Severity Distribution Model with the FCMP Procedure.

Table 5.2: Predefined HPSEVERITY Distributions

BURR (Burr)
Parameters: $\theta > 0$, $\alpha > 0$, $\gamma > 0$
$f(x) = \frac{\alpha \gamma z^\gamma }{x(1 + z^\gamma )^{(\alpha +1)}}$
$F(x) = 1 - \left(\frac{1}{1 + z^\gamma }\right)^\alpha $

EXP (Exponential)
Parameters: $\theta > 0$
$f(x) = \frac{1}{\theta } e^{-z}$
$F(x) = 1 - e^{-z}$

GAMMA (Gamma)
Parameters: $\theta > 0$, $\alpha > 0$
$f(x) = \frac{z^\alpha e^{-z}}{x \Gamma (\alpha )}$
$F(x) = \frac{\gamma (\alpha ,z)}{\Gamma (\alpha )}$

GPD (Generalized Pareto)
Parameters: $\theta > 0$, $\xi > 0$
$f(x) = \frac{1}{\theta } \left(1 + \xi z\right)^{-1-1/\xi }$
$F(x) = 1 - \left(1 + \xi z\right)^{-1/\xi }$

IGAUSS (Inverse Gaussian, also known as Wald)
Parameters: $\theta > 0$, $\alpha > 0$
$f(x) = \frac{1}{\theta } \sqrt {\frac{\alpha }{2 \pi z^3}} \:  e^{\frac{- \alpha (z-1)^2}{2z}}$
$F(x) = \Phi \left((z-1)\sqrt {\frac{\alpha }{z}}\right) + \Phi \left(-(z+1)\sqrt {\frac{\alpha }{z}}\right)\  e^{2\alpha }$

LOGN (Lognormal)
Parameters: $\mu $ (no bounds), $\sigma > 0$
$f(x) = \frac{1}{x \sigma \sqrt {2 \pi }} e^{-\frac{1}{2}\left(\frac{\log (x) - \mu }{\sigma }\right)^2}$
$F(x) = \Phi \left(\frac{\log (x) - \mu }{\sigma }\right)$

PARETO (Pareto)
Parameters: $\theta > 0$, $\alpha > 0$
$f(x) = \frac{\alpha \theta ^\alpha }{(x + \theta )^{\alpha +1}}$
$F(x) = 1 - \left(\frac{\theta }{x + \theta }\right)^\alpha $

TWEEDIE (Tweedie$^6$)
Parameters: $\mu > 0$, $\phi > 0$, $p > 1$
$f(x) = a(x,\phi ) \exp \left[ \frac{1}{\phi } \left( \frac{x \mu ^{1-p}}{1-p} - \kappa (\mu ,p) \right) \right]$
$F(x) = \int _{0}^{x} f(t) dt$

STWEEDIE (Scaled Tweedie$^6$)
Parameters: $\theta > 0$, $\lambda > 0$, $1 < p < 2$
$f(x) = a(x,\theta ,\lambda ,p) \exp \left( - \frac{x}{\theta } - \lambda \right)$
$F(x) = \int _{0}^{x} f(t) dt$

WEIBULL (Weibull)
Parameters: $\theta > 0$, $\tau > 0$
$f(x) = \frac{1}{x} \tau z^\tau e^{-z^\tau }$
$F(x) = 1 - e^{-z^\tau }$

Notes:

1. $z = x/\theta $, wherever $z$ is used.

2. $\theta $ denotes the scale parameter for all the distributions. For LOGN, $\log (\theta ) = \mu $.

3. Parameters are listed in the order in which they are defined in the distribution model.

4. $\gamma (a,b) = \int _{0}^{b} t^{a-1} e^{-t} dt$ is the lower incomplete gamma function.

5. $\Phi (y) =\frac{1}{2}\left(1+\mathrm{erf}\left(\frac{y}{\sqrt {2}}\right)\right)$ is the standard normal CDF.

6. For more information, see the section Tweedie Distributions.


Tweedie Distributions

Tweedie distributions are a special case of the exponential dispersion family (Jørgensen, 1987) with the property, discovered by Tweedie (1984), that the variance of the distribution is equal to $\phi \mu ^ p$, where $\mu $ is the mean of the distribution, $\phi $ is a dispersion parameter, and $p$ is an index parameter. The distribution is defined for all values of $p$ except values of $p$ in the open interval $(0,1)$. Many important distributions are special cases of the Tweedie distribution, including the normal ($p=0$), Poisson ($p=1$), gamma ($p=2$), and inverse Gaussian ($p=3$) distributions. Apart from these special cases, the probability density function (PDF) of the Tweedie distribution does not have an analytic expression. For $p > 1$, it has the form (Dunn and Smyth, 2005)

\[  f(x; \mu , \phi , p) = a(x,\phi ) \exp \left[ \frac{1}{\phi } \left( \frac{x \mu ^{1-p}}{1-p} - \kappa (\mu ,p) \right) \right]  \]

where $\kappa (\mu ,p) = \mu ^{2-p}/(2-p)$ for $p \neq 2$ and $\kappa (\mu ,p) = \log (\mu )$ for $p=2$. The function $a(x, \phi )$ does not have an analytical expression. It is typically evaluated using series expansion methods described in Dunn and Smyth (2005).

For $1 < p < 2$, the Tweedie distribution is a compound Poisson-gamma mixture distribution, which is the distribution of $S$ defined as

\[  S = \sum _{i=1}^{N} X_ i  \]

where $N \sim \text {Poisson}(\lambda )$ and $X_ i \sim \text {gamma}(\alpha , \theta )$ are independent and identically distributed gamma random variables with shape parameter $\alpha $ and scale parameter $\theta $. At $S=0$, the density is a probability mass that is governed by the Poisson distribution, and for values of $S > 0$, it is a mixture of gamma variates with Poisson mixing probability. The parameters $\lambda $, $\alpha $, and $\theta $ are related to the natural parameters $\mu $, $\phi $, and $p$ of the Tweedie distribution as

\[  \lambda = \frac{\mu ^{2-p}}{\phi (2-p)}, \quad \alpha = \frac{2-p}{p-1}, \quad \theta = \phi (p-1) \mu ^{p-1}  \]

The mean of a Tweedie distribution is positive for $p > 1$.
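These relationships can be checked numerically. The following Python sketch (illustrative only; not part of PROC HPSEVERITY, and the function name is hypothetical) maps the natural parameters $(\mu , \phi , p)$ to the compound Poisson-gamma parameters and confirms that the implied mean and variance of $S$ equal $\mu $ and $\phi \mu ^ p$:

```python
def tweedie_to_compound(mu, phi, p):
    """Map natural Tweedie parameters (mu, phi, p), 1 < p < 2,
    to compound Poisson-gamma parameters (lam, alpha, theta)."""
    lam = mu ** (2 - p) / (phi * (2 - p))   # Poisson mean
    alpha = (2 - p) / (p - 1)               # gamma shape
    theta = phi * (p - 1) * mu ** (p - 1)   # gamma scale
    return lam, alpha, theta

mu, phi, p = 5.0, 1.5, 1.4
lam, alpha, theta = tweedie_to_compound(mu, phi, p)

# Mean of the compound sum S: E[S] = lam * alpha * theta, which must equal mu.
mean = lam * alpha * theta
# Variance of S: lam * E[X^2] = lam * alpha * (alpha + 1) * theta**2,
# which must equal the Tweedie variance phi * mu**p.
var = lam * alpha * (alpha + 1) * theta ** 2
```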

Two predefined versions of the Tweedie distribution are provided with the HPSEVERITY procedure. The first version, named TWEEDIE and defined for $p > 1$, has the natural parameterization with parameters $\mu $, $\phi $, and $p$. The second version, named STWEEDIE and defined for $1 < p < 2$, is the version with a scale parameter. It corresponds to the compound Poisson-gamma distribution with gamma scale parameter $\theta $, Poisson mean parameter $\lambda $, and index parameter $p$. The index parameter determines the shape parameter $\alpha $ of the gamma distribution as

\[  \alpha = \frac{2-p}{p-1}  \]

The parameters $\theta $ and $\lambda $ of the STWEEDIE distribution are related to the parameters $\mu $ and $\phi $ of the TWEEDIE distribution as

\[  \mu = \lambda \theta \alpha , \quad \phi = \frac{(\lambda \theta \alpha )^{2-p}}{\lambda (2-p)} = \frac{\theta }{(p-1) (\lambda \theta \alpha )^{p-1}}  \]

You can fit either version when there are no regression variables; each version has its own merits. If you fit the TWEEDIE version, you obtain a direct estimate of the overall mean of the distribution. If you are interested in the most practical range of the index parameter, $1 < p < 2$, then you can fit the STWEEDIE version, which provides direct estimates of the Poisson and gamma components that constitute the distribution (an estimate of the gamma shape parameter $\alpha $ is easily obtained from the estimate of $p$).

If you want to estimate the effect of exogenous (regression) variables on the distribution, then you must use the STWEEDIE version, because PROC HPSEVERITY requires a distribution to have a scale parameter in order to estimate regression effects. For more information, see the section Estimating Regression Effects. The gamma scale parameter $\theta $ is the scale parameter of the STWEEDIE distribution. If you are interested in determining the effect of regression variables on the mean of the distribution, you can do so by first fitting the STWEEDIE distribution to determine the effect of the regression variables on the scale parameter $\theta $. Then, you can easily estimate how the mean of the distribution $\mu $ is affected by the regression variables using the relationship $\mu = c \theta $, where $c = \lambda \alpha = \lambda (2-p)/(p-1)$. The estimates of the regression parameters remain the same, whereas the estimate of the intercept parameter is adjusted by the estimates of the $\lambda $ and $p$ parameters.

Parameter Initialization for Predefined Distributions

The parameters are initialized by using the method of moments for all the distributions, except for the gamma and the Weibull distributions. For the gamma distribution, approximate maximum likelihood estimates are used. For the Weibull distribution, the method of percentile matching is used.

Given $n$ observations of the severity value $y_ i$ ($1 \le i \le n$), the estimate of the $k$th raw moment is denoted by $m_ k$ and computed as

\[  m_ k = \frac{1}{n} \sum _{i=1}^{n} y_ i^ k  \]

The 100$p$th percentile is denoted by $\pi _ p$ ($0 \le p \le 1$). By definition, $\pi _ p$ satisfies

\[  F(\pi _ p-) \le p \le F(\pi _ p)  \]

where $F(\pi _ p-) = \lim _{h \downarrow 0} F(\pi _ p - h)$. PROC HPSEVERITY uses the following practical method of computing $\pi _ p$. Let $\hat{F}_ n(y)$ denote the empirical distribution function (EDF) estimate at a severity value $y$. Let $y_ p^-$ and $y_ p^+$ denote two consecutive values in the ascending sequence of $y$ values such that $\hat{F}_ n(y_ p^-) < p$ and $\hat{F}_ n(y_ p^+) \ge p$. Then, the estimate $\hat{\pi }_ p$ is computed as

\[  \hat{\pi }_ p = y_ p^- + \frac{p - \hat{F}_ n(y_ p^-)}{\hat{F}_ n(y_ p^+) - \hat{F}_ n(y_ p^-)} (y_ p^+ - y_ p^-)  \]
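The interpolation scheme can be sketched in Python as follows (illustrative only; this sketch assumes the EDF estimate $\hat{F}_ n(y_{(i)}) = i/n$ at the $i$th order statistic of a complete sample, which may differ from the EDF that PROC HPSEVERITY computes for censored or truncated data):

```python
def percentile_estimate(ys, p):
    """Estimate the 100p-th percentile by linear interpolation of the
    EDF, assuming F_n(y_(i)) = i/n for a complete, untied sample."""
    ys = sorted(ys)
    n = len(ys)
    # Find consecutive order statistics y- and y+ such that
    # F_n(y-) < p <= F_n(y+), then interpolate linearly between them.
    for i in range(1, n):
        f_lo, f_hi = i / n, (i + 1) / n
        if f_lo < p <= f_hi:
            y_lo, y_hi = ys[i - 1], ys[i]
            return y_lo + (p - f_lo) / (f_hi - f_lo) * (y_hi - y_lo)
    # p at or below the first EDF step, or above the last
    return ys[0] if p <= 1 / n else ys[-1]
```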

Let $\epsilon $ denote the smallest double-precision floating-point number such that $1 + \epsilon > 1$. This machine precision constant can be obtained by using the CONSTANT function in Base SAS software.

The details of how parameters are initialized for each predefined distribution are as follows:

BURR

The parameters are initialized by using the method of moments. The $k$th raw moment of the Burr distribution is:

\[  E[X^ k] = \frac{\theta ^ k \Gamma (1 + k/\gamma ) \Gamma (\alpha - k/\gamma )}{\Gamma (\alpha )}, \quad -\gamma < k < \alpha \gamma  \]

Three moment equations $E[X^ k] = m_ k$ ($k=1,2,3$) need to be solved to initialize the three parameters of the distribution. To obtain an approximate closed-form solution, the second shape parameter $\hat{\gamma }$ is initialized to a value of $2$. If $2 m_3 - 3 m_1 m_2 > 0$, then simplifying and solving the moment equations yields the following feasible set of initial values:

\[  \hat{\theta } = \sqrt {\frac{m_2 m_3}{2 m_3 - 3 m_1 m_2}}, \quad \hat{\alpha } = 1 + \frac{m_3}{2 m_3 - 3 m_1 m_2}, \quad \hat{\gamma } = 2  \]

If $2 m_3 - 3 m_1 m_2 < \epsilon $, then the parameters are initialized as follows:

\[  \hat{\theta } = \sqrt {m_2}, \quad \hat{\alpha } = 2, \quad \hat{\gamma } = 2  \]
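Both branches of the Burr initialization can be sketched as follows (illustrative Python, not part of PROC HPSEVERITY; the function name is hypothetical, and $\epsilon $ is the machine-precision constant described above):

```python
import sys

def init_burr(m1, m2, m3, eps=sys.float_info.epsilon):
    """Burr initial values (theta, alpha, gamma) from the first three
    raw moments, with the second shape parameter gamma fixed at 2."""
    denom = 2 * m3 - 3 * m1 * m2
    if denom < eps:                 # infeasible moments: use fallback values
        return (m2 ** 0.5, 2.0, 2.0)
    theta = (m2 * m3 / denom) ** 0.5
    alpha = 1 + m3 / denom
    return (theta, alpha, 2.0)
```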
EXP

The parameters are initialized by using the method of moments. The $k$th raw moment of the exponential distribution is:

\[  E[X^ k] = \theta ^ k \Gamma (k+1), \quad k > -1  \]

Solving $E[X] = m_1$ yields the initial value of $\hat{\theta } = m_1$.

GAMMA

The parameter $\alpha $ is initialized by using its approximate maximum likelihood (ML) estimate. For a set of $n$ independent and identically distributed observations $y_ i$ ($1 \le i \le n$) drawn from a gamma distribution, the log likelihood $l$ is defined as follows:

\[  l = \sum _{i=1}^{n} \log \left( y_ i^{\alpha -1} \frac{e^{-y_ i/\theta }}{\theta ^\alpha \Gamma (\alpha )} \right) = (\alpha - 1) \sum _{i=1}^{n} \log (y_ i) - \frac{1}{\theta } \sum _{i=1}^{n} y_ i - n \alpha \log (\theta ) - n \log (\Gamma (\alpha ))  \]

Using a shorter notation of $\sum $ to denote $\sum _{i=1}^{n}$ and solving the equation $\partial l/\partial \theta = 0$ yields the following ML estimate of $\theta $:

\[  \hat{\theta } = \frac{\sum y_ i}{n \alpha } = \frac{m_1}{\alpha }  \]

Substituting this estimate in the expression of $l$ and simplifying gives

\[  l = (\alpha - 1) \sum \log (y_ i) - n \alpha - n \alpha \log (m_1) + n \alpha \log (\alpha ) - n \log (\Gamma (\alpha ))  \]

Let $d$ be defined as follows:

\[  d = \log (m_1) - \frac{1}{n} \sum \log (y_ i)  \]

Solving the equation $\partial l/\partial \alpha = 0$ yields the following expression in terms of the digamma function, $\psi (\alpha )$:

\[  \log (\alpha ) - \psi (\alpha ) = d  \]

The digamma function can be approximated as follows:

\[  \hat{\psi }(\alpha ) \approx \log (\alpha ) - \frac{1}{\alpha } \left(0.5 + \frac{1}{12 \alpha + 2}\right)  \]

This approximation is within 1.4% of the true value for all the values of $\alpha > 0$ except when $\alpha $ is arbitrarily close to the positive root of the digamma function (which is approximately 1.461632). Even for the values of $\alpha $ that are close to the positive root, the absolute error between true and approximate values is still acceptable ($|\hat{\psi }(\alpha ) - \psi (\alpha )| < 0.005$ for $\alpha > 1.07$). Solving the equation that arises from this approximation yields the following estimate of $\alpha $:

\[  \hat{\alpha } = \frac{3 - d + \sqrt {(d-3)^2 + 24 d}}{12 d}  \]

If this approximate ML estimate is infeasible, then the method of moments is used. The $k$th raw moment of the gamma distribution is:

\[  E[X^ k] = \theta ^ k \frac{\Gamma (\alpha + k)}{\Gamma (\alpha )}, \quad k > -\alpha  \]

Solving $E[X] = m_1$ and $E[X^2] = m_2$ yields the following initial value for $\alpha $:

\[  \hat{\alpha } = \frac{m_1^2}{m_2 - m_1^2}  \]

If $m_2 - m_1^2 < \epsilon $ (almost zero sample variance), then $\alpha $ is initialized as follows:

\[  \hat{\alpha } = 1  \]

After computing the estimate of $\alpha $, the estimate of $\theta $ is computed as follows:

\[  \hat{\theta } = \frac{m_1}{\hat{\alpha }}  \]

Both the maximum likelihood method and the method of moments arrive at the same relationship between $\hat{\alpha }$ and $\hat{\theta }$.
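Because $\hat{\alpha }$ is the exact root of the quadratic that results from the digamma approximation, it satisfies $\log (\hat{\alpha }) - \hat{\psi }(\hat{\alpha }) = d$ to machine precision. The following Python sketch (illustrative only; function names are hypothetical) computes the approximate ML initial values and checks this property:

```python
import math

def psi_approx(alpha):
    """Digamma approximation: log(a) - (1/a) * (0.5 + 1/(12a + 2))."""
    return math.log(alpha) - (0.5 + 1 / (12 * alpha + 2)) / alpha

def init_gamma(ys):
    """Approximate ML initial values (alpha, theta) for the gamma
    distribution, plus the statistic d used in the derivation."""
    n = len(ys)
    m1 = sum(ys) / n
    d = math.log(m1) - sum(math.log(y) for y in ys) / n
    alpha = (3 - d + math.sqrt((d - 3) ** 2 + 24 * d)) / (12 * d)
    theta = m1 / alpha   # theta = m1 / alpha in both methods
    return alpha, theta, d
```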

GPD

The parameters are initialized by using the method of moments. Notice that for $\xi > 0$, the CDF of the generalized Pareto distribution (GPD) is:

\[  F(x) = 1 - \left(1 + \frac{\xi x}{\theta }\right)^{-1/\xi } = 1 - \left(\frac{\theta /\xi }{x + \theta /\xi }\right)^{1/\xi }  \]

This is equivalent to a Pareto distribution with scale parameter $\theta _1 = \theta /\xi $ and shape parameter $\alpha = 1/\xi $. Using this relationship, the parameter initialization method used for the PARETO distribution is used to get the following initial values for the parameters of the GPD distribution:

\[  \hat{\theta } = \frac{m_1 m_2}{2 (m_2 - m_1^2)}, \quad \hat{\xi } = \frac{m_2 - 2 m_1^2}{2 (m_2 - m_1^2)}  \]

If $m_2 - m_1^2 < \epsilon $ (almost zero sample variance) or $m_2 - 2 m_1^2 < \epsilon $, then the parameters are initialized as follows:

\[  \hat{\theta } = \frac{m_1}{2}, \quad \hat{\xi } = \frac{1}{2}  \]
IGAUSS

The parameters are initialized by using the method of moments. The standard parameterization of the inverse Gaussian distribution (also known as the Wald distribution), in terms of the location parameter $\mu $ and shape parameter $\lambda $, is as follows (Klugman, Panjer, and Willmot, 1998, p. 583):

\[  f(x) = \sqrt {\frac{\lambda }{2 \pi x^3}} \:  \exp \left(\frac{-\lambda (x-\mu )^2}{2 \mu ^2 x}\right)  \]
\[  F(x) = \Phi \left(\left(\frac{x}{\mu }-1\right)\sqrt {\frac{\lambda }{x}}\right) + \Phi \left(-\left(\frac{x}{\mu }+1\right)\sqrt {\frac{\lambda }{x}}\right)\  \exp \left(\frac{2\lambda }{\mu }\right)  \]

For this parameterization, it is known that the mean is $E[X] = \mu $ and the variance is $Var[X] = \mu ^3/\lambda $, which yields the second raw moment as $E[X^2] = \mu ^2 (1 + \mu /\lambda )$ (computed by using $E[X^2] = Var[X] + (E[X])^2$).

The predefined IGAUSS distribution in PROC HPSEVERITY uses the following alternate parameterization to allow the distribution to have a scale parameter, $\theta $:

\[  f(x) = \sqrt {\frac{\alpha \theta }{2 \pi x^3}} \:  \exp \left(\frac{- \alpha (x-\theta )^2}{2 x \theta }\right)  \]
\[  F(x) = \Phi \left(\left(\frac{x}{\theta }-1\right)\sqrt {\frac{\alpha \theta }{x}}\right) + \Phi \left(-\left(\frac{x}{\theta }+1\right)\sqrt {\frac{\alpha \theta }{x}}\right)\  \exp \left(2\alpha \right)  \]

The parameters $\theta $ (scale) and $\alpha $ (shape) of this alternate form are related to the parameters $\mu $ and $\lambda $ of the preceding form such that $\theta = \mu $ and $\alpha = \lambda /\mu $. Using this relationship, the first and second raw moments of the IGAUSS distribution are:

\[  E[X] = \theta , \quad E[X^2] = \theta ^2 \left(1 + \frac{1}{\alpha }\right)  \]

Solving $E[X] = m_1$ and $E[X^2] = m_2$ yields the following initial values:

\[  \hat{\theta } = m_1, \quad \hat{\alpha } = \frac{m_1^2}{m_2 - m_1^2}  \]

If $m_2 - m_1^2 < \epsilon $ (almost zero sample variance), then the parameters are initialized as follows:

\[  \hat{\theta } = m_1, \quad \hat{\alpha } = 1  \]
LOGN

The parameters are initialized by using the method of moments. The $k$th raw moment of the lognormal distribution is:

\[  E[X^ k] = \exp \left(k \mu + \frac{k^2 \sigma ^2}{2}\right)  \]

Solving $E[X] = m_1$ and $E[X^2] = m_2$ yields the following initial values:

\[  \hat{\mu } = 2 \log (m_1) - \frac{\log (m_2)}{2}, \quad \hat{\sigma } = \sqrt {\log (m_2) - 2 \log (m_1)}  \]
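For the lognormal distribution these moment equations invert exactly, so the initializer recovers $(\mu , \sigma )$ from exact moments. A Python sketch (illustrative only; the function name is hypothetical):

```python
import math

def init_logn(m1, m2):
    """Lognormal method-of-moments initial values from the first
    two raw moments."""
    mu = 2 * math.log(m1) - math.log(m2) / 2
    sigma = math.sqrt(math.log(m2) - 2 * math.log(m1))
    return mu, sigma

# Round trip: build exact moments from known (mu, sigma), then recover them.
mu0, sigma0 = 1.2, 0.7
m1 = math.exp(mu0 + sigma0 ** 2 / 2)       # E[X]
m2 = math.exp(2 * mu0 + 2 * sigma0 ** 2)   # E[X^2]
mu_hat, sigma_hat = init_logn(m1, m2)
```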
PARETO

The parameters are initialized by using the method of moments. The $k$th raw moment of the Pareto distribution is:

\[  E[X^ k] = \frac{\theta ^ k \Gamma (k + 1) \Gamma (\alpha - k)}{\Gamma (\alpha )}, \quad -1 < k < \alpha  \]

Solving $E[X] = m_1$ and $E[X^2] = m_2$ yields the following initial values:

\[  \hat{\theta } = \frac{m_1 m_2}{m_2 - 2 m_1^2}, \quad \hat{\alpha } = \frac{2(m_2 - m_1^2)}{m_2 - 2 m_1^2}  \]

If $m_2 - m_1^2 < \epsilon $ (almost zero sample variance) or $m_2 - 2 m_1^2 < \epsilon $, then the parameters are initialized as follows:

\[  \hat{\theta } = m_1, \quad \hat{\alpha } = 2  \]
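In the feasible case, the Pareto moment equations also invert exactly: plugging in the population moments $E[X] = \theta /(\alpha -1)$ and $E[X^2] = 2\theta ^2/((\alpha -1)(\alpha -2))$ (which follow from the general moment formula with $k=1,2$, valid for $\alpha > 2$) recovers $\theta $ and $\alpha $. A Python sketch (illustrative only; the function name is hypothetical):

```python
def init_pareto(m1, m2):
    """Pareto method-of-moments initial values (feasible case,
    m2 - 2*m1**2 > 0)."""
    theta = m1 * m2 / (m2 - 2 * m1 ** 2)
    alpha = 2 * (m2 - m1 ** 2) / (m2 - 2 * m1 ** 2)
    return theta, alpha

# Round trip from the exact moments of a Pareto(theta0, alpha0), alpha0 > 2.
theta0, alpha0 = 4.0, 3.5
m1 = theta0 / (alpha0 - 1)
m2 = 2 * theta0 ** 2 / ((alpha0 - 1) * (alpha0 - 2))
theta_hat, alpha_hat = init_pareto(m1, m2)
```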
TWEEDIE

The parameter $p$ is initialized by assuming that the sample is generated from a gamma distribution with shape parameter $\alpha $ and by computing $\hat{p} = \frac{\hat{\alpha }+2}{\hat{\alpha }+1}$. The initial value $\hat{\alpha }$ is obtained by using the method previously described for the GAMMA distribution. The parameter $\mu $ is the mean of the distribution. Hence, it is initialized to the sample mean as

\[  \hat{\mu } = m_1  \]

The variance of a Tweedie distribution is equal to $\phi \mu ^ p$. Thus, the sample variance is used to initialize the value of $\phi $ as

\[  \hat{\phi } = \frac{m_2 - m_1^2}{\hat{\mu }^{\hat{p}}}  \]
STWEEDIE

STWEEDIE is a compound Poisson-gamma mixture distribution with mean $\mu = \lambda \theta \alpha $, where $\alpha $ is the shape parameter of the gamma random variables in the mixture and the parameter $p$ is determined solely by $\alpha $. First, the parameter $p$ is initialized by assuming that the sample is generated from a gamma distribution with shape parameter $\alpha $ and by computing $\hat{p} = \frac{\hat{\alpha }+2}{\hat{\alpha }+1}$. The initial value $\hat{\alpha }$ is obtained by using the method previously described for the GAMMA distribution. As with the TWEEDIE distribution, the sample mean and variance are used to compute the values $\hat{\mu }$ and $\hat{\phi }$ as

\[  \hat{\mu } = m_1, \quad \hat{\phi } = \frac{m_2 - m_1^2}{\hat{\mu }^{\hat{p}}}  \]

Based on the relationship between the parameters of TWEEDIE and STWEEDIE distributions described in the section Tweedie Distributions, values of $\theta $ and $\lambda $ are initialized as

\[  \hat{\theta } = \hat{\phi } (\hat{p}-1) \hat{\mu }^{\hat{p}-1}, \quad \hat{\lambda } = \frac{\hat{\mu }}{\hat{\theta }\hat{\alpha }}  \]
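The full STWEEDIE initialization chain, from a raw sample to $(\hat{\theta }, \hat{\lambda }, \hat{p})$, can be sketched in Python as follows (illustrative only; the function name is hypothetical). Note that $(2-\hat{p})/(\hat{p}-1)$ recovers the gamma shape $\hat{\alpha }$, so $\hat{\lambda }\hat{\theta }\hat{\alpha }$ reproduces the sample mean by construction:

```python
import math

def init_stweedie(ys):
    """STWEEDIE initial values (theta, lam, p) from a sample, following
    the chain: gamma alpha -> p, moments -> mu, phi -> theta, lambda."""
    n = len(ys)
    m1 = sum(ys) / n
    m2 = sum(y * y for y in ys) / n
    # p from the approximate gamma ML estimate of alpha
    d = math.log(m1) - sum(math.log(y) for y in ys) / n
    alpha = (3 - d + math.sqrt((d - 3) ** 2 + 24 * d)) / (12 * d)
    p = (alpha + 2) / (alpha + 1)
    # mu and phi as for TWEEDIE, then map to theta and lambda
    mu, phi = m1, (m2 - m1 ** 2) / m1 ** p
    theta = phi * (p - 1) * mu ** (p - 1)
    lam = mu / (theta * alpha)
    return theta, lam, p
```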
WEIBULL

The parameters are initialized by using the percentile matching method. Let $q1$ and $q3$ denote the estimates of the $25$th and $75$th percentiles, respectively. Using the formula for the CDF of Weibull distribution, they can be written as

\[  1 - \exp (-(q1/\theta )^\tau ) = 0.25, \quad 1 - \exp (-(q3/\theta )^\tau ) = 0.75  \]

Simplifying and solving these two equations yields the following initial values:

\[  \hat{\theta } = \exp \left(\frac{r \log (q1) - \log (q3)}{r - 1}\right), \quad \hat{\tau } = \frac{\log (\log (4))}{\log (q3) - \log (\hat{\theta })}  \]

where $r = \log (\log (4))/\log (\log (4/3))$. These initial values agree with those suggested in Klugman, Panjer, and Willmot (1998).
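These estimates can be checked by confirming that the implied Weibull CDF passes through both matched percentiles. A Python sketch (illustrative only; the function name is hypothetical):

```python
import math

def init_weibull(q1, q3):
    """Percentile-matching initial values (theta, tau) for the Weibull
    distribution from the 25th and 75th percentile estimates."""
    r = math.log(math.log(4)) / math.log(math.log(4 / 3))
    theta = math.exp((r * math.log(q1) - math.log(q3)) / (r - 1))
    tau = math.log(math.log(4)) / (math.log(q3) - math.log(theta))
    return theta, tau

theta, tau = init_weibull(1.0, 3.0)

def cdf(x):
    """Weibull CDF with the fitted initial parameters."""
    return 1 - math.exp(-((x / theta) ** tau))
```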

A summary of the initial values of all the parameters for all the predefined distributions is given in Table 5.3. The table also provides the names of the parameters to use in the INIT= option in the DIST statement if you want to provide a different initial value.

Table 5.3: Parameter Initialization for Predefined Distributions

BURR
$\theta $ (INIT name: theta): $\sqrt {\frac{m_2 m_3}{2 m_3 - 3 m_1 m_2}}$
$\alpha $ (INIT name: alpha): $1 + \frac{m_3}{2 m_3 - 3 m_1 m_2}$
$\gamma $ (INIT name: gamma): $2$

EXP
$\theta $ (INIT name: theta): $m_1$

GAMMA
$\theta $ (INIT name: theta): $m_1/\alpha $
$\alpha $ (INIT name: alpha): $\frac{3 - d + \sqrt {(d-3)^2 + 24 d}}{12 d}$

GPD
$\theta $ (INIT name: theta): $m_1 m_2/(2 (m_2 - m_1^2))$
$\xi $ (INIT name: xi): $(m_2 - 2 m_1^2)/(2 (m_2 - m_1^2))$

IGAUSS
$\theta $ (INIT name: theta): $m_1$
$\alpha $ (INIT name: alpha): $m_1^2/(m_2 - m_1^2)$

LOGN
$\mu $ (INIT name: mu): $2 \log (m_1) - \log (m_2)/2$
$\sigma $ (INIT name: sigma): $\sqrt {\log (m_2) - 2 \log (m_1)}$

PARETO
$\theta $ (INIT name: theta): $m_1 m_2/(m_2 - 2 m_1^2)$
$\alpha $ (INIT name: alpha): $2(m_2 - m_1^2)/(m_2 - 2 m_1^2)$

TWEEDIE
$\mu $ (INIT name: mu): $m_1$
$\phi $ (INIT name: phi): $(m_2 - m_1^2)/m_1^ p$
$p$ (INIT name: p): $(\alpha +2)/(\alpha +1)$, where $\alpha = \frac{3 - d + \sqrt {(d-3)^2 + 24 d}}{12 d}$

STWEEDIE
$\theta $ (INIT name: theta): $(m_2 - m_1^2)(p-1)/m_1$
$\lambda $ (INIT name: lambda): $m_1^2/(\alpha (m_2 - m_1^2)(p-1))$
$p$ (INIT name: p): $(\alpha +2)/(\alpha +1)$, where $\alpha = \frac{3 - d + \sqrt {(d-3)^2 + 24 d}}{12 d}$

WEIBULL
$\theta $ (INIT name: theta): $\exp \left(\frac{r \log (q1) - \log (q3)}{r - 1}\right)$
$\tau $ (INIT name: tau): $\log (\log (4))/(\log (q3) - \log (\hat{\theta }))$

Notes:

$\bullet \quad $ $m_ k$ denotes the $k$th raw moment

$\bullet \quad $ $d = \log (m_1) - (\sum \log (y_ i))/n$

$\bullet \quad $ $q1$ and $q3$ denote the $25$th and $75$th percentiles, respectively

$\bullet \quad $ $r = \log (\log (4))/\log (\log (4/3))$