The HPSEVERITY procedure is a high-performance version of the SEVERITY procedure in SAS/ETS^{®} software. Like the SEVERITY procedure, the HPSEVERITY procedure fits models for statistical
distributions of the severity (magnitude) of events.

Unlike the SEVERITY procedure, which can be run only on an individual workstation, the HPSEVERITY procedure takes advantage of a computing environment that enables it to distribute the optimization task among one or more nodes. In addition, each node can use one or more threads to carry out the optimization on its subset of the data. When several nodes are used, with each node using several threads to carry out its part of the work, the result is a highly parallel computation that provides a dramatic gain in performance.

Some examples of events that are typically modeled using PROC HPSEVERITY are insurance loss payments, operational losses at a bank, and intermittent sales of products. The magnitude of events can be modeled as a random variable that has a continuous parametric probability distribution.

The HPSEVERITY procedure is delivered with a set of predefined models for several commonly used distributions. These include the Burr, exponential, gamma, generalized Pareto, inverse Gaussian (Wald), lognormal, Pareto, Tweedie, and Weibull distributions. However, you can extend this set to fit any continuous parametric distribution by using the FCMP procedure to specify a set of functions and subroutines that define the distribution.
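
As a minimal sketch of this extension mechanism, the following PROC FCMP step defines the PDF and CDF of a simple exponential-type distribution under the illustrative name MYDIST (the library name Work.MyFuncs and the distribution name are assumptions for this example, not part of the delivered set):

```sas
/* Define MYDIST_PDF and MYDIST_CDF, the minimum pair of functions
   that PROC HPSEVERITY needs to recognize a distribution named MYDIST.
   The library and function names here are illustrative. */
proc fcmp outlib=work.myfuncs.models;
   function mydist_pdf(x, lambda);
      /* PDF of an exponential distribution with rate parameter lambda */
      return (lambda * exp(-lambda * x));
   endsub;
   function mydist_cdf(x, lambda);
      /* Corresponding CDF */
      return (1 - exp(-lambda * x));
   endsub;
run;

/* Make the compiled functions visible to PROC HPSEVERITY */
options cmplib=work.myfuncs;
```

Additional functions and subroutines (for example, for parameter initialization or bounds) can be supplied in the same library to refine how the distribution is fitted.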

The HPSEVERITY procedure uses the maximum likelihood method to fit specified distributions. You can also estimate the parameters by minimizing your own objective function.

PROC HPSEVERITY can fit multiple distributions at the same time and identify the best distribution according to a selection criterion that you specify. You can use seven different statistics of fit as selection criteria. They are log likelihood, Akaike's information criterion (AIC), corrected Akaike's information criterion (AICC), Schwarz Bayesian information criterion (BIC), Kolmogorov-Smirnov statistic (KS), Anderson-Darling statistic (AD), and Cramér–von Mises statistic (CvM).
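
The following step is a minimal sketch of fitting several predefined distributions at once; the data set name Losses and the loss variable LossAmount are assumptions for illustration:

```sas
/* Fit several candidate distributions to a hypothetical loss data set
   and rank them by the corrected Akaike's information criterion. */
proc hpseverity data=losses criterion=aicc;
   loss lossAmount;
   dist burr exp gamma logn pareto weibull;
run;
```

The CRITERION= option names the statistic of fit used to select the best distribution among those listed in the DIST statement.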

You can specify exogenous variables for fitting a model that has a scale parameter. The exogenous variables are modeled such that their linear combination affects the scale parameter via a specified link function. The regression coefficients that are associated with the variables in the linear combination are estimated along with the parameters of the distribution.
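
A minimal sketch of such a scale-regression model follows; the data set and variable names (Losses, LossAmount, Age, Deductible) are hypothetical:

```sas
/* Model the scale parameter of each candidate distribution as a
   function of the regressors listed in the SCALEMODEL statement. */
proc hpseverity data=losses criterion=bic;
   loss lossAmount;
   scalemodel age deductible;
   dist logn weibull;
run;
```

The regression coefficients for Age and Deductible are estimated jointly with the distribution parameters.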

For further details, see the *SAS/ETS^{®} User's Guide: High-Performance Procedures*.

- A Simple Example of Fitting Predefined Distributions
- An Example with Left-Truncation and Right-Censoring
- An Example of Modeling Regression Effects

- Example 21.1: Defining a Model for Gaussian Distribution
- Example 21.2: Defining a Model for the Gaussian Distribution with a Scale Parameter
- Example 21.3: Defining a Model for Mixed-Tail Distributions
- Example 21.4: Fitting a Scaled Tweedie Model with Regressors
- Example 21.5: Fitting Distributions to Interval-Censored Data
- Example 21.6: Benefits of Distributed and Multithreaded Computing
- Example 21.7: Estimating Parameters Using the Cramér–von Mises Estimator
- Example 21.8: Defining a Finite Mixture Model That Has a Scale Parameter
- Example 21.9: Predicting Mean and Value-at-Risk by Using Scoring Functions
- Example 21.10: Scale Regression with Rich Regression Effects