HISTOGRAM Statement: CAPABILITY Procedure

Printed Output

If you request a fitted parametric distribution, printed output summarizing the fit is produced in addition to the graphical display. Figure 5.16 shows the printed output for a fitted lognormal distribution requested by the following statements:

proc capability data=Hang;
   spec target=14 lsl=13.95 usl=14.05;
   histogram Width / lognormal(indices midpercents);
run;

Figure 5.16: Sample Summary of Fitted Distribution

The CAPABILITY Procedure
Fitted Lognormal Distribution for Width (Width in cm)

Parameters for Lognormal Distribution
Parameter   Symbol   Estimate
Threshold   Theta    0
Scale       Zeta     2.638966
Shape       Sigma    0.001497
Mean                 13.99873
Std Dev              0.020952

Goodness-of-Fit Tests for Lognormal Distribution
Test Statistic DF p Value
Kolmogorov-Smirnov D 0.09148348   Pr > D >0.150
Cramer-von Mises W-Sq 0.05040427   Pr > W-Sq >0.500
Anderson-Darling A-Sq 0.33476355   Pr > A-Sq >0.500
Chi-Square Chi-Sq 2.87938822 3 Pr > Chi-Sq 0.411

Percent Outside Specifications for Lognormal Distribution
Lower Limit Upper Limit
LSL 13.950000 USL 14.050000
Obs Pct < LSL 2.000000 Obs Pct > USL 0
Est Pct < LSL 0.992170 Est Pct > USL 0.728125

Capability Indices
Based on Lognormal
Distribution
Cp 0.795463
CPL 0.776822
CPU 0.814021
Cpk 0.776822
Cpm 0.792237

Histogram Bin Percents for Lognormal Distribution
Bin Midpoint   Observed Percent   Estimated Percent
13.95           4.000              2.963
13.97          18.000             15.354
13.99          26.000             33.872
14.01          38.000             32.055
14.03          10.000             13.050
14.05           4.000              2.281

Quantiles for Lognormal Distribution
Percent   Observed Quantile   Estimated Quantile
 1.0      13.9440             13.9501
 5.0      13.9656             13.9643
10.0      13.9710             13.9719
25.0      13.9860             13.9846
50.0      14.0018             13.9987
75.0      14.0129             14.0129
90.0      14.0218             14.0256
95.0      14.0241             14.0332
99.0      14.0470             14.0475


The summary is organized into the following parts:

  • Parameters

  • Chi-Square Goodness-of-Fit Test

  • EDF Goodness-of-Fit Tests

  • Specifications

  • Indices Using the Fitted Curve

  • Histogram Intervals

  • Quantiles

These parts are described in the sections that follow.

Parameters

This section lists the parameters for the fitted curve as well as the estimated mean and estimated standard deviation. See Formulas for Fitted Curves.

Chi-Square Goodness-of-Fit Test

The chi-square goodness-of-fit statistic for a fitted parametric distribution is computed as follows:

\[  \chi ^2 = \sum _{i=1}^{m} \frac{ ( O_{i} -E_{i} )^2 }{E_{i}}  \]

where

$O_{i} =$ observed value in the ith histogram interval
$E_{i} =$ expected value in the ith histogram interval
$m =$ number of histogram intervals
$p =$ number of estimated parameters

The degrees of freedom for the chi-square test is equal to $m-p-1$. You can save the observed and expected interval values in the OUTFIT= data set discussed in Output Data Sets.

Note that empty intervals are not combined, and the range of intervals used to compute $\chi ^2$ begins with the first interval containing observations and ends with the final interval containing observations.
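As a concrete illustration, the statistic and its degrees of freedom can be computed directly from the interval counts. The following is a minimal Python sketch; the bin counts shown are hypothetical, not the Figure 5.16 data.

```python
def chi_square_gof(observed, expected, n_params):
    """Chi-square goodness-of-fit statistic and its degrees of
    freedom for m histogram intervals and p estimated parameters."""
    chi_sq = sum((o - e) ** 2 / e for o, e in zip(observed, expected))
    df = len(observed) - n_params - 1
    return chi_sq, df

# Hypothetical interval counts (not the Figure 5.16 data):
stat, df = chi_square_gof([10, 20, 15, 5], [12.0, 18.0, 14.0, 6.0], n_params=2)
# stat is about 0.794 with df = 4 - 2 - 1 = 1
```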

EDF Goodness-of-Fit Tests

When you fit a parametric distribution, the HISTOGRAM statement provides a series of goodness-of-fit tests based on the empirical distribution function (EDF). The EDF tests offer advantages over the chi-square goodness-of-fit test, including improved power and invariance with respect to the histogram midpoints. For a thorough discussion, refer to D’Agostino and Stephens (1986).

The empirical distribution function is defined for a set of n independent observations $X_1,\ldots ,X_ n$ with a common distribution function $F(x)$. Denote the observations ordered from smallest to largest as $X_{(1)},\ldots ,X_{(n)}$. The empirical distribution function, $F_ n(x)$, is defined as

\[  \begin{array}{lll} F_ n(x) = 0, &  x < X_{(1)} &  \\ F_ n(x) = \frac{i}{n}, &  X_{(i)} \leq x < X_{(i+1)}, &  i=1,\ldots ,n-1 \\ F_ n(x) = 1, &  X_{(n)} \leq x &  \end{array}  \]

Note that $F_ n(x)$ is a step function that takes a step of height $\frac{1}{n}$ at each observation. This function estimates the distribution function $F(x)$. At any value x, $F_ n(x)$ is the proportion of observations less than or equal to x, while $F(x)$ is the probability of an observation less than or equal to x. EDF statistics measure the discrepancy between $F_ n(x)$ and $F(x)$.
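The step-function definition translates directly into code (a minimal Python sketch):

```python
def edf(sample, x):
    """Empirical distribution function F_n(x): the proportion of
    observations less than or equal to x."""
    return sum(1 for xi in sample if xi <= x) / len(sample)

# F_n steps by 1/n at each observation:
# edf([1, 2, 3, 4], 0.5) is 0.0, edf([1, 2, 3, 4], 2.5) is 0.5,
# and edf([1, 2, 3, 4], 4) is 1.0
```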

The computational formulas for the EDF statistics make use of the probability integral transformation $U=F(X)$. If $F(X)$ is the distribution function of X, the random variable U is uniformly distributed between 0 and 1.

Given n observations $X_{(1)},\ldots ,X_{(n)}$, the values $U_{(i)}=F(X_{(i)})$ are computed by applying the transformation, as shown in the following sections.
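For example, for the three-parameter lognormal distribution summarized in Figure 5.16, the transformation evaluates the fitted CDF at each observation. The following Python sketch uses the threshold/scale/shape parameterization shown in the parameter table; it assumes the standard form $F(x) = \Phi((\log(x-\theta )-\zeta )/\sigma )$ for $x > \theta $.

```python
import math

def lognormal_u(x, theta, zeta, sigma):
    """U = F(X) for the three-parameter lognormal distribution with
    threshold theta, scale zeta, and shape sigma (requires x > theta)."""
    z = (math.log(x - theta) - zeta) / sigma
    # Standard normal CDF evaluated via the error function
    return 0.5 * (1.0 + math.erf(z / math.sqrt(2.0)))
```

At the fitted median, $x = \theta + e^{\zeta }$, the transformed value is 0.5.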

The HISTOGRAM statement provides three EDF tests:

  • Kolmogorov-Smirnov

  • Anderson-Darling

  • Cramér-von Mises

These tests are based on various measures of the discrepancy between the empirical distribution function $F_ n(x)$ and the proposed parametric cumulative distribution function $F(x)$.

The following sections provide formal definitions of the EDF statistics.

Kolmogorov-Smirnov Statistic

The Kolmogorov-Smirnov statistic (D) is defined as

\[  D = \mbox{sup}_ x|F_{n}(x)-F(x)|  \]

The Kolmogorov-Smirnov statistic belongs to the supremum class of EDF statistics. This class of statistics is based on the largest vertical difference between $F(x)$ and $F_ n(x)$.

The Kolmogorov-Smirnov statistic is computed as the maximum of $D^{+}$ and $D^{-}$, where $D^{+}$ is the largest vertical distance between the EDF and the distribution function when the EDF is greater than the distribution function, and $D^{-}$ is the largest vertical distance when the EDF is less than the distribution function.

\[  \begin{array}{lll} D^{+} &  = &  \max _{i}\left(\frac{i}{n} - U_{(i)}\right) \\ D^{-} &  = &  \max _{i}\left(U_{(i)} - \frac{i-1}{n}\right) \\ D &  = &  \max \left(D^{+},D^{-}\right) \end{array}  \]
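These computational formulas can be checked in a few lines of code (a minimal Python sketch operating on the transformed values $U_{(i)}$):

```python
def ks_statistic(u):
    """Kolmogorov-Smirnov D from the values U_(i) = F(X_(i))."""
    n = len(u)
    u = sorted(u)
    # D+ : EDF above the fitted CDF; D- : EDF below the fitted CDF
    d_plus = max((i + 1) / n - ui for i, ui in enumerate(u))
    d_minus = max(ui - i / n for i, ui in enumerate(u))
    return max(d_plus, d_minus)

# ks_statistic([0.1, 0.4, 0.6, 0.9]) evaluates to 0.15
```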
Anderson-Darling Statistic

The Anderson-Darling statistic and the Cramér-von Mises statistic belong to the quadratic class of EDF statistics. This class of statistics is based on the squared difference $\left(F_ n(x)- F(x)\right)^2$. Quadratic statistics have the following general form:

\[  Q = n \int _{-\infty }^{+\infty } \left(F_ n(x)-F(x)\right)^2 \psi (x) dF(x)  \]

The function $\psi (x)$ weights the squared difference $\left(F_ n(x)- F(x)\right)^2$.

The Anderson-Darling statistic ($A^2$) is defined as

\[  A^{2} = n\int _{-\infty }^{+\infty }\left(F_ n(x)-F(x)\right)^2 \left[F(x)\left(1-F(x)\right)\right]^{-1} dF(x)  \]

Here the weight function is $\psi (x) = \left[F(x)\left(1-F(x)\right)\right]^{-1}$.

The Anderson-Darling statistic is computed as

\[  A^2 = -n-\frac{1}{n}\sum _{i=1}^ n \left[(2i-1)\log U_{(i)} + (2n+1-2i) \log \left(1-U_{(i)}\right)\right]  \]
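The computational formula can be sketched as follows (a minimal Python example operating on the transformed values $U_{(i)}$, each strictly between 0 and 1):

```python
import math

def anderson_darling(u):
    """Anderson-Darling A^2 from the values U_(i) = F(X_(i)),
    each strictly between 0 and 1."""
    n = len(u)
    u = sorted(u)
    s = sum((2 * i - 1) * math.log(u[i - 1])
            + (2 * n + 1 - 2 * i) * math.log(1.0 - u[i - 1])
            for i in range(1, n + 1))
    return -n - s / n
```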
Cramér-von Mises Statistic

The Cramér-von Mises statistic ($W^2$) is defined as

\[  W^2 = n \int _{-\infty }^{+\infty } \left(F_{n}(x)-F(x)\right)^2 dF(x)  \]

Here the weight function is $ \psi (x) = 1$.

The Cramér-von Mises statistic is computed as

\[  W^2 = \sum _{i=1}^ n\left(U_{(i)}-\frac{2i-1}{2n}\right)^2 + \frac{1}{12n}  \]
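A minimal Python sketch of this computation, again in terms of the values $U_{(i)}$:

```python
def cramer_von_mises(u):
    """Cramer-von Mises W^2 from the values U_(i) = F(X_(i))."""
    n = len(u)
    u = sorted(u)
    return sum((u[i - 1] - (2 * i - 1) / (2 * n)) ** 2
               for i in range(1, n + 1)) + 1.0 / (12 * n)
```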

Probability Values for EDF Tests

Once the EDF test statistics are computed, the associated probability values (p-values) must be calculated.

For the Gumbel, inverse Gaussian, generalized Pareto, and Rayleigh distributions, the procedure computes p-values by resampling from the estimated distribution. It generates k random samples of size n, where k is specified by the EDFNSAMPLES= option and n is the number of observations in the original data. EDF test statistics are computed for each sample, and the p-value is the proportion of samples whose EDF statistic is greater than or equal to the statistic computed from the original data. You can use the EDFSEED= option to specify a seed value for generating the sample values.
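This resampling scheme is a parametric bootstrap. The following Python sketch uses a Kolmogorov-Smirnov test of an exponential fit purely as a stand-in for the distributions listed above, and it re-estimates the scale from each simulated sample (an assumption about the details; the source states only that EDF statistics are computed for each sample).

```python
import math
import random

def ks_from_u(u):
    # Kolmogorov-Smirnov D from the values U_(i) = F(X_(i))
    n = len(u)
    u = sorted(u)
    d_plus = max((i + 1) / n - ui for i, ui in enumerate(u))
    d_minus = max(ui - i / n for i, ui in enumerate(u))
    return max(d_plus, d_minus)

def resampling_p_value(data, k=1000, seed=1):
    """Monte Carlo p-value for a KS test of an exponential fit
    (scale estimated by the sample mean): the proportion of k
    simulated samples whose statistic meets or exceeds the
    observed statistic."""
    rng = random.Random(seed)
    n = len(data)
    scale = sum(data) / n
    d_obs = ks_from_u([1.0 - math.exp(-x / scale) for x in data])
    exceed = 0
    for _ in range(k):
        sample = [rng.expovariate(1.0 / scale) for _ in range(n)]
        s = sum(sample) / n  # re-estimate from each simulated sample
        d = ks_from_u([1.0 - math.exp(-x / s) for x in sample])
        if d >= d_obs:
            exceed += 1
    return exceed / k
```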

For the beta, exponential, gamma, lognormal, normal, power function, and Weibull distributions, the CAPABILITY procedure uses internal tables of probability levels similar to those given by D’Agostino and Stephens (1986). If the value is between two probability levels, then linear interpolation is used to estimate the probability value. The probability value depends upon the parameters that are known and the parameters that are estimated for the distribution you are fitting. Table 5.23 summarizes different combinations of estimated parameters for which EDF tests are available.

Table 5.23: Availability of EDF Tests

Distribution   Threshold          Scale               Shape                         Tests Available
Beta           $\theta $ known    $\sigma $ known     $\alpha , \beta $ known       all
               $\theta $ known    $\sigma $ known     $\alpha , \beta <5$ unknown   all
Exponential    $\theta $ known    $\sigma $ known                                   all
               $\theta $ known    $\sigma $ unknown                                 all
               $\theta $ unknown  $\sigma $ known                                   all
               $\theta $ unknown  $\sigma $ unknown                                 all
Gamma          $\theta $ known    $\sigma $ known     $\alpha $ known               all
               $\theta $ known    $\sigma $ unknown   $\alpha $ known               all
               $\theta $ known    $\sigma $ known     $\alpha $ unknown             all
               $\theta $ known    $\sigma $ unknown   $\alpha >1$ unknown           all
               $\theta $ unknown  $\sigma $ known     $\alpha >1$ known             all
               $\theta $ unknown  $\sigma $ unknown   $\alpha >1$ known             all
               $\theta $ unknown  $\sigma $ known     $\alpha >1$ unknown           all
               $\theta $ unknown  $\sigma $ unknown   $\alpha >1$ unknown           all
Lognormal      $\theta $ known    $\zeta $ known      $\sigma $ known               all
               $\theta $ known    $\zeta $ known      $\sigma $ unknown             $A^2$ and $W^2$
               $\theta $ known    $\zeta $ unknown    $\sigma $ known               $A^2$ and $W^2$
               $\theta $ known    $\zeta $ unknown    $\sigma $ unknown             all
               $\theta $ unknown  $\zeta $ known      $\sigma <3$ known             all
               $\theta $ unknown  $\zeta $ known      $\sigma <3$ unknown           all
               $\theta $ unknown  $\zeta $ unknown    $\sigma <3$ known             all
               $\theta $ unknown  $\zeta $ unknown    $\sigma <3$ unknown           all
Normal         $\theta $ known    $\sigma $ known                                   all
               $\theta $ known    $\sigma $ unknown                                 $A^2$ and $W^2$
               $\theta $ unknown  $\sigma $ known                                   $A^2$ and $W^2$
               $\theta $ unknown  $\sigma $ unknown                                 all
Weibull        $\theta $ known    $\sigma $ known     $c$ known                     all
               $\theta $ known    $\sigma $ unknown   $c$ known                     $A^2$ and $W^2$
               $\theta $ known    $\sigma $ known     $c$ unknown                   $A^2$ and $W^2$
               $\theta $ known    $\sigma $ unknown   $c$ unknown                   $A^2$ and $W^2$
               $\theta $ unknown  $\sigma $ known     $c>2$ known                   all
               $\theta $ unknown  $\sigma $ unknown   $c>2$ known                   all
               $\theta $ unknown  $\sigma $ known     $c>2$ unknown                 all
               $\theta $ unknown  $\sigma $ unknown   $c>2$ unknown                 all

Specifications

This section is included in the summary only if you provide specification limits, and it tabulates the limits as well as the observed percentages and estimated percentages outside the limits.

The estimated percentages are computed only if a fitted distribution is requested, and each is the probability, under the fitted distribution, that an observation falls below the lower limit or above the upper limit. The observed percentages are the percentages of observations actually outside the specification limits.
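The estimated percentages follow directly from the fitted cumulative distribution function. The following Python sketch uses a normal CDF with a hypothetical mean and standard deviation as a stand-in for whatever distribution was fitted.

```python
import math

def normal_cdf(x, mu, sigma):
    # Normal CDF evaluated via the error function
    return 0.5 * (1.0 + math.erf((x - mu) / (sigma * math.sqrt(2.0))))

def pct_outside(cdf, lsl, usl):
    """Estimated percentages below LSL and above USL under a fitted
    distribution, given its CDF as a callable."""
    return 100.0 * cdf(lsl), 100.0 * (1.0 - cdf(usl))

# Hypothetical fit: normal with mean 14 and standard deviation 0.02
below, above = pct_outside(lambda x: normal_cdf(x, 14.0, 0.02), 13.95, 14.05)
# each is about 0.62 percent (the limits sit 2.5 standard deviations out)
```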

Indices Using Fitted Curves

This section is included in the summary only if you specify the INDICES option in parentheses after a distribution option, as in the statements that produce Figure 5.16. Standard process capability indices, such as $C_{p}$ and $C_{pk}$, are not appropriate if the data are not normally distributed. The INDICES option computes generalizations of the standard indices by using the fact that for the normal distribution, $3\sigma $ is both the distance from the lower 0.135 percentile to the median (or mean) and the distance from the median (or mean) to the upper 99.865 percentile. These percentiles are estimated from the fitted distribution, and the appropriate percentile-to-median distances are substituted for $3\sigma $ in the standard formulas.

Writing T for the target, LSL and USL for the lower and upper specification limits, and $P_{\alpha }$ for the $100\alpha $th percentile, the generalized capability indices are as follows:

\[  CPL = \frac{P_{0.5} - \mbox{LSL} }{P_{0.5}-P_{0.00135}}  \]
\[  CPU = \frac{\mbox{USL} - P_{0.5} }{P_{0.99865}-P_{0.5}}  \]
\[  C_ p = \frac{\mbox{USL} - \mbox{LSL}}{P_{0.99865}-P_{0.00135}}  \]
\[  C_{pk} = \mbox{min}\left(\frac{P_{0.5} - \mbox{LSL}}{P_{0.5}-P_{0.00135}},\frac{\mbox{USL} - P_{0.5}}{P_{0.99865}-P_{0.5}}\right)  \]
\[  K = 2 \times \frac{\left|\frac{1}{2}(\mbox{USL}+\mbox{LSL}) - P_{0.5}\right|}{\mbox{USL} - \mbox{LSL} } \]
\[  C_{pm} = \frac{\mbox{min} \left( \frac{T-\mbox{LSL}}{P_{0.5}-P_{0.00135}}, \frac{\mbox{USL}-T}{P_{0.99865}-P_{0.5}}\right)}{\sqrt {1+\left(\frac{\mu - T}{\sigma }\right)^{2}}} \]

If the data are normally distributed, these formulas reduce to the formulas for the standard capability indices, which are given in the section Standard Capability Indices.
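The percentile-based formulas can be sketched in a few lines. The following Python example covers $C_{p}$, CPL, CPU, and $C_{pk}$; the percentile arguments would come from the fitted distribution.

```python
def generalized_indices(p_lo, p_med, p_hi, lsl, usl):
    """Generalized capability indices from fitted-distribution
    percentiles: p_lo = P_0.00135, p_med = P_0.5, p_hi = P_0.99865."""
    cpl = (p_med - lsl) / (p_med - p_lo)
    cpu = (usl - p_med) / (p_hi - p_med)
    cp = (usl - lsl) / (p_hi - p_lo)
    cpk = min(cpl, cpu)
    return {"Cp": cp, "CPL": cpl, "CPU": cpu, "Cpk": cpk}
```

As a sanity check, substituting the normal percentiles $P_{0.00135} = \mu - 3\sigma $, $P_{0.5} = \mu $, and $P_{0.99865} = \mu + 3\sigma $ recovers the standard indices; for example, with specification limits at $\mu \pm 3\sigma $, every index equals 1.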

The following guidelines apply to the use of generalized capability indices requested with the INDICES option:

  • When you choose the family of parametric distributions for the fitted curve, consider whether an appropriate family can be derived from assumptions about the process.

  • Whenever possible, examine the data distribution with a histogram, probability plot, or quantile-quantile plot.

  • Apply goodness-of-fit tests to assess how well the parametric distribution models the data.

  • Consider whether a generalized index has a meaningful practical interpretation in your application.

At the time of this writing, there is ongoing research concerning the application of generalized capability indices, and it is important to note that other approaches can be used with nonnormal data:

  • Transform the data to normality, then compute and report standard capability indices on the transformed scale.

  • Report the proportion of nonconforming output estimated from the fitted distribution.

  • If it is not possible to adequately model the data distribution with a parametric density, smooth the data distribution with a kernel density estimate and simply report the proportion of nonconforming output.

Refer to Rodriguez and Bynum (1992) for additional discussion.

Histogram Intervals

This section is included in the summary only if you specify the MIDPERCENTS option in parentheses after the distribution option, as in the statements that produce Figure 5.16. The table lists the interval midpoints along with the observed and estimated percentages of observations that lie in each interval; the estimated percentages are based on the fitted distribution. See the entry for the MIDPERCENTS option.

Quantiles

This table lists observed and estimated quantiles. You can use the PERCENTS= option to specify the list of quantiles that appear in this table. The list in Figure 5.16 is the default list. See the entry for the PERCENTS= option.