Analyzing latent constructs such as job satisfaction, motor ability, sensory recognition, or customer satisfaction requires instruments to accurately measure the constructs. Interrelated items can be summed to obtain an overall score for each participant. Cronbach’s coefficient alpha estimates the reliability of this type of scale by determining the internal consistency of the test or the average correlation of items within the test (Cronbach, 1951).
When a value is recorded, the observed value contains some degree of measurement error. Two sets of measurements on the same variable for the same individual might not have identical values. However, repeated measurements for a series of individuals will show some consistency. Reliability measures internal consistency from one set of measurements to another. The observed value $Y$ is divided into two components, a true value $T$ and a measurement error $E$. The measurement error is assumed to be independent of the true value; that is,

\[ Y = T + E, \qquad \mathrm{Cov}(T, E) = 0 \]
The reliability coefficient of a measurement test is defined as the squared correlation between the observed value $Y$ and the true value $T$; that is,

\[ \rho^2(Y, T) \;=\; \frac{\mathrm{Cov}(Y, T)^2}{\mathrm{Var}(Y)\,\mathrm{Var}(T)} \;=\; \frac{\mathrm{Var}(T)^2}{\mathrm{Var}(Y)\,\mathrm{Var}(T)} \;=\; \frac{\mathrm{Var}(T)}{\mathrm{Var}(Y)} \]
which is the proportion of the observed variance due to true differences among individuals in the sample. If $Y$ is the sum of several observed variables measuring the same feature, you can estimate $\mathrm{Var}(T)$. Cronbach's coefficient alpha, based on a lower bound for $\mathrm{Var}(T)$, is an estimate of the reliability coefficient.
Suppose $p$ variables are used with $Y_j = T_j + E_j$ for $j = 1, 2, \ldots, p$, where $Y_j$ is the observed value, $T_j$ is the true value, and $E_j$ is the measurement error. The measurement errors ($E_j$) are independent of the true values ($T_j$) and are also independent of each other. Let $Y_0 = \sum_{j=1}^{p} Y_j$ be the total observed score and let $T_0 = \sum_{j=1}^{p} T_j$ be the total true score. Because

\[ (p - 1) \sum_{j=1}^{p} \mathrm{Var}(T_j) \;\ge\; \sum_{j \ne k} \mathrm{Cov}(T_j, T_k) \]
a lower bound for $\mathrm{Var}(T_0)$ is given by

\[ \frac{p}{p - 1} \sum_{j \ne k} \mathrm{Cov}(T_j, T_k) \]
With $\mathrm{Cov}(Y_j, Y_k) = \mathrm{Cov}(T_j, T_k)$ for $j \ne k$, a lower bound for the reliability coefficient, $\mathrm{Var}(T_0)/\mathrm{Var}(Y_0)$, is then given by Cronbach's coefficient alpha:

\[ \alpha \;=\; \frac{p}{p - 1} \, \frac{\sum_{j \ne k} \mathrm{Cov}(Y_j, Y_k)}{\mathrm{Var}(Y_0)} \;=\; \frac{p}{p - 1} \left( 1 \;-\; \frac{\sum_{j=1}^{p} \mathrm{Var}(Y_j)}{\mathrm{Var}(Y_0)} \right) \]
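As a minimal numeric sketch of this formula, the following SAS/IML step computes the raw coefficient alpha for a small set of hypothetical item scores. The matrix name y, its values, and the intermediate variable names are invented for illustration, and the step assumes that SAS/IML and its VAR function (SAS/IML 9.22 or later) are available; PROC CORR computes the same quantity directly, as described later in this section.

   proc iml;
      /* hypothetical item scores: 6 participants, 3 items */
      y = {1 2 2,
           2 3 3,
           3 3 4,
           4 4 4,
           4 5 5,
           5 5 5};
      p      = ncol(y);          /* number of items p                     */
      itemV  = var(y);           /* Var(Y_j): variance of each item       */
      totalV = var(y[,+]);       /* Var(Y_0): variance of the total score */
      alpha  = (p/(p-1)) * (1 - sum(itemV)/totalV);
      print alpha;
   quit;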
If the variances of the items vary widely, you can standardize the items to a standard deviation of 1 before computing the coefficient alpha. If the variables are dichotomous (0,1), the coefficient alpha is equivalent to the Kuder-Richardson 20 (KR-20) reliability measure.
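When each item is standardized to a standard deviation of 1, the item covariances equal the item correlations, and the formula above reduces to $\alpha_{\text{std}} = p\bar{r} / \left(1 + (p-1)\bar{r}\right)$, where $\bar{r}$ is the average inter-item correlation. The following SAS/IML sketch uses the same invented data and assumptions as the previous example to illustrate this standardized form; PROC CORR reports a standardized alpha as well, although its internal computation is not necessarily this code.

   proc iml;
      /* same hypothetical item scores as above */
      y = {1 2 2, 2 3 3, 3 3 4, 4 4 4, 4 5 5, 5 5 5};
      p        = ncol(y);
      r        = corr(y);                    /* inter-item correlation matrix        */
      rbar     = (sum(r) - p) / (p*(p-1));   /* average off-diagonal correlation     */
      alphaStd = p*rbar / (1 + (p-1)*rbar);  /* alpha computed on standardized items */
      print alphaStd;
   quit;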
When the correlation between each pair of variables is 1, the coefficient alpha has a maximum value of 1. With negative correlations between some variables, the coefficient alpha can have a value less than zero. The larger the overall alpha coefficient, the more likely it is that the items contribute to a reliable scale. Nunnally and Bernstein (1994) suggest 0.70 as an acceptable reliability coefficient; smaller reliability coefficients are generally seen as inadequate, although this threshold varies by discipline.
To determine how each item reflects the reliability of the scale, you calculate a coefficient alpha after deleting each variable independently from the scale. Cronbach's coefficient alpha from all variables except the $k$th variable is given by

\[ \alpha_k \;=\; \frac{p - 1}{p - 2} \left( 1 \;-\; \frac{\sum_{j \ne k} \mathrm{Var}(Y_j)}{\mathrm{Var}\!\left( \sum_{j \ne k} Y_j \right)} \right) \]
If the reliability coefficient increases after an item is deleted from the scale, you can assume that the item is not highly correlated with the other items in the scale. Conversely, if the reliability coefficient decreases, you can assume that the item is highly correlated with the other items in the scale. Refer to Yu (2001) for more information about how to interpret Cronbach's coefficient alpha.
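A sketch of this item-level diagnostic, again using the invented data and SAS/IML assumptions from the earlier examples, loops over the items and applies the formula above with each item removed in turn. PROC CORR reports these values automatically when you specify the ALPHA option.

   proc iml;
      /* same hypothetical item scores as above */
      y = {1 2 2, 2 3 3, 3 3 4, 4 4 4, 4 5 5, 5 5 5};
      p = ncol(y);
      alphaDel = j(p, 1, .);                 /* alpha with the kth item deleted */
      do k = 1 to p;
         idx = setdif(1:p, k);               /* indices of the remaining items  */
         yk  = y[, idx];
         alphaDel[k] = ((p-1)/(p-2)) * (1 - sum(var(yk))/var(yk[,+]));
      end;
      print alphaDel;
   quit;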
Listwise deletion of observations with missing values is necessary to correctly calculate Cronbach’s coefficient alpha. PROC CORR does not automatically use listwise deletion if you specify the ALPHA option. Therefore, you should use the NOMISS option if the data set contains missing values. Otherwise, PROC CORR prints a warning message indicating the need to use the NOMISS option with the ALPHA option.
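For example, a step of the following form (the data set name scores and the variable names item1-item10 are placeholders) computes Cronbach's coefficient alpha with listwise deletion of observations that contain missing values:

   proc corr data=scores alpha nomiss;
      var item1-item10;
   run;

Omitting the NOMISS option while specifying ALPHA produces the warning message described above.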