The %BOOT macro does elementary nonparametric bootstrap analyses for simple random samples, computing approximate standard errors, bias-corrected estimates, and confidence intervals assuming a normal sampling distribution. Also, for regression models, the %BOOT macro can resample either observations or residuals.
The %BOOTCI macro computes several varieties of confidence intervals that are suitable for sampling distributions that are not normal.
%inc "<location of your file containing the JACK and BOOT macros>";
Following this statement, you may call the %JACK and %BOOT macros.
To use the %JACK or %BOOT macros, you must write a macro called %ANALYZE to do the data analysis that you want to bootstrap. The %ANALYZE macro must have two arguments:
DATA=   the name of the input data set to analyze
OUT=    the name of the output data set containing the statistics for which
        you want to compute bootstrap distributions
The %ANALYZE macro is run once by the %JACK or %BOOT macros to analyze the original data set. Then the resampled data sets are generated, and the %JACK or %BOOT macros run the %ANALYZE macro again to analyze each resample. There are two ways to analyze the resamples:
If you don't do anything special, a macro loop will be used. A macro loop takes much more computer time than BY processing but requires less disk space. It is usually better to use BY processing unless you have run out of disk space.
To use BY processing for the resamples, you must write the %ANALYZE macro to use BY processing. Instead of an ordinary BY statement, however, the %ANALYZE macro must use the %BYSTMT macro, which generates an appropriate BY statement automatically: for the analysis of the resamples, %BYSTMT generates a BY statement, while for the analysis of the original data it produces an empty BY statement, which has the same effect as no BY statement at all.
If the %ANALYZE macro uses the %BYSTMT macro, the %JACK and %BOOT macros create a huge data set (actually a view) containing all of the resampled data sets, and analyze all the resamples by running the %ANALYZE macro once with BY processing. If the %ANALYZE macro does not use the %BYSTMT macro, the %JACK and %BOOT macros execute a macro loop that generates and analyzes the resamples one at a time.
BY Processing in the Analysis
Ordinarily, you do not need to worry about what happens inside the %BYSTMT macro. However, if the analysis of the original data set requires BY processing, you will have to modify the %BYSTMT macro to include your BY variable(s) as well as the variable that the %JACK and %BOOT macros use to distinguish the resamples (referred to as &by in the %BYSTMT macro). If you're not in a hurry, you may find it simpler to forgo the %BYSTMT macro.
Introduction
The %JACK macro does jackknife analyses for simple random samples, computing approximate standard errors, bias-corrected estimates, and confidence intervals assuming a normal sampling distribution.
In order to use the %JACK or %BOOT macros, you need to know enough about the SAS macro language to write simple macros yourself. See The SAS Guide to Macro Processing for information on the SAS macro language.
This document does not explain how the jackknife and bootstrap are performed or how the various confidence intervals are computed, but does provide some advice and caveats regarding usage. For an elementary introduction, see Dixon in the bibliography below. There is a thorough exposition in E&T (see references below) that should be accessible to anyone who has done a year or more of statistical study.
There is a widespread myth that bootstrapping is a magical spell for performing valid statistical inference on anything. S&T dispel this myth very effectively and very technically. For an elementary demonstration of the dangers of bootstrapping, see the "Cautionary Example" below.
The Jackknife
The jackknife works only for statistics that are smooth functions of the data. Statistics that are not smooth functions of the data, such as quantiles, may yield inconsistent jackknife estimates. The best results are obtained with statistics that are linear functions of the data. For highly nonlinear statistics, the jackknife can be inaccurate. See S&T, chapter 2, for a detailed discussion of the validity of the jackknife.
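The jackknife computation itself is simple. As a language-neutral illustration (this is a minimal Python sketch of the standard leave-one-out formulas, not part of the SAS macros; the function names are my own):

```python
import math

def jackknife(data, stat):
    """Jackknife standard error and bias estimate for statistic `stat`."""
    n = len(data)
    theta_hat = stat(data)
    # Leave-one-out replicates: recompute the statistic n times.
    loo = [stat(data[:i] + data[i + 1:]) for i in range(n)]
    loo_mean = sum(loo) / n
    # Standard jackknife standard-error and bias formulas.
    se = math.sqrt((n - 1) / n * sum((t - loo_mean) ** 2 for t in loo))
    bias = (n - 1) * (loo_mean - theta_hat)
    return se, bias

data = [2.0, 4.0, 4.0, 4.0, 5.0, 5.0, 7.0, 9.0]
mean = lambda xs: sum(xs) / len(xs)
se, bias = jackknife(data, mean)
# For the sample mean (a linear statistic), the jackknife SE equals
# s/sqrt(n) with s computed with divisor n-1, and the bias is zero.
```

Because the mean is a linear statistic, this is a best-case example; for a nonsmooth statistic such as a quantile, the same code runs but the estimates may be inconsistent, as noted above.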
The Bootstrap
Bootstrap estimates of standard errors are valid for many commonly-used statistics, generally requiring no major assumptions other than simple random sampling and finite variance. There do exist some statistics for which the standard error estimates will fail, such as the maximum or minimum. The bootstrap standard error is consistent for some nonsmooth statistics such as the median. However, the bootstrap standard error may not be consistent even for very smooth statistics when the population distribution has very heavy tails. Inconsistency of the usual bootstrap estimators can often be remedied by using a resample size m(n) that is smaller than the sample size n, so that m(n)->infinity and m(n)/n->0 as n->infinity. Theoretical results on the consistency of the bootstrap standard error are not extensive. See S&T, chapter 3, for details.
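The uniform (unbalanced) bootstrap standard error, including the m-out-of-n variant mentioned above, can be sketched in a few lines of Python (illustrative only; the names and the choice of 500 resamples are mine, not the macros' defaults):

```python
import math
import random

def boot_se(data, stat, samples=200, m=None, seed=123):
    """Monte Carlo approximation to the bootstrap standard error.

    Passing m < len(data) gives the m-out-of-n bootstrap described above."""
    rng = random.Random(seed)
    n = len(data)
    m = m or n
    reps = []
    for _ in range(samples):
        # Uniform resampling with replacement, resample size m.
        resample = [data[rng.randrange(n)] for _ in range(m)]
        reps.append(stat(resample))
    mean_rep = sum(reps) / samples
    return math.sqrt(sum((r - mean_rep) ** 2 for r in reps) / (samples - 1))

data = [2.0, 4.0, 4.0, 4.0, 5.0, 5.0, 7.0, 9.0]
se = boot_se(data, lambda xs: sum(xs) / len(xs), samples=500)
# se approximates the plug-in standard error of the mean, s_n/sqrt(n)
```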
The bootstrap estimates of bias provided by the %BOOT macro are valid under simple random sampling for many commonly-used plug-in estimators. A plug-in estimator is one that uses the same formula to compute an estimate from a sample that is used to compute a parameter from the population. For example, if the sample variance is computed with a divisor of n (VARDEF=N), it is a plug-in estimate; if it is computed with a divisor of n-1 (VARDEF=DF, the default), it is not a plug-in estimate. R2 is a plug-in estimator; adjusted R2 is not. Estimating the bias of a non-plug-in estimator requires special treatment; see "Bias Estimation" below. If you are using an estimator that is known to be unbiased, use the BIASCORR=0 argument with %BOOT. See E&T, chapter 10, for more discussion of bootstrap estimation of bias.
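As an illustration of the bias estimate applied to a plug-in estimator, here is a hedged Python sketch using the plug-in (divisor n) variance; the function names and toy data are mine:

```python
import random

def var_n(xs):
    """Plug-in variance: divisor n (VARDEF=N in SAS terms)."""
    m = sum(xs) / len(xs)
    return sum((x - m) ** 2 for x in xs) / len(xs)

def boot_bias(data, stat, samples=1000, seed=42):
    """Bootstrap bias: mean of resample statistics minus the original estimate."""
    rng = random.Random(seed)
    n = len(data)
    reps = [stat([data[rng.randrange(n)] for _ in range(n)])
            for _ in range(samples)]
    return sum(reps) / samples - stat(data)

data = [2.0, 4.0, 4.0, 4.0, 5.0, 5.0, 7.0, 9.0]
bias = boot_bias(data, var_n)
corrected = var_n(data) - bias
# The plug-in variance is biased downward (roughly -sigma^2/n), so `bias`
# comes out negative and `corrected` lands near the divisor-(n-1) value.
```

Running the same procedure on the divisor-(n-1) variance would subtract a similar correction from an already-unbiased estimate, which is exactly the over-correction demonstrated in the "Bias Estimation" section below.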
The approximate normal confidence intervals computed by the %BOOT macro are valid if both the bias and standard error estimates are valid and if the sampling distribution is approximately normal. For non-normal sampling distributions, you should use the %BOOTCI macro, which requires a much larger number of resamples for adequate approximation. If you plan to use only %BOOT, 200 resamples will typically be enough. If you plan to use %BOOTCI, 1000 or more resamples are likely to be needed for a 90% confidence interval; greater confidence levels require even more resamples. The proper use of bootstrap confidence intervals is a matter of considerable controversy; see S&T, chapter 4, for a review.
The %BOOT macro does balanced resampling when possible. Balanced resampling yields more accurate approximations to the ideal bootstrap estimators of bias and standard errors than does uniform resampling. Of course, both balanced resampling and uniform resampling produce approximations that converge to the same ideal bootstrap estimators as the number of resamples goes to infinity. Balanced resampling is of little benefit with %BOOTCI. See Hall, appendix II, for a discussion of balanced resampling and other methods for improving the computational efficiency of the bootstrap.
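The idea of balanced resampling is that every observation appears the same total number of times across all resamples. A minimal Python sketch of one common construction (concatenate, shuffle, cut; not necessarily the algorithm %BOOT uses internally):

```python
import random

def balanced_resamples(data, samples, seed=1):
    """Balanced bootstrap: concatenate `samples` copies of the data,
    shuffle, and cut into `samples` resamples of size n.  Every
    observation then appears exactly `samples` times overall."""
    rng = random.Random(seed)
    n = len(data)
    pool = list(data) * samples
    rng.shuffle(pool)
    return [pool[i * n:(i + 1) * n] for i in range(samples)]

data = list(range(8))          # toy data with distinct values
resamples = balanced_resamples(data, samples=100)
counts = {x: 0 for x in data}
for r in resamples:
    for x in r:
        counts[x] += 1
# every observation occurs exactly 100 times across the 100 resamples
```

This balance removes one source of Monte Carlo noise from the bias and standard-error approximations, which is why it helps %BOOT but matters little for the quantile-based intervals in %BOOTCI.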
Output Data Sets
If the %ANALYZE macro uses the %BYSTMT macro, the %JACK macro creates two output data sets, and the %BOOT macro creates two similar data sets.
In addition, the %JACK macro creates a data set JACKSTAT and the %BOOT macro creates a data set BOOTSTAT regardless of whether the %BYSTMT macro is used. These data sets contain the approximate standard errors, bias-corrected estimates, and 95% confidence intervals assuming a normal sampling distribution. The %BOOTCI macro creates a data set BOOTCI containing the confidence intervals.
If the OUT= data set contains more than one observation per BY group, you must specify a list of ID= variables when you run the %JACK or %BOOT macros. These ID= variables identify observations that correspond to the same statistic in different BY groups. For many procedures, these ID= variables would naturally be _TYPE_ and _NAME_, but those names are not allowed to be used as ID= variables--you must use the RENAME= data set option to rename them. (Renaming variables can be tricky. You must use the old name with the DROP= and KEEP= data set options, but you must use the new name with the WHERE= data set option.)
Bias Estimation
The sample correlation is a plug-in estimator and hence is suitable for the bias estimator in %BOOT. The sample variance computed with a divisor of n-1 is not a plug-in estimator and therefore requires special treatment. In some procedures, you can use the VARDEF= option to obtain a plug-in estimate of the variance. The default value of VARDEF= is DF, which yields the usual adjustment for degrees of freedom, instead of the plug-in estimate. For example:
title2 'The unbiased variance estimator is not a plug-in estimator';

proc means data=law var vardef=df;
   var LSAT GPA;
run;
The following %ANALYZE macro could be used to jackknife the unbiased variance estimator, but the bootstrap over-corrects for the nonexistent bias:
title2 'Estimating the bias of the unbiased estimator of variance';

%macro analyze(data=,out=);
   proc means noprint data=&data vardef=df;
      output out=&out(drop=_freq_ _type_) var=var_LSAT var_GPA;
      var LSAT GPA;
      %bystmt;
   run;
%mend;

title3 'The jackknife computes the correct bias of zero';
%jack(data=law)

title3 'The bootstrap over-corrects for bias';
%boot(data=law,random=123)
By specifying VARDEF=N instead of VARDEF=DF, you can tell the MEANS procedure to compute a plug-in estimate of the variance:
title2 'Estimating the bias of the plug-in estimator of variance';

%macro analyze(data=,out=);
   proc means noprint data=&data vardef=n;
      output out=&out(drop=_freq_ _type_) var=var_LSAT var_GPA;
      var LSAT GPA;
      %bystmt;
   run;
%mend;
With the above %ANALYZE macro, %JACK yields an exact bias correction, while the bias-corrected estimates from %BOOT are very close to the unbiased estimates:
title3 'Jackknife Analysis';
%jack(data=law)

title3 'Bootstrap Analysis';
%boot(data=law,random=123)
If the procedure you are using supports the VARDEF= option to produce plug-in estimates, you can use the %VARDEF macro to obtain correct bootstrap bias estimates for the corresponding non-plug-in estimators. The %VARDEF macro generates a VARDEF= option with a value of either N or DF, as appropriate for use with the %BOOT macro (the %JACK macro ignores the %VARDEF macro). In the %ANALYZE macro, place %VARDEF in the procedure statement where the VARDEF= option would be syntactically valid. For example:
title2 'Estimating the bias of the unbiased variance estimator';

%macro analyze(data=,out=);
   proc means noprint data=&data %vardef;
      output out=&out(drop=_freq_ _type_) var=var_LSAT var_GPA;
      var LSAT GPA;
      %bystmt;
   run;
%mend;

title3 'Bootstrap Analysis';
%boot(data=law,random=123)
The variance estimator using VARDEF=DF is unbiased, so the bias correction estimated by bootstrapping is much smaller than in the previous example, in which the biased plug-in estimator was used.
Confidence Intervals
The normal bootstrap confidence interval computed by %BOOT or %BOOTSE is accurate only for statistics with an approximately normal sampling distribution. The %BOOTCI macro provides the most commonly used types of bootstrap confidence intervals that are suitable for non-normal sampling distributions.
You must run %BOOT before %BOOTCI, and it is advisable to specify at least 1000 resamples in %BOOT for a 90% confidence interval. For a higher level of confidence or for the BC and BCa methods, even more resamples should be used.
The terminology for bootstrap confidence intervals is not standardized. The keywords used with the %BOOTCI macro follow S&T:
Keyword      Terms from the references
-------      -------------------------
PCTL or      "bootstrap percentile" in S&T;
PERCENTILE   "percentile" in E&T;
             "other percentile" in Hall;
             "Efron's 'backwards' percentile" in Hjorth

HYBRID       "hybrid" in S&T;
             no term in E&T;
             "percentile" in Hall;
             "simple" in Hjorth

T            "bootstrap-t" in S&T and E&T;
             "percentile-t" in Hall;
             "studentized" in Hjorth

BC           "BC" in all

BCA          "BCa" in S&T, E&T, and Hjorth;
             "ABC" in Hall
             (cannot be used for bootstrapping residuals
             in regression models)
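To make the distinction between the percentile and hybrid endpoints concrete, here is a hedged Python sketch working directly from a vector of bootstrap replicates (the nearest-rank quantile rule and all names here are my own simplifications; the macros' exact percentile definitions may differ):

```python
def quantile(xs, p):
    """Nearest-rank quantile: a deliberate simplification."""
    s = sorted(xs)
    return s[min(len(s) - 1, max(0, int(p * len(s))))]

def pctl_interval(reps, alpha):
    """Bootstrap-percentile (S&T) / Efron's percentile interval:
    read the endpoints straight off the bootstrap distribution."""
    return quantile(reps, alpha / 2), quantile(reps, 1 - alpha / 2)

def hybrid_interval(reps, theta_hat, alpha):
    """Hybrid (S&T) interval: the percentile tails reflected about
    the original-sample estimate theta_hat."""
    lo, hi = pctl_interval(reps, alpha)
    return 2 * theta_hat - hi, 2 * theta_hat - lo

reps = [1.0, 2.0, 3.0, 4.0, 5.0, 6.0, 7.0, 8.0, 9.0, 10.0]  # toy replicates
theta_hat = 4.0                                  # toy original estimate
p_lo, p_hi = pctl_interval(reps, 0.10)
h_lo, h_hi = hybrid_interval(reps, theta_hat, 0.10)
```

With a skewed bootstrap distribution the two intervals place their long tail on opposite sides of the estimate, which is why Hjorth calls the percentile interval "backwards".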
There is considerable controversy concerning the use of bootstrap confidence intervals. To fully appreciate the issues, it is important to read S&T and Hall in addition to E&T. Asymptotically in simple random samples, the T and BCa methods work better than the traditional normal approximation, while the percentile, hybrid, and BC methods have the same accuracy as the traditional normal approximation. In small samples, things get much more complicated.
Numerous other methods exist for bootstrap confidence intervals that require nested resampling, i.e., each resample of the original sample is itself reresampled multiple times. Since the total number of reresamples required is typically 25,000 or more, these methods are extremely expensive and have not yet been implemented in the %BOOT and %BOOTCI macros.
The following example replicates the nonparametric confidence intervals shown in E&T, p 183. This example analyzes the variances of two variables, A and B, while E&T analyze only A. E&T do not show the hybrid interval, the normal ("standard") interval with bias correction, or the jackknife interval.
title 'Spatial Test Data from Efron and Tibshirani, pp 180 & 183';
data spatial;
   input a b @@;
cards;
48 42  36 33  20 16  29 39  42 38  42 36  20 15  42 33  22 20
41 43  45 34  14 22   6  7   0 15  33 34  28 29  34 41   4 13
32 38  24 25  47 27  41 41  24 28  26 14  30 28  41 40
;

%macro analyze(data=,out=);
   proc means noprint data=&data vardef=n;
      output out=&out(drop=_freq_ _type_) var=var_a var_b;
      var a b;
      %bystmt;
   run;
%mend;

title2 'Jackknife Interval with Bias Correction';
%jack(data=spatial,alpha=.10);

title2 'Normal ("Standard") Confidence Interval with Bias Correction';
%boot(data=spatial,alpha=.10,samples=2000,random=123);

title2 'Normal ("Standard") Confidence Interval without Bias Correction';
%bootse(alpha=.10,biascorr=0);

title2 'Efron''s Percentile Confidence Interval';
%bootci(percentile,alpha=.10)

title2 'Hybrid Confidence Interval';
%bootci(hybrid,alpha=.10)

title2 'BC Confidence Interval';
%bootci(bc,alpha=.10)

title2 'BCa Confidence Interval';
%bootci(bca,alpha=.10)

title2 'Resampling with Computation of Studentizing Statistics';
%macro analyze(data=,out=);
   proc means noprint data=&data vardef=n;
      output out=&out(drop=_freq_ _type_)
         var=var_a var_b kurtosis=kurt_a kurt_b;
      var a b;
      %bystmt;
   run;
   data &out;
      set &out;
      stud_a=var_a*sqrt(kurt_a+2);
      stud_b=var_b*sqrt(kurt_b+2);
      drop kurt_a kurt_b;
   run;
%mend;
%boot(data=spatial,stat=var_a var_b,samples=2000,random=123);

title2 'T Confidence Interval';
%bootci(t,stat=var_a var_b,student=stud_a stud_b,alpha=.10)
If you want to compute all the varieties of confidence intervals, you can use the %ALLCI macro:
title2 'All Jackknife and Bootstrap Confidence Intervals';
%allci(stat=var_a var_b,student=stud_a stud_b,alpha=.10)
Bootstrapping Regression Models
In regression models, there are two main ways to do bootstrap resampling, depending on whether the predictor variables are random or fixed. Stine provides an elementary introduction to bootstrapping regressions, including discussion of outliers, robust estimators, and heteroscedasticity.
If the predictors are random, you resample observations just as you would for any simple random sample. This method is usually called "bootstrapping pairs".
If the predictors are fixed, the resampling process should keep the same values of the predictors in every resample and change only the values of the response variable by resampling the residuals. To do this with the %BOOT macro, you must do a preliminary analysis in which you fit the regression model using the complete sample and create an output data set containing residuals and predicted values; it is this output data set that is used as input to the %BOOT macro. You must also specify the name of the residual variable and provide an equation for computing the response variable from the residual and predicted values.
title 'Cement Hardening Data from Hjorth, p 31';
data cement;
   input x1-x4 y;
   label x1='3CaOAl2O3'
         x2='3CaOSiO2'
         x3='4CaOAl2O3Fe2O3'
         x4='2CaOSiO2';
cards;
 7 26  6 60  78.5
 1 29 15 52  74.3
11 56  8 20 104.3
11 31  8 47  87.6
 7 52  6 33  95.9
11 55  9 22 109.2
 3 71 17  6 102.7
 1 31 22 44  72.5
 2 54 18 22  93.1
21 47  4 26 115.9
 1 40 23 34  83.8
11 66  9 12 113.3
10 68  8 12 109.4
;

proc reg data=cement;
   model y=x1-x4;
   output out=cemout r=resid p=pred;
run;

%macro analyze(data=,out=);
   options nonotes;
   proc reg data=&data noprint
      outest=&out(drop=Y _IN_ _P_ _EDF_);
      model y=x1-x4/selection=rsquare start=4;
      %bystmt;
   run;
   options notes;
%mend;

title2 'Resampling Observations';
title3 '(bias correction for _RMSE_ is wrong)';
%boot(data=cement,random=123)

title2 'Resampling Residuals';
title3 '(bias correction for _RMSE_ is wrong)';
%boot(data=cemout,residual=resid,equation=y=pred+resid,random=123)
Either method of resampling for regression models (observations or residuals) can be used regardless of the form of the error distribution. However, residuals should be resampled only if the errors are independent and identically distributed and if the functional form of the model is correct to within a reasonable approximation. If these assumptions are questionable, it is safer to resample observations.
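The two resampling schemes can be sketched side by side for a simple straight-line fit (an illustrative Python sketch with toy data; the helper names are mine and the macros work procedure-by-procedure rather than like this):

```python
import random

def ols(xs, ys):
    """Least-squares fit of y = a + b*x."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    b = (sum((x - mx) * (y - my) for x, y in zip(xs, ys))
         / sum((x - mx) ** 2 for x in xs))
    return my - b * mx, b

def boot_pairs(xs, ys, samples, rng):
    """Resample (x, y) observations: predictors treated as random."""
    n = len(xs)
    for _ in range(samples):
        idx = [rng.randrange(n) for _ in range(n)]
        if len(set(xs[i] for i in idx)) < 2:
            continue  # degenerate resample: slope undefined
        yield ols([xs[i] for i in idx], [ys[i] for i in idx])

def boot_resid(xs, ys, samples, rng):
    """Resample residuals: predictors held fixed, y* = pred + resampled resid."""
    a, b = ols(xs, ys)
    pred = [a + b * x for x in xs]
    resid = [y - p for y, p in zip(ys, pred)]
    n = len(xs)
    for _ in range(samples):
        ystar = [p + resid[rng.randrange(n)] for p in pred]
        yield ols(xs, ystar)

rng = random.Random(123)
xs = [1.0, 2.0, 3.0, 4.0, 5.0, 6.0]
ys = [1.1, 2.3, 2.8, 4.2, 4.9, 6.1]
slopes_pairs = [b for _, b in boot_pairs(xs, ys, 200, rng)]
slopes_resid = [b for _, b in boot_resid(xs, ys, 200, rng)]
```

Note that boot_resid generates every resample from the same fixed design points, which is exactly why it is valid only when the fitted model form and the i.i.d.-error assumption hold.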
In the above example, R2 is a plug-in estimator, so the bias correction is appropriate. The root mean squared error, _RMSE_, is not a plug-in estimator, so the bias correction for _RMSE_ is wrong. Unfortunately, the REG procedure does not support the VARDEF= option. _RMSE_ is not very biased, so you could choose to ignore the bias and run the %BOOTSE macro to compute the standard error without a bias correction:
title2 'Resampling Observations';
title3 'Without bias correction';
%bootse(stat=_rmse_,biascorr=0)
To get the proper bias correction for _RMSE_, you have to use a DATA step that checks the macro variable &VARDEF and unadjusts for degrees of freedom when &VARDEF=N. You must also invoke the %VARDEF macro, but since you don't want to generate a VARDEF= option in this case, just assign the value returned by %VARDEF to an unused macro variable:
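The unadjustment arithmetic is easy to verify: PROC REG reports _RMSE_**2 = SSE/_EDF_ (divisor = error degrees of freedom), and multiplying by _EDF_/(_EDF_+_P_) converts it to the plug-in SSE/n, since n = _EDF_ + _P_. A small Python check (sse, edf, and p below are made-up illustrative values, not numbers from the cement example):

```python
sse = 47.86   # made-up error sum of squares
edf = 8       # made-up error degrees of freedom (n - p)
p = 5         # made-up number of model parameters, so n = 13
mse_df = sse / edf                 # what OUTEST= delivers (VARDEF=DF sense)
mse_n = mse_df * edf / (edf + p)   # the DATA step's conversion
# mse_n equals the plug-in estimate SSE/n
assert abs(mse_n - sse / (edf + p)) < 1e-12
```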
%macro analyze(data=,out=);
   options nonotes;
   proc reg data=&data noprint outest=&out;
      model y=x1-x4/selection=rsquare start=4;
      %bystmt;
   run;
   %let junk=%vardef;
   data &out(drop=y _in_ _p_ _edf_);
      set &out;
      _mse_=_rmse_**2;
      %if &vardef=N %then %do;
         _mse_=_mse_*_edf_/(_edf_+_p_);
         _rmse_=sqrt(_mse_);
      %end;
      label _mse_='Mean Squared Error';
   run;
   options notes;
%mend;

title2 'Resampling Observations';
%boot(data=cement,random=123)
Note that _MSE_ is an unbiased estimate, so its estimated bias is very small. _RMSE_ is slightly biased and thus has a larger estimated bias.
These sample files and code examples are provided by SAS Institute Inc. "as is" without warranty of any kind, either express or implied, including but not limited to the implied warranties of merchantability and fitness for a particular purpose. Recipients acknowledge and agree that SAS Institute shall not be liable for any damages whatsoever arising out of their use of this material. In addition, SAS Institute will provide no support for the materials contained herein.