Syntax: ESTIMATE Statement 
The basic element of the ESTIMATE statement is the estimate-specification, which consists of model effects and their coefficients. An estimate-specification takes the general form

   effect values <... effect values>
The following elements can appear in the ESTIMATE statement:

'label'
is an optional label that identifies the particular row of the estimate in the output.

effect
identifies an effect that appears in the MODEL statement. The keyword INTERCEPT can be used as an effect when an intercept is fitted in the model. You do not need to include all effects that are in the MODEL statement.

values
are constants that are elements of the L matrix and are associated with the fixed and random effects. There are two basic methods of specifying the entries of the L matrix. The traditional representation, also known as the positional syntax, relies on entering coefficients in the position that they assume in the L matrix. For example, in the following statements the elements of L that are associated with the b main effect receive a 1 in the first position and a -1 in the second position:
class a b;
model y = a b a*b;

estimate 'B at A2' b 1 -1  a*b 0 0 1 -1;
The elements that are associated with the a*b interaction receive a 1 in the third position and a -1 in the fourth position. In order to specify coefficients correctly for the interaction term, you need to know how the levels of a and b vary in the interaction, which is governed by the order of the variables in the CLASS statement. The nonpositional syntax is designed to make it easier to enter coefficients for interactions, and it is necessary for entering coefficients for effects that are constructed with the EFFECT statement. In square brackets you enter the coefficient, followed by the associated levels of the CLASS variables. If b has two levels and a has three levels, the previous ESTIMATE statement, written with nonpositional syntax for the interaction term, becomes the following statement:
estimate 'B at A2' b 1 -1  a*b [1, 2 1] [-1, 2 2];
The previous statement assigns the value 1 to the interaction where a is at level 2 and b is at level 1, and it assigns the value -1 to the interaction where both classification variables are at level 2. The comma that separates the entry for the L matrix from the level indicators is optional. Further details about the nonpositional contrast syntax and its use with constructed effects can be found in the section Positional and Nonpositional Syntax for Coefficients in Linear Functions.
Based on the estimate-specifications in your ESTIMATE statement, the procedure constructs the matrix L to test the hypothesis Lθ = 0, where θ contains the fixed and random effects. The procedure supports nonpositional syntax for the coefficients of model effects in the ESTIMATE statement. For details, see the section Positional and Nonpositional Syntax for Coefficients in Linear Functions.
The procedure then produces for each row l of L an approximate t test of the hypothesis H: lθ = 0. You can also obtain multiplicity-adjusted p-values and confidence limits for multi-row estimates with the ADJUST= option.
Note that multi-row estimates are permitted. Unlike in releases prior to SAS 9.22, you do not need to specify a 'label' for every row of the estimate; the procedure constructs a default label if a label is not specified.
If the procedure finds the estimate to be nonestimable, then it displays "Nonest" for the estimate entry.
Table 19.17 summarizes important options in the ESTIMATE statement. All ESTIMATE options are subsequently discussed in alphabetical order.
Table 19.17: ESTIMATE Statement Options

Option        Description

Construction and Computation of Estimable Functions

DIVISOR=      Specifies a list of values to divide the coefficients
NOFILL        Suppresses the automatic fill-in of coefficients for higher-order effects
SINGULAR=     Tunes the estimability checking difference

Degrees of Freedom and p-values

ADJDFE=       Specifies how denominator degrees of freedom are determined
ADJUST=       Determines the method for multiple comparison adjustment of estimates
ALPHA=        Determines the confidence level (1 - α)
DF=           Specifies the degrees of freedom for tests and confidence limits
LOWER         Performs one-sided, lower-tailed inference
STEPDOWN      Adjusts multiplicity-corrected p-values further in a step-down fashion
TESTVALUE=    Specifies values under the null hypothesis for tests
UPPER         Performs one-sided, upper-tailed inference

Statistical Output

CL            Constructs confidence limits
CORR          Displays the correlation matrix of estimates
COV           Displays the covariance matrix of estimates
E             Prints the L matrix
JOINT         Produces a joint F or chi-square test for the estimable functions
PLOTS=        Requests ODS statistical graphics if the analysis is sampling-based
SEED=         Specifies the seed for computations that depend on random numbers

Generalized Linear Modeling

CATEGORY=     Specifies how to construct estimable functions with multinomial data
EXP           Exponentiates and displays estimates
ILINK         Computes and displays estimates and standard errors on the inverse linked scale
You can specify the following options in the ESTIMATE statement after a slash (/).
ADJDFE=SOURCE | ROW
specifies how denominator degrees of freedom are determined when p-values and confidence limits are adjusted for multiple comparisons with the ADJUST= option. When you do not specify the ADJDFE= option, or when you specify ADJDFE=SOURCE, the denominator degrees of freedom for multiplicity-adjusted results are the denominator degrees of freedom for the final effect that is listed in the ESTIMATE statement, taken from the "Type III" table.
The ADJDFE=ROW setting is useful if you want multiplicity adjustments to take into account that denominator degrees of freedom are not constant across estimates. For example, this can be the case when the denominator degrees of freedom are computed by the Satterthwaite method or according to Kenward and Roger (1997).
The ADJDFE= option has an effect only in mixed models that use these degrees-of-freedom methods. It is not supported by the procedures that perform chi-square-based inference (LOGISTIC, PHREG, and SURVEYLOGISTIC).
ADJUST=BON | SCHEFFE | SIDAK | SIMULATE<(sim-options)> | T
requests a multiple comparison adjustment for the p-values and confidence limits for the estimates. The adjusted quantities are produced in addition to the unadjusted quantities. Adjusted confidence limits are produced if the CL or ALPHA= option is in effect. For a description of the adjustments, see Chapter 41, "The GLM Procedure," Chapter 60, "The MULTTEST Procedure," and the documentation for the ADJUST= option in the LSMEANS statement.
If the STEPDOWN option is in effect, the p-values are further adjusted in a step-down fashion.
ALPHA=number
requests that a t-type confidence interval be constructed with confidence level 1 - number. The value of number must be between 0 and 1; the default is 0.05. If the "Estimates" table shows infinite degrees of freedom, then the confidence interval is a z-type interval.
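The interval itself is the familiar estimate plus or minus a quantile times the standard error. A minimal Python sketch of the z-type case (the function name is illustrative, not SAS syntax):

```python
from statistics import NormalDist

def z_confidence_limits(estimate, stderr, alpha=0.05):
    """Two-sided z-type interval: estimate +/- z_(1-alpha/2) * stderr."""
    q = NormalDist().inv_cdf(1 - alpha / 2)  # about 1.96 for alpha=0.05
    return estimate - q * stderr, estimate + q * stderr

lower, upper = z_confidence_limits(1.2, 0.4)
```

For finite degrees of freedom, the z quantile is replaced by the corresponding t quantile.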
CATEGORY=JOINT | SEPARATE | value-list
specifies how to construct estimates and multiplicity corrections for models with multinomial data (ordinal or nominal). This option is also important for constructing sets of estimable functions for F or chi-square tests with the JOINT option.
JOINT
computes the estimable functions for every nonredundant category and treats them as a set. For example, a three-row ESTIMATE statement in a model with three response categories leads to six estimable functions.
SEPARATE
computes the estimable functions for every nonredundant category in turn. For example, a three-row ESTIMATE statement in a model with three response categories leads to two sets of three estimable functions.
value-list
computes the estimable functions only for the list of values given. The list must consist of formatted values of the response categories.
Consider the following ESTIMATE statements in the LOGISTIC procedure for an ordinal model with response categories 'vg', 'g', 'm', 'b', and 'vb'. Because there are five response categories, there are four nonredundant categories for the cumulative link model.
proc logistic data=icecream;
   class brand / param=glm;
   model taste(order=data) = brand / link=logit;
   freq count;
   estimate brand 1 -1,
            intercept 1 brand 0 1 / category='m','vg';
   estimate intercept 1 brand 1 / category=joint adjust=simulate(seed=1);
   estimate brand 1 -1,
            brand 1 1 -2 / category=separate adjust=bon;
run;
The first ESTIMATE statement requests a two-row estimable function. The result is produced for two of the four nonredundant response categories. The second ESTIMATE statement produces four t tests, one for each nonredundant category. The multiplicity adjustment with p-value computation by simulation treats the four estimable functions as a unit for familywise Type I error protection. The third ESTIMATE statement computes a two-row estimable function and reports its results separately for all nonredundant categories. The Bonferroni adjustment in this statement applies to a family of two tests that correspond to the two-row estimable function. Four Bonferroni adjustments for sets of size two are performed.
The CATEGORY= option is supported only by the procedures that support generalized linear modeling (LOGISTIC and SURVEYLOGISTIC) and by PROC PLM when it is used to perform statistical analyses on item stores created by these procedures.
CHISQ
requests that chi-square tests be performed in addition to F tests when you request an F test with the JOINT option. This option has no effect in procedures that produce chi-square statistics by default.
CL
requests that t-type confidence limits be constructed. If the procedure shows the degrees of freedom in the "Estimates" table as infinite, then the confidence limits are z intervals. The confidence level is 0.95 by default; you can change it with the ALPHA= option. The confidence intervals are adjusted for multiplicity when you specify the ADJUST= option. However, if a step-down p-value adjustment is requested with the STEPDOWN option, only the p-values are adjusted for multiplicity.
CORR
displays the estimated correlation matrix of the linear combination of the parameter estimates.
COV
displays the estimated covariance matrix of the linear combination of the parameter estimates.
DF=number
specifies the degrees of freedom for the t test and confidence limits. This option is not supported by the procedures that perform chi-square-based inference (LOGISTIC, PHREG, and SURVEYLOGISTIC).
DIVISOR=value-list
specifies a list of values by which to divide the coefficients so that fractional coefficients can be entered as integer numerators. If you do not specify value-list, a default value of 1.0 is assumed. Missing values in the value-list are converted to 1.0.
If the number of elements in value-list exceeds the number of rows of the estimate, the extra values are ignored. If the number of elements in value-list is less than the number of rows of the estimate, the last value in value-list is carried forward.
If you specify a row-specific divisor as part of the specification of the estimate row, this value multiplies the corresponding divisor that is implied by the value-list. For example, the following statement divides the coefficients in the first row by 8 and the coefficients in the third and fourth rows by 3:
estimate 'One vs. two'   A 2 -2 (divisor=2),
         'One vs. three' A 1 0 -1,
         'One vs. four'  A 3 0 0 -3,
         'One vs. five'  A 1 0 0 0 -1 / divisor=4,.,3;
Coefficients in the second row are not altered.
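The per-row divisors can be resolved by recycling the DIVISOR= list (missing values become 1.0, the last value is carried forward, extra values are ignored) and multiplying by any row-specific divisor. A Python sketch of that rule, using None for SAS missing values (the function name is illustrative):

```python
def effective_divisors(divisor_list, row_divisors, nrows):
    """Resolve per-row divisors according to the DIVISOR= rules above."""
    # Missing (None) entries in the DIVISOR= list become 1.0.
    vals = [1.0 if v is None else float(v) for v in divisor_list]
    # Carry the last value forward; drop extra values beyond nrows.
    vals = (vals + [vals[-1]] * nrows)[:nrows]
    # A row-specific (divisor=) value multiplies the list value.
    return [v * (r if r is not None else 1.0)
            for v, r in zip(vals, row_divisors)]

# The four-row example above: divisor=4,.,3 with (divisor=2) on row 1.
print(effective_divisors([4, None, 3], [2, None, None, None], 4))
# -> [8.0, 1.0, 3.0, 3.0]
```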
EXP
requests exponentiation of the estimate. When you model data with the logit, cumulative logit, or generalized logit link functions, and the estimate represents a log odds ratio or log cumulative odds ratio, the EXP option produces an odds ratio. In proportional hazards models, this option produces estimates of hazard ratios. If you specify the CL or ALPHA= option, the (adjusted) confidence bounds are also exponentiated.
The EXP option is supported only by PROC PHREG, PROC SURVEYPHREG, the procedures that support generalized linear modeling (LOGISTIC and SURVEYLOGISTIC), and by PROC PLM when it is used to perform statistical analyses on item stores created by these procedures.
ILINK
requests that the estimate and its standard error also be reported on the scale of the mean (the inverse linked scale). The computation of the inverse linked estimate depends on the estimation mode. For example, if the analysis is based on a posterior sample when a BAYES statement is present, the inversely linked estimate is the average of the inversely linked values across the sample of posterior parameter estimates. If the analysis is not based on a sample of parameter estimates, the procedure computes the value on the mean scale by applying the inverse link to the estimate.
The interpretation of this quantity depends on the effect values specified in your ESTIMATE statement and on the link function. For example, in a model for binary data with logit link, the following statements compute

   1 / (1 + exp(-β1 + β2))

where β1 and β2 are the fixed-effects solutions that are associated with the first two levels of the classification effect A:
class A;
model y = A / dist=binary link=logit;

estimate 'A one vs. two' A 1 -1 / ilink;
This quantity is not the difference of the probabilities that are associated with the two levels,

   π1 - π2 = 1 / (1 + exp(-β0 - β1)) - 1 / (1 + exp(-β0 - β2))

where β0 denotes the intercept.
The standard error of the inversely linked estimate is based on the delta method. If you also specify the CL option, the procedure computes confidence limits for the estimate on the mean scale. In multinomial models for nominal data, the limits are obtained by the delta method. In other models they are obtained from the inverse link transformation of the confidence limits for the estimate. The ILINK option is specific to an ESTIMATE statement.
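For the logit link, the delta method reduces to se(mu) ~= mu*(1 - mu)*se(eta), where eta is the estimate on the link scale. A Python sketch of this computation (illustrative only, not SAS code):

```python
import math

def ilink_logit(eta, se_eta):
    """Apply the inverse logit link and a delta-method standard error.

    The derivative of mu = 1/(1 + exp(-eta)) is mu*(1 - mu), so the
    delta method gives se(mu) ~= mu*(1 - mu)*se(eta).
    """
    mu = 1.0 / (1.0 + math.exp(-eta))
    return mu, mu * (1.0 - mu) * se_eta

mu, se_mu = ilink_logit(0.0, 0.2)  # eta=0 maps to mu=0.5, se_mu=0.05
```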
The ILINK option is supported only by the procedures that support generalized linear modeling (LOGISTIC and SURVEYLOGISTIC) and by PROC PLM when it is used to perform statistical analyses on item stores created by these procedures.
JOINT<(joint-test-options)>
requests that a joint F or chi-square test be produced for the rows of the estimate. The JOINT option in the ESTIMATE statement essentially replaces the CONTRAST statement.
When the LOWERTAILED or UPPERTAILED option is in effect, or if the BOUNDS= option described below is in effect, the JOINT option produces the chi-bar-square statistic of Silvapulle and Sen (2004). This statistic uses a simulation-based approach to compute p-values in situations where the alternative hypotheses of the estimable functions are not simple two-sided hypotheses. See the section Joint Hypothesis Tests with Complex Alternatives, the Chi-Bar-Square Statistic for more information about this test statistic.
You can specify the following joint-test-options in parentheses:
ACC=γ
specifies the accuracy radius γ for determining the necessary sample size in the simulation-based approach of Silvapulle and Sen (2004) for tests with order restrictions. The value of γ must be strictly between 0 and 1; the default value is γ=0.005.
EPS=ε
specifies the accuracy confidence level 1 - ε for determining the necessary sample size in the simulation-based approach of Silvapulle and Sen (2004) for tests with order restrictions. The value of ε must be strictly between 0 and 1; the default value is ε=0.01.
LABEL='label'
assigns an identifying label to the joint test. If you do not specify a label, the first non-default label for the ESTIMATE rows is used to label the joint test.
ONLY
performs only the F or chi-square test and suppresses other results from the ESTIMATE statement. This option is useful for emulating the CONTRAST statement that is available in other procedures.
NSAMP=n
specifies the number of samples for the simulation-based method of Silvapulle and Sen (2004). If n is not specified, it is constructed from the values of the ALPHA=, ACC=, and EPS= options. With the default values γ=0.005, ε=0.01, and α=0.05, NSAMP=12,604 by default.
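A plausible reconstruction of that sample-size rule (the exact formula is an assumption here; only the default result is documented above) is to choose n so that a simulated p-value near α is accurate to within γ with confidence 1 - ε:

```python
import math
from statistics import NormalDist

def default_nsamp(alpha=0.05, acc=0.005, eps=0.01):
    """n such that a binomial estimate of a p-value near alpha has
    half-width about acc at confidence 1 - eps (assumed rule)."""
    z = NormalDist().inv_cdf(1 - eps / 2)
    return math.ceil(z * z * alpha * (1 - alpha) / acc ** 2)

print(default_nsamp())  # lands within a few samples of 12,604
```

Tightening ACC= or EPS=, or lowering ALPHA=, changes the default sample size accordingly.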
CHISQ
adds a chi-square test if the procedure produces an F test by default.
BOUNDS=value-list
specifies boundary values for the estimable linear function. The null value of the hypothesis is always zero. If you specify a positive boundary value z, the hypotheses are H: θ = 0 versus Ha: θ > 0, with the added constraint that θ < z. The same is true for negative boundary values: the alternative hypothesis is then Ha: θ < 0, subject to the constraint θ > -|z|. If you specify a missing value, the hypothesis is assumed to be two-sided. The BOUNDS option enables you to specify sets of one- and two-sided joint hypotheses. If all values in value-list are set to missing, the procedure performs a simulation-based p-value calculation for a two-sided test.
LOWER
LOWERTAILED
requests that the p-value for the t test be based only on values that are less than the test statistic. A two-tailed test is the default. A lower-tailed confidence limit is also produced if you specify the CL or ALPHA= option.
Note that for ADJUST=SCHEFFE, the one-sided adjusted confidence intervals and one-sided adjusted p-values are the same as the corresponding two-sided statistics, because this adjustment is based on only the right tail of the F distribution.
If you request a joint test with the JOINT option, then a one-sided left-tailed order restriction is applied to all estimable functions, and the corresponding chi-bar-square statistic of Silvapulle and Sen (2004) is computed in addition to the two-sided, standard, F or chi-square statistic. See the JOINT option for how to control the computation of the simulation-based chi-bar-square statistic.
NOFILL
suppresses the automatic fill-in of coefficients of higher-order effects.
PLOTS<=plot-options>
produces ODS statistical graphics of the distribution of estimable functions if the procedure performs the analysis in a sampling-based mode. For example, this is the case when procedures support a BAYES statement and perform a Bayesian analysis. The estimable functions are then computed for each of the posterior parameter estimates, and the "Estimates" table reports simple descriptive statistics for the evaluated functions. In this situation, the PLOTS= option enables you to visualize the distribution of the estimable function. The following plot-options are available:
ALL
produces all possible plots with their default settings.
BOXPLOT<(boxplot-options)>
produces box plots of the distribution of the estimable function across the posterior sample. A separate box is generated for each estimable function, and all boxes appear in a single graph by default. You can affect the appearance of the box plot graph with the following boxplot-options:
ORIENT=HORIZONTAL | VERTICAL
specifies the orientation of the boxes. The default is vertical orientation of the box plots.
NPANELPOS=number
specifies how to break the series of box plots across multiple panels. If the NPANELPOS= option is not specified, or if number equals zero, then all box plots are displayed in a single graph; this is the default. If a negative number is specified, then up to |number| box plots are displayed per panel. If number is positive, then the number of boxes per panel is balanced to achieve small variation in the number of box plots per graph.
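The difference between the negative and positive cases can be sketched in Python; the exact layout SAS uses is not documented here, so this balancing rule is an assumption for illustration:

```python
import math

def panel_sizes(nboxes, npanelpos):
    """Split nboxes box plots into panels per an NPANELPOS-like rule."""
    if npanelpos == 0:
        return [nboxes]                  # everything in one graph
    size = abs(npanelpos)
    npanels = math.ceil(nboxes / size)
    if npanelpos < 0:
        # Negative: fixed panel size; the last panel takes the remainder.
        return [size] * (npanels - 1) + [nboxes - size * (npanels - 1)]
    # Positive: balance so panel sizes differ by at most one.
    base, extra = divmod(nboxes, npanels)
    return [base + 1] * extra + [base] * (npanels - extra)

print(panel_sizes(10, -4))  # -> [4, 4, 2]
print(panel_sizes(10, 4))   # -> [4, 3, 3]
```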
DISTPLOT<(distplot-options)>
generates panels of histograms with a kernel density estimate overlaid. A separate plot in each panel contains the results for each estimable function. You can specify the following distplot-options in parentheses:
BOX | NOBOX
controls the display of a horizontal box plot of the estimable function's distribution across the posterior sample below the graph. The BOX option is enabled by default.
HIST | NOHIST
controls the display of the histogram of the estimable function's distribution across the posterior sample. The HIST option is enabled by default.
NORMAL | NONORMAL
controls the display of a normal density estimate on the graph. The NONORMAL option is enabled by default.
KERNEL | NOKERNEL
controls the display of a kernel density estimate on the graph. The KERNEL option is enabled by default.
NROWS=number
specifies the highest number of rows in a panel. The default is 3.
NCOLS=number
specifies the highest number of columns in a panel. The default is 3.
UNPACK
unpacks the panel into separate graphics.
NONE
does not produce any plots.
SEED=number
specifies the seed for the sampling-based components of the computations for the ESTIMATE statement (for example, chi-bar-square statistics and simulated p-values). The value of number is an integer that is used to start the pseudo-random number generator for the simulation. If you do not specify a seed, or if you specify a value less than or equal to zero, the seed is generated from reading the time of day from the computer's clock. There can be multiple ESTIMATE statements with SEED= specifications, and other statements can supply a random number seed as well. Because the procedure has only one random number stream, the initial seed is shown in the SAS log.
SINGULAR=number
tunes the estimability checking. If v is a vector, define ABS(v) to be the largest absolute value of the elements of v. If ABS(l - lH) is greater than c*number for any row l of L in the estimate, then l is declared nonestimable. Here, H is the Hermite form matrix (X'X)⁻(X'X), and c is ABS(l), except when ABS(l) equals 0, and then c is 1. The value of number must be between 0 and 1; the default is 1E-4.
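The check can be carried out numerically from these definitions: a row l is estimable exactly when l = lH. A Python/NumPy sketch with a toy design matrix (the function name and data are illustrative):

```python
import numpy as np

def is_estimable(l, X, tol=1e-4):
    """Compare l to l@H, where H = pinv(X'X) @ (X'X), per the rule above."""
    xtx = X.T @ X
    H = np.linalg.pinv(xtx) @ xtx            # Hermite-form projector
    diff = np.max(np.abs(l - l @ H))         # ABS(l - lH)
    c = np.max(np.abs(l)) or 1.0             # c = ABS(l), or 1 if it is 0
    return diff <= c * tol

# Toy one-way design: columns are intercept, A1, A2 (two observed cells).
X = np.array([[1.0, 1.0, 0.0],
              [1.0, 0.0, 1.0]])
print(is_estimable(np.array([0.0, 1.0, -1.0]), X))  # A1 - A2: True
print(is_estimable(np.array([0.0, 1.0,  0.0]), X))  # A1 alone: False
```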
STEPDOWN<(step-down-options)>
requests that multiplicity adjustments for the p-values of estimates be further adjusted in a step-down fashion. Step-down methods increase the power of multiple testing procedures by taking advantage of the fact that a p-value is never declared significant unless all smaller p-values are also declared significant. The STEPDOWN adjustment combined with ADJUST=BON corresponds to the methods of Holm (1979) and "Method 2" of Shaffer (1986); this is the default. Using step-down-adjusted p-values combined with ADJUST=SIMULATE corresponds to the method of Westfall (1997).
If the ESTIMATE statement is applied with the STEPDOWN option in a mixed model where the degrees-of-freedom method is that of Kenward and Roger (1997) or of Satterthwaite, then step-down-adjusted p-values are produced only if the ADJDFE=ROW option is in effect.
Also, the STEPDOWN option affects only p-values, not confidence limits. For ADJUST=SIMULATE, the generalized least squares hybrid approach of Westfall (1997) is used to increase Monte Carlo accuracy. You can specify the following step-down-options in parentheses after the STEPDOWN option:
MAXTIME=n
specifies the time (in seconds) to be spent computing the maximal logically consistent sequential subsets of equality hypotheses for TYPE=LOGICAL. The default is MAXTIME=60. If the MAXTIME value is exceeded, the adjusted tests are not computed. When this occurs, you can try increasing the MAXTIME value. However, note that there are common multiple comparisons problems for which this computation requires a huge amount of time; for example, all pairwise comparisons between more than 10 groups. In such cases, try to use TYPE=FREE (the default) or TYPE=LOGICAL(n) for small n.
ORDER=PVALUE | ROWS
specifies the order in which the step-down tests are performed. ORDER=PVALUE is the default, with estimates being declared significant only if all estimates with smaller (unadjusted) p-values are significant. If you specify ORDER=ROWS, then significances are evaluated in the order in which they are specified in the syntax.
REPORT
specifies that a report on the step-down adjustment be displayed, including a listing of the sequential subsets (Westfall 1997) and, for ADJUST=SIMULATE, the step-down simulation results.
TYPE=LOGICAL<(n)> | FREE
specifies how step-down adjustments are made. If you specify TYPE=LOGICAL, the step-down adjustments are computed by using maximal logically consistent sequential subsets of equality hypotheses (Shaffer 1986; Westfall 1997). Alternatively, for TYPE=FREE, sequential subsets are computed ignoring logical constraints. The TYPE=FREE results are more conservative than those for TYPE=LOGICAL, but they can be much more efficient to produce for many estimates. For example, it is not feasible to take into account the logical constraints between all pairwise comparisons of more than about 10 groups. For this reason, TYPE=FREE is the default.
However, you can reduce the computational complexity of taking logical constraints into account by limiting the depth of the search tree used to compute them, specifying the optional depth parameter n as a number in parentheses after TYPE=LOGICAL. As with TYPE=FREE, results for TYPE=LOGICAL(n) are conservative relative to the true TYPE=LOGICAL results. But even for TYPE=LOGICAL(0) they can be appreciably less conservative than TYPE=FREE, and they are computationally feasible for much larger numbers of estimates. If you do not specify n, the full search tree is used.
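The default combination (STEPDOWN with ADJUST=BON and TYPE=FREE) is Holm's method: sort the raw p-values, multiply the i-th smallest by the number of hypotheses not yet rejected, and enforce monotonicity. A Python sketch (not SAS code; the function name is illustrative):

```python
def holm_adjust(pvalues):
    """Holm (1979) step-down adjustment of a list of raw p-values."""
    m = len(pvalues)
    order = sorted(range(m), key=lambda i: pvalues[i])
    adjusted = [0.0] * m
    running_max = 0.0
    for rank, i in enumerate(order):
        # Multiply the rank-th smallest p-value by (m - rank) ...
        p = min(1.0, (m - rank) * pvalues[i])
        # ... and never let an adjusted p-value fall below an earlier one.
        running_max = max(running_max, p)
        adjusted[i] = running_max
    return adjusted

print(holm_adjust([0.01, 0.04, 0.03]))  # approximately [0.03, 0.06, 0.06]
```

The monotonicity step is what makes the adjustment "step-down": an estimate is never declared significant unless all estimates with smaller raw p-values are as well.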
TESTVALUE=value-list
specifies the values under the null hypothesis for testing the estimable functions in the ESTIMATE statement. The rules for specifying the value-list are very similar to those for specifying the divisor list in the DIVISOR= option. If no TESTVALUE= is specified, all tests are performed against a null value of zero. Missing values in the value-list are also translated to zeros. If you specify fewer values than rows in the ESTIMATE statement, the last value in value-list is carried forward.
The TESTVALUE= option affects only p-values from individual, joint, and multiplicity-adjusted tests. It does not affect confidence intervals.
The TESTVALUE= option is not available for the multinomial distribution, and the values are ignored when you perform a sampling-based (Bayesian) analysis.
UPPER
UPPERTAILED
requests that the p-value for the t test be based only on values that are greater than the test statistic. A two-tailed test is the default. An upper-tailed confidence limit is also produced if you specify the CL or ALPHA= option.
Note that for ADJUST=SCHEFFE, the one-sided adjusted confidence intervals and one-sided adjusted p-values are the same as the corresponding two-sided statistics, because this adjustment is based on only the right tail of the F distribution.
If you request a joint test with the JOINT option, then a one-sided right-tailed order restriction is applied to all estimable functions, and the corresponding chi-bar-square statistic of Silvapulle and Sen (2004) is computed in addition to the two-sided, standard, F or chi-square statistic. See the JOINT option for how to control the computation of the simulation-based chi-bar-square statistic.