The QLIM Procedure

Output to SAS Data Set

XBeta, Predicted, Residual

Xbeta is the structural part on the right-hand side of the model. Predicted value is the predicted value of the dependent variable. For censored variables, if the predicted value falls outside the boundaries, it is reported as the closest boundary. For discrete variables, it is the level whose boundaries contain Xbeta. Residual is defined only for continuous variables and is computed as

\[  Residual = Observed - Predicted  \]
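
As a minimal illustration (hypothetical names in Python, not PROC QLIM code), the reported prediction and residual for one continuous response could be computed as follows, where the lower and upper censoring boundaries are assumed inputs:

   import numpy as np

   def predicted_and_residual(xbeta, observed, lower=-np.inf, upper=np.inf):
       # Report the prediction as the closest boundary when Xbeta falls
       # outside [lower, upper]; Residual = Observed - Predicted.
       predicted = np.clip(xbeta, lower, upper)
       residual = observed - predicted
       return predicted, residual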

Error Standard Deviation

Error standard deviation is $\sigma _ i$ in the model. It varies only when the HETERO statement is used.

Marginal Effects

The marginal effect is defined as the contribution of one control variable to the response variable. For the binary choice model with two response categories, $\mu _{0}=-\infty $, $\mu _{1}=0$, $\mu _{2}=\infty $; for the ordinal response model with $M$ response categories, the thresholds are $\mu _{0},\cdots ,\mu _{M}$. Define

\[  R_{i,j} = \mu _{j} - \mb{x}_{i}’\bbeta  \]

The probability that the unobserved dependent variable is contained in the jth category can be written as

\[  P[\mu _{j-1}< y_{i}^{*} \leq \mu _{j}] = F(R_{i,j}) - F(R_{i,j-1})  \]

The marginal effect of changes in the regressors on the probability of $y_{i}=j$ is then

\[  \frac{\partial Prob[y_{i}=j]}{\partial \mb{x}} = [f(\mu _{j-1} - \mb{x}_{i}’\bbeta ) - f(\mu _{j} - \mb{x}_{i}’\bbeta )] \bbeta  \]

where $f(x) = \frac{d F(x)}{dx}$. In particular,

\[  f(x) = \frac{d F(x)}{dx} = \left\{  \begin{array}{ll} \frac{1}{\sqrt {2\pi }}e^{-x^2/2} &  \mr{(probit)} \\ \frac{e^{-x}}{(1+e^{-x})^2} &  \mr{(logit)} \end{array} \right.  \]
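
The following Python sketch (illustrative only, not the PROC QLIM implementation) evaluates this marginal-effect formula for a single observation, given a vector of thresholds $\mu _{0},\cdots ,\mu _{M}$ and an index value $\mb{x}_{i}’\bbeta $; the function and argument names are hypothetical:

   import numpy as np
   from scipy.stats import norm, logistic

   def ordinal_marginal_effect(xbeta_i, beta, mu, j, dist="probit"):
       # Marginal effect of x on Prob[y_i = j]:
       #   [f(mu_{j-1} - x'beta) - f(mu_j - x'beta)] * beta,
       # where mu = (mu_0, ..., mu_M) includes the -inf and +inf limits
       # and j runs over the categories 1, ..., M.
       f = norm.pdf if dist == "probit" else logistic.pdf
       return (f(mu[j - 1] - xbeta_i) - f(mu[j] - xbeta_i)) * beta

   # Binary probit case: mu = (-inf, 0, inf); for j = 2 (that is, y = 1)
   # the expression reduces to phi(x'beta) * beta.
   mu = np.array([-np.inf, 0.0, np.inf])
   print(ordinal_marginal_effect(0.3, np.array([0.5, -0.2]), mu, j=2))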

The marginal effects in the Box-Cox regression model are

\[  \frac{\partial {E}[y_{i}]}{\partial \mb{x}} = \bbeta \frac{x^{\lambda _{k}-1}}{y^{\lambda _{0}-1}}  \]
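
A one-line Python sketch of this expression follows, with hypothetical names: x_k is the value of the kth regressor, lam_k its transformation parameter, and lam_0 the transformation parameter of the dependent variable.

   def boxcox_marginal_effect(beta_k, x_k, y, lam_k, lam_0):
       # beta_k * x_k**(lam_k - 1) / y**(lam_0 - 1)
       return beta_k * x_k ** (lam_k - 1) / y ** (lam_0 - 1)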

The marginal effects in the truncated regression model are

\[  \frac{\partial {E}[y_{i}|L_{i} < y_{i}^{*} < R_{i}]}{\partial \mb{x}} = \bbeta \left[ 1 - \frac{(\phi (a_{i})-\phi (b_{i}))^2}{(\Phi (b_{i})-\Phi (a_{i}))^2} + \frac{a_{i}\phi (a_{i})-b_{i}\phi (b_{i})}{\Phi (b_{i})-\Phi (a_{i})} \right]  \]

where $a_{i}=\frac{L_{i}-\mb{x}_{i}'\bbeta }{\sigma _ i}$ and $b_{i}=\frac{R_{i}-\mb{x}_{i}'\bbeta }{\sigma _ i}$.

The marginal effects in the censored regression model are

\[  \frac{\partial {E}[y|\mb{x}_{i}]}{\partial \mb{x}} = \bbeta \times Prob[L_{i}<y_{i}^{*}<R_{i}]  \]
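
The next Python sketch (illustrative names, not PROC QLIM code) evaluates the truncated- and censored-regression marginal effects for one observation; the $x\phi (x)$ terms are set to their zero limits when a boundary is infinite.

   import numpy as np
   from scipy.stats import norm

   def _x_phi(x):
       # x * phi(x) tends to 0 as x goes to +/- infinity
       return 0.0 if np.isinf(x) else x * norm.pdf(x)

   def truncated_marginal_effect(beta, xbeta_i, sigma_i, L, R):
       a = (L - xbeta_i) / sigma_i
       b = (R - xbeta_i) / sigma_i
       den = norm.cdf(b) - norm.cdf(a)
       scale = (1.0 - ((norm.pdf(a) - norm.pdf(b)) / den) ** 2
                + (_x_phi(a) - _x_phi(b)) / den)
       return beta * scale

   def censored_marginal_effect(beta, xbeta_i, sigma_i, L, R):
       a = (L - xbeta_i) / sigma_i
       b = (R - xbeta_i) / sigma_i
       return beta * (norm.cdf(b) - norm.cdf(a))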

Inverse Mills Ratio, Expected and Conditionally Expected Values

Expected and conditionally expected values are computed only for continuous variables. The inverse Mills ratio is computed for censored or truncated continuous, binary discrete, and selection endogenous variables.

Let $L_ i$ and $R_ i$ be the lower and upper boundaries, respectively, for $y_ i$. Define $a_{i}=\frac{L_{i}-\mb{x}_{i}'\bbeta }{\sigma _ i}$ and $b_{i}=\frac{R_{i}-\mb{x}_{i}'\bbeta }{\sigma _ i}$. Then the inverse Mills ratio is defined as

\[  \lambda = \frac{(\phi (a_{i})-\phi (b_{i}))}{(\Phi (b_{i})-\Phi (a_{i}))}  \]

for a continuous variable and defined as

\[  \lambda = \frac{\phi (\mb{x}_{i}'\bbeta )}{\Phi (\mb{x}_{i}'\bbeta )}  \]

for a binary discrete variable.
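
The two definitions can be sketched in Python as follows (names are illustrative; norm is the standard normal distribution from scipy.stats):

   import numpy as np
   from scipy.stats import norm

   def imr_continuous(xbeta_i, sigma_i, L=-np.inf, R=np.inf):
       # (phi(a_i) - phi(b_i)) / (Phi(b_i) - Phi(a_i)) with standardized bounds
       a = (L - xbeta_i) / sigma_i
       b = (R - xbeta_i) / sigma_i
       return (norm.pdf(a) - norm.pdf(b)) / (norm.cdf(b) - norm.cdf(a))

   def imr_binary(xbeta_i):
       # phi(x'beta) / Phi(x'beta)
       return norm.pdf(xbeta_i) / norm.cdf(xbeta_i)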

The expected value is the unconditional expectation of the dependent variable. For a censored variable, it is

\[  E[y_ i] = \Phi (a_{i}) L_{i} + (\mb{x}_{i}’\bbeta + \lambda \sigma _ i) (\Phi (b_{i})-\Phi (a_{i})) + (1-\Phi (b_{i})) R_{i}  \]

For a left-censored variable ($R_ i=\infty $), this formula is

\[  E[y_ i] = \Phi (a_{i}) L_{i} + (\mb{x}_{i}’\bbeta + \lambda \sigma _ i) (1-\Phi (a_{i}))  \]

where $\lambda = \frac{\phi (a_{i})}{1-\Phi (a_{i})}$.

For a right-censored variable ($L_ i=-\infty $), this formula is

\[  E[y_ i] = (\mb{x}_{i}’\bbeta + \lambda \sigma _ i) \Phi (b_{i}) + (1-\Phi (b_{i})) R_{i}  \]

where $\lambda = -\frac{\phi (b_{i})}{\Phi (b_{i})}$.

For a noncensored variable, this formula is

\[  E[y_ i] = \mb{x}_{i}’\bbeta  \]

The conditional expected value is the expectation given that the variable is inside the boundaries:

\[  E[y_ i| L_ i< y_ i < R_ i] = \mb{x}_{i}’\bbeta + \lambda \sigma _ i  \]
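
A Python sketch that combines the unconditional and conditional expectations for a censored normal response follows (illustrative names only). The boundary terms are dropped at infinite limits, which reproduces the left-censored, right-censored, and noncensored special cases above.

   import numpy as np
   from scipy.stats import norm

   def censored_expectations(xbeta_i, sigma_i, L=-np.inf, R=np.inf):
       # Returns (E[y_i], E[y_i | L_i < y_i < R_i]) for a censored normal response.
       a = (L - xbeta_i) / sigma_i
       b = (R - xbeta_i) / sigma_i
       lam = (norm.pdf(a) - norm.pdf(b)) / (norm.cdf(b) - norm.cdf(a))
       conditional = xbeta_i + lam * sigma_i
       # Phi(a)*L and (1 - Phi(b))*R vanish at infinite boundaries
       lower_term = 0.0 if np.isinf(L) else norm.cdf(a) * L
       upper_term = 0.0 if np.isinf(R) else (1.0 - norm.cdf(b)) * R
       unconditional = lower_term + conditional * (norm.cdf(b) - norm.cdf(a)) + upper_term
       return unconditional, conditional

   # Left-censored (Tobit-type) example with L_i = 0 and R_i = infinity
   print(censored_expectations(xbeta_i=0.8, sigma_i=1.0, L=0.0))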

Probability

Probability applies only to discrete responses. It is the marginal probability that the discrete response takes the value of the observation. If the PROBALL option is specified, then probabilities for all possible responses of the discrete variables are computed.

Technical Efficiency

Technical efficiency for each producer is computed only for stochastic frontier models.

In general, the stochastic production frontier can be written as

\[  y_ i=f(x_ i;\beta )\exp \{ v_ i\} TE_ i  \]

where $y_ i$ denotes producer i’s actual output, $f(\cdot )$ is the deterministic part of the production frontier, $\exp \{ v_ i\} $ is a producer-specific error term, and $TE_ i$ is the technical efficiency coefficient, which can be written as

\[  TE_ i=\frac{y_ i}{f(x_ i;\beta )\exp \{ v_ i\} }.  \]

In the case of a Cobb-Douglas production function, $TE_ i=\exp \{ -u_ i\} $. See the section Stochastic Frontier Production and Cost Models.

The cost frontier can be written in general as

\[  E_ i=c(y_ i,w_ i;\beta )\exp \{ v_ i\} /CE_ i  \]

where $w_ i$ denotes producer i’s input prices, $c(\cdot )$ is the deterministic part of the cost frontier, $\exp \{ v_ i\} $ is a producer-specific error term, and $CE_ i$ is the cost efficiency coefficient, which can be written as

\[  CE_ i=\frac{c(y_ i,w_ i;\beta )\exp \{ v_ i\} }{E_ i}  \]

In the case of a Cobb-Douglas cost function, $CE_ i=\exp \{ -u_ i\} $. See the section Stochastic Frontier Production and Cost Models. Hence, in this case the technical and cost efficiency coefficients coincide. The estimates of technical efficiency are provided in the following subsections.

Normal-Half Normal Model

Define $\mu _*=-\epsilon \sigma _ u^2/\sigma ^2$ and $\sigma _*^2=\sigma _ u^2\sigma _ v^2/\sigma ^2$. Then, as shown by Jondrow et al. (1982), the conditional density is as follows:

\[  f(u|\epsilon )=\frac{f(u,\epsilon )}{f(\epsilon )} =\frac{1}{\sqrt {2\pi }\sigma _*}\exp \left\{ -\frac{(u-\mu _*)^2}{2\sigma _*^2}\right\}  \bigg/\left[1-\Phi \left(-\frac{\mu _*}{\sigma _*}\right)\right]  \]

Hence, $f(u|\epsilon )$ is the density for $N^+(\mu _*,\sigma _*^2)$.

Using this result, it follows that the estimate of technical efficiency (Battese and Coelli, 1988) is

\[  TE1_ i=E(\exp \{ -u_ i\} |\epsilon _ i)=\left[\frac{1-\Phi (\sigma _*-\mu _{*i}/\sigma _*)}{1-\Phi (-\mu _{*i}/\sigma _*)}\right]\exp \left\{ -\mu _{*i}+\frac{1}{2}\sigma _*^2\right\}   \]

The second version of the estimate (Jondrow et al., 1982) is

\[  TE2_ i=\exp \{ -E(u_ i|\epsilon _ i)\}   \]

where

\[  E(u_ i|\epsilon _ i)=\mu _{*i}+\sigma _*\left[\frac{\phi (-\mu _{*i}/\sigma _*)}{1-\Phi (-\mu _{*i}/\sigma _*)}\right]=\sigma _*\left[\frac{\phi (\epsilon _ i\lambda /\sigma )}{1-\Phi (\epsilon _ i\lambda /\sigma )}-\left(\frac{\epsilon _ i\lambda }{\sigma }\right)\right]  \]
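
As an illustration (not the PROC QLIM implementation), both estimates can be computed from the composed residuals $\epsilon _ i$ and the variance parameters, as in the following Python sketch with hypothetical names:

   import numpy as np
   from scipy.stats import norm

   def halfnormal_efficiency(eps, sigma_u, sigma_v):
       # TE1 (Battese and Coelli 1988) and TE2 (Jondrow et al. 1982) for the
       # normal-half normal production frontier, given composed errors eps.
       sigma2 = sigma_u**2 + sigma_v**2
       mu_star = -eps * sigma_u**2 / sigma2
       sigma_star = np.sqrt(sigma_u**2 * sigma_v**2 / sigma2)
       z = mu_star / sigma_star
       te1 = ((1.0 - norm.cdf(sigma_star - z)) / (1.0 - norm.cdf(-z))
              * np.exp(-mu_star + 0.5 * sigma_star**2))
       e_u = mu_star + sigma_star * norm.pdf(-z) / (1.0 - norm.cdf(-z))
       te2 = np.exp(-e_u)
       return te1, te2

   print(halfnormal_efficiency(eps=np.array([-0.2, 0.1]), sigma_u=0.3, sigma_v=0.2))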

Normal-Exponential Model

Define $ A=-\tilde{\mu }/\sigma _ v$ and $\tilde{\mu }=-\epsilon -\sigma _ v^2/\sigma _ u$. Then, as shown by Kumbhakar and Lovell (2000), the conditional density is as follows:

\[  f(u|\epsilon ) =\frac{1}{\sqrt {2\pi }\sigma _ v[1-\Phi (-\tilde{\mu }/\sigma _ v)]}\exp \left\{ -\frac{(u-\tilde{\mu })^2}{2\sigma _ v^2}\right\}   \]

Hence, $f(u|\epsilon )$ is the density for $N^+(\tilde{\mu },\sigma _ v^2)$.

Using this result, it follows that the estimate of technical efficiency is

\[  TE1_ i=E(\exp \{ -u_ i\} |\epsilon _ i)=\left[\frac{1-\Phi (\sigma _ v-\tilde{\mu }_ i/\sigma _ v)}{1-\Phi (-\tilde{\mu }_ i/\sigma _ v)}\right]\exp \left\{ -\tilde{\mu }_ i+\frac{1}{2}\sigma _ v^2\right\}   \]

The second version of the estimate is

\[  TE2_ i=\exp \{ -E(u_ i|\epsilon _ i)\}   \]

where

\[  E(u_ i|\epsilon _ i)=\tilde{\mu }_ i+\sigma _ v\left[\frac{\phi (-\tilde{\mu }_ i/\sigma _ v)}{1-\Phi (-\tilde{\mu }_ i/\sigma _ v)}\right]=\sigma _ v\left[\frac{\phi (A)}{\Phi (-A)}-A\right]  \]
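
An analogous Python sketch (illustrative names) for the normal-exponential estimates follows:

   import numpy as np
   from scipy.stats import norm

   def exponential_efficiency(eps, sigma_u, sigma_v):
       # TE1 and TE2 for the normal-exponential production frontier
       mu_t = -eps - sigma_v**2 / sigma_u
       z = mu_t / sigma_v
       te1 = ((1.0 - norm.cdf(sigma_v - z)) / (1.0 - norm.cdf(-z))
              * np.exp(-mu_t + 0.5 * sigma_v**2))
       e_u = mu_t + sigma_v * norm.pdf(-z) / (1.0 - norm.cdf(-z))
       return te1, np.exp(-e_u)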

Normal-Truncated Normal Model

Define $\tilde{\mu }=(-\sigma _ u^2\epsilon _ i+\mu \sigma _ v^2)/\sigma ^2$ and $\sigma _*^2=\sigma _ u^2\sigma _ v^2/\sigma ^2$. Then, as shown by Kumbhakar and Lovell (2000), the conditional density is as follows:

\[  f(u|\epsilon )= \frac{1}{\sqrt {2\pi }\sigma _*[1-\Phi (-\tilde{\mu }/\sigma _*)]}\exp \left\{ -\frac{(u-\tilde{\mu })^2}{2\sigma _*^2}\right\}   \]

Hence, $f(u|\epsilon )$ is the density for $N^+(\tilde{\mu },\sigma _*^2)$.

Using this result, it follows that the estimate of technical efficiency is

\[  TE1_ i=E(\exp \{ -u_ i\} |\epsilon _ i)=\frac{1-\Phi (\sigma _*-\tilde{\mu }_ i/\sigma _*)}{1-\Phi (-\tilde{\mu }_ i/\sigma _*)}\exp \left\{ -\tilde{\mu }_ i+\frac{1}{2}\sigma _*^2\right\}   \]

The second version of the estimate is

\[  TE2_ i=\exp \{ -E(u_ i|\epsilon _ i)\}   \]

where

\[  E(u_ i|\epsilon _ i)=\tilde{\mu }_ i+\sigma _*\left[\frac{\phi (\tilde{\mu }_ i/\sigma _*)}{1-\Phi (-\tilde{\mu }_ i/\sigma _*)}\right]  \]
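
A corresponding Python sketch (illustrative names) for the normal-truncated normal estimates follows, where mu denotes the location parameter of the pretruncated inefficiency distribution, as in the corresponding model section:

   import numpy as np
   from scipy.stats import norm

   def truncnormal_efficiency(eps, mu, sigma_u, sigma_v):
       # TE1 and TE2 for the normal-truncated normal production frontier
       sigma2 = sigma_u**2 + sigma_v**2
       mu_t = (-sigma_u**2 * eps + mu * sigma_v**2) / sigma2
       sigma_star = np.sqrt(sigma_u**2 * sigma_v**2 / sigma2)
       z = mu_t / sigma_star
       te1 = ((1.0 - norm.cdf(sigma_star - z)) / (1.0 - norm.cdf(-z))
              * np.exp(-mu_t + 0.5 * sigma_star**2))
       e_u = mu_t + sigma_star * norm.pdf(z) / (1.0 - norm.cdf(-z))
       return te1, np.exp(-e_u)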