The SYSLIN Procedure

Computational Details

This section discusses various computational details.

Computation of Least Squares-Based Estimators

Let the system be composed of $G$ equations and let the ith equation be expressed in this form:

\[  y_ i = Y_ i \bbeta _ i + X_ i \bgamma _ i + \mb{u}  \]

where

$y_ i$

is the vector of observations on the dependent variable

$Y_ i$

is the matrix of observations on the endogenous variables included in the equation

$\bbeta _ i$

is the vector of parameters associated with $Y_ i$

$X_ i$

is the matrix of observations on the predetermined variables included in the equation

$\bgamma _ i$

is the vector of parameters associated with $X_ i$

$\mb{u}$

is a vector of errors

Let $\hat{V}_ i=Y_ i-\hat{Y}_ i$, where $\hat{Y}_ i$ is the projection of $Y_ i$ onto the space spanned by the instrument matrix $\mb{Z}$.

Let

\[  \bdelta _ i \:  =\:  \left[ \begin{array}{c} \bbeta _{i}\\ \bgamma _{i} \end{array} \right]  \]

be the vector of parameters associated with both the endogenous and exogenous variables.

The K-class of estimators (Theil, 1971) is defined by

\[  \hat{ \bdelta }_{i,k} \:  = \:  \left[ \begin{array}{cc} Y_ i^\prime Y_ i - k \hat{V}_ i^\prime \hat{V}_ i &  Y_ i^\prime X_ i \\ X_ i^\prime Y_ i &  X_ i^\prime X_ i \end{array} \right]^{-1} \left[ \begin{array}{c} (Y_ i - k\hat{V}_ i)^\prime y_ i \\ X_ i^\prime y_ i \end{array} \right]  \]

where $k$ is a user-specified value. Setting $k=0$ yields the OLS estimator, and $k=1$ yields the 2SLS estimator.
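
The K-class formula above can be sketched numerically. The following NumPy fragment (simulated data; all variable names are illustrative, not SAS source) builds the first-stage residuals $\hat{V}_ i$ and evaluates $\hat{\bdelta }_{i,k}$, checking that $k=0$ reproduces OLS on $[Y_ i \  X_ i]$:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 200
Z = np.column_stack([np.ones(n), rng.normal(size=(n, 2))])           # instrument matrix
Yi = (Z @ np.array([1.0, 2.0, -1.0]))[:, None] + rng.normal(size=(n, 1))  # included endogenous variable
Xi = Z[:, :2]                                                        # included predetermined variables
yi = Yi[:, 0] * 0.5 + Xi @ np.array([1.0, -0.3]) + rng.normal(size=n)

Pz = Z @ np.linalg.solve(Z.T @ Z, Z.T)    # projection onto the instrument space
Vi = Yi - Pz @ Yi                         # first-stage residuals V-hat_i

def k_class(k):
    """K-class estimate of (beta_i, gamma_i) for a user-supplied k."""
    lhs = np.block([[Yi.T @ Yi - k * (Vi.T @ Vi), Yi.T @ Xi],
                    [Xi.T @ Yi, Xi.T @ Xi]])
    rhs = np.concatenate([(Yi - k * Vi).T @ yi, Xi.T @ yi])
    return np.linalg.solve(lhs, rhs)

# k = 0 reproduces OLS on [Y_i  X_i]
R = np.hstack([Yi, Xi])
ols = np.linalg.solve(R.T @ R, R.T @ yi)
print(np.allclose(k_class(0.0), ols))   # True
```

For $k=0$ the bracketed matrix collapses to the OLS normal equations, which the final check confirms.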

Let

\[  \bR _ i \:  = \:  [ Y_ i \   X_ i ]  \]

and

\[  \hat{\bR }_ i \:  = \:  [ \hat{Y}_ i\   X_ i ]  \]

The 2SLS estimator is defined as

\[  \hat{\bdelta }_{i, 2SLS} = [ \hat{\bR }_ i^\prime \:  \hat{\bR }_ i ]^{-1} \hat{\bR }_ i^\prime y_ i  \]
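
As a sketch of this definition (simulated data; names are illustrative, not SAS source), the following NumPy fragment projects $Y_ i$ onto the instrument space and runs OLS on the projected regressors:

```python
import numpy as np

rng = np.random.default_rng(1)
n = 300
Z = np.column_stack([np.ones(n), rng.normal(size=(n, 3))])            # full instrument matrix
Yi = (Z @ np.array([1.0, 1.0, -1.0, 0.5]))[:, None] + rng.normal(size=(n, 1))
Xi = Z[:, :2]                                                         # included predetermined variables
yi = Yi[:, 0] * 0.7 + Xi @ np.array([1.0, 0.5]) + rng.normal(size=n)

Yhat = Z @ np.linalg.lstsq(Z, Yi, rcond=None)[0]   # projection of Y_i onto span(Z)
Rhat = np.hstack([Yhat, Xi])                       # Rhat_i = [Yhat_i  X_i]
delta_2sls = np.linalg.solve(Rhat.T @ Rhat, Rhat.T @ yi)
print(delta_2sls.shape)   # (3,): one beta_i, two gamma_i
```

The solve enforces the normal equations $\hat{\bR }_ i^\prime (y_ i - \hat{\bR }_ i \hat{\bdelta }) = 0$, which is exactly the displayed formula.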

Let $\mb{y}$ and $\bdelta $ be the vectors obtained by stacking the vectors of dependent variables and parameters for all $G$ equations, and let $\bR $ and $\hat{\bR }$ be the block diagonal matrices formed by $\bR _ i$ and $\hat{\bR }_ i$, respectively.

The SUR and ITSUR estimators are defined as

\[  \hat{\bdelta }_{(IT)SUR} = \left[ \bR ^\prime \left(\hat{\Sigma }^{-1} \otimes \bI \right) \bR \right]^{-1} \bR ^\prime \left(\hat{\Sigma }^{-1} \otimes \bI \right)\mb{y}  \]

while the 3SLS and IT3SLS estimators are defined as

\[  \hat{\bdelta }_{(IT)3SLS} = \left[ \hat{\bR }^\prime \left(\hat{\Sigma }^{-1} \otimes \bI \right) \hat{\bR } \right]^{-1} \hat{\bR }^\prime \left(\hat{\Sigma }^{-1} \otimes \bI \right)\mb{y}  \]

where $\bI $ is the identity matrix, and $\hat{\Sigma }$ is an estimator of the cross-equation covariance matrix. For 3SLS, $\hat{\Sigma }$ is obtained from the 2SLS estimation, while for SUR it is derived from the OLS estimation. For IT3SLS and ITSUR, $\hat{\Sigma }$ is re-estimated from the residuals of the previous iteration, and the process is repeated until the estimates converge.
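
A minimal SUR sketch for $G=2$ equations (simulated data; names are illustrative, not SAS source) follows the two-step recipe: estimate $\hat{\Sigma }$ from OLS residuals, then apply GLS with weight $\hat{\Sigma }^{-1} \otimes \bI $ to the stacked system:

```python
import numpy as np

rng = np.random.default_rng(2)
n = 100
X1 = np.column_stack([np.ones(n), rng.normal(size=n)])
X2 = np.column_stack([np.ones(n), rng.normal(size=n)])
e = rng.multivariate_normal([0.0, 0.0], [[1.0, 0.6], [0.6, 1.0]], size=n)
y1 = X1 @ np.array([1.0, 2.0]) + e[:, 0]
y2 = X2 @ np.array([-1.0, 0.5]) + e[:, 1]

# Step 1: per-equation OLS residuals give the cross-equation covariance Sigma-hat
b1 = np.linalg.lstsq(X1, y1, rcond=None)[0]
b2 = np.linalg.lstsq(X2, y2, rcond=None)[0]
U = np.column_stack([y1 - X1 @ b1, y2 - X2 @ b2])
Sigma = U.T @ U / n

# Step 2: feasible GLS on the stacked system with weight Sigma^{-1} (x) I
R = np.zeros((2 * n, 4)); R[:n, :2] = X1; R[n:, 2:] = X2   # block diagonal regressor matrix
y = np.concatenate([y1, y2])
W = np.kron(np.linalg.inv(Sigma), np.eye(n))
delta_sur = np.linalg.solve(R.T @ W @ R, R.T @ W @ y)
print(delta_sur.shape)   # (4,): stacked parameters for both equations
```

Replacing the OLS first stage with 2SLS and $\bR $ with $\hat{\bR }$ gives the corresponding 3SLS sketch; iterating step 1 on the new residuals gives ITSUR/IT3SLS.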

Computation of Standard Errors

The VARDEF= option in the PROC SYSLIN statement controls the denominator used in calculating the cross-equation covariance estimates and the parameter standard errors and covariances. The values of the VARDEF= option and the resulting denominator are as follows:

N

uses the number of nonmissing observations.

DF

uses the number of nonmissing observations less the degrees of freedom in the model.

WEIGHT

uses the sum of the observation weights given by the WEIGHT statement.

WDF

uses the sum of the observation weights given by the WEIGHT statement less the degrees of freedom in the model.

The VARDEF= option does not affect the model mean squared error, root mean squared error, or $R^{2}$ statistics. These statistics are always based on the error degrees of freedom, regardless of the VARDEF= option. The VARDEF= option also does not affect the dependent variable coefficient of variation (CV).
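
The four denominators above amount to simple arithmetic; the following sketch (not SAS source, just the rules restated in Python) makes the mapping explicit:

```python
import numpy as np

def vardef_denominator(option, n_used, model_df, weights=None):
    """Denominator implied by VARDEF= for covariance estimates (illustrative)."""
    wsum = float(np.sum(weights)) if weights is not None else float(n_used)
    return {
        "N": n_used,                 # nonmissing observations
        "DF": n_used - model_df,     # observations less model degrees of freedom
        "WEIGHT": wsum,              # sum of observation weights
        "WDF": wsum - model_df,      # weight sum less model degrees of freedom
    }[option]

print(vardef_denominator("DF", n_used=100, model_df=4))   # 96
```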

Reduced Form Estimates

The REDUCED option in the PROC SYSLIN statement computes estimates of the reduced form coefficients. The REDUCED option requires that the equation system be square. If there are fewer models than endogenous variables, IDENTITY statements can be used to complete the equation system.

The reduced form coefficients are computed as follows. Represent the equation system, with all endogenous variables moved to the left-hand side of the equations and identities, as

\[  \mb{B Y} = \bGamma \mb{X}  \]

Here $\mb{B}$ is the estimated coefficient matrix for the endogenous variables $\mb{Y}$, and $\bGamma $ is the estimated coefficient matrix for the exogenous (or predetermined) variables $\mb{X}$.

The system can be solved for $\mb{Y}$ as follows, provided $\mb{B}$ is square and nonsingular:

\[  \mb{Y} = \mb{B} ^{-1} \bGamma \mb{X}  \]

The reduced form coefficients are the matrix $\mb{B}^{-1} \bGamma $.
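
As a small numerical illustration (toy coefficient matrices, not SAS output), the reduced form can be computed with a linear solve rather than an explicit inverse:

```python
import numpy as np

B = np.array([[1.0, -0.5],
              [0.2,  1.0]])            # coefficients on the endogenous variables
Gamma = np.array([[2.0, 0.0,  1.0],
                  [0.0, 1.0, -1.0]])   # coefficients on the predetermined variables

reduced = np.linalg.solve(B, Gamma)    # solves B * reduced = Gamma without forming B^{-1}
print(reduced.shape)   # (2, 3): one row per endogenous variable
```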

Uncorrelated Errors across Equations

The SDIAG option in the PROC SYSLIN statement computes estimates by assuming uncorrelated errors across equations. As a result, when the SDIAG option is used, the 3SLS estimates are identical to 2SLS estimates, and the SUR estimates are the same as the OLS estimates.
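
This equivalence can be checked numerically. The sketch below (simulated data; names are illustrative, not SAS source) plugs an arbitrary diagonal $\hat{\Sigma }$ into the SUR formula and confirms the result matches equation-by-equation OLS:

```python
import numpy as np

rng = np.random.default_rng(3)
n = 80
X1 = np.column_stack([np.ones(n), rng.normal(size=n)])
X2 = np.column_stack([np.ones(n), rng.normal(size=n)])
y1 = X1 @ np.array([1.0, -2.0]) + rng.normal(size=n)
y2 = X2 @ np.array([0.3, 1.5]) + rng.normal(size=n)

R = np.zeros((2 * n, 4)); R[:n, :2] = X1; R[n:, 2:] = X2   # block diagonal regressor matrix
y = np.concatenate([y1, y2])
Sigma = np.diag([2.0, 0.5])                                # any diagonal covariance
W = np.kron(np.linalg.inv(Sigma), np.eye(n))

sur = np.linalg.solve(R.T @ W @ R, R.T @ W @ y)
ols = np.concatenate([np.linalg.lstsq(X1, y1, rcond=None)[0],
                      np.linalg.lstsq(X2, y2, rcond=None)[0]])
print(np.allclose(sur, ols))   # True: a diagonal Sigma decouples the equations
```

With $\hat{\Sigma }$ diagonal and $\bR $ block diagonal, the GLS normal equations separate into the per-equation OLS normal equations, so the weights cancel.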

Overidentification Restrictions

The OVERID option in the MODEL statement can be used to test for overidentifying restrictions on parameters of each equation. The null hypothesis is that the predetermined variables that do not appear in any equation have zero coefficients. The alternative hypothesis is that at least one of the assumed zero coefficients is nonzero. The test is approximate and rejects the null hypothesis too frequently for small sample sizes.

The formula for the test is given as follows. Let $y_{i} = \mb{Y} _{i}\bbeta _{i} + \mb{Z} _{i}\bgamma _{i} + e_{i}$ be the $i$th equation. $\mb{Y} _{i}$ are the endogenous variables that appear as regressors in the $i$th equation, and $\mb{Z} _{i}$ are the instrumental variables that appear as regressors in the $i$th equation. Let $N_{i}$ be the number of variables in $\mb{Y} _{i}$ and $\mb{Z} _{i}$.

Let $v_{i} = y_{i}-\mb{Y} _{i}\hat{\bbeta }_{i}$. Let $\mb{Z}$ represent all instrumental variables, $T$ be the total number of observations, and $K$ be the total number of instrumental variables. Define $\hat{l}$ as follows:

\[  \hat{l} = \frac{v_ i^\prime (\mb{I} -\mb{Z}_ i (\mb{Z}_ i^\prime \mb{Z}_ i)^{-1}\mb{Z}_ i^\prime )v_ i}{v_ i^\prime (\mb{I} -\mb{Z} (\mb{Z}^\prime \mb{Z} )^{-1}\mb{Z}^\prime )v_ i}  \]

Then the test statistic

\[  \frac{T-K}{K-N_{i}} ( \hat{l} - 1 )  \]

is distributed approximately as an F with ${K-N_{i}}$ and ${T-K}$ degrees of freedom. See Basmann (1960) for more information.
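
The computation can be sketched end to end (simulated data; the 2SLS step and all names are illustrative, not SAS source). Each quadratic form in $\hat{l}$ is just a residual sum of squares from regressing $v_ i$ on the corresponding instrument set:

```python
import numpy as np

rng = np.random.default_rng(4)
T = 500
Z = np.column_stack([np.ones(T), rng.normal(size=(T, 3))])   # all instruments, K = 4
K = Z.shape[1]
Yi = (Z @ np.array([1.0, 1.0, -1.0, 0.5]))[:, None] + rng.normal(size=(T, 1))
Zi = Z[:, :2]                                    # instruments appearing in equation i
Ni = Yi.shape[1] + Zi.shape[1]                   # N_i = 3 regressors in equation i
yi = Yi[:, 0] * 0.8 + Zi @ np.array([1.0, -0.5]) + rng.normal(size=T)

# 2SLS for beta_i, then v_i = y_i - Y_i beta_i-hat
Rhat = np.hstack([Z @ np.linalg.lstsq(Z, Yi, rcond=None)[0], Zi])
delta = np.linalg.solve(Rhat.T @ Rhat, Rhat.T @ yi)
vi = yi - Yi[:, 0] * delta[0]

def annihilate(M, v):
    """v'(I - M(M'M)^{-1}M')v: residual sum of squares of v regressed on M."""
    r = v - M @ np.linalg.lstsq(M, v, rcond=None)[0]
    return r @ r

l_hat = annihilate(Zi, vi) / annihilate(Z, vi)
F_stat = (T - K) / (K - Ni) * (l_hat - 1.0)      # ~ F(K - N_i, T - K) under the null
print(F_stat >= 0.0)
```

Because the columns of $\mb{Z} _ i$ are a subset of those of $\mb{Z}$, the numerator RSS is at least the denominator RSS, so $\hat{l} \geq 1$ and the statistic is nonnegative.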

Fuller’s Modification to LIML

The ALPHA= option in the PROC SYSLIN and MODEL statements parameterizes Fuller’s modification to LIML. This modification is ${k={\gamma }-({\alpha }/(n-g))}$, where ${\alpha }$ is the value of the ALPHA= option, ${\gamma }$ is the LIML $k$ value, $n$ is the number of observations, and $g$ is the number of predetermined variables. Fuller’s modification is not used unless the ALPHA= option is specified. See Fuller (1977) for more information.
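
The adjustment itself is a one-line formula; the sketch below (illustrative numbers, not SAS source) restates it so the roles of $\alpha $, $n$, and $g$ are explicit:

```python
def fuller_k(gamma_liml, alpha, n, g):
    """Fuller's modified k: k = gamma - alpha / (n - g)."""
    return gamma_liml - alpha / (n - g)

# e.g. a LIML k of 1.05 with ALPHA=1, n=100 observations, g=5 predetermined variables
print(fuller_k(gamma_liml=1.05, alpha=1.0, n=100, g=5))
```

The resulting $k$ is then used in the K-class formula given earlier in this section.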