The PANEL Procedure

Tests for Serial Correlation and Cross-Sectional Effects

The presence of cross-sectional effects causes serial correlation in the errors. Therefore, serial correlation is often tested jointly with cross-sectional effects. Joint and conditional tests for both serial correlation and cross-sectional effects have been covered extensively in the literature.

Baltagi and Li Joint LM Test for Serial Correlation and Random Cross-Sectional Effects

Baltagi and Li (1991) derive the LM test statistic that jointly tests for zero first-order serial correlation and random cross-sectional effects under normality and homoscedasticity. The test statistic is independent of the form of serial correlation, so it can be used with either AR$(1)$ or MA$(1)$ error terms. The null hypothesis is a white noise component: $H_{0}^{1}: \sigma _{\gamma }^{2}=0,\theta =0$ for MA$(1)$ with MA coefficient $\theta $ or $H_{0}^{2}: \sigma _{\gamma }^{2}=0,\rho =0$ for AR$(1)$ with AR coefficient $\rho $. The alternative is a one-way random-effects (cross-sectional) model, first-order AR$(1)$ or MA$(1)$ serial correlation in the errors, or both. Under the null hypothesis, the model can be estimated by pooled OLS. Denote the pooled OLS residuals by $\hat{u}_{it}$. The test statistic is

\begin{equation*}  \mr{BL91} = \frac{NT^{2}}{2\left(T-1\right)\left(T-2\right)}\left[A^{2}-4AB+2TB^{2}\right]\xrightarrow {H_{0}^{1,2}}\chi ^{2}\left(2\right) \end{equation*}

where

\begin{equation*}  A=\frac{\sum _{i = 1}^{N}\left(\sum _{t=1}^{T}\hat{u}_{it}\right)^{2}}{\sum _{i = 1}^{N}\sum _{t=1}^{T}\hat{u}_{it}^{2}}-1,\hspace{0.2 in}B=\frac{\sum _{i = 1}^{N}\sum _{t=2}^{T}\hat{u}_{it}\hat{u}_{i,t-1}}{\sum _{i = 1}^{N}\sum _{t=1}^{T}\hat{u}_{it}^{2}} \end{equation*}
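
The following sketch (not part of PROC PANEL) illustrates how the BL91 statistic could be computed from these formulas in Python. It assumes a balanced panel whose pooled OLS residuals are arranged as an $N \times T$ NumPy array; the array name uhat and the function name are hypothetical.

    import numpy as np
    from scipy import stats

    def baltagi_li_1991(uhat):
        """Joint LM test (BL91) for zero first-order serial correlation and
        random cross-sectional effects; uhat is an (N, T) array of pooled
        OLS residuals from a balanced panel (assumed arrangement)."""
        N, T = uhat.shape
        ssr = np.sum(uhat**2)                          # sum of squared residuals
        A = np.sum(uhat.sum(axis=1)**2) / ssr - 1.0    # cross-sectional component
        B = np.sum(uhat[:, 1:] * uhat[:, :-1]) / ssr   # first-order autocovariance component
        bl91 = N * T**2 / (2.0 * (T - 1) * (T - 2)) * (A**2 - 4*A*B + 2*T*B**2)
        return bl91, stats.chi2.sf(bl91, df=2)         # statistic and chi-square(2) p-value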

Wooldridge Test for the Presence of Unobserved Effects

Wooldridge (2002, sec. 10.4.4) suggests a test of the null hypothesis that there is no unobserved effect. Under the null hypothesis $H_{0}: \sigma _{\gamma }^{2}=0$, the errors $u_{it}$ are serially uncorrelated. Therefore, Wooldridge (2002) proposes to test $H_{0}$ by testing for AR(1) serial correlation. The test statistic that he proposes is

\begin{equation*}  W = \frac{\sum _{i = 1}^{N}\sum _{t = 1}^{T-1}\sum _{s = t + 1}^{T}\hat{u}_{it}\hat{u}_{is}}{\left[\sum _{i = 1}^{N}\left(\sum _{t = 1}^{T-1}\sum _{s = t + 1}^{T}\hat{u}_{it}\hat{u}_{is}\right)^{2}\right]^{1/2}}\rightarrow \mathcal{N}\left(0,1\right) \end{equation*}

where $\hat{u}_{it}$ are the pooled OLS residuals. The test statistic $W$ can detect many types of serial correlation in the error term $u_{it}$, so it has power against both the one-way random-effects specification and serial correlation in the error terms.
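
A minimal sketch of the computation, under the same assumptions as before (a balanced panel with pooled OLS residuals stored in an $N \times T$ NumPy array named uhat, a hypothetical name), is

    import numpy as np
    from scipy import stats

    def wooldridge_unobserved_effects(uhat):
        """Wooldridge (2002, sec. 10.4.4) test statistic W from pooled OLS
        residuals uhat, an (N, T) array for a balanced panel (assumed)."""
        # For each cross section i, sum_{t<s} u_it * u_is equals
        # ((sum_t u_it)^2 - sum_t u_it^2) / 2.
        cross_prods = (uhat.sum(axis=1)**2 - np.sum(uhat**2, axis=1)) / 2.0
        W = cross_prods.sum() / np.sqrt(np.sum(cross_prods**2))
        return W, 2.0 * stats.norm.sf(abs(W))          # statistic and two-sided normal p-value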

Bera, Sosa Escudero, and Yoon Modified Rao’s Score Test in the Presence of Local Misspecification

Bera, Sosa Escudero, and Yoon (2001) point out that standard specification tests, such as the Honda (1985) test described in the section Honda (1985) and Honda (1991) UMP Test and Moulton and Randolph (1989) SLM Test, are not valid when they test for either cross-sectional random effects or serial correlation without allowing for the possible presence of the other effect. They suggest a modified Rao’s score (RS) test. With A and B defined as in Baltagi and Li (1991), the test statistic for testing serial correlation under random cross-sectional effects is

\begin{equation*}  \mr{RS}_{\rho }^{*} = \frac{NT^{2}\left(B-A/T\right)^{2}}{\left(T-1\right)\left(1-2/T\right)} \end{equation*}

Baltagi and Li (1991, 1995) derive the conventional RS test when the cross-sectional random effects are assumed to be absent:

\begin{equation*}  \mr{RS}_{\rho } = \frac{NT^{2}B^{2}}{T-1} \end{equation*}

Symmetrically, to test for the cross-sectional random effects in the presence of serial correlation, the modified Rao’s score test statistic is

\begin{equation*}  \mr{RS}_{\mu }^{*} = \frac{NT\left(A-2B\right)^{2}}{2\left(T-1\right)\left(1-2/T\right)} \end{equation*}

and the conventional Rao’s score test statistic is given in Breusch and Pagan (1980). The test statistics are asymptotically distributed as $\chi ^{2}\left(1\right)$.

Because the alternative hypothesis is one-sided ($\sigma _{\gamma }^{2}>0$), a one-sided test is expected to be more powerful. The one-sided test statistic can be derived by taking the signed square root of the two-sided statistic:

\begin{equation*}  \mr{RSO}_{\mu }^{*} = \sqrt {\frac{NT}{2\left(T-1\right)\left(1-2/T\right)}}\left(A-2B\right)\rightarrow \mathcal{N}\left(0,1\right) \end{equation*}
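
Because all four statistics are functions of the same quantities A and B, they can be computed together. The following sketch (hypothetical names, not part of PROC PANEL) assumes pooled OLS residuals from a balanced panel in an $N \times T$ NumPy array named uhat:

    import numpy as np
    from scipy import stats

    def bsy_tests(uhat):
        """Modified and conventional Rao's score statistics of Bera,
        Sosa Escudero, and Yoon (2001) and Baltagi and Li (1991, 1995)."""
        N, T = uhat.shape
        ssr = np.sum(uhat**2)
        A = np.sum(uhat.sum(axis=1)**2) / ssr - 1.0
        B = np.sum(uhat[:, 1:] * uhat[:, :-1]) / ssr
        rs_rho_star = N * T**2 * (B - A / T)**2 / ((T - 1) * (1 - 2.0 / T))   # serial correlation, allowing random effects
        rs_rho      = N * T**2 * B**2 / (T - 1)                               # serial correlation, no random effects
        rs_mu_star  = N * T * (A - 2*B)**2 / (2.0 * (T - 1) * (1 - 2.0 / T))  # random effects, allowing serial correlation
        rso_mu_star = np.sqrt(N * T / (2.0 * (T - 1) * (1 - 2.0 / T))) * (A - 2*B)  # signed one-sided version
        return {"RS_rho_star": (rs_rho_star, stats.chi2.sf(rs_rho_star, 1)),
                "RS_rho":      (rs_rho,      stats.chi2.sf(rs_rho, 1)),
                "RS_mu_star":  (rs_mu_star,  stats.chi2.sf(rs_mu_star, 1)),
                "RSO_mu_star": (rso_mu_star, stats.norm.sf(rso_mu_star))}     # one-sided upper-tail p-value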

Baltagi and Li (1995) LM Test for First-Order Correlation under Fixed Effects

Let $\hat{u}_{it}$ be the residuals from the fixed one-way model (FIXONE). The two-sided LM test statistic for testing the null hypothesis of a white noise component in a fixed one-way model ($H_{0}^{5}: \theta =0$ or $H_{0}^{6}: \rho =0$, given that $\gamma _{i}$ are fixed effects) is

\begin{equation*}  \mr{BL95} = \frac{NT^{2}}{T-1}\left(\frac{\sum _{i=1}^{N}\sum _{t=2}^{T}\hat{u}_{it}\hat{u}_{i,t-1}}{\sum _{i=1}^{N}\sum _{t=1}^{T}\hat{u}_{it}^{2}}\right)^{2} \end{equation*}

The LM test statistic is asymptotically distributed as $\chi ^{2}\left(1\right)$ under the null hypothesis. The one-sided LM test with alternative hypothesis $\rho >0$ is

\begin{equation*}  \mr{BL95}_{2} = \sqrt {\frac{NT^{2}}{T-1}}\frac{\sum _{i=1}^{N}\sum _{t=2}^{T}\hat{u}_{it}\hat{u}_{i,t-1}}{\sum _{i=1}^{N}\sum _{t=1}^{T}\hat{u}_{it}^{2}} \end{equation*}

which is asymptotically distributed as standard normal.
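
A sketch of both statistics, assuming the fixed one-way (FIXONE) residuals for a balanced panel are arranged in an $N \times T$ NumPy array (hypothetical name uhat_fixone), is

    import numpy as np
    from scipy import stats

    def baltagi_li_1995(uhat_fixone):
        """BL95 two-sided and one-sided LM statistics for first-order serial
        correlation under fixed effects, from (N, T) within residuals."""
        N, T = uhat_fixone.shape
        ratio = np.sum(uhat_fixone[:, 1:] * uhat_fixone[:, :-1]) / np.sum(uhat_fixone**2)
        bl95   = N * T**2 / (T - 1) * ratio**2             # two-sided, chi-square(1) under H0
        bl95_2 = np.sqrt(N * T**2 / (T - 1)) * ratio       # one-sided, standard normal under H0
        return (bl95, stats.chi2.sf(bl95, 1)), (bl95_2, stats.norm.sf(bl95_2))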

Durbin-Watson Statistic

Bhargava, Franzini, and Narendranathan (1982) propose a test of the null hypothesis of no serial correlation $H_{0}^{6}: \rho =0$ against the alternative $H_{1}^{6}: 0<|\rho |<1$ by the Durbin-Watson statistic based on residuals $\hat{u}_{it}$ from the fixed one-way model (FIXONE):

\begin{equation*}  d_{\rho } = \frac{\sum _{i=1}^{N}\sum _{t=2}^{T}\left(\hat{u}_{it}-\hat{u}_{i,t-1}\right)^{2}}{\sum _{i=1}^{N}\sum _{t=1}^{T}\hat{u}_{it}^{2}} \end{equation*}

The test statistic $d_{\rho }$ is a locally most powerful invariant test in the neighborhood of $\rho =0$. Some of the upper and lower bounds are listed in Bhargava, Franzini, and Narendranathan (1982). For very large N, to test against a positive correlation $\rho >0$, you can simply test whether the test statistic d$_{\rho }<2$.
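
For illustration, the statistic d$_{\rho }$ could be computed as follows, assuming the FIXONE residuals for a balanced panel are stored in an $N \times T$ NumPy array (hypothetical name uhat_fixone):

    import numpy as np

    def panel_durbin_watson(uhat_fixone):
        """Panel Durbin-Watson statistic of Bhargava, Franzini, and
        Narendranathan (1982) from (N, T) fixed one-way residuals."""
        diffs = np.diff(uhat_fixone, axis=1)               # u_it - u_i,t-1 for t = 2, ..., T
        return np.sum(diffs**2) / np.sum(uhat_fixone**2)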

Berenblut-Webb Statistic

Let $\Delta \tilde{u}_{it}$ be the residuals from the first-difference estimation. Bhargava, Franzini, and Narendranathan (1982) suggest using the Berenblut-Webb statistic, which is a locally most powerful invariant test in the neighborhood of $\rho =1$. The test statistic is

\begin{equation*}  \mr{g}_{\rho } = \frac{\sum _{i=1}^{N}\sum _{t=2}^{T}\Delta \tilde{u}_{i,t}^{2}}{\sum _{i=1}^{N}\sum _{t=1}^{T}\hat{u}_{it}^{2}} \end{equation*}

The upper and lower bounds are the same as for the Durbin-Watson statistic $d_{\rho }$.
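
Given residuals from both estimations, g$_{\rho }$ is a simple ratio. A sketch, assuming the first-difference residuals are stored in an $N \times (T-1)$ array and the FIXONE residuals in an $N \times T$ array (hypothetical names), is

    import numpy as np

    def berenblut_webb(delta_u_fd, uhat_fixone):
        """Berenblut-Webb statistic: squared first-difference residuals over
        squared fixed one-way residuals (balanced panel assumed)."""
        return np.sum(delta_u_fd**2) / np.sum(uhat_fixone**2)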

Testing for the Random Walk Null Hypothesis

You can also use the Durbin-Watson and Berenblut-Webb statistics to test the random walk null hypothesis, with the bounds that are listed in Bhargava, Franzini, and Narendranathan (1982). For more information about these statistics, see the sections Durbin-Watson Statistic and Berenblut-Webb Statistic. Bhargava, Franzini, and Narendranathan (1982) also propose the R$_{\rho }$ statistic to test the random walk null hypothesis $\rho =1$ against the stationary alternative $|\rho |<1$. Let $\mb{F}^{*}=I_{N}\otimes \mb{F}$, where $\mb{F}$ is a $\left(T-1\right)\times \left(T-1\right)$ symmetric matrix that has the following elements:

\begin{equation*}  \mb{F}_{tt'} = \left(T-t'\right)t/T \hspace{0.2 in} \text{if } t'\geq t \hspace{0.2 in} \left( t,t'=1,\ldots ,T-1\right) \end{equation*}

The test statistic is

\begin{equation*} \begin{array}{l l l} \mr{R}_{\rho } &  = &  \Delta \tilde{U}'\Delta \tilde{U}/\Delta \tilde{U}'\mb{F}^{*}\Delta \tilde{U}\\ &  = &  \frac{\sum _{i=1}^{N}\sum _{t=2}^{T}\Delta \tilde{u}_{i,t}^{2}}{\left[\sum _{i=1}^{N}\sum _{t=2}^{T}\left(t-1\right) \left(T-t+1\right)\Delta \tilde{u}_{i,t}^{2}+2\sum _{i=1}^{N}\sum _{t=2}^{T-1}\sum _{t'=t+1}^{T}\left(T-t'+1\right)\left(t-1\right)\Delta \tilde{u}_{i,t}\Delta \tilde{u}_{i,t'}\right]/T} \end{array}\end{equation*}

The statistics R$_{\rho }$, g$_{\rho }$, and d$_{\rho }$ can be used with the same bounds. They satisfy $\mr{R}_{\rho }\leq \mr{g}_{\rho } \leq \mr{d}_{\rho }$, and they are equivalent for large panels.
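
For illustration, R$_{\rho }$ could be computed either from the quadratic-form expression or from the expanded sum. A sketch using the matrix form, assuming the first-difference residuals are stored in an $N \times (T-1)$ NumPy array (hypothetical name delta_u_fd), is

    import numpy as np

    def bfn_r_statistic(delta_u_fd):
        """R statistic of Bhargava, Franzini, and Narendranathan (1982) for the
        random walk null, from (N, T-1) first-difference residuals."""
        N, T_minus_1 = delta_u_fd.shape
        T = T_minus_1 + 1
        t = np.arange(1, T)                                          # indices 1, ..., T-1
        # F[t, t'] = (T - max(t, t')) * min(t, t') / T, the symmetric form of the definition above
        F = np.minimum.outer(t, t) * (T - np.maximum.outer(t, t)) / T
        denom = np.einsum('it,ts,is->', delta_u_fd, F, delta_u_fd)   # sum_i du_i' F du_i
        return np.sum(delta_u_fd**2) / denom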