Language Reference


KALDFS Call

CALL KALDFS (sm, vsm, data, int, coef, var, bvec, bmat, initial, at, mt, s2 <, un, vun> );

The KALDFS subroutine computes the smoothed state vector and its mean square error matrix from the one-step forecast and mean square error matrix computed by the KALDFF subroutine.

The input arguments to the KALDFS subroutine are as follows:

data

is a $T \times N_ y$ matrix that contains data $(y_1, \ldots , y_ T)^{\prime }$.

int

is an $(N_z + N_y) \times N_\beta$ matrix for a time-invariant intercept, or a $(T + \mbox{lead})(N_z + N_y) \times N_{\beta}$ matrix that contains fixed matrices for the time-variant model in the transition equation and the measurement equation—that is, $(W^{\prime}_t, X^{\prime}_t)^{\prime}$.

coef

is an $(N_ z + N_ y) \times N_ z$ matrix for a time-invariant coefficient, or a $(T + \mbox{lead})(N_ z + N_ y) \times N_ z$ matrix that contains coefficients at each time in the transition equation and the measurement equation—that is, $(F^{\prime }_ t, H^{\prime }_ t)^{\prime }$.

var

is an $(N_z + N_y) \times (N_z + N_y)$ matrix for a time-invariant covariance matrix of the transition equation noise and the measurement equation noise, or a $(T + \mbox{lead})(N_z + N_y) \times (N_z + N_y)$ matrix that contains covariance matrices of the transition equation and measurement equation errors—that is, of $(\eta^{\prime}_t, \epsilon^{\prime}_t)^{\prime}$. A sketch of this stacking for the time-invariant case follows this argument list.

bvec

is an $N_\beta \times 1$ constant vector for the intercept for the mean effect $\beta $.

bmat

is an $N_\beta \times N_\delta $ matrix for the coefficient for the mean effect $\beta $.

initial

is an $N_\delta \times (N_\delta + 1)$ matrix that contains an initial random vector estimate and its covariance matrix—that is, $(\hat{\delta }_ T, \hat{\Sigma }_{\delta ,T})$.

at

is a $TN_ z \times (N_\delta + 1)$ matrix that contains $(A^{\prime }_1, \ldots , A^{\prime }_ T)^{\prime }$.

mt

is a $(TN_ z) \times N_ z$ matrix that contains $(M_1, \ldots , M_ T)^{\prime }$.

s2

is the estimated variance at the end of the data set, $\hat{\sigma }^2_ T$.

un

is an optional $N_ z \times (N_{\delta } + 1)$ matrix that contains $u_ T$. The returned value is $u_0$.

vun

is an optional $N_ z \times N_ z$ matrix that contains $U_ T$. The returned value is $U_0$.
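
For a time-invariant model, the int, coef, and var arguments can be built by stacking the individual system matrices, as sketched in the following statements. The names Wt, Xt, Ft, Ht, Veta, Veps, and Geta are placeholders (not KALDFS arguments) for the transition and measurement intercept matrices, the transition and measurement coefficient matrices, the variance of $\eta_t$, the variance of $\epsilon_t$, and the covariance between them.

/* a sketch with placeholder names: stack the time-invariant system  */
/* matrices into the int, coef, and var arguments                    */
int  = Wt // Xt;                    /* (W_t`, X_t`)`                  */
coef = Ft // Ht;                    /* (F_t`, H_t`)`                  */
var  = (Veta  || Geta) //           /* covariance matrix of           */
       (Geta` || Veps);             /* (eta_t`, epsilon_t`)`          */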

The KALDFS call returns the following values:

sm

is a $T \times N_ z$ matrix that contains smoothed state vectors $(z_{1|T}, \ldots , z_{T|T})^{\prime }$.

vsm

is a $TN_ z \times N_ z$ matrix that contains mean square error matrices of smoothed state vectors $(P_{1|T}, \ldots , P_{T|T})^{\prime }$.
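
For example, the smoothed estimate and its mean square error matrix for a single time point can be read from these outputs by row indexing. The following statements are a minimal sketch that assumes sm and vsm have been returned by a KALDFS call:

/* extract the smoothed state vector and its mean square error       */
/* matrix for a single time point t                                   */
nz  = ncol(sm);                     /* N_z                            */
t   = 5;                            /* any time point 1 <= t <= T     */
z_t = sm[t,];                       /* z_{t|T}, a 1 x nz row          */
P_t = vsm[(t-1)*nz+1:t*nz,];        /* P_{t|T}, an nz x nz block      */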

Given the one-step forecast and mean square error matrix from the KALDFF call, the KALDFS call computes the smoothed state vector and its mean square error matrix. That is, the KALDFS subroutine produces an estimate of the smoothed state vector at time $t$—the conditional expectation of the state vector $z_t$ given all observations. Using the notation and results from the KALDFF subroutine section, the backward recursion algorithm for smoothing is given, for $t = T, T-1, \ldots, 1$, by

\begin{eqnarray*}
E_t & = & (X_t B, ~ y_t - X_t b) - H_t A_t \\[0.05in]
D_t & = & H_t M_t H^{\prime}_t + R_t \\[0.05in]
L_t & = & F_t - (F_t M_t H^{\prime}_t + G_t) D_t^- H_t \\[0.05in]
u_{t-1} & = & H^{\prime}_t D_t^- E_t + L^{\prime}_t u_t \\[0.05in]
U_{t-1} & = & H^{\prime}_t D_t^- H_t + L^{\prime}_t U_t L_t \\[0.05in]
z_{t|T} & = & (A_t + M_t u_{t-1}) (-\hat{\delta}^{\prime}_T, 1)^{\prime} \\[0.05in]
C_t & = & A_t + M_t u_{t-1} \\[0.05in]
P_{t|T} & = & \hat{\sigma}^2_T (M_t - M_t U_{t-1} M_t) + C_{t(\delta)} \hat{\Sigma}_{\delta,T} C^{\prime}_{t(\delta)}
\end{eqnarray*}

where the initial values are $u_T = \mb{0}$ and $U_T = \mb{0}$, and $C_{t(\delta)}$ is the last-column-deleted submatrix of $C_t$. See De Jong (1991) for details about smoothing in the diffuse Kalman filter.
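
As an illustration only (not part of the KALDFS interface), the following module is a minimal sketch of a single backward step of this recursion for a time-invariant model. The argument names are placeholders: F, H, G, and R are the matrices $F_t$, $H_t$, $G_t$, and $R_t$ that appear above (defined in the KALDFF subroutine section); Xt, b, and B correspond to $X_t$ and to the bvec and bmat arguments; delta, sd, and s2 stand for $\hat{\delta}_T$, $\hat{\Sigma}_{\delta,T}$, and $\hat{\sigma}^2_T$; at_t and mt_t are the $A_t$ and $M_t$ blocks; yt is the $1 \times N_y$ data row for time $t$; and ut and vut hold $u_t$ and $U_t$ on entry and are overwritten with $u_{t-1}$ and $U_{t-1}$.

/* sketch of one backward smoothing step (placeholder names);        */
/* GINV computes the generalized inverse D_t^-                       */
start dfs_step(zt, pt, ut, vut, yt, at_t, mt_t,
               F, H, G, R, Xt, b, B, delta, sd, s2);
   nd   = nrow(delta);
   Et   = ((Xt*B) || (yt` - Xt*b)) - H*at_t;       /* E_t            */
   Dt   = H*mt_t*H` + R;                           /* D_t            */
   Dinv = ginv(Dt);
   Lt   = F - (F*mt_t*H` + G)*Dinv*H;              /* L_t            */
   ut   = H`*Dinv*Et + Lt`*ut;                     /* u_{t-1}        */
   vut  = H`*Dinv*H  + Lt`*vut*Lt;                 /* U_{t-1}        */
   Ct   = at_t + mt_t*ut;                          /* C_t            */
   zt   = Ct * ((-delta) // 1);                    /* z_{t|T}        */
   Cd   = Ct[,1:nd];                               /* drop last col  */
   pt   = s2*(mt_t - mt_t*vut*mt_t) + Cd*sd*Cd`;   /* P_{t|T}        */
finish;

Calling such a module for $t = T, T-1, \ldots, 1$ with ut and vut initialized to zero matrices mirrors the computation that the KALDFS call performs.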

The KALDFS call is accompanied by the KALDFF call as shown in the following statements:

ny = ncol(y);                       /* N_y: number of observed series */
nz = ncol(coef);                    /* N_z: dimension of the state    */
nb = ncol(int);                     /* N_beta                         */
nd = ncol(coefd);                   /* N_delta                        */
at = j(nz, nd+1, .);
mt = j(nz, nz, .);
qt = j(nd+1, nd+1, .);
n0 = -1;
/* diffuse Kalman filtering: one-step forecasts and initial estimates */
call kaldff(pred, vpred, initial, s2, y, 0, int, coef, var, intd,
            coefd, n0, at, mt, qt);
/* intercept and coefficient for the mean effect beta */
bvec = intd[nz+1:nz+nb,];
bmat = coefd[nz+1:nz+nb,];
/* diffuse Kalman smoothing */
call kaldfs(sm, vsm, y, int, coef, var, bvec, bmat,
            initial, at, mt, s2);

You can also compute the smoothed estimate and its covariance matrix observation by observation. When the SSM is time invariant, the following statements perform smoothing. You should initialize UN and VUN as matrices in which all elements are zero.

n  = nrow(y);                       /* number of observations T       */
ny = ncol(y);
nz = ncol(coef);
nb = ncol(int);
nd = ncol(coefd);
at = j(nz, nd+1, .);
mt = j(nz, nz, .);
qt = j(nd+1, nd+1, .);
n0 = -1;
call kaldff(pred, vpred, initial, s2, y, 0, int, coef, var, intd,
            coefd, n0, at, mt, qt);
bvec = intd[nz+1:nz+nb,];
bmat = coefd[nz+1:nz+nb,];
un  = j(nz, nd+1, 0);               /* u_T = 0                        */
vun = j(nz, nz, 0);                 /* U_T = 0                        */
do i = 1 to n;                      /* backward recursion: t = T,...,1 */
   call kaldfs(sm_i, vsm_i, y[n-i+1,], int, coef, var, bvec, bmat,
               initial, at, mt, s2, un, vun);
   sm  = sm_i // sm;                /* prepend the smoothed results   */
   vsm = vsm_i // vsm;
end;
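
In this loop, the UN and VUN arguments carry the recursion quantities $u_t$ and $U_t$ from one call to the next; their zero initialization corresponds to the initial values $u_T = \mb{0}$ and $U_T = \mb{0}$, and the smoothed results are prepended to sm and vsm because the recursion runs backward from $t = T$ to $t = 1$.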