Forecasting Process Details


Equations for the Smoothing Models

Simple Exponential Smoothing

The model equation for simple exponential smoothing is

\[  Y_{t} = {\mu }_{t} + {\epsilon }_{t}  \]

The smoothing equation is

\[  L_{t} = {\alpha }Y_{t} + (1-{\alpha })L_{t-1}  \]

The error-correction form of the smoothing equation is

\[  L_{t} = L_{t-1} + {\alpha }e_{t}  \]

(Note: For missing values, ${e_{t}=0}$.)

The k-step prediction equation is

\[  \hat{Y}_{t}(k) = L_{t}  \]

The ARIMA model equivalency to simple exponential smoothing is the ARIMA(0,1,1) model

\begin{gather*}  (1-{B})Y_{t} = (1-{\theta }{B}){\epsilon }_{t} \\ {\theta } = 1 - {\alpha } \end{gather*}

The moving-average form of the equation is

\[  Y_{t} = {\epsilon }_{t} + \sum _{j=1}^{{\infty }}{{\alpha }{\epsilon }_{t-j}}  \]

For simple exponential smoothing, the additive-invertible region is

\[  \{  0 < {\alpha } < 2\}   \]

The variance of the prediction errors is estimated as

\[  {var}(e_{t}(k)) = {var}({\epsilon }_{t}) \left[{1 + \sum _{j=1}^{k-1}{{\alpha }^{2}}}\right] = {var}({\epsilon }_{t})( 1 + (k-1){\alpha }^{2})  \]
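As a concrete illustration of the recursions above, here is a minimal Python sketch (illustrative only, not the documented implementation; the function name, the starting level, and the use of in-sample one-step errors to estimate ${var}({\epsilon }_{t})$ are assumptions). It applies the error-correction form of the smoothing equation and returns the k-step prediction together with its estimated error variance.

```python
import numpy as np

def simple_exp_smoothing(y, alpha, k):
    """Simple exponential smoothing in error-correction form (a sketch)."""
    level = y[0]                      # starting level L_0 (an assumption)
    errors = []
    for obs in y:
        if np.isnan(obs):
            e = 0.0                   # missing value: e_t = 0, level unchanged
        else:
            e = obs - level           # one-step prediction error e_t
        errors.append(e)
        level = level + alpha * e     # L_t = L_{t-1} + alpha * e_t

    forecast = level                  # Y_hat_t(k) = L_t for every horizon k
    var_eps = np.var(errors, ddof=1)  # crude estimate of var(epsilon_t)
    var_k = var_eps * (1 + (k - 1) * alpha**2)
    return forecast, var_k

print(simple_exp_smoothing([10.2, 10.8, 11.1, 10.9, 11.4], alpha=0.3, k=3))
```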

Double (Brown) Exponential Smoothing

The model equation for double exponential smoothing is

\[  Y_{t} = {\mu }_{t} + {\beta }_{t}t + {\epsilon }_{t}  \]

The smoothing equations are

\begin{gather*}  L_{t} = {\alpha }Y_{t} + (1-{\alpha })L_{t-1} \\ T_{t} = {\alpha }(L_{t} - L_{t-1}) + (1-{\alpha })T_{t-1} \end{gather*}

This method can be equivalently described in terms of two successive applications of simple exponential smoothing:

\begin{gather*}  S_{t}^{[1]} = {\alpha }Y_{t} + (1-{\alpha })S_{t-1}^{[1]} \\ S_{t}^{[2]} = {\alpha }S_{t}^{[1]} + (1-{\alpha })S_{t-1}^{[2]} \end{gather*}

where ${S_{t}^{[1]}}$ are the smoothed values of ${Y_{t}}$, and ${S_{t}^{[2]}}$ are the smoothed values of ${S_{t}^{[1]}}$. The prediction equation then takes the form:

\[  \hat{Y}_{t}(k) = (2+{\alpha }k/(1-{\alpha })) S_{t}^{[1]} - (1+{\alpha }k/(1-{\alpha })) S_{t}^{[2]}  \]
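To make the two-pass formulation concrete, the short Python sketch below (hypothetical code; starting both smoothed series at the first observation is an assumed initialization) smooths the series twice with the same weight and forms the k-step prediction from the two smoothed series.

```python
def brown_two_pass_forecast(y, alpha, k):
    """Brown's method as two successive simple smoothers (a sketch)."""
    s1 = y[0]                                # S_0^[1] (an assumed start)
    s2 = y[0]                                # S_0^[2]
    for obs in y:
        s1 = alpha * obs + (1 - alpha) * s1  # first pass smooths Y_t
        s2 = alpha * s1 + (1 - alpha) * s2   # second pass smooths S_t^[1]
    c = alpha * k / (1 - alpha)
    return (2 + c) * s1 - (1 + c) * s2       # Y_hat_t(k)
```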

The error-correction forms of the smoothing equations are

\begin{gather*}  L_{t} = L_{t-1} + T_{t-1} + {\alpha }e_{t} \\ T_{t} = T_{t-1} + {\alpha }^{2}e_{t} \end{gather*}

(Note: For missing values, ${e_{t}=0}$.)

The k-step prediction equation is

\[  \hat{Y}_{t}(k) = L_{t} + ( (k-1) + 1/{\alpha } )T_{t}  \]

The ARIMA model equivalency to double exponential smoothing is the ARIMA(0,2,2) model,

\begin{gather*}  (1-{B})^{2}Y_{t} = (1-{\theta }{B})^{2}{\epsilon }_{t} \\ {\theta } = 1 - {\alpha } \end{gather*}

The moving-average form of the equation is

\[  Y_{t} = {\epsilon }_{t} + \sum _{j=1}^{{\infty }}{(2{\alpha } + (j-1){\alpha }^{2}) {\epsilon }_{t-j}}  \]

For double exponential smoothing, the additive-invertible region is

\[  \{  0 < {\alpha } < 2\}   \]

The variance of the prediction errors is estimated as

\[  {var}(e_{t}(k)) = {var}({\epsilon }_{t}) \left[{1 + \sum _{j=1}^{k-1}{(2{\alpha } + (j-1){\alpha }^{2})^{2}}}\right]  \]
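The same method can also be sketched directly in error-correction form. The following hypothetical Python fragment uses simplistic starting values and an in-sample estimate of ${var}({\epsilon }_{t})$, and returns the k-step prediction and its estimated error variance.

```python
import numpy as np

def brown_smoothing(y, alpha, k):
    """Double (Brown) smoothing in error-correction form (a sketch)."""
    level, trend = y[0], 0.0          # starting values L_0, T_0 (assumptions)
    errors = []
    for obs in y:
        e = obs - (level + trend)     # one-step prediction error e_t
        errors.append(e)
        level = level + trend + alpha * e   # L_t
        trend = trend + alpha**2 * e        # T_t
    forecast = level + ((k - 1) + 1 / alpha) * trend
    psi = [2 * alpha + (j - 1) * alpha**2 for j in range(1, k)]
    var_k = np.var(errors, ddof=1) * (1 + sum(w**2 for w in psi))
    return forecast, var_k
```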

Linear (Holt) Exponential Smoothing

The model equation for linear exponential smoothing is

\[  Y_{t} = {\mu }_{t} + {\beta }_{t}t + {\epsilon }_{t}  \]

The smoothing equations are

\begin{gather*}  L_{t} = {\alpha }Y_{t} + (1-{\alpha })(L_{t-1} + T_{t-1}) \\ T_{t} = {\gamma }(L_{t} - L_{t-1}) + (1-{\gamma })T_{t-1} \end{gather*}

The error-correction form of the smoothing equations is

\begin{gather*}  L_{t} = L_{t-1} + T_{t-1} + {\alpha }e_{t} \\ T_{t} = T_{t-1} + {\alpha }{\gamma }e_{t} \end{gather*}

(Note: For missing values, ${e_{t}=0}$.)

The k-step prediction equation is

\[  \hat{Y}_{t}(k) = L_{t} + kT_{t}  \]

The ARIMA model equivalency to linear exponential smoothing is the ARIMA(0,2,2) model,

\begin{gather*}  (1-{B})^{2}Y_{t} = (1-{\theta }_{1}{B}-{\theta }_{2}{B}^{2}) {\epsilon }_{t} \\ {\theta }_{1} = 2 - {\alpha } - {\alpha }{\gamma } \\ {\theta }_{2} = {\alpha } - 1 \end{gather*}

The moving-average form of the equation is

\[  Y_{t} = {\epsilon }_{t} + \sum _{j=1}^{{\infty }}{({\alpha } + j{\alpha }{\gamma }) {\epsilon }_{t-j}}  \]

For linear exponential smoothing, the additive-invertible region is

\begin{gather*}  \{  0 < {\alpha } < 2\}  \\ \{  0 < {\gamma } < 4/{\alpha } - 2\}  \end{gather*}

The variance of the prediction errors is estimated as

\[  {var}(e_{t}(k)) = {var}({\epsilon }_{t}) \left[{1 + \sum _{j=1}^{k-1}{({\alpha } + j{\alpha }{\gamma })^{2}}}\right]  \]
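A minimal Python sketch of these recursions (hypothetical code; the starting values and the variance estimate are simplifying assumptions) follows the error-correction form and the k-step prediction equation above.

```python
import numpy as np

def holt_smoothing(y, alpha, gamma, k):
    """Linear (Holt) smoothing in error-correction form (a sketch)."""
    level, trend = y[0], 0.0          # starting values L_0, T_0 (assumptions)
    errors = []
    for obs in y:
        e = obs - (level + trend)     # one-step prediction error e_t
        errors.append(e)
        level = level + trend + alpha * e     # L_t
        trend = trend + alpha * gamma * e     # T_t
    forecast = level + k * trend              # Y_hat_t(k)
    psi = [alpha + j * alpha * gamma for j in range(1, k)]
    var_k = np.var(errors, ddof=1) * (1 + sum(w**2 for w in psi))
    return forecast, var_k
```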

Damped-Trend Linear Exponential Smoothing

The model equation for damped-trend linear exponential smoothing is

\[  Y_{t} = {\mu }_{t} + {\beta }_{t}t + {\epsilon }_{t}  \]

The smoothing equations are

\begin{gather*}  L_{t} = {\alpha }Y_{t} + (1-{\alpha })(L_{t-1} + {\phi }T_{t-1}) \\ T_{t} = {\gamma }(L_{t} - L_{t-1}) + (1-{\gamma }){\phi }T_{t-1} \end{gather*}

The error-correction form of the smoothing equations is

\begin{gather*}  L_{t} = L_{t-1} + {\phi }T_{t-1} + {\alpha }e_{t} \\ T_{t} = {\phi }T_{t-1} + {\alpha }{\gamma }e_{t} \end{gather*}

(Note: For missing values, ${e_{t}=0}$.)

The k-step prediction equation is

\[  \hat{Y}_{t}(k) = L_{t} + \sum _{i=1}^{k}{{\phi }^{i}T_{t} }  \]

The ARIMA model equivalency to damped-trend linear exponential smoothing is the ARIMA(1,1,2) model,

\begin{gather*}  (1-{\phi }{B})(1-{B})Y_{t} = (1-{\theta }_{1}{B}-{\theta }_{2}{B}^{2}) {\epsilon }_{t} \\ {\theta }_{1} = 1 + {\phi } - {\alpha } - {\alpha }{\gamma }{\phi } \\ {\theta }_{2} = ({\alpha } - 1){\phi } \end{gather*}

The moving-average form of the equation (assuming ${|{\phi }| < 1 }$) is

\[  Y_{t} = {\epsilon }_{t} + \sum _{j=1}^{{\infty }}{({\alpha } + {\alpha }{\gamma } {\phi }({\phi }^{j} - 1)/({\phi } - 1)) {\epsilon }_{t-j}}  \]

For damped-trend linear exponential smoothing, the additive-invertible region is

\begin{gather*}  \{  0 < {\alpha } < 2\}  \\ \{  0 < {\phi }{\gamma } < 4/{\alpha } - 2\}  \end{gather*}

The variance of the prediction errors is estimated as

\[  {var}(e_{t}(k)) = {var}({\epsilon }_{t}) \left[ 1 + \sum _{j=1}^{k-1}{({\alpha } + {\alpha }{\gamma } {\phi }({\phi }^{j} - 1)/({\phi } - 1) )^{2}}\right]  \]
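The recursion differs from Holt's method only in the damping of the trend term. The Python sketch below is hypothetical code; it assumes ${|{\phi }| < 1}$, simplistic starting values, and an in-sample variance estimate, and mirrors the error-correction and prediction equations above.

```python
import numpy as np

def damped_trend_smoothing(y, alpha, gamma, phi, k):
    """Damped-trend linear smoothing in error-correction form (a sketch)."""
    level, trend = y[0], 0.0          # starting values L_0, T_0 (assumptions)
    errors = []
    for obs in y:
        e = obs - (level + phi * trend)            # one-step prediction error e_t
        errors.append(e)
        level = level + phi * trend + alpha * e    # L_t
        trend = phi * trend + alpha * gamma * e    # T_t
    forecast = level + sum(phi**i for i in range(1, k + 1)) * trend
    # psi weights from the moving-average form (requires phi != 1)
    psi = [alpha + alpha * gamma * phi * (phi**j - 1) / (phi - 1)
           for j in range(1, k)]
    var_k = np.var(errors, ddof=1) * (1 + sum(w**2 for w in psi))
    return forecast, var_k
```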

Seasonal Exponential Smoothing

The model equation for seasonal exponential smoothing is

\[  Y_{t} = {\mu }_{t} + s_{p}(t) + {\epsilon }_{t}  \]

The smoothing equations are

\begin{gather*}  L_{t} = {\alpha }(Y_{t}-S_{t-p}) + (1-{\alpha })L_{t-1} \\ S_{t} = {\delta }(Y_{t}-L_{t}) + (1-{\delta })S_{t-p} \end{gather*}

The error-correction form of the smoothing equations is

\begin{gather*}  L_{t} = L_{t-1} + {\alpha }e_{t} \\ S_{t} = S_{t-p} + {\delta }(1-{\alpha })e_{t} \end{gather*}

(Note: For missing values, ${e_{t}=0}$.)

The k-step prediction equation is

\[  \hat{Y}_{t}(k) = L_{t} + S_{t-p+k}  \]

The ARIMA model equivalency to seasonal exponential smoothing is the ARIMA(0,1,p+1)(0,1,0)$_{p}$ model,

\begin{gather*}  (1-{B})(1-{B}^{p})Y_{t} = (1 - {\theta }_{1}{B} - {\theta }_{2}{B}^{p}- {\theta }_{3}{B}^{p+1}) {\epsilon }_{t} \\ {\theta }_{1} = 1 - {\alpha } \\ {\theta }_{2} = 1 - {\delta }(1-{\alpha }) \\ {\theta }_{3} = (1 - {\alpha })({\delta } - 1) \end{gather*}

The moving-average form of the equation is

\begin{gather*}  Y_{t} = {\epsilon }_{t} + \sum _{j=1}^{{\infty }}{{\psi }_{j}{\epsilon }_{t-j}} \\ {\psi }_{j} = \begin{cases}  {\alpha } &  \mr{for}\;  j \bmod p \ne 0 \\ {\alpha }+{\delta }(1-{\alpha }) &  \mr{for}\;  j \bmod p = 0 \end{cases}\end{gather*}

For seasonal exponential smoothing, the additive-invertible region is

\[  \{  \mr{max} (-p{\alpha },0) < {\delta }(1-{\alpha }) < (2-{\alpha }) \}   \]

The variance of the prediction errors is estimated as

\[  {var}(e_{t}(k)) = {var}({\epsilon }_{t}) \left[{1 + \sum _{j=1}^{k-1}{{\psi }_{j}^{2}}}\right]  \]
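The following Python sketch (hypothetical code; the starting level, the zero-initialized rotating buffer of seasonal factors, and the variance estimate are assumptions) implements the error-correction form with a buffer of p seasonal factors.

```python
import numpy as np

def seasonal_smoothing(y, alpha, delta, p, k):
    """Additive seasonal smoothing in error-correction form (a sketch)."""
    level = y[0]                      # starting level (an assumption)
    season = [0.0] * p                # seasonal factors, initialized to zero
    errors = []
    for t, obs in enumerate(y):
        e = obs - (level + season[t % p])     # one-step prediction error e_t
        errors.append(e)
        level = level + alpha * e                                # L_t
        season[t % p] = season[t % p] + delta * (1 - alpha) * e  # S_t
    n = len(y)
    # for k <= p this is S_{t-p+k}; larger k reuse the latest factor for that season
    forecast = level + season[(n - 1 + k) % p]
    psi = [alpha + (delta * (1 - alpha) if j % p == 0 else 0.0)
           for j in range(1, k)]
    var_k = np.var(errors, ddof=1) * (1 + sum(w**2 for w in psi))
    return forecast, var_k
```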

Multiplicative Seasonal Smoothing

In order to use the multiplicative version of seasonal smoothing, the time series and all predictions must be strictly positive.

The model equation for the multiplicative version of seasonal smoothing is

\[  Y_{t} = {\mu }_{t} s_{p}(t) + {\epsilon }_{t}  \]

The smoothing equations are

\begin{gather*}  L_{t} = {\alpha }(Y_{t}/S_{t-p}) + (1-{\alpha })L_{t-1} \\ S_{t} = {\delta }(Y_{t}/L_{t}) + (1-{\delta })S_{t-p} \end{gather*}

The error-correction form of the smoothing equations is

\begin{gather*}  L_{t} = L_{t-1} + {\alpha }e_{t}/S_{t-p} \\ S_{t} = S_{t-p} + {\delta }(1-{\alpha })e_{t}/L_{t} \end{gather*}

(Note: For missing values, ${e_{t}=0}$.)

The k-step prediction equation is

\[  \hat{Y}_{t}(k) = L_{t} S_{t-p+k}  \]

The multiplicative version of seasonal smoothing does not have an ARIMA equivalent; however, when the seasonal variation is small, the ARIMA additive-invertible region of the additive version of seasonal smoothing described in the preceding section can approximate the stability region of the multiplicative version.

The variance of the prediction errors is estimated as

\[  {var}(e_{t}(k)) = {var}({\epsilon }_{t}) \left[{\sum _{i=0}^{{\infty }}{\sum _{j=0}^{p-1}{({\psi }_{j+ip}S_{t+k}/S_{t+k-j})^{2} }}}\right]  \]

where ${{\psi }_{j}}$ are as described for the additive version of the seasonal method, and ${{\psi }_{j} = 0}$ for ${j \ge k}$.
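A corresponding Python sketch (hypothetical code; it requires a strictly positive series, starts the seasonal factors at one, and omits the variance calculation above) updates the level and seasonal factors in error-correction form.

```python
def mult_seasonal_smoothing(y, alpha, delta, p, k):
    """Multiplicative seasonal smoothing in error-correction form (a sketch)."""
    level = y[0]                      # starting level (an assumption)
    season = [1.0] * p                # seasonal factors, initialized to one
    for t, obs in enumerate(y):
        s = season[t % p]
        e = obs - level * s                                   # one-step error e_t
        level = level + alpha * e / s                         # L_t
        season[t % p] = s + delta * (1 - alpha) * e / level   # S_t
    n = len(y)
    return level * season[(n - 1 + k) % p]                    # Y_hat_t(k)
```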

Winters Method—Additive Version

The model equation for the additive version of Winters method is

\[  Y_{t} = {\mu }_{t} + {\beta }_{t}t + s_{p}(t) + {\epsilon }_{t}  \]

The smoothing equations are

\begin{gather*}  L_{t} = {\alpha }(Y_{t}-S_{t-p}) + (1-{\alpha })(L_{t-1}+T_{t-1}) \\ T_{t} = {\gamma }(L_{t} - L_{t-1}) + (1-{\gamma })T_{t-1} \\ S_{t} = {\delta }(Y_{t}-L_{t}) + (1-{\delta })S_{t-p} \end{gather*}

The error-correction form of the smoothing equations is

\begin{gather*}  L_{t} = L_{t-1} + T_{t-1} + {\alpha }e_{t} \\ T_{t} = T_{t-1} + {\alpha }{\gamma }e_{t} \\ S_{t} = S_{t-p} + {\delta }(1-{\alpha })e_{t} \end{gather*}

(Note: For missing values, ${e_{t}=0}$.)

The k-step prediction equation is

\[  \hat{Y}_{t}(k) = L_{t} + kT_{t} + S_{t-p+k}  \]

The ARIMA model equivalency to the additive version of Winters method is the ARIMA(0,1,p+1)(0,1,0)$_{p}$ model,

\begin{gather*}  (1-{B})(1-{B}^{p})Y_{t} = \left[{1 - { \sum _{i=1}^{p+1}{{\theta }_{i}{B}^{i}}}}\right] {\epsilon }_{t} \\ {\theta }_{j} = \begin{cases}  1 - \alpha - {\alpha }{\gamma } &  j = 1 \\ -{\alpha }{\gamma } &  2 \le j \le p-1 \\ 1 - {\alpha }{\gamma } - {\delta }(1-{\alpha }) &  j = p \\ (1 - {\alpha })({\delta } - 1) &  j = p + 1 \end{cases}\end{gather*}

The moving-average form of the equation is

\begin{gather*}  Y_{t} = {\epsilon }_{t} + \sum _{j=1}^{{\infty }}{{\psi }_{j}{\epsilon }_{t-j}} \\ {\psi }_{j} = \begin{cases}  {\alpha }+j{\alpha }{\gamma } &  \mr{for}\;  j \bmod p \ne 0 \\ {\alpha }+j{\alpha }{\gamma }+{\delta }(1-{\alpha }) &  \mr{for}\;  j \bmod p = 0 \end{cases}\end{gather*}

For the additive version of Winters method (Archibald, 1990), the additive-invertible region is

\begin{gather*}  \{  \mr{max}(-p{\alpha },0) < {\delta }(1-{\alpha }) < (2-{\alpha }) \}  \\ \{  0 < {\alpha }{\gamma } < (2-{\alpha } - {\delta }(1-{\alpha }))(1-\cos ({\vartheta })) \}  \end{gather*}

where $\vartheta $ is the smallest nonnegative solution to the equations listed in Archibald (1990).

The variance of the prediction errors is estimated as

\[  {var}(e_{t}(k)) = {var}({\epsilon }_{t}) \left[{1 + \sum _{j=1}^{k-1}{{\psi }_{j}^{2}}}\right]  \]
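The Python sketch below (hypothetical code; the starting values and the in-sample variance estimate are assumptions) combines the level, trend, and seasonal updates in error-correction form.

```python
import numpy as np

def additive_winters(y, alpha, gamma, delta, p, k):
    """Additive Winters smoothing in error-correction form (a sketch)."""
    level, trend = y[0], 0.0          # starting values L_0, T_0 (assumptions)
    season = [0.0] * p                # seasonal factors, initialized to zero
    errors = []
    for t, obs in enumerate(y):
        e = obs - (level + trend + season[t % p])             # one-step error e_t
        errors.append(e)
        level = level + trend + alpha * e                         # L_t
        trend = trend + alpha * gamma * e                         # T_t
        season[t % p] = season[t % p] + delta * (1 - alpha) * e   # S_t
    n = len(y)
    forecast = level + k * trend + season[(n - 1 + k) % p]
    psi = [alpha + j * alpha * gamma
           + (delta * (1 - alpha) if j % p == 0 else 0.0)
           for j in range(1, k)]
    var_k = np.var(errors, ddof=1) * (1 + sum(w**2 for w in psi))
    return forecast, var_k
```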

Winters Method—Multiplicative Version

In order to use the multiplicative version of Winters method, the time series and all predictions must be strictly positive.

The model equation for the multiplicative version of Winters method is

\[  Y_{t} = ({\mu }_{t} + {\beta }_{t}t) s_{p}(t) + {\epsilon }_{t}  \]

The smoothing equations are

\begin{gather*}  L_{t} = {\alpha }(Y_{t}/S_{t-p}) + (1-{\alpha })(L_{t-1}+T_{t-1}) \\ T_{t} = {\gamma }(L_{t} - L_{t-1}) + (1-{\gamma })T_{t-1} \\ S_{t} = {\delta }(Y_{t}/L_{t}) + (1-{\delta })S_{t-p} \end{gather*}

The error-correction form of the smoothing equations is

\begin{gather*}  L_{t} = L_{t-1} + T_{t-1} + {\alpha }e_{t}/S_{t-p} \\ T_{t} = T_{t-1} + {\alpha }{\gamma }e_{t}/S_{t-p} \\ S_{t} = S_{t-p} + {\delta }(1-{\alpha })e_{t}/L_{t} \end{gather*}

(Note: For missing values, ${e_{t}=0}$.)

The k-step prediction equation is

\[  \hat{Y}_{t}(k) = (L_{t} + kT_{t})S_{t-p+k}  \]

The multiplicative version of Winters method does not have an ARIMA equivalent; however, when the seasonal variation is small, the ARIMA additive-invertible region of the additive version of Winters method described in the preceding section can approximate the stability region of the multiplicative version.

The variance of the prediction errors is estimated as

\[  {var}(e_{t}(k)) = {var}({\epsilon }_{t}) \left[{\sum _{i=0}^{{\infty }}{\sum _{j=0}^{p-1}{({\psi }_{j+ip}S_{t+k}/S_{t+k-j})^{2} }}}\right]  \]

where ${{\psi }_{j}}$ are as described for the additive version of Winters method and ${{\psi }_{j} = 0}$ for ${j \ge k}$.
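Finally, a Python sketch of the multiplicative recursions (hypothetical code; it requires a strictly positive series, starts the seasonal factors at one and the trend at zero, and omits the variance calculation above):

```python
def multiplicative_winters(y, alpha, gamma, delta, p, k):
    """Multiplicative Winters smoothing in error-correction form (a sketch)."""
    level, trend = y[0], 0.0          # starting values L_0, T_0 (assumptions)
    season = [1.0] * p                # seasonal factors, initialized to one
    for t, obs in enumerate(y):
        s = season[t % p]
        e = obs - (level + trend) * s                         # one-step error e_t
        level = level + trend + alpha * e / s                 # L_t
        trend = trend + alpha * gamma * e / s                 # T_t
        season[t % p] = s + delta * (1 - alpha) * e / level   # S_t
    n = len(y)
    return (level + k * trend) * season[(n - 1 + k) % p]      # Y_hat_t(k)
```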