The KRIGE2D Procedure

Ordinary Kriging

Denote the SRF by $Z(\bm {s}), \bm {s} \in D \subset \mathcal{R}^2 $. Following the notation in Cressie (1993), assume the following model for $Z(\bm {s})$:

\[  Z(\bm {s}) = \mu + \varepsilon (\bm {s})  \]

Here, $\mu $ is the fixed, unknown mean of the process, and $\varepsilon (\bm {s})$ is a zero-mean SRF that represents the variation around the mean.

In most practical applications, an additional assumption is required in order to estimate the covariance $C_ z$ of the $Z(\bm {s})$ process. This assumption is second-order stationarity:

\[  C_ z(\bm {s}_1,\bm {s}_2) = \mr {E}[\varepsilon (\bm {s}_1)\varepsilon (\bm {s}_2)] = C_ z(\bm {s}_1-\bm {s}_2) = C_ z(\bm {h})  \]
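For example, one second-order stationary form in common use is the isotropic exponential covariance model, shown here purely for illustration, where $c_0$ denotes a sill and $a_0$ a range parameter:

\[  C_ z(\bm {h}) = c_0 \exp (-\| \bm {h}\| /a_0)  \]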

This requirement can be relaxed slightly when you are using the semivariogram instead of the covariance. In this case, stationarity is required of the differences $\varepsilon (\bm {s}_1)-\varepsilon (\bm {s}_2)$ rather than of $\varepsilon (\bm {s})$ itself; this weaker assumption is known as intrinsic stationarity:

\[  \gamma _ z(\bm {s}_1,\bm {s}_2) = \frac{1}{2}\mr {E}[(\varepsilon (\bm {s}_1)-\varepsilon (\bm {s}_2))^2] = \gamma _ z(\bm {s}_1-\bm {s}_2) = \gamma _ z(\bm {h})  \]
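When the process is second-order stationary, the semivariogram and the covariance are related by

\[  \gamma _ z(\bm {h}) = C_ z(\bm {0}) - C_ z(\bm {h})  \]

so either function can be used to express the kriging system that follows.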

The spatial processes represented by the previous equation for $Z(\bm {s})$ are more general than they appear, because you can perform local kriging. In local kriging, at an unsampled location $\bm {s}_0$, a separate model is fit using only the data in a neighborhood of $\bm {s}_0$. This has the effect of fitting a separate mean $\mu $ at each point, and it is similar to the kriging with trend (KT) method discussed in Journel and Rossi (1989).
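As an illustration of the neighborhood idea, a local kriging step first restricts the data to the points near $\bm {s}_0$. The following is a minimal sketch under assumed names and a hypothetical radius, not the procedure's own implementation:

```python
# A hypothetical illustration of neighborhood selection for local kriging;
# the function name and the radius are illustrative, not KRIGE2D options.
import numpy as np

def neighborhood(s0, coords, radius):
    """Return indices of the sampled locations within `radius` of s0."""
    dist = np.linalg.norm(coords - s0, axis=1)
    return np.flatnonzero(dist <= radius)

# Example: only these points would enter the kriging system for s0.
coords = np.array([[0.0, 0.0], [10.0, 5.0], [40.0, 45.0]])
idx = neighborhood(np.array([8.0, 4.0]), coords, radius=15.0)
print(idx)  # -> [0 1]
```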

Given the $N$ measurements $Z(\bm {s}_1), \dots , Z(\bm {s}_ N)$ at known locations $\bm {s}_1, \dots ,\bm {s}_ N$, you want to obtain a prediction of $Z$ at an unsampled location $\bm {s}_0$. When the following three requirements are imposed on the predictor $\hat{Z}$, the ordinary kriging (OK) predictor is obtained:

  1. $\hat{Z}$ is linear in $Z(\bm {s}_1), \cdots , Z(\bm {s}_ N)$

  2. $\hat{Z}$ is unbiased

  3. $\hat{Z}$ minimizes the mean square prediction error $\mr {E}[(Z(\bm {s}_0)-\hat{Z}(\bm {s}_0))^2]$

Linearity requires the following form for $\hat{Z}(\bm {s}_0)$:

\[  \hat{Z}(\bm {s}_0) = \sum _{i=1}^{N}\lambda _ iZ(\bm {s}_ i)  \]

Applying the unbiasedness condition to the preceding equation yields

\[  \mr {E}[\hat{Z}(\bm {s}_0)] = \mu \Rightarrow \sum _{i=1}^{N}\lambda _ i \mr {E}[Z(\bm {s}_ i)] = \mu \Rightarrow \sum _{i=1}^{N}\lambda _ i\mu = \mu \Rightarrow \sum _{i=1}^{N}\lambda _ i = 1  \]

Finally, the third condition requires a constrained minimization that involves $\lambda _1, \cdots , \lambda _ N$ and a Lagrange parameter $2m$. This constrained minimization can be expressed in terms of the function $L(\lambda _1, \cdots , \lambda _ N,m)$ given by

\[  L = \mr {E} \left[ \left( Z(\bm {s}_0) - \sum _{i=1}^{N}\lambda _ iZ(\bm {s}_ i) \right)^2 \right] + 2m\left(\sum _{i=1}^{N}\lambda _ i-1 \right)  \]

Define the $N \times 1$ column vector $\blambda $ by

\[  \blambda = (\lambda _1,\cdots ,\lambda _ N)^{\prime }  \]

and the $(N+1) \times 1$ column vector $\blambda _\mb {0}$ by

\[  \blambda _\mb {0} = (\lambda _1,\cdots ,\lambda _ N,m)^{\prime } = \left(\begin{array}{c} \blambda \\ m \\ \end{array} \right)  \]

The optimization is performed by solving

\[  \frac{\partial L}{\partial \blambda _\mb {0}} = \mb {0}  \]

in terms of $\lambda _1,\cdots ,\lambda _ N$ and $m$.
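Carrying out the differentiation, the optimality conditions are the $N+1$ scalar equations

\[  \sum _{j=1}^{N}\lambda _ j C_ z(\bm {s}_ i-\bm {s}_ j) + m = C_ z(\bm {s}_0-\bm {s}_ i), \quad i=1,\dots ,N  \]

\[  \sum _{i=1}^{N}\lambda _ i = 1  \]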

These equations can be collected into a matrix equation, expressed in terms of either the covariance $C_ z(\bm {h})$ or the semivariogram $\gamma _ z(\bm {h})$. In terms of the covariance, the system is

\[  \mb {C} \blambda _\mb {0} = \mb {C_0}  \]

where

\[  \mb {C} = \left( \begin{array}{ccccc} C_ z(\bm {0}) &  C_ z(\bm {s}_1-\bm {s}_2) &  \cdots &  C_ z(\bm {s}_1-\bm {s}_ N) &  1 \\ C_ z(\bm {s}_2-\bm {s}_1) &  C_ z(\bm {0}) &  \cdots &  C_ z(\bm {s}_2-\bm {s}_ N) &  1 \\ & &  \ddots & & \\ C_ z(\bm {s}_ N-\bm {s}_1) &  C_ z(\bm {s}_ N-\bm {s}_2) &  \cdots &  C_ z(\bm {0}) &  1 \\ 1 &  1 &  \cdots &  1 &  0 \\ \end{array} \right)  \]

and

\[  \mb {C_0} = \left( \begin{array}{c} C_ z(\bm {s}_0-\bm {s}_1) \\ C_ z(\bm {s}_0-\bm {s}_2) \\ \vdots \\ C_ z(\bm {s}_0-\bm {s}_ N) \\ 1 \\ \end{array} \right)  \]

The solution to the previous matrix equation is

\[  \hat{\blambda }_\mb {0} = \mb {C^{-1}C_0}  \]

Using this solution for $\blambda $ and $m$, the ordinary kriging prediction at $\bm {s}_0$ is

\[  \hat{Z}(\bm {s}_0) = \lambda _1 Z(\bm {s}_1) + \cdots +\lambda _ N Z(\bm {s}_ N)  \]

with the associated prediction error given by the square root of the prediction variance

\[  {\sigma _ z}^2(\bm {s}_0) = C_ z(\bm {0}) - {\blambda ^{\prime }} \mb {c_0} - m  \]

where $\mb {c_0}$ is $\mb {C_0}$ with the 1 in the last row removed, making it an $N \times 1$ vector.
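The following NumPy sketch assembles and solves this system end to end. It is illustrative only, not the KRIGE2D implementation: the exponential covariance model, its parameters, and the synthetic data are all assumptions.

```python
# A minimal sketch of the ordinary kriging system above, under an assumed
# exponential covariance model with hypothetical parameters and data.
import numpy as np

def cov(h, c0=1.0, a0=10.0):
    """Exponential covariance C_z(h) = c0*exp(-|h|/a0) (an assumed model)."""
    return c0 * np.exp(-np.linalg.norm(h, axis=-1) / a0)

rng = np.random.default_rng(1)
coords = rng.uniform(0.0, 50.0, size=(5, 2))   # sampled locations s_1..s_N
z = rng.normal(10.0, 1.0, size=5)              # measurements Z(s_i)
s0 = np.array([25.0, 25.0])                    # unsampled location s_0
N = len(z)

# Assemble the (N+1) x (N+1) bordered matrix C and right-hand side C_0.
C = np.zeros((N + 1, N + 1))
C[:N, :N] = cov(coords[:, None, :] - coords[None, :, :])  # C_z(s_i - s_j)
C[:N, N] = C[N, :N] = 1.0          # bordering 1s; C[N, N] stays 0
C0 = np.append(cov(s0 - coords), 1.0)          # C_z(s_0 - s_i), then 1

lam0 = np.linalg.solve(C, C0)      # lambda_0 = C^{-1} C_0
lam, m = lam0[:N], lam0[N]
assert np.isclose(lam.sum(), 1.0)  # unbiasedness: weights sum to 1

z_hat = lam @ z                                # prediction Z_hat(s_0)
sigma2 = cov(np.zeros(2)) - lam @ C0[:N] - m   # prediction variance
print(z_hat, np.sqrt(sigma2))                  # prediction, standard error
```

In practice the covariance model and its parameters would come from a fitted semivariogram (for example, from the VARIOGRAM procedure) rather than being assumed as they are here.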

These formulas are used in the best linear unbiased prediction (BLUP) of random variables (Robinson 1991). Further details are provided in Cressie (1993, pp. 119–123).

Because of possible numeric problems when solving the previous matrix equation, Deutsch and Journel (1992) suggest replacing the last row and column of 1s in the preceding matrix $\mb {C}$ by $C_ z(0)$, keeping the 0 in the $(N+1,N+1)$ position, and similarly replacing the last element in the preceding right-hand vector $\mb {C_0}$ with $C_ z(0)$. This results in an equivalent system (the weights $\blambda $ are unchanged, and the solved multiplier is simply rescaled by $C_ z(0)$) but avoids numeric problems when $C_ z(0)$ is large or small relative to 1.
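Continuing the hypothetical NumPy setup from the previous sketch, the rescaled system can be checked directly; the weights match, and the solved multiplier differs only by the factor $C_ z(0)$:

```python
# Rescaled system per the Deutsch and Journel (1992) suggestion, reusing
# C, C0, lam, m, N, and cov() from the previous (hypothetical) sketch.
c_zero = cov(np.zeros(2))            # C_z(0)
Cs, C0s = C.copy(), C0.copy()
Cs[:N, N] = Cs[N, :N] = c_zero       # bordering 1s replaced by C_z(0)
C0s[N] = c_zero                      # last right-hand element = C_z(0)

lam0s = np.linalg.solve(Cs, C0s)
assert np.allclose(lam0s[:N], lam)        # same kriging weights
assert np.allclose(lam0s[N] * c_zero, m)  # multiplier rescaled by C_z(0)
```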