Denote the SRF by $Z(\bm{s})$, $\bm{s} \in D \subset \mathbb{R}^2$. Following the notation in Cressie (1993), the following model for $Z(\bm{s})$ is assumed:

\[ Z(\bm{s}) = \mu + \varepsilon(\bm{s}) \]

Here, $\mu$ is the fixed, unknown mean of the process, and $\varepsilon(\bm{s})$ is a zero-mean SRF that represents the variation around the mean.
               
In most practical applications, an additional assumption is required in order to estimate the covariance $C_z$ of the $Z(\bm{s})$ process. This assumption is second-order stationarity:

\[ C_z(\bm{s}_1,\bm{s}_2) = \mathrm{E}[\varepsilon(\bm{s}_1)\varepsilon(\bm{s}_2)] = C_z(\bm{s}_1-\bm{s}_2) = C_z(\bm{h}) \]

This requirement can be relaxed slightly when you are using the semivariogram instead of the covariance. In this case, second-order stationarity is required of the differences $\varepsilon(\bm{s}_1) - \varepsilon(\bm{s}_2)$ rather than $\varepsilon(\bm{s})$:

\[ \gamma_z(\bm{s}_1,\bm{s}_2) = \frac{1}{2}\mathrm{E}[(\varepsilon(\bm{s}_1)-\varepsilon(\bm{s}_2))^2] = \gamma_z(\bm{s}_1-\bm{s}_2) = \gamma_z(\bm{h}) \]
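For illustration, the following sketch evaluates an exponential covariance and its matching semivariogram, which satisfy these stationarity relations through the identity $\gamma_z(\bm{h}) = C_z(\bm{0}) - C_z(\bm{h})$. The model form and the sill `c0` and range parameter `a0` are assumed example values, not the procedure's own parameterization.

```python
import numpy as np

def cov_exponential(h, c0=2.0, a0=10.0):
    """Exponential covariance C_z(h) with sill c0 and range parameter a0.
    This parameterization is an assumption chosen for illustration."""
    return c0 * np.exp(-np.asarray(h, dtype=float) / a0)

def semivar_exponential(h, c0=2.0, a0=10.0):
    """Matching semivariogram: gamma_z(h) = C_z(0) - C_z(h)."""
    return c0 - cov_exponential(h, c0, a0)

lags = np.array([0.0, 1.0, 5.0, 10.0, 50.0])
# As the covariance decays toward 0, the semivariogram rises toward the sill c0.
print(cov_exponential(lags))
print(semivar_exponential(lags))
```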
               
By performing local kriging, the spatial processes represented by the previous equation for $Z(\bm{s})$ are more general than they appear. In local kriging, at an unsampled location $\bm{s}_0$, a separate model is fit using only the data in a neighborhood of $\bm{s}_0$. This has the effect of fitting a separate mean $\mu$ at each point, and it is similar to the kriging with trend (KT) method discussed in Journel and Rossi (1989).
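A minimal sketch of the neighborhood selection step in local kriging, assuming a simple distance cutoff (the radius value and function name are hypothetical; the procedure's actual neighborhood options may differ):

```python
import numpy as np

def local_neighborhood(coords, s0, radius=15.0):
    """Return indices of sampled locations within `radius` of s0.
    Local kriging then solves the OK system using only these points."""
    d0 = np.linalg.norm(coords - s0, axis=-1)
    return np.flatnonzero(d0 <= radius)
```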
            
               
Given the $N$ measurements $Z(\bm{s}_1), \ldots, Z(\bm{s}_N)$ at known locations $\bm{s}_1, \ldots, \bm{s}_N$, you want to obtain a prediction of $Z$ at an unsampled location $\bm{s}_0$. When the following three requirements are imposed on the predictor $\hat{Z}$, the OK predictor is obtained:

- $\hat{Z}$ is linear in $Z(\bm{s}_1), \ldots, Z(\bm{s}_N)$.
- $\hat{Z}$ is unbiased.
- $\hat{Z}$ minimizes the mean square prediction error $\mathrm{E}[(Z(\bm{s}_0) - \hat{Z}(\bm{s}_0))^2]$.

Linearity requires the following form for $\hat{Z}(\bm{s}_0)$:

\[ \hat{Z}(\bm{s}_0) = \sum_{i=1}^{N}\lambda_i Z(\bm{s}_i) \]
Applying the unbiasedness condition to the preceding equation yields
\[ \mathrm{E}[\hat{Z}(\bm{s}_0)] = \mu \;\Rightarrow\; \sum_{i=1}^{N}\lambda_i \mathrm{E}[Z(\bm{s}_i)] = \mu \;\Rightarrow\; \sum_{i=1}^{N}\lambda_i \mu = \mu \;\Rightarrow\; \sum_{i=1}^{N}\lambda_i = 1 \]
               
Finally, the third condition requires a constrained linear optimization that involves $\lambda_1, \ldots, \lambda_N$ and a Lagrange parameter $2m$. This constrained linear optimization can be expressed in terms of the function $L$ given by

\[ L = \mathrm{E}\left[\left( Z(\bm{s}_0) - \sum_{i=1}^{N}\lambda_i Z(\bm{s}_i) \right)^2\right] + 2m\left(\sum_{i=1}^{N}\lambda_i - 1\right) \]
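Under second-order stationarity, and with the constraint $\sum_{i=1}^{N}\lambda_i = 1$ removing the mean $\mu$ from the difference, the expectation in $L$ expands to a quadratic form in the weights (this intermediate step follows the development in Cressie 1993):

\[ \mathrm{E}\left[\left( Z(\bm{s}_0) - \sum_{i=1}^{N}\lambda_i Z(\bm{s}_i) \right)^2\right] = C_z(\bm{0}) - 2\sum_{i=1}^{N}\lambda_i C_z(\bm{s}_0-\bm{s}_i) + \sum_{i=1}^{N}\sum_{j=1}^{N}\lambda_i \lambda_j C_z(\bm{s}_i-\bm{s}_j) \]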
Define the $N \times 1$ column vector $\bm{\lambda}$ by

\[ \bm{\lambda} = (\lambda_1, \ldots, \lambda_N)' \]

and the $(N+1) \times 1$ column vector $\bm{\lambda}_0$ by

\[ \bm{\lambda}_0 = (\lambda_1, \ldots, \lambda_N, m)' = \left(\begin{array}{c} \bm{\lambda} \\ m \end{array}\right) \]
The optimization is performed by solving

\[ \frac{\partial L}{\partial \bm{\lambda}_0} = \mathbf{0} \]

in terms of $\lambda_1, \ldots, \lambda_N$ and $m$.
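Componentwise, setting these derivatives to zero yields

\[ \frac{\partial L}{\partial \lambda_i} = 0 \;\Rightarrow\; \sum_{j=1}^{N}\lambda_j C_z(\bm{s}_i-\bm{s}_j) + m = C_z(\bm{s}_0-\bm{s}_i), \quad i = 1, \ldots, N \]

\[ \frac{\partial L}{\partial m} = 0 \;\Rightarrow\; \sum_{i=1}^{N}\lambda_i = 1 \]

These $N+1$ equations are exactly the rows of the matrix equation that follows.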
            
               
The resulting matrix equation can be expressed in terms of either the covariance $C_z(\bm{h})$ or the semivariogram $\gamma_z(\bm{h})$. In terms of the covariance, the preceding equation results in the matrix equation

\[ \mathbf{C}\,\bm{\lambda}_0 = \mathbf{C}_0 \]
where
\[ \mathbf{C} = \left(\begin{array}{ccccc} C_z(\bm{0}) & C_z(\bm{s}_1-\bm{s}_2) & \cdots & C_z(\bm{s}_1-\bm{s}_N) & 1 \\ C_z(\bm{s}_2-\bm{s}_1) & C_z(\bm{0}) & \cdots & C_z(\bm{s}_2-\bm{s}_N) & 1 \\ \vdots & \vdots & \ddots & \vdots & \vdots \\ C_z(\bm{s}_N-\bm{s}_1) & C_z(\bm{s}_N-\bm{s}_2) & \cdots & C_z(\bm{0}) & 1 \\ 1 & 1 & \cdots & 1 & 0 \end{array}\right) \]
and
\[ \mathbf{C}_0 = \left(\begin{array}{c} C_z(\bm{s}_0-\bm{s}_1) \\ C_z(\bm{s}_0-\bm{s}_2) \\ \vdots \\ C_z(\bm{s}_0-\bm{s}_N) \\ 1 \end{array}\right) \]
The solution to the previous matrix equation is
\[ \hat{\bm{\lambda}}_0 = \mathbf{C}^{-1}\mathbf{C}_0 \]
Using this solution for $\bm{\lambda}$ and $m$, the ordinary kriging prediction at $\bm{s}_0$ is

\[ \hat{Z}(\bm{s}_0) = \lambda_1 Z(\bm{s}_1) + \cdots + \lambda_N Z(\bm{s}_N) \]

The associated prediction error is the square root of the prediction variance

\[ \sigma_z^2(\bm{s}_0) = C_z(\bm{0}) - \bm{\lambda}'\mathbf{c}_0 - m \]
where $\mathbf{c}_0$ is $\mathbf{C}_0$ with the 1 in the last row removed, making it an $N \times 1$ vector.
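As a concrete illustration of these formulas (a minimal sketch, not PROC KRIGE2D itself), the following code assembles $\mathbf{C}$ and $\mathbf{C}_0$ from an assumed exponential covariance model, solves $\mathbf{C}\,\bm{\lambda}_0 = \mathbf{C}_0$, and evaluates the prediction and its variance. The coordinates, data values, and covariance parameters are hypothetical.

```python
import numpy as np

def cov_exponential(h, c0=2.0, a0=10.0):
    # Assumed exponential covariance model C_z(h); parameters are examples.
    return c0 * np.exp(-np.asarray(h, dtype=float) / a0)

def ordinary_kriging(coords, z, s0, cov=cov_exponential):
    """Solve the OK system C @ lambda0 = C0; return (prediction, variance)."""
    N = len(z)
    # Pairwise distances among the N sampled locations
    d = np.linalg.norm(coords[:, None, :] - coords[None, :, :], axis=-1)

    # Bordered (N+1) x (N+1) matrix C: covariances, a row/column of 1s, and a 0
    C = np.empty((N + 1, N + 1))
    C[:N, :N] = cov(d)
    C[:N, N] = C[N, :N] = 1.0
    C[N, N] = 0.0

    # Right-hand side C0: covariances to s0, then the constraint entry 1
    d0 = np.linalg.norm(coords - s0, axis=-1)
    C0 = np.append(cov(d0), 1.0)

    lam0 = np.linalg.solve(C, C0)
    lam, m = lam0[:N], lam0[N]              # weights and Lagrange parameter

    zhat = lam @ z                          # sum_i lambda_i Z(s_i)
    var = cov(0.0) - lam @ cov(d0) - m      # C_z(0) - lambda' c_0 - m
    return zhat, var

# Hypothetical data: four sampled locations, prediction at the center
coords = np.array([[0.0, 0.0], [10.0, 0.0], [0.0, 10.0], [10.0, 10.0]])
z = np.array([1.2, 0.7, 1.9, 1.1])
zhat, var = ordinary_kriging(coords, z, s0=np.array([5.0, 5.0]))
print(zhat, var, np.sqrt(var))              # prediction, variance, standard error
```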
            
These formulas are used in the best linear unbiased prediction (BLUP) of random variables (Robinson 1991). Further details are provided in Cressie (1993, pp. 119–123).
               
Because of possible numeric problems when solving the previous matrix equation, Deutsch and Journel (1992) suggest replacing the last row and column of 1s in the preceding matrix $\mathbf{C}$ by $C_z(\bm{0})$, keeping the 0 in the $(N+1, N+1)$ position, and similarly replacing the last element in the preceding right-hand vector $\mathbf{C}_0$ by $C_z(\bm{0})$. This results in an equivalent system but avoids numeric problems when $C_z(\bm{0})$ is large or small relative to 1.
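This rescaling is straightforward to apply to the sketch above; the helper below (the function name is mine) modifies an already assembled system. The weights $\bm{\lambda}$ are unchanged, but the last component of the solution becomes $m / C_z(\bm{0})$, so multiply it back by $C_z(\bm{0})$ before using it in the variance formula.

```python
import numpy as np

def rescale_ok_system(C, C0, c0_var):
    """Apply the Deutsch-Journel rescaling to an assembled OK system.

    C and C0 are the bordered matrix and right-hand side built above;
    c0_var is the variance C_z(0). The border of 1s is replaced by
    C_z(0) so that all entries share the same scale."""
    C, C0 = C.copy(), C0.copy()
    N = len(C0) - 1
    C[:N, N] = C[N, :N] = c0_var   # border 1s -> C_z(0); C[N, N] stays 0
    C0[N] = c0_var                 # constraint entry 1 -> C_z(0)
    return C, C0
```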