Sparse Matrix Algorithms

Iterative Methods

The conjugate gradient algorithm can be interpreted as the following optimization problem: minimize \phi (x) defined by

\phi(x) = \frac{1}{2}\, x^T A x - x^T b
where b \in \mathbf{R}^n and A \in \mathbf{R}^{n \times n} is symmetric and positive definite.

At each iteration, \phi(x) is minimized along an A-conjugate direction, which constructs residuals orthogonal to the Krylov subspace:

r_i \;\perp\; {\cal K}_i(A;\, r_0), \quad r_i = A x_i - b
where {\cal K}_i is the Krylov subspace
{\cal K}_i(A;\, r) = {\rm span}\{r,\, Ar,\, A^2 r,\, \ldots,\, A^{i-1} r\}
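
As a concrete illustration of these recurrences, the following is a minimal dense-matrix sketch of the conjugate gradient iteration in Python. The function name, NumPy interface, tolerance, and test problem are assumptions made only for this sketch and are not part of the routines described in this chapter; the code uses the common sign convention r = b - Ax.

    import numpy as np

    def conjugate_gradient(A, b, x0=None, tol=1e-8, max_iter=1000):
        """Minimize phi(x) = 1/2 x^T A x - x^T b for symmetric positive definite A."""
        n = b.shape[0]
        x = np.zeros(n) if x0 is None else x0.copy()
        r = b - A @ x                      # residual (sign convention r = b - Ax)
        p = r.copy()                       # first search direction
        rs_old = r @ r
        for _ in range(max_iter):
            Ap = A @ p
            alpha = rs_old / (p @ Ap)      # exact minimizer of phi along p
            x += alpha * p
            r -= alpha * Ap
            rs_new = r @ r
            if np.sqrt(rs_new) < tol:
                break
            p = r + (rs_new / rs_old) * p  # next A-conjugate direction
            rs_old = rs_new
        return x

    # Illustrative use on a small random symmetric positive definite system
    rng = np.random.default_rng(0)
    M = rng.standard_normal((50, 50))
    A = M @ M.T + 50 * np.eye(50)
    b = rng.standard_normal(50)
    x = conjugate_gradient(A, b)
    print(np.linalg.norm(A @ x - b))       # residual norm should be near tol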

Minimum residual algorithms work by minimizing the Euclidean norm \Vert Ax - b \Vert_2 over {\cal K}_i. At each iteration, x_i is the vector in {\cal K}_i that gives the smallest residual.
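
The brief sketch below illustrates this on a symmetric, diagonally dominant tridiagonal test system, using SciPy's minres solver; the choice of SciPy and of this particular test matrix is an assumption made only for illustration and is independent of the algorithms documented here.

    import numpy as np
    from scipy.sparse import diags
    from scipy.sparse.linalg import minres

    # Symmetric tridiagonal test matrix; minres minimizes ||Ax - b||_2
    # over the growing Krylov subspaces K_i(A; r_0).
    n = 200
    A = diags([-1.0, 4.0, -1.0], offsets=[-1, 0, 1], shape=(n, n), format="csr")
    b = np.ones(n)

    x, info = minres(A, b)                     # info == 0 indicates convergence
    print(info, np.linalg.norm(A @ x - b))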

The biconjugate gradient algorithm belongs to a more general class of Petrov-Galerkin methods, in which orthogonality is enforced against a different i-dimensional subspace (x_i remains in {\cal K}_i):

r_i \;\perp\; \{w,\, A^T w,\, (A^T)^2 w,\, \ldots,\, (A^T)^{\,i-1} w\}
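
A minimal Python sketch of one common form of this recursion is given below; it keeps a "shadow" residual driven by A^T so that r_i stays orthogonal to the subspace above. The function name bicg and the starting choice w = r_0 are illustrative assumptions, not part of the routines documented here.

    import numpy as np

    def bicg(A, b, x0=None, tol=1e-8, max_iter=1000):
        """Biconjugate gradient sketch for a general square (nonsymmetric) A."""
        n = b.shape[0]
        x = np.zeros(n) if x0 is None else x0.copy()
        r = b - A @ x
        r_hat = r.copy()                     # shadow residual; w = r_0 is a common choice
        p, p_hat = r.copy(), r_hat.copy()
        rho_old = r_hat @ r
        for _ in range(max_iter):
            Ap = A @ p
            alpha = rho_old / (p_hat @ Ap)
            x += alpha * p
            r -= alpha * Ap                  # residual recursion driven by A
            r_hat -= alpha * (A.T @ p_hat)   # shadow recursion driven by A^T
            if np.linalg.norm(r) < tol:
                break
            rho_new = r_hat @ r
            beta = rho_new / rho_old
            p = r + beta * p
            p_hat = r_hat + beta * p_hat
            rho_old = rho_new
        return x

When A is symmetric positive definite and the shadow residual starts at r_0, the two recursions coincide and the iteration reduces to the conjugate gradient method.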


Input Data Description

Example: Conjugate Gradient Algorithm

Example: Minimum Residual Algorithm

Example: Biconjugate Gradient Algorithm
