Sparse Matrix Algorithms


Overview

This chapter documents direct and iterative algorithms for solving large sparse systems of linear equations:

\[ Ax = b, \quad A \in R^{n \times n}, \; x, b \in R^n \]

where A is a nonsingular square matrix.

The ITSOLVER call supports the following classes of iterative solvers:

•  conjugate gradient for symmetric positive-definite systems

•  conjugate gradient squared for general nonsingular systems

•  minimum residual for symmetric indefinite systems

•  biconjugate gradient for general nonsingular systems
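
The sketch below is not the ITSOLVER interface; it uses SciPy's Krylov solvers, which implement the same four method classes, simply to show where each class applies. The test matrices, right-hand side, and default tolerances are illustrative assumptions only.

```python
# Illustrative only: SciPy's Krylov solvers stand in for the four ITSOLVER
# method classes listed above; this is not the SAS/IML interface.
import numpy as np
from scipy.sparse import diags
from scipy.sparse.linalg import bicg, cg, cgs, minres

n = 100
# Symmetric positive-definite tridiagonal test matrix
spd = diags([-1.0, 2.0, -1.0], offsets=[-1, 0, 1], shape=(n, n), format="csr")
# General (nonsymmetric) test matrix
gen = diags([-1.0, 2.0, -0.5], offsets=[-1, 0, 1], shape=(n, n), format="csr")
b = np.ones(n)

x_cg, _     = cg(spd, b)      # conjugate gradient: symmetric positive definite
x_minres, _ = minres(spd, b)  # minimum residual: symmetric (possibly indefinite)
x_cgs, _    = cgs(gen, b)     # conjugate gradient squared: general nonsingular
x_bicg, _   = bicg(gen, b)    # biconjugate gradient: general nonsingular
```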

Iterative algorithms incur zero or a controlled amount of fill-in, have relatively small working-memory requirements, and can converge in as little as $O(n)$ or $O(n^2)$ operations, whereas direct dense methods typically require $O(n^3)$ operations. Each iteration of an iterative algorithm is inexpensive, typically involving a single matrix-vector multiplication and a pair of forward/backward substitutions.
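
To make the per-iteration cost concrete, here is a minimal unpreconditioned conjugate gradient loop in NumPy/SciPy (a sketch, not the ITSOLVER implementation). Each pass through the loop performs exactly one sparse matrix-vector product; a preconditioned variant would add one preconditioner solve (for example, a forward/backward substitution pair) per iteration.

```python
# Minimal unpreconditioned conjugate gradient (illustrative sketch only).
import numpy as np
from scipy.sparse import diags

def conjugate_gradient(A, b, tol=1e-8, maxiter=1000):
    x = np.zeros_like(b)
    r = b - A @ x
    p = r.copy()
    rs_old = r @ r
    for _ in range(maxiter):
        Ap = A @ p                      # the single matrix-vector product per iteration
        alpha = rs_old / (p @ Ap)
        x += alpha * p
        r -= alpha * Ap
        rs_new = r @ r
        if np.sqrt(rs_new) < tol:       # stop when the residual norm is small enough
            break
        p = r + (rs_new / rs_old) * p
        rs_old = rs_new
    return x

n = 100
A = diags([-1.0, 2.0, -1.0], offsets=[-1, 0, 1], shape=(n, n), format="csr")
b = np.ones(n)
x = conjugate_gradient(A, b)
```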

Convergence of an iterative method depends on the distribution of the eigenvalues of the matrix A and can be quite slow for badly conditioned matrices. For such cases, SAS/IML offers hybrid algorithms, which combine an incomplete factorization (a modified direct method) used in the preconditioning phase with an iterative refinement procedure. The following preconditioners are supported:

•  incomplete Cholesky factorization ("IC")

•  diagonal Jacobi preconditioner ("DIAG")

•  modified incomplete LU factorization ("MILU")

For more information, see the description of the precond parameter in the section Input Data Description.
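
The sketch below mimics this preconditioning idea with SciPy stand-ins rather than the precond option itself: a diagonal Jacobi preconditioner analogous to "DIAG", and an incomplete LU factorization from scipy.sparse.linalg.spilu standing in for the incomplete factorization preconditioners ("IC", "MILU"). All numerical settings are illustrative assumptions.

```python
# Illustrative preconditioning with SciPy stand-ins, not the SAS/IML precond option.
import numpy as np
from scipy.sparse import diags
from scipy.sparse.linalg import LinearOperator, bicg, cg, spilu

n = 200
A = diags([-1.0, 2.0, -1.0], offsets=[-1, 0, 1], shape=(n, n), format="csc")
b = np.ones(n)

# Jacobi ("DIAG"-style) preconditioner: apply the inverse of diag(A)
d_inv = 1.0 / A.diagonal()
M_jacobi = LinearOperator((n, n), matvec=lambda v: d_inv * v)
x1, info1 = cg(A, b, M=M_jacobi)

# Incomplete LU: factor approximately once, then reuse its triangular solves
ilu = spilu(A, drop_tol=1e-4, fill_factor=10)
M_ilu = LinearOperator((n, n), matvec=ilu.solve)
x2, info2 = bicg(A, b, M=M_ilu)
```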

The SOLVELIN call supports the following direct sparse solvers for symmetric positive-definite systems:

•  symbolic LDL

•  Cholesky
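
The following dense NumPy/SciPy fragment only illustrates the factor-then-substitute pattern that such direct solvers follow (factor once, then solve by forward and backward substitution); SOLVELIN itself operates on sparse storage and is not shown here.

```python
# Dense illustration of the factor-then-substitute pattern (sketch only;
# SOLVELIN works on sparse storage).
import numpy as np
from scipy.linalg import cho_factor, cho_solve

A = np.array([[4.0, 1.0, 0.0],
              [1.0, 3.0, 1.0],
              [0.0, 1.0, 2.0]])   # small symmetric positive-definite matrix
b = np.array([1.0, 2.0, 3.0])

c, lower = cho_factor(A)        # Cholesky factorization A = L L'
x = cho_solve((c, lower), b)    # forward substitution, then backward substitution
```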

Classical factorization-based algorithms share a common complication: the factors of A usually suffer fill-in, which means that additional operations and computer memory are required to complete the algorithm. A symmetric permutation of the matrix rows and columns can lead to a dramatic reduction of fill-in. To compute such a permutation, SAS/IML implements a minimum degree ordering algorithm, which is an automatic step in the SOLVELIN call.
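
To make the effect of ordering concrete, the sketch below factors the same sparse matrix with SciPy's SuperLU interface under the natural ordering and under a multiple minimum degree ordering ("MMD_AT_PLUS_A"), and compares the number of nonzeros in the resulting factors. This illustrates the fill-reduction idea only; it is not the ordering algorithm used inside SOLVELIN.

```python
# Fill-in and ordering, illustrated with SciPy's SuperLU interface (a sketch;
# the minimum degree ordering in SOLVELIN is internal to SAS/IML).
import scipy.sparse as sp
from scipy.sparse.linalg import splu

n = 500
R = sp.random(n, n, density=0.01, format="csc", random_state=0)
A = sp.csc_matrix(R + R.T + 10.0 * sp.eye(n))   # sparse, symmetric, nonsingular

lu_natural = splu(A, permc_spec="NATURAL")       # no fill-reducing ordering
lu_mmd = splu(A, permc_spec="MMD_AT_PLUS_A")     # multiple minimum degree ordering

print("nnz(A)               :", A.nnz)
print("fill, natural order  :", lu_natural.L.nnz + lu_natural.U.nnz)
print("fill, minimum degree :", lu_mmd.L.nnz + lu_mmd.U.nnz)
```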