This chapter documents direct and iterative algorithms for large sparse systems of linear equations:

Ax = b

where A is a nonsingular square matrix.
The ITSOLVER call supports the following classes of iterative solvers:
conjugate gradient for symmetric positive-definite systems
conjugate gradient squared for general nonsingular systems
minimum residual for symmetric indefinite systems
biconjugate gradient for general nonsingular systems
Iterative algorithms incur zero or controlled amounts of fill-in, have relatively small working memory requirements, and can converge as fast as O(n) or O(n^2), versus direct dense methods that are typically O(n^3). Each iteration of an iterative algorithm is very inexpensive and typically involves a single matrix-vector multiplication and a pair of forward/backward substitutions.
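To illustrate this cost profile, the following is a minimal textbook sketch of the conjugate gradient method in Python with NumPy (it is not the ITSOLVER implementation); note that each pass through the loop performs exactly one matrix-vector product:

```python
import numpy as np

def conjugate_gradient(A, b, tol=1e-10, max_iter=1000):
    """Plain conjugate gradient for a symmetric positive-definite A.
    Each iteration costs one matrix-vector product plus a few vector ops."""
    x = np.zeros_like(b)
    r = b - A @ x               # residual
    p = r.copy()                # search direction
    rs_old = r @ r
    for _ in range(max_iter):
        Ap = A @ p              # the single matrix-vector multiplication per step
        alpha = rs_old / (p @ Ap)
        x += alpha * p
        r -= alpha * Ap
        rs_new = r @ r
        if np.sqrt(rs_new) < tol:
            break
        p = r + (rs_new / rs_old) * p
        rs_old = rs_new
    return x

# Small SPD test system
A = np.array([[4.0, 1.0], [1.0, 3.0]])
b = np.array([1.0, 2.0])
x = conjugate_gradient(A, b)
```

In exact arithmetic, conjugate gradient on an n-by-n system converges in at most n iterations; in practice it is stopped much earlier, once the residual norm falls below a tolerance.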
Convergence of an iterative method depends on the distribution of eigenvalues of the matrix A, and can be rather slow for badly conditioned matrices. For such cases, SAS/IML offers hybrid algorithms, which combine an incomplete factorization (a modified direct method) used in the preconditioning phase with an iterative refinement procedure. The following preconditioners are supported:
incomplete Cholesky factorization (“IC”)
diagonal Jacobi preconditioner (“DIAG”)
modified incomplete LU factorization (“MILU”)
For more information, see the description of the precond parameter in the section Input Data Description.
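The simplest of these, the diagonal Jacobi preconditioner, can be sketched in a few lines of Python with NumPy; the sketch below mirrors the idea behind the “DIAG” option (it is not the ITSOLVER implementation) by taking M = diag(A), so that applying the preconditioner is just an elementwise division:

```python
import numpy as np

def pcg_jacobi(A, b, tol=1e-10, max_iter=1000):
    """Conjugate gradient with a diagonal (Jacobi) preconditioner M = diag(A).
    Applying M^{-1} costs only one elementwise division per iteration."""
    d = np.diag(A).copy()           # the preconditioner is just the diagonal
    x = np.zeros_like(b)
    r = b - A @ x
    z = r / d                       # z = M^{-1} r
    p = z.copy()
    rz_old = r @ z
    for _ in range(max_iter):
        Ap = A @ p
        alpha = rz_old / (p @ Ap)
        x += alpha * p
        r -= alpha * Ap
        if np.linalg.norm(r) < tol:
            break
        z = r / d
        rz_new = r @ z
        p = z + (rz_new / rz_old) * p
        rz_old = rz_new
    return x

# A badly scaled SPD matrix, where diagonal scaling helps conditioning
A = np.array([[100.0, 1.0], [1.0, 0.05]])
b = np.array([1.0, 1.0])
x = pcg_jacobi(A, b)
```

Diagonal preconditioning is cheap and effective when the conditioning problem comes mainly from poorly scaled rows and columns; the incomplete factorization preconditioners (“IC”, “MILU”) are stronger but cost more to set up and apply.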
The SOLVELIN call supports the following direct sparse solvers for symmetric positive-definite systems:
symbolic LDL
Cholesky
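The Cholesky route can be sketched as follows; this is a dense Python/NumPy illustration rather than SOLVELIN's sparse implementation, but the structure, one factorization followed by a forward and a backward triangular substitution, is the same:

```python
import numpy as np
from scipy.linalg import solve_triangular

def cholesky_solve(A, b):
    """Direct solve of an SPD system via Cholesky: A = L L^T,
    then one forward and one backward triangular substitution."""
    L = np.linalg.cholesky(A)                   # A = L @ L.T, L lower triangular
    y = solve_triangular(L, b, lower=True)      # forward solve:  L y = b
    x = solve_triangular(L.T, y, lower=False)   # backward solve: L^T x = y
    return x

A = np.array([[4.0, 2.0], [2.0, 3.0]])
b = np.array([6.0, 5.0])
x = cholesky_solve(A, b)
```

Once the factorization is computed, additional right-hand sides can be solved by repeating only the two triangular substitutions, which is much cheaper than refactoring.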
Classical factorization-based algorithms share one common complication: during factorization the matrix usually suffers fill-in, meaning entries that are zero in the original matrix become nonzero in the factors, so additional operations and computer memory are required to complete the algorithm. A symmetric permutation of matrix rows and columns can lead to a dramatic reduction of fill-in. To compute such a permutation, SAS/IML implements a minimum degree ordering algorithm, which is an automatic step in the SOLVELIN function.
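The minimum degree idea can be illustrated with a small pure-Python sketch on the sparsity graph of a symmetric matrix (SAS/IML's internal implementation is more sophisticated): at each step, eliminate the vertex of smallest current degree, and connect its remaining neighbors into a clique, which models the fill-in that elimination would create.

```python
def minimum_degree_order(adj):
    """Greedy minimum degree ordering on a symmetric sparsity graph.
    adj: dict mapping node -> set of neighbor nodes (no self-loops).
    Returns an elimination order intended to limit fill-in."""
    adj = {v: set(nbrs) for v, nbrs in adj.items()}   # work on a copy
    order = []
    while adj:
        v = min(adj, key=lambda u: len(adj[u]))       # vertex of minimum degree
        nbrs = adj.pop(v)
        for u in nbrs:
            adj[u].discard(v)
            adj[u] |= nbrs - {u}                      # clique models the fill-in
        order.append(v)
    return order

# Arrow-shaped sparsity: node 0 is connected to every other node.
# Eliminating node 0 first would fill in the whole matrix; minimum
# degree defers it until its degree has dropped, avoiding the fill-in.
arrow = {0: {1, 2, 3}, 1: {0}, 2: {0}, 3: {0}}
perm = minimum_degree_order(arrow)
```

On the arrow pattern above, the greedy rule eliminates the degree-1 vertices first, so the hub vertex is never eliminated while it still has many neighbors; this is exactly the behavior that makes minimum degree effective at reducing fill-in.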