ITSOLVER Call

CALL ITSOLVER (x, error, iter, method, A, b <, precon> <, tol> <, maxiter> <, start> <, history> );

The ITSOLVER subroutine solves a sparse linear system by using iterative methods.

The ITSOLVER call returns the following values:

x

is the solution to Ax=b.

error

is the final relative error of the solution.

iter

is the number of iterations executed.

The input arguments to the ITSOLVER call are as follows:

method

is the type of iterative method to use. The following values are valid:

CG

specifies a conjugate gradient algorithm. The matrix A must be symmetric and positive definite.

CGS

specifies a conjugate gradient squared algorithm, for general A.

MINRES

specifies a minimum residual algorithm, for symmetric indefinite A.

BICG

specifies a biconjugate gradient algorithm, for general A.

A

is the sparse coefficient matrix in the equation Ax=b. You can use the SPARSE function to convert a matrix from dense to sparse storage.

b

is the column vector on the right-hand side of the equation Ax=b.

precon

is the name of a preconditioning technique to use. The following values are valid:

NONE

specifies no preconditioning. This is the default behavior if the argument is not specified.

IC

specifies an incomplete Cholesky factorization. Specify this value when you specify CG or MINRES for the method argument.

DIAG

specifies a diagonal Jacobi preconditioner. Specify this value when you specify CG or MINRES for the method argument.

MILU

specifies a modified incomplete LU factorization. Specify this value when you specify BICG for the method argument.

tol

is the relative error tolerance.

maxiter

is the iteration limit.

start

is a starting point column vector.

history

is a matrix to store the relative error at each iteration.

The ITSOLVER call solves a sparse linear system by iterative methods, which involve updating a trial solution over successive iterations to minimize the error. The ITSOLVER call uses the technique specified in the method parameter to update the solution.

The input matrix $A$ represents the coefficient matrix in sparse format; it is an $n \times 3$ matrix, where $n$ is the number of nonzero elements. The first column contains the nonzero values, and the second and third columns contain the row and column locations for the nonzero elements, respectively. For the algorithms that assume symmetric $A$, only the lower triangular elements should be specified.

The algorithm continues iterating to improve the solution until either the relative error tolerance specified in tol is satisfied or the maximum number of iterations specified in maxiter is reached. The relative error is defined as

\[  \mbox{error} = \| Ax - b \|_2 / (\| b \|_2 + \epsilon )  \]

where the $\| \cdot \|_2$ operator is the Euclidean norm and $\epsilon$ is a machine-dependent epsilon value that prevents division by zero. If tol or maxiter is not specified in the call, then a default value of $10^{-7}$ is used for tol and 100,000 for maxiter.
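The triplet storage scheme and the stopping criterion can be sketched outside of IML. The following Python fragment is only an illustration (the names `sparse_matvec` and `rel_error` are not part of the ITSOLVER interface); it uses the triplet matrix and the printed solution from the first example in this section:

```python
import math

# Sparse coefficient matrix in triplet form: (value, row, column), 1-based,
# exactly as it is passed to ITSOLVER in the first example below.
A = [(3.0, 1, 1), (2.0, 1, 2),
     (1.1, 2, 1), (4.0, 2, 2),
     (1.0, 3, 2), (3.2, 4, 2),
     (-10.0, 3, 3), (3.0, 4, 4)]

def sparse_matvec(triplets, x, n):
    """Multiply an n x n matrix stored in triplet form by a dense vector."""
    y = [0.0] * n
    for v, i, j in triplets:
        y[i - 1] += v * x[j - 1]
    return y

def rel_error(triplets, x, b):
    """error = ||Ax - b||_2 / (||b||_2 + eps): the ITSOLVER stopping criterion."""
    Ax = sparse_matvec(triplets, x, len(b))
    r2 = sum((axi - bi) ** 2 for axi, bi in zip(Ax, b))
    b2 = sum(bi * bi for bi in b)
    eps = 2.0 ** -52            # machine epsilon guards against division by zero
    return math.sqrt(r2) / (math.sqrt(b2) + eps)

b = [1.0, 1.0, 1.0, 1.0]
x = [0.2040816, 0.1938776, -0.080612, 0.1265306]  # printed solution, rounded
print(rel_error(A, x, b))       # small, limited only by the printed rounding
```

Because x is taken from the rounded printed output, the computed relative error is on the order of the rounding in those digits rather than the machine-level value that ITSOLVER reports.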

The convergence of an iterative algorithm can often be enhanced by preconditioning the input coefficient matrix. The preconditioning option is specified with the precon parameter.
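For instance, the DIAG option rescales the problem by the inverse of $M = \mbox{diag}(A)$. The following Python fragment sketches that action on a residual vector; `apply_diag_precon` is an illustrative name, not part of the ITSOLVER interface:

```python
def apply_diag_precon(triplets, r):
    """Apply the inverse of M = diag(A) to a vector r: the action of the
    DIAG (Jacobi) option, sketched outside of ITSOLVER."""
    diag = [1.0] * len(r)
    for v, i, j in triplets:
        if i == j:              # pick out the diagonal entries of A
            diag[i - 1] = v
    return [ri / di for ri, di in zip(r, diag)]

# A has diagonal (4, 2), so the vector components are scaled by 1/4 and 1/2
A = [(4.0, 1, 1), (1.0, 1, 2), (2.0, 2, 2)]
print(apply_diag_precon(A, [8.0, 6.0]))   # -> [2.0, 3.0]
```

A diagonal preconditioner is cheap to apply and helps most when the rows of $A$ differ greatly in scale; the IC and MILU options build incomplete factorizations that are more powerful but more expensive.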

A starting trial solution can be specified with the start parameter; otherwise the ITSOLVER call generates a zero starting point. You can supply a matrix to store the relative error at each iteration with the history parameter. The history matrix should be dimensioned with enough elements to store the maximum number of iterations you expect.

You should always check the returned error and iter parameters to verify that the desired relative error tolerance is reached. If the tolerance is not reached, the program might continue the solution process with another ITSOLVER call, with start set to the latest result. You might also try a different precon option to enhance convergence.
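The check-and-restart pattern can be made concrete with a minimal conjugate gradient routine that accepts a warm start. The Python function `cg` below is an illustrative stand-in for ITSOLVER with the CG method, not ITSOLVER itself, applied to the symmetric positive definite system from the second example in this section:

```python
import math

def cg(matvec, b, x0, tol=1e-7, maxiter=100000):
    """Minimal conjugate gradient with a warm start x0 (a sketch, not
    ITSOLVER itself).  Returns (x, relative error, iterations used)."""
    x = list(x0)
    r = [bi - axi for bi, axi in zip(b, matvec(x))]   # initial residual
    p = list(r)
    rs = sum(ri * ri for ri in r)
    bnorm = math.sqrt(sum(bi * bi for bi in b)) + 2.0 ** -52
    it = 0
    while math.sqrt(rs) / bnorm > tol and it < maxiter:
        Ap = matvec(p)
        alpha = rs / sum(pi * api for pi, api in zip(p, Ap))
        x = [xi + alpha * pi for xi, pi in zip(x, p)]
        r = [ri - alpha * api for ri, api in zip(r, Ap)]
        rs_new = sum(ri * ri for ri in r)
        p = [ri + (rs_new / rs) * pi for ri, pi in zip(r, p)]
        rs = rs_new
        it += 1
    return x, math.sqrt(rs) / bnorm, it

# Dense form of the symmetric positive definite matrix in the second example
A = [[3.0, 1.1,  0.0, 0.0],
     [1.1, 4.0,  1.0, 3.2],
     [0.0, 1.0, 10.0, 0.0],
     [0.0, 3.2,  0.0, 3.0]]
mv = lambda v: [sum(aij * vj for aij, vj in zip(row, v)) for row in A]
b = [1.0, 1.0, 1.0, 1.0]

x, err, it = cg(mv, b, [0.0] * 4, tol=1e-12, maxiter=2)  # stop early on purpose
if err > 1e-9:               # tolerance not reached: warm-start from latest x
    x, err, it = cg(mv, b, x, tol=1e-9)
print(x, err, it)
```

The second call resumes from the best solution found so far, which is the same pattern as calling ITSOLVER again with start set to the previous result.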

For example, the following linear system has a coefficient matrix that contains several zeros:

\[  \left[ \begin{array}{llrl} 3 &  2 &  0 &  0 \\ 1.1 &  4 &  0 &  0 \\ 0 &  1 &  -10 &  0 \\ 0 &  3.2 &  0 &  3 \end{array} \right] x = \left[ \begin{array}{l} 1 \\ 1 \\ 1 \\ 1 \end{array} \right]  \]

You can represent the matrix in sparse form and use the biconjugate gradient algorithm to solve the linear system, as shown in the following statements:

/* value     row column */
A = {  3       1      1,
       2       1      2,
       1.1     2      1,
       4       2      2,
       1       3      2,
       3.2     4      2,
     -10       3      3,
       3       4      4};

/* right hand side */
b = {1, 1, 1, 1};
maxiter = 10;
hist = j(maxiter,1,.);
start = {1,1,1,1};
tol = 1e-10;
call itsolver(x, error, iter, "bicg", A, b, "milu", tol,
              maxiter, start, hist);
print x;
print iter error;
print hist;

Figure 24.176: Solution of a Linear System

x
0.2040816
0.1938776
-0.080612
0.1265306

iter error
3 5.011E-16

hist
0.0254375
0.0080432
5.011E-16
.
.
.
.
.
.
.


The following linear system also has a coefficient matrix with several zeros:

\[  \left[ \begin{array}{llrl} 3 &  1.1 &  0 &  0 \\ 1.1 &  4 &  1 &  3.2 \\ 0 &  1 &  10 &  0 \\ 0 &  3.2 &  0 &  3 \end{array} \right] x = \left[ \begin{array}{l} 1 \\ 1 \\ 1 \\ 1 \end{array} \right]  \]

The following statements represent the matrix in sparse form and use the conjugate gradient algorithm to solve the symmetric positive definite system:

/* value     row column */
A = { 3       1      1,
      1.1     2      1,
      4       2      2,
      1       3      2,
      3.2     4      2,
     10       3      3,
      3       4      4};

/* right hand side */
b = {1, 1, 1, 1};
call itsolver(x, error, iter, "CG", A, b);
print x, iter, error;

Figure 24.177: Solution to Sparse System

x
2.68
-6.4
0.74
7.16

iter
4

error
2.847E-15