Language Reference


QR Call

CALL QR (q, r, piv, lindep, a <, ord> <, b> );

The QR subroutine produces the QR decomposition of a matrix by using Householder transformations.

The QR subroutine returns the following values:

q

specifies an orthogonal matrix $\bQ $ that is the product of the Householder transformations applied to the $m \times n$ matrix $\bA $, if the b argument is not specified. In this case, the $\min (m,n)$ Householder transformations are applied, and q is an $m \times m$ matrix. If the b argument is specified, q is the $m \times p$ matrix $\bQ ^{\prime } \bB $ obtained by applying the transposed Householder transformations $\bQ ^{\prime }$ to the p columns of the argument matrix $\bB $.

r

specifies a $\min (m,n) \times n$ upper triangular matrix $\bR $ that is the upper part of the $m \times n$ upper triangular matrix $\widetilde{\bR }$ of the QR decomposition of the matrix $\bA $. The matrix $\widetilde{\bR }$ of the QR decomposition can be obtained by vertical concatenation (by using the operator //) of the $(m - \min (m,n)) \times n$ zero matrix to the result matrix $\bR $.

piv

specifies an $n \times 1$ vector of permutations of the columns of $\bA $; that is, on return, the QR decomposition is computed, not of $\bA $, but of the permuted matrix whose columns are $[\bA _{piv[1]} \ldots \bA _{piv[n]}]$. The vector piv corresponds to an $n \times n$ permutation matrix $\bPi $.

lindep

is the number of linearly dependent columns in matrix $\bA $ detected by applying the $\min (m,n)$ Householder transformations in the order specified by the argument vector piv.

The input arguments to the QR subroutine are as follows:

a

specifies an $m \times n$ matrix $\bA $ that is to be decomposed into the product of the orthogonal matrix $\bQ $ and the upper triangular matrix $\widetilde{\bR }$.

ord

specifies an optional $n \times 1$ vector that determines the order of the Householder transformations applied to matrix $\bA $. When you specify the ord argument, the columns of $\bA $ can be divided into the following groups:

ord[j]>0

Column j of $\bA $ is an initial column, meaning it has to be processed at the start in increasing order of ord[j]. This specification defines the first $n_ l$ columns of $\bA $ that are to be processed.

ord[j]=0

Column j of $\bA $ is a pivot column, meaning it is to be processed in order of decreasing residual Euclidean norms. The pivot columns of $\bA $ are processed after the $n_ l$ initial columns and before the $n_ u$ final columns.

ord[j]<0

Column j of $\bA $ is a final column, meaning it has to be processed at the end in decreasing order of ord[j]. This specification defines the last $n_ u$ columns of $\bA $ that are to be processed. If $n>m$, some of these columns are not processed.

The default is ord[j]=j, in which case the Householder transformations are processed in the same order in which the columns are stored in matrix $\bA $ (without pivoting).

b

specifies an optional $m \times p$ matrix $\bB $ that is to be multiplied by the transposed $m \times m$ matrix $\bQ ^{\prime }$. If b is specified, the result q contains the $m \times p$ matrix $\bQ ^{\prime } \bB $. If b is not specified, the result q contains the $m \times m$ matrix $\bQ $.

The QR subroutine decomposes an $m \times n$ matrix $\bA $ into the product of an $m \times m$ orthogonal matrix $\bQ $ and an $m \times n$ upper triangular matrix $\widetilde{\bR }$, so that

\[ \bA \bPi = \bQ \widetilde{\bR }, \bQ ^{\prime } \bQ = \bQ \bQ ^{\prime } = \bI _ m \]

by means of $\min (m,n)$ Householder transformations.
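
As background (this is standard linear algebra rather than anything specific to the QR subroutine), each Householder transformation is an orthogonal reflection of the form

\[ \mb{H} = \bI _ m - \frac{2}{v^{\prime } v} v v^{\prime } \]

where the vector $v$ is chosen so that applying $\mb{H}$ to the current working matrix annihilates the entries below the diagonal in one column.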

The $m \times m$ orthogonal matrix $\bQ $ is computed only if the last argument b is not specified, as in the following example:

call qr(q, r, piv, lindep, a, ord);

In many applications, the number of rows, m, is very large. In these cases, the explicit computation of the $m \times m$ matrix $\bQ $ might require too much memory or time.
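
For a problem that is small enough to form $\bQ $ explicitly, you can verify that the two forms agree. The following sketch uses an arbitrary $5 \times 2$ matrix and a single right-hand side (both chosen only for illustration); the difference should be close to zero:

a = {1 2, 3 5, 7 11, 13 17, 19 23};       /* arbitrary 5 x 2 matrix       */
b = {1, 0, 2, 1, 3};                      /* one right-hand side          */
call qr(q,   r1, piv1, lindep1, a);       /* explicit 5 x 5 matrix Q      */
call qr(qtb, r2, piv2, lindep2, a, , b);  /* implicit product Q`*b        */
diff = ssq(q` * b - qtb);                 /* should be close to zero      */
print diff;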

In the usual case where $m > n$,

\begin{eqnarray*} \bA = \left[ \begin{array}{ccc} * & * & * \\ * & * & * \\ * & * & * \\ * & * & * \\ * & * & * \end{array} \right] & & \bQ = \left[ \begin{array}{ccccc} * & * & * & * & * \\ * & * & * & * & * \\ * & * & * & * & * \\ * & * & * & * & * \\ * & * & * & * & * \end{array} \right] \\[0.10in] \widetilde{\bR } = \left[ \begin{array}{ccc} * & * & * \\ 0 & * & * \\ 0 & 0 & * \\ 0 & 0 & 0 \\ 0 & 0 & 0 \end{array} \right] & & \bR = \left[ \begin{array}{ccc} * & * & * \\ 0 & * & * \\ 0 & 0 & * \end{array} \right] \\[0.10in] \bQ = \left[ \bQ _1 ~ \bQ _2 \right], & & \widetilde{\bR } = \left[ \begin{array}{c} \bR \\ \mb{0} \end{array} \right] \end{eqnarray*}

where $\bR $ is the result returned by the QR subroutine.

The n columns of matrix $\bQ _1$ provide an orthonormal basis for the column space (range) of $\bA $. Because the $m-n$ columns of $\bQ _2$ are orthogonal to the n columns of $\bA $ (that is, $\bQ _2^{\prime } \bA = \mb{0}$), they provide an orthonormal basis for the orthogonal complement of the column space of $\bA $, that is, for the null space of $\bA ^{\prime }$.
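
The following sketch checks both properties numerically for an arbitrary $5 \times 3$ matrix (assumed to have full column rank); both sums of squares should be close to zero:

a = {1 0 2, 0 1 1, 3 1 0, 1 2 2, 0 0 4};  /* arbitrary full-rank matrix   */
m = nrow(a); n = ncol(a);
call qr(q, r, piv, lindep, a);
q1 = q[, 1:n];                      /* orthonormal basis of the range     */
q2 = q[, (n+1):m];                  /* basis of the orthogonal complement */
ssRange = ssq(a[, piv] - q1 * r);   /* permuted A is reproduced by Q1*R   */
ssNull  = ssq(q2` * a);             /* Q2`*A should be near zero          */
print ssRange ssNull;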

In the case where $m < n$,

\begin{eqnarray*} \bA = \left[ \begin{array}{ccccc} * & * & * & * & * \\ * & * & * & * & * \\ * & * & * & * & * \end{array} \right] & & \bQ = \left[ \begin{array}{ccc} * & * & * \\ * & * & * \\ * & * & * \end{array} \right] \\[0.10in] \widetilde{\bR } = \bR = \left[ \begin{array}{ccccc} * & * & * & * & * \\ 0 & * & * & * & * \\ 0 & 0 & * & * & * \end{array} \right] \end{eqnarray*}

Specifying the argument ord as an n vector lets you control the order of the columns of matrix $\bA $ to which the Householder transformations are applied. There are two special cases, which are illustrated in the sketch that follows this list:

  • If you do not specify the ord argument, the default values ord$[j] = j$ are used. In this case, Householder transformations are done in the same order in which the columns are stored in $\bA $ (without pivoting).

  • If you set all components of ord to zero, the Householder transformations are done in order of decreasing Euclidean norms of the columns of $\bA $.
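
The following sketch shows how these specifications look in code; the $4 \times 4$ matrix and the mixed vector ordMixed (column 1 first, column 4 last, columns 2 and 3 pivoted in between) are chosen only for illustration:

a = {2 7 1 5, 4 1 3 6, 0 2 8 1, 5 5 2 9};  /* arbitrary 4 x 4 matrix         */
n = ncol(a);
ordDefault = 1:n;                 /* same as omitting ord: no pivoting        */
ordPivot   = j(1, n, 0);          /* all zeros: pivot on decreasing norms     */
ordMixed   = {1 0 0 -1};          /* initial, pivot, pivot, final column      */
call qr(q, r, piv, lindep, a, ordPivot);
print piv;                        /* processing order chosen by column norms  */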

To check the QR decomposition, use the following statements to compute three residual sums of squares (represented by the variables SS0, SS1, and SS2), all of which should be close to zero:

a = shape(1:20, 5);                  /* 5 x 4 example matrix            */
m = nrow(a); n = ncol(a);
call qr(q, r, piv, lindep, a);
ss0 = ssq(a[ ,piv] - q[,1:n] * r);   /* permuted A equals Q1 * R        */
ss1 = ssq(q * q` - i(m));            /* Q is orthogonal: Q * Q` = I     */
ss2 = ssq(q` * q - i(m));            /* and Q` * Q = I                  */
print ss0 ss1 ss2;

Figure 25.287: Result of a QR Decomposition

ss0 ss1 ss2
6.231E-28 2.948E-31 2.862E-31



If the QR subroutine detects linearly dependent columns while processing matrix $\bA $, the column order given in the result vector piv can differ from an explicitly specified order in the argument vector ord. If a column of $\bA $ is found to be linearly dependent on columns already processed, this column is swapped to the end of matrix $\bA $. The order of columns in the result matrix $\bR $ corresponds to the order of columns processed in $\bA $. The swapping of a linearly dependent column of $\bA $ to the end of the matrix corresponds to the swapping of the same column in $\bR $ and leads to a zero row at the end of the upper triangular matrix $\bR $.

The scalar result lindep counts the number of linearly dependent columns that are detected in constructing the first $\min (m,n)$ Householder transformations in the order specified by the argument vector ord. The test of linear dependence depends on the singularity criterion, which is 1E$-$8 by default.
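
For example, the following sketch uses a matrix whose third column is the sum of its first two columns (constructed only for illustration), so one dependency should be reported and that column should be moved toward the end of the processing order:

a = {1 2 3 0, 4 5 9 2, 7 8 15 1, 3 1 4 6};  /* column 3 = column 1 + column 2 */
call qr(q, r, piv, lindep, a);
print lindep piv;       /* expect lindep = 1 and column 3 near the end of piv */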

Solving the linear system $\bR x = \bQ ^{\prime } b$ with an upper triangular matrix $\bR $ whose columns are permuted according to the result vector piv leads to a solution x with permuted components. You can reorder the components of x by using the index vector piv on the left-hand side of an assignment, as follows:

a = {3  0  0 -1,
     0  1  2  0,
     4 -4 -1  1,
    -1  2  3  4};
b = {-1, 8, -3, 28};

n = ncol(a); p = ncol(b);
ord = j(1, n, 0);
call qr(qtb, r, piv, lindep, a, ord, b);
print piv;

x = j(n,1);                         /* allocate the solution vector     */
x[piv] = inv(r) * qtb[1:n, 1:p];    /* solve R*y = Q`*b, then reorder y */
print x;

Figure 25.288: Solution to a Linear System

piv
1 4 2 3

x
1
2
3
4



The Full-Rank Linear Least Squares Problem

This example solves the full-rank linear least squares problem. Specify the argument b as an $m \times p$ matrix $\bB $, as follows:

call qr(q, r, piv, lindep, a, ord, b);

When you specify the b argument, the QR subroutine computes the matrix $\bQ ^{\prime } \bB $ (instead of $\bQ $) as the result q. You can then compute the p least squares solutions $x_ k$ of an overdetermined linear system that has an $m \times n$ (with $m > n$) coefficient matrix $\bA $ of full column rank n and p right-hand sides $b_ k$ stored as the columns of the $m \times p$ matrix $\bB $:

\begin{equation*} \min _{x_ k} \| \bA x_ k - b_ k \| ^2, \quad k = 1,\ldots ,p \end{equation*}

where $\|  \cdot \| $ is the Euclidean vector norm. This is accomplished by solving the p upper triangular systems with back substitution:

\begin{equation*} x_ k = \bPi \bR ^{-1} \bQ _1^{\prime } b_ k, \quad k = 1,\ldots ,p \end{equation*}

For most applications, the number of rows of $\bA $, m, is much larger than n, the number of columns of $\bA $, or p, the number of right-hand sides. In these cases, you are advised not to compute the large $m \times m$ matrix $\bQ $ (which can consume too much memory and time) if you can solve your problem by computing only the smaller $m \times p$ matrix $\bQ ^{\prime } \bB $ implicitly.

For example, use the first five columns of the $6 \times 6$ matrix $\bA $ shown below (the inverse of the $6 \times 6$ Hilbert matrix), as follows:

a= {  36      -630      3360     -7560      7560     -2772,
    -630     14700    -88200    211680   -220500     83160,
    3360    -88200    564480  -1411200   1512000   -582120,
   -7560    211680  -1411200   3628800  -3969000   1552320,
    7560   -220500   1512000  -3969000   4410000  -1746360,
   -2772     83160   -582120   1552320  -1746360    698544 };
aa = a[, 1:5];
b= { 463, -13860, 97020, -258720, 291060, -116424};

m = nrow(aa); n = ncol(aa); p = ncol(b);
call qr(qtb, r, piv, lindep, aa, , b);

if lindep=0 then do;
   x=inv(r)*qtb[1:n];
   print x;  /* x solves aa*x=b */
end;
else /* handle linear dependence */;

Figure 25.289: Solution to Least Squares Problem

x
1
0.5
0.3333333
0.25
0.2



Note that only the first n rows, $\bQ ^{\prime }_1\bB $, of the qtb matrix are used. The IF-THEN statement in the preceding example can be replaced by the more efficient TRISOLV function:

if lindep=0 then
   x = trisolv(1, r, qtb[1:n], piv);

For information about solving rank-deficient linear least squares problems, see the RZLIND call.