QR Call

CALL QR (q, r, piv, lindep, a <, ord> <, b> ) ;

The QR subroutine produces the QR decomposition of a matrix by using Householder transformations.

The QR subroutine returns the following values:

q

specifies the returned matrix. If the b argument is not specified, q is the $m \times m$ orthogonal matrix $\mathbf{Q}$ that is the product of the $\min (m,n)$ Householder transformations applied to the $m \times n$ matrix $\mathbf{A}$. If the b argument is specified, q is the $m \times p$ matrix $\mathbf{Q}^{\prime } \mathbf{B}$, which is obtained by applying the transposed Householder transformations $\mathbf{Q}^{\prime }$ to the $p$ columns of the argument matrix $\mathbf{B}$.

r

specifies a $\min (m,n) \times n$ upper triangular matrix $\mathbf{R}$ that is the upper part of the $m \times n$ upper triangular matrix $\widetilde{\mathbf{R}}$ of the QR decomposition of the matrix $\mathbf{A}$. The matrix $\widetilde{\mathbf{R}}$ of the QR decomposition can be obtained by vertical concatenation (by using the operator //) of the $(m - \min (m,n)) \times n$ zero matrix to the result matrix $\mathbf{R}$.

piv

specifies an $n \times 1$ vector of permutations of the columns of $\mathbf{A}$; that is, on return, the QR decomposition is computed, not of $\mathbf{A}$, but of the permuted matrix whose columns are $[\mathbf{A}_{piv[1]} \ldots \mathbf{A}_{piv[n]}]$. The vector piv corresponds to an $n \times n$ permutation matrix $\bPi $.

lindep

is the number of linearly dependent columns in matrix $\mathbf{A}$ detected by applying the $\min (m,n)$ Householder transformations in the order specified by the argument vector piv.

The input arguments to the QR subroutine are as follows:

a

specifies an $m \times n$ matrix $\mathbf{A}$ that is to be decomposed into the product of the orthogonal matrix $\mathbf{Q}$ and the upper triangular matrix $\widetilde{\mathbf{R}}$.

ord

specifies an optional $n \times 1$ vector that specifies the order of Householder transformations applied to matrix $\mathbf{A}$. When you specify the ord argument, the columns of $\mathbf{A}$ can be divided into the following groups:

ord[j]>0

Column $j$ of $\mathbf{A}$ is an initial column, meaning it has to be processed at the start in increasing order of ord[j]. This specification defines the first $n_ l$ columns of $\mathbf{A}$ that are to be processed.

ord[j]=0

Column $j$ of $\mathbf{A}$ is a pivot column, meaning it is to be processed in order of decreasing residual Euclidean norms. The pivot columns of $\mathbf{A}$ are processed after the $n_ l$ initial columns and before the $n_ u$ final columns.

ord[j]<0

Column $j$ of $\mathbf{A}$ is a final column, meaning it has to be processed at the end in decreasing order of ord[j]. This specification defines the last $n_ u$ columns of $\mathbf{A}$ that are to be processed. If $n>m$, some of these columns are not processed.

The default is ord[j]=j, in which case the Householder transformations are processed in the same order in which the columns are stored in matrix $\mathbf{A}$ (without pivoting).

b

specifies an optional $m \times p$ matrix $\mathbf{B}$ that is to be multiplied by the transposed $m \times m$ matrix $\mathbf{Q}^{\prime }$. If b is specified, the result q contains the $m \times p$ matrix $\mathbf{Q}^{\prime } \mathbf{B}$. If b is not specified, the result q contains the $m \times m$ matrix $\mathbf{Q}$.

The QR subroutine decomposes an $m \times n$ matrix $\mathbf{A}$ into the product of an $m \times m$ orthogonal matrix $\mathbf{Q}$ and an $m \times n$ upper triangular matrix $\widetilde{\mathbf{R}}$, so that

\[  \bA \bPi = \bQ \widetilde{\bR }, \bQ ^{\prime } \bQ = \bQ \bQ ^{\prime } = \bI _ m  \]

by means of $\min (m,n)$ Householder transformations.
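
For example, the following statements verify the factorization numerically. This is a small sketch that uses a hypothetical $5 \times 3$ matrix (not part of the original example) and rebuilds $\widetilde{\mathbf{R}}$ from the returned matrix $\mathbf{R}$ by vertical concatenation with zero rows (here $\min (m,n) = n$):

a = {1 2 3, 4 5 6, 7 8 10, 2 1 0, 3 3 3};   /* hypothetical 5 x 3 matrix       */
call qr(q, r, piv, lindep, a);               /* default order, no pivoting      */
m = nrow(a); n = ncol(a);
rt = r // j(m-n, n, 0);                      /* Rtilde: pad R with zero rows    */
ss = ssq(a[ ,piv] - q*rt);                   /* should be close to zero         */
print ss;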

The $m \times m$ orthogonal matrix $\mathbf{Q}$ is computed only if the last argument b is not specified, as in the following example:

call qr(q, r, piv, lindep, a, ord);

In many applications, the number of rows, $m$, is very large. In these cases, the explicit computation of the $m \times m$ matrix $\mathbf{Q}$ might require too much memory or time.
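
For example, the following sketch (with hypothetical sizes and randomly generated data, not part of the original text) requests the $m \times p$ matrix $\mathbf{Q}^{\prime } \mathbf{B}$ by supplying the b argument, so that the $m \times m$ matrix $\mathbf{Q}$ is never formed:

m = 100000; n = 4;                       /* hypothetical sizes                   */
call randseed(1);
a = j(m, n);  call randgen(a, "Normal"); /* random m x n test matrix             */
b = j(m, 1);  call randgen(b, "Normal"); /* random right-hand side               */
call qr(qtb, r, piv, lindep, a, , b);    /* qtb = Q`*b; the m x m Q is not built */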

In the usual case where $m > n$,

\begin{eqnarray*}
\bA = \left[ \begin{array}{ccc} * & * & * \\ * & * & * \\ * & * & * \\ * & * & * \\ * & * & * \end{array} \right]
& & \bQ = \left[ \begin{array}{ccccc} * & * & * & * & * \\ * & * & * & * & * \\ * & * & * & * & * \\ * & * & * & * & * \\ * & * & * & * & * \end{array} \right] \\[0.10in]
\widetilde{\bR } = \left[ \begin{array}{ccc} * & * & * \\ 0 & * & * \\ 0 & 0 & * \\ 0 & 0 & 0 \\ 0 & 0 & 0 \end{array} \right]
& & \bR = \left[ \begin{array}{ccc} * & * & * \\ 0 & * & * \\ 0 & 0 & * \end{array} \right] \\[0.10in]
\bQ = \left[ \bQ _1 ~ \bQ _2 \right],
& & \widetilde{\bR } = \left[ \begin{array}{c} \bR \\ \mb {0} \end{array} \right]
\end{eqnarray*}

where $\mathbf{R}$ is the result returned by the QR subroutine.

The $n$ columns of matrix $\mathbf{Q}_1$ provide an orthonormal basis for the column space of $\mathbf{A}$ (the range space of $\mathbf{A}$). Because the $m-n$ columns of $\mathbf{Q}_2$ are orthogonal to the $n$ columns of $\mathbf{A}$ (that is, $\mathbf{Q}_2^{\prime } \mathbf{A} = \mb {0}$), they provide an orthonormal basis for the orthogonal complement of the column space of $\mathbf{A}$, which is the null space of $\mathbf{A}^{\prime }$.
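
The following sketch (again with a hypothetical matrix) extracts $\mathbf{Q}_1$ and $\mathbf{Q}_2$ from the returned matrix q and checks that $\mathbf{Q}_2$ is orthogonal to the columns of $\mathbf{A}$:

a = {1 2, 3 4, 5 7, 2 0};            /* hypothetical 4 x 2 matrix               */
call qr(q, r, piv, lindep, a);
n = ncol(a);
q1 = q[ , 1:n];                      /* orthonormal basis for the range space   */
q2 = q[ , n+1:nrow(a)];              /* basis for the orthogonal complement     */
ss = ssq(q2` * a);                   /* should be close to zero                 */
print ss;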

In the case where $m < n$,

\begin{eqnarray*}
\bA = \left[ \begin{array}{ccccc} * & * & * & * & * \\ * & * & * & * & * \\ * & * & * & * & * \end{array} \right]
& & \bQ = \left[ \begin{array}{ccc} * & * & * \\ * & * & * \\ * & * & * \end{array} \right] \\[0.10in]
\widetilde{\bR } = \bR = \left[ \begin{array}{ccccc} * & * & * & * & * \\ 0 & * & * & * & * \\ 0 & 0 & * & * & * \end{array} \right]
\end{eqnarray*}

Specifying the argument ord as an $n$ vector lets you specify a special order of the columns in matrix $\mathbf{A}$ on which the Householder transformations are applied. There are two special cases:

  • If you do not specify the ord argument, the default values ord$[j] = j$ are used. In this case, Householder transformations are done in the same order in which the columns are stored in $\mathbf{A}$ (without pivoting).

  • If you set all components of ord to zero, the Householder transformations are done in order of decreasing Euclidean norms of the columns of $\mathbf{A}$.
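
Beyond these two special cases, you can mix initial, pivot, and final columns. The following sketch (hypothetical data, not part of the original example) forces column 2 to be processed first and column 1 last, and lets the remaining column be pivoted by its residual norm:

a = {1 2 3, 4 5 6, 7 8 10, 2 1 0};   /* hypothetical 4 x 3 matrix               */
ord = {-1 1 0};                      /* col 1: final, col 2: initial,           */
                                     /* col 3: pivot column                     */
call qr(q, r, piv, lindep, a, ord);
print piv;                           /* expected to begin with 2 and end with 1 */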

To check the QR decomposition, use the following statements to compute the three residual sums of squares (represented by the variables SS0, SS1, and SS2), which should be close to zero:

a = shape(1:20, 5);
m = nrow(a); n = ncol(a);
ord = j(1, n, 0);
call qr(q, r, piv, lindep, a, ord);
ss0 = ssq(a[ ,piv] - q[,1:n] * r);
ss1 = ssq(q * q` - i(m));
ss2 = ssq(q` * q - i(m));
print ss0 ss1 ss2;

Figure 24.282: Result of a QR Decomposition

ss0 ss1 ss2
6.231E-28 2.948E-31 2.862E-31


If the QR subroutine detects linearly dependent columns while processing matrix $\mathbf{A}$, the column order given in the result vector piv can differ from an explicitly specified order in the argument vector ord. If a column of $\mathbf{A}$ is found to be linearly dependent on columns already processed, this column is swapped to the end of matrix $\mathbf{A}$. The order of columns in the result matrix $\mathbf{R}$ corresponds to the order of columns processed in $\mathbf{A}$. The swapping of a linearly dependent column of $\mathbf{A}$ to the end of the matrix corresponds to the swapping of the same column in $\mathbf{R}$ and leads to a zero row at the end of the upper triangular matrix $\mathbf{R}$.

The scalar result lindep counts the number of linearly dependent columns that are detected in constructing the first $\min (m,n)$ Householder transformations in the order specified by the argument vector ord. The test of linear dependence depends on the singularity criterion, which is 1E$-$8 by default.
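
For example, the following sketch (hypothetical data in which column 2 is twice column 1) shows how a dependent column is reported and moved to the end of the pivot order:

a = {1 2 3, 2 4 1, 4 8 5, 3 6 2};    /* hypothetical: column 2 = 2*column 1     */
call qr(q, r, piv, lindep, a);
print lindep piv;                    /* lindep should be 1, and the dependent   */
                                     /* column should appear last in piv        */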

Solving the linear system $Rx = Q^{\prime } b$ with an upper triangular matrix $\mathbf{R}$ whose columns are permuted according to the result vector piv leads to a solution $x$ with permuted components. You can reorder the components of $x$ by using the index vector piv on the left-hand side of an assignment, as follows:

a = {3  0  0 -1,
     0  1  2  0,
     4 -4 -1  1,
    -1  2  3  4};
b = {-1, 8, -3, 28};

n = ncol(a); p = ncol(b);
ord = j(1, n, 0);
call qr(qtb, r, piv, lindep, a, ord, b);
print piv;

x = j(n,1);
x[piv] = inv(r) * qtb[1:n, 1:p];
print x;

Figure 24.283: Solution to a Linear System

piv
1 4 2 3

x
1
2
3
4
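
To verify the solution, you can check that it satisfies the original system. This short follow-up sketch reuses the matrices defined above:

resid = max(abs(a*x - b));   /* should be close to zero */
print resid;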


The Full-Rank Linear Least Squares Problem

This example solves the full-rank linear least squares problem. Specify the argument b as an $m \times p$ matrix $\mathbf{B}$, as follows:

call qr(q, r, piv, lindep, a, ord, b);

When you specify the b argument, the QR subroutine computes the matrix $Q^{\prime } B$ (instead of $Q$) as the result q. You can then compute the $p$ least squares solutions $x_ k$ of an overdetermined linear system that has an $m \times n$ ($m > n$) coefficient matrix $A$ with rank($A$) = $n$ and $p$ right-hand sides $b_ k$ stored as the columns of the $m \times p$ matrix $B$:

\begin{equation*}  \min _{x_ k} \|  A x_ k - b_ k \| ^2, k = 1,\ldots ,p \end{equation*}

where $\|  \cdot \| $ is the Euclidean vector norm. This is accomplished by solving the $p$ upper triangular systems with back substitution:

\begin{equation*}  x_ k = \bPi R^{-1}Q_1^{\prime } b_ k, k = 1,\ldots ,p \end{equation*}

For most applications, the number of rows of $\mathbf{A}$, $m$, is much larger than $n$, the number of columns of $\mathbf{A}$, or $p$, the number of right-hand sides. In these cases, you are advised not to compute the large $m \times m$ matrix $\mathbf{Q}$ (which can consume too much memory and time) if you can solve your problem by computing only the smaller $m \times p$ matrix $\mathbf{Q}^{\prime } \mathbf{B}$ implicitly.

For example, the following statements use the first five columns of a $6 \times 6$ matrix $\mathbf{A}$ (the inverse of the $6 \times 6$ Hilbert matrix):

a= {  36      -630      3360     -7560      7560     -2772,
    -630     14700    -88200    211680   -220500     83160,
    3360    -88200    564480  -1411200   1512000   -582120,
   -7560    211680  -1411200   3628800  -3969000   1552320,
    7560   -220500   1512000  -3969000   4410000  -1746360,
   -2772     83160   -582120   1552320  -1746360    698544 };
aa = a[, 1:5];
b= { 463, -13860, 97020, -258720, 291060, -116424};

m = nrow(aa); n = ncol(aa); p = ncol(b);
call qr(qtb, r, piv, lindep, aa, , b);

if lindep=0 then do;
   x=inv(r)*qtb[1:n];
   print x;  /* x solves aa*x=b */
end;
else /* handle linear dependence */;

Figure 24.284: Solution to Least Squares Problem

x
1
0.5
0.3333333
0.25
0.2


Note that you are using only the first $n$ rows, $\mathbf{Q}_1^{\prime }\mathbf{B}$, of the qtb matrix. The IF-THEN statement of the preceding example can be replaced by the more efficient TRISOLV function:

if lindep=0 then
   x = trisolv(1, r, qtb[1:n], piv);

For information about solving rank-deficient linear least squares problems, see the RZLIND call.