COMPORT Call

CALL COMPORT (q, r, p, piv, lindep, a <, b> <, sing> );

The COMPORT subroutine computes the complete orthogonal decomposition of a matrix $\bA $ by Householder transformations.

The subroutine returns the following values:

q

is a matrix. If b is not specified, q is the $m \times m$ orthogonal matrix $\bQ $ that is the product of the $\min (m,n)$ separate Householder transformations. If b is specified, q is the $m \times p$ matrix ${\bQ }^{\prime } \bB $ that has the transposed Householder transformations ${\bQ }^{\prime }$ applied to the p columns of the argument matrix $\bB $.

r

is the $n \times n$ upper triangular matrix $\bR $ that contains the $r \times r$ nonsingular upper triangular matrix ${\bL }^{\prime }$ of the complete orthogonal decomposition, where $r\leq n$ is the rank of $\bA $. The full $m \times n$ upper triangular matrix $\bR $ of the orthogonal decomposition of matrix $\bA $ can be obtained by vertical concatenation of the $(m-n) \times n$ zero matrix to the result r.

p

is an $n \times n$ matrix that is the product $\bP \bPi $ of a permutation matrix $\bPi $ with an orthogonal matrix $\bP $. The permutation matrix is determined by the vector piv.

piv

is an $n \times 1$ vector of permutations of the columns of $\bA $. That is, the QR decomposition is computed, not of $\bA $, but of the matrix with columns $[\bA _{piv[1]}\ldots \bA _{piv[n]}]$. The vector piv corresponds to an $n \times n$ permutation matrix, $\bPi $, of the pivoted QR decomposition in the first step of orthogonal decomposition.

lindep

specifies the number of linearly dependent columns in the matrix $\bA $ detected by applying the r Householder transformations in the order specified by the argument piv. That is, lindep is $n-r$.
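
As a quick check of these output arguments, the following statements are a minimal sketch (not part of the manual's example): they call COMPORT for the $6 \times 5$ matrix that is used in the example later in this section and verify the documented properties that q and p are orthogonal and that the rank of $\bA $ equals n minus lindep.

/* Sketch: check documented properties of the COMPORT outputs */
A = {1 0 1 0 0,
     1 0 0 1 0,
     1 0 0 0 1,
     0 1 1 0 0,
     0 1 0 1 0,
     0 1 0 0 1 };
call comport(q, r, p, piv, lindep, A);
n = ncol(A);
rankA = n - lindep;                    /* rank of A                      */
errQ  = max(abs(q`*q - i(nrow(A))));   /* q is the orthogonal matrix Q   */
errP  = max(abs(p`*p - i(n)));         /* p = P*Pi is orthogonal         */
print rankA errQ errP, piv;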

The input arguments to the COMPORT subroutine are as follows:

a

specifies the $m \times n$ matrix $\bA $, with $m\geq n$, which is to be decomposed into the product of the $m \times m$ orthogonal matrix $\bQ $, the $n \times n$ upper triangular matrix $\bR $, and the $n \times n$ orthogonal matrix $\bP $,

\[ \bA = \bQ \left[ \begin{array}{c} \bR \\ \mb{0} \end{array} \right] \bPi ^{\prime } \bP ^{\prime } \bPi \]
b

specifies an optional $m \times p$ matrix $\bB $ that is to be left-multiplied by the transposed $m \times m$ matrix $\bQ ^{\prime }$.

sing

is an optional scalar that specifies a singularity criterion.
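
The following statements sketch the use of the optional b argument; the matrix B here is illustrative, and the check relies only on the documented behavior that the first output argument contains ${\bQ }^{\prime } \bB $ when b is supplied.

/* Sketch: supplying the optional B argument returns Q`*B in place of Q */
A = {1 0 1 0 0,
     1 0 0 1 0,
     1 0 0 0 1,
     0 1 1 0 0,
     0 1 0 1 0,
     0 1 0 0 1 };
B = {1 2, 3 4, 5 6, 7 8, 9 10, 11 12};           /* illustrative 6 x 2 matrix */
call comport(q,   r1, p1, piv1, lindep1, A);     /* q contains the 6 x 6 Q    */
call comport(qtb, r2, p2, piv2, lindep2, A, B);  /* qtb contains Q`*B (6 x 2) */
maxDiff = max(abs(qtb - q`*B));
print maxDiff;                                   /* zero up to rounding error */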

The complete orthogonal decomposition of a singular matrix $\bA $ with $r = \mr{rank}(\bA ) < n$ can be used to compute the Moore-Penrose inverse ${\bA }^-$ or to compute the minimum Euclidean-norm solution of the rank-deficient least squares problem $\min _{\mb{x}} \| \bA \mb{x}- \mb{b}\| ^2_2$; a sketch after the following steps illustrates the least squares use.

  1. Use the QR decomposition of ${\bA }$ with column pivoting,

    \[ \bA = \bQ \left[ \begin{array}{c} \bR \\ \mb{0} \end{array} \right] \bPi ^{\prime } = \left[ \begin{array}{cc} \bY & \bZ \end{array} \right] \left[ \begin{array}{cc} \bR _1 & \bR _2 \\ \mb{0} & \mb{0} \end{array} \right] \bPi ^{\prime } \]

    where $\bR = [ \begin{array}{cc} {\bR }_1 &  {\bR }_2 \end{array}] \in {\mathcal R}^{r \times t}$ is upper trapezoidal, ${\bR }_1\in {\mathcal R}^{r \times r}$ is upper triangular and invertible, ${\bR }_2\in {\mathcal R}^{r \times s}$, ${\bQ } = [\begin{array}{cc} {\bY } &  {\bZ } \end{array}]$ is orthogonal, ${\bY }\in {\mathcal R}^{t \times r}$, ${\bZ }\in {\mathcal R}^{t \times s}$, and ${\bPi }$ permutes the columns of ${\bA }$.

  2. Use the transpose ${\bL }_{12}$ of the upper trapezoidal matrix ${\bR } = \left[ \begin{array}{cc} {\bR }_1 &  {\bR }_2 \end{array} \right]$,

    \[ {\bL }_{12} = \left[ \begin{array}{c} {\bL }_1 \\ {\bL }_2 \end{array} \right] = {\bR }^{\prime } \in {\mathcal R}^{t \times r} \]

    with $\mr{rank}({\bL }_{12}) = \mr{rank}({\bL }_1) = r$, ${\bL }_1\in {\mathcal R}^{r \times r}$ lower triangular, ${\bL }_2\in {\mathcal R}^{s \times r}$. The lower trapezoidal matrix ${\bL }_{12}\in {\mathcal R}^{t \times r}$ is premultiplied with r Householder transformations ${\bP }_1,\ldots ,{\bP }_ r$,

    \[ \bP _ r \ldots \bP _1 \left[ \begin{array}{c} \bL _1 \\ \bL _2 \end{array} \right] = \left[ \begin{array}{c} \bL \\ \mb{0} \end{array} \right] \]

    each zeroing out one of the r columns of ${\bL }_2$ and producing the nonsingular lower triangular matrix ${\bL }\in {\mathcal R}^{r \times r}$. Therefore, you obtain

    \[ \bA = \bQ \left[ \begin{array}{cc} \bL ^{\prime } & \mb{0} \\ \mb{0} & \mb{0} \end{array} \right]\bPi ^{\prime } \bP ^{\prime } = \bY \left[ \begin{array}{cc} \bL ^{\prime } & \mb{0} \end{array} \right] \bPi ^{\prime } \bP ^{\prime } \]

    with $\bP = {\bPi }{\bP }_ r\ldots {\bP }_1\in {\mathcal R}^{t \times t}$ and upper triangular ${\bL }^{\prime }$. This second step is described in Golub and Van Loan (1989).

  3. Compute the Moore-Penrose inverse ${\bA }^-$ explicitly:

    \[ \bA ^- = \bP \bPi \left[ \begin{array}{cc} (\bL ^{\prime })^{-1} & \mb{0} \\ \mb{0} & \mb{0} \end{array} \right] \bQ ^{\prime } = \bP \bPi \left[ \begin{array}{c} (\bL ^{\prime })^{-1} \\ \mb{0} \end{array} \right] \bY ^{\prime } \]
    • Obtain ${\bY }$ in ${\bQ } = \left[\begin{array}{cc} {\bY } &  {\bZ } \end{array}\right]$ explicitly by applying the r Householder transformations obtained in the first step to $\left[ \begin{array}{c} {\bI }_ r \\ \mb{0} \end{array} \right]$.

    • Solve the $r \times r$ upper triangular system with coefficient matrix ${\bL }^{\prime }$ and the t right-hand sides ${\bY }^{\prime }$ by using backward substitution, which yields the $r \times t$ intermediate matrix $({\bL }^{\prime })^{-1}{\bY }^{\prime }$.

    • Left-apply the r Householder transformations in ${\bP }$ on the $r \times t$ intermediate matrix $\left[ \begin{array}{c} ({\bL }^{\prime })^{-1}{\bY }^{\prime } \\ \mb{0} \\ \end{array} \right]$, which results in the symmetric matrix ${\bA }^-\in {\mathcal R}^{t \times t}$.
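
As noted before these steps, the decomposition also yields the minimum Euclidean-norm solution of a rank-deficient least squares problem: once ${\bA }^-$ is available, the solution is $\mb{x}= {\bA }^-\mb{b}$. The following statements are a sketch with an illustrative right-hand side b; the pseudoinverse is built in the same way as in the example later in this section.

/* Sketch: minimum-norm least squares solution via the pseudoinverse */
A = {1 0 1 0 0,
     1 0 0 1 0,
     1 0 0 0 1,
     0 1 1 0 0,
     0 1 0 1 0,
     0 1 0 0 1 };
b = {3, 2, 4, 3, 2, 4};                   /* illustrative right-hand side    */
n = ncol(A);
call comport(q, r, p, piv, lindep, A);
rankA = n - lindep;
perm = i(n);
perm[piv,] = i(n);                        /* permutation matrix from piv     */
fullRinv = j(n, n, 0);
fullRinv[1:rankA, 1:rankA] = inv(r[1:rankA, 1:rankA]);
Ainv = perm*p*fullRinv*q[,1:n]`;          /* Moore-Penrose inverse of A      */
x = Ainv*b;                               /* minimum-norm least squares soln */
print x;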

The GINV function computes the Moore-Penrose inverse ${\bA }^-$ by using the singular value decomposition of ${\bA }$. Using the complete orthogonal decomposition to compute ${\bA }^-$ usually requires far fewer floating-point operations. However, it can be slightly more sensitive to rounding errors than the singular value decomposition, and such errors can disturb the detection of the true rank of ${\bA }$. A short comparison of the two computations follows the example below.

The following example demonstrates some uses of the COMPORT subroutine:

/* Only four linearly independent columns */
A = {1 0 1 0 0,
     1 0 0 1 0,
     1 0 0 0 1,
     0 1 1 0 0,
     0 1 0 1 0,
     0 1 0 0 1 };
m = nrow(A);
n = ncol(A);

call comport(q,r,p,piv,lindep,A);
fullR = r // j(m-n, n, 0);    /* append zero rows to form the m x n matrix   */
perm = i(n);
perm[piv,] = i(n);            /* permutation matrix that corresponds to piv  */

/* recover A from factorization */
A2 = q*fullR*p`*perm`;
reset fuzz;                   /* print very small values as zero             */
print A2;

/* compute Moore-Penrose generalized inverse */
rankA = n - lindep;                       /* rank of A                       */
subR = r[1:rankA, 1:rankA];               /* nonsingular upper triangular block */
fullRinv = j(n, n, 0);
fullRinv[1:rankA, 1:rankA] = inv(subR);
Ainv = perm*p*fullRinv*q[,1:n]`;          /* Moore-Penrose inverse of A      */
print Ainv;

/* verify generalized inverse */
eps = 1e-12;
if any(abs(A*Ainv*A-A) > eps) |
   any(abs(Ainv*A*Ainv-Ainv) > eps) |
   any(abs((A*Ainv)`-A*Ainv) > eps) |
   any(abs((Ainv*A)`-Ainv*A) > eps) then
   msg = "Pseudoinverse conditions not satisfied";
else
   msg = "Pseudoinverse conditions satisfied";
print msg;

Figure 25.75: Results from a Complete Orthogonal Factorization

A2
1 0 1 0 0
1 0 0 1 0
1 0 0 0 1
0 1 1 0 0
0 1 0 1 0
0 1 0 0 1

Ainv
0.2666667 0.2666667 0.2666667 -0.066667 -0.066667 -0.066667
-0.066667 -0.066667 -0.066667 0.2666667 0.2666667 0.2666667
0.4 -0.1 -0.1 0.4 -0.1 -0.1
-0.1 0.4 -0.1 -0.1 0.4 -0.1
-0.1 -0.1 0.4 -0.1 -0.1 0.4

msg
Pseudoinverse conditions satisfied
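
As a follow-up to the example, the following statements are a sketch that assumes they run in the same PROC IML session, immediately after the previous statements. They compare the COMPORT-based pseudoinverse Ainv with the result of the GINV function, which uses the singular value decomposition.

/* Sketch: compare the COMPORT-based pseudoinverse with GINV */
gAinv = ginv(A);                  /* Moore-Penrose inverse via the SVD */
maxDiff = max(abs(Ainv - gAinv));
print maxDiff;                    /* zero up to rounding error         */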