ORTVEC Call

CALL ORTVEC (w, r, rho, lindep, v <, q> ) ;

The ORTVEC subroutine provides columnwise orthogonalization and stepwise QR decomposition by using the Gram-Schmidt process.

The ORTVEC subroutine returns the following values:

w

is an $m \times 1$ vector. If the Gram-Schmidt process converges (lindep=0), w is a unit vector that is orthogonal to the columns of $Q$, which is assumed to have $n \leq m$ (nearly) orthonormal columns. If the Gram-Schmidt process does not converge (lindep=1), w is a vector of missing values. For stepwise QR decomposition, w is the $(n+1)$th orthogonal column of the matrix $Q$. If the q argument is not specified, w is the normalized value of the vector v,

\[  w = \frac{v}{\sqrt {v^{\prime } v}}  \]

r

is an $n \times 1$ vector. If the Gram-Schmidt process converges (lindep=0), r contains the Fourier coefficients. If the Gram-Schmidt process does not converge (lindep=1), r is a vector of missing values. If the q argument is not specified, r is a vector with zero dimension. For stepwise QR decomposition, r contains the $n$ upper triangular elements of the $(n+1)$th column of $R$.

rho

is a scalar value. If the Gram-Schmidt process converges (lindep=0), rho is the distance from v to the range of $Q$. Even if the Gram-Schmidt process converges, a sufficiently small rho value indicates that the vector v can be (nearly) linearly dependent on the columns of $Q$; see the sketch that follows the argument descriptions. If the Gram-Schmidt process does not converge (lindep=1), rho is set to 0. For stepwise QR decomposition, rho contains the diagonal element of the $(n+1)$th column of $R$. In formulas, the value rho is denoted by $\rho $.

lindep

returns a value of 1 if the Gram-Schmidt process does not converge in 10 iterations. A value of 1 often indicates that the input vector v is linearly dependent on the $n$ columns of the input matrix $Q$. In that case, rho is set to 0, and the results w and r contain missing values. If lindep=0, the Gram-Schmidt process did converge, and the results w, r, and rho are computed.

The input arguments to the ORTVEC subroutine are as follows:

v

specifies an $m \times 1$ vector v that is to be orthogonalized to the $n$ columns of $Q$. For stepwise QR decomposition of a matrix, v is the $(n+1)$th matrix column before its orthogonalization.

q

specifies an optional $m \times n$ matrix $Q$ that is assumed to have $n \leq m$ (nearly) orthonormal columns. Thus, the $n \times n$ matrix $Q^{\prime } Q$ should approximate the identity matrix. The column orthonormality assumption is not tested in the ORTVEC call. If it is violated, the results are not predictable. The argument q can be omitted or can have zero rows and columns. For stepwise QR decomposition of a matrix, q contains the first $n$ matrix columns that are already orthogonal.
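The following sketch (with illustrative data) shows the behavior of the rho and lindep results: when v (nearly) lies in the range of $Q$, rho is tiny even though the process converges, and a relative tolerance test on rho (as in the QR examples later in this section) detects the dependence. The matrices q and v here are invented for illustration only.

q = {1 0,
     0 1,
     0 0};                         /* 3 x 2 orthonormal matrix Q            */
v = {2, -3, 1e-12};                /* (nearly) a combination of Q's columns */
call ortvec(w, u, rho, lindep, v, q);
print rho lindep;                  /* typically lindep=0, but rho is tiny   */
dependent = (abs(rho) < 1e-8);     /* a tolerance test flags the dependence */
print dependent;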

The relevant formula for the ORTVEC subroutine is

\[  v = \mb {Qr} + \rho w  \]

In the formula, if the $m \times n$ matrix $Q$ has $n$ (nearly) orthonormal columns, the vector w is orthogonal to the columns of $Q$, and $\rho $ is the distance from v to the range of $Q$.
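The following sketch (again with illustrative data for q and v) verifies this relation numerically: after the call, $v = Qr + \rho w$ holds up to rounding error, and w is orthogonal to the columns of $Q$.

q = {1 0,
     0 1,
     0 0,
     0 0};                             /* 4 x 2 orthonormal matrix Q    */
v = {1, 2, 3, 4};
call ortvec(w, r, rho, lindep, v, q);
resid = max(abs(v - (q*r + rho*w)));   /* zero, up to rounding error    */
orth  = max(abs(q`*w));                /* zero, up to rounding error    */
print rho lindep resid orth;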

There are two special cases:

  • If $m > n$, ORTVEC normalizes the result w, so that $w^{\prime } w = 1$.

  • If $m = n$, the output vector w is the null vector.

The case $m < n$ is not possible since $Q$ is assumed to have $n$ (nearly) orthonormal columns.
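For instance, the following minimal sketch of the $m > n$ case (with invented data) confirms that the returned vector w has unit length:

q = {1 0,
     0 1,
     0 0};                       /* 3 x 2 orthonormal matrix Q */
v = {1, 1, 1};
call ortvec(w, r, rho, lindep, v, q);
wtw = w`*w;                      /* equals 1 when m > n        */
print rho wtw;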

To initialize a stepwise QR decomposition, the ORTVEC subroutine can be called to normalize v only (that is, to compute $w = v / \sqrt {v^{\prime } v}$ and $\rho = \sqrt {v^{\prime } v}$). There are two ways to accomplish this:

  • Omit the last argument q, as in call ortvec(w,r,rho,lindep,v);.

  • Provide a matrix q with zero rows and columns (for example, by using q={}).

In both cases, r is a column vector with zero rows.
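For example, the following sketch (with an illustrative vector v) shows that both forms return the same normalized vector $w = v / \sqrt {v^{\prime } v}$ and the same $\rho = \sqrt {v^{\prime } v}$:

v = {3, 0, 4};
call ortvec(w1, r1, rho1, lindep1, v);    /* omit the q argument            */
q = {};                                   /* empty matrix q                 */
call ortvec(w2, r2, rho2, lindep2, v, q);
print rho1 rho2 w1 w2;                    /* rho1 = rho2 = 5; w1 = w2 = v/5 */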

The ORTVEC subroutine is useful for the following applications:

  • performing stepwise QR decomposition. Compute $Q$ and $R$, so that $A = QR$, where $Q$ is column orthonormal, $Q^{\prime } Q = I$, and $R$ is upper triangular. The $j$th step is applied to the $j$th column, v, of $A$, and it computes the $j$th column w of $Q$ and the $j$th column, $(r^{\prime } \;  \rho \;  0 \; \cdots \; 0)^{\prime }$, of $R$.

  • computing the $m \times (m-n)$ null space matrix, $Q_2$, that corresponds to an $m \times n$ range space matrix, $Q_1$ $(m>n)$, by the following stepwise process:

    1. Set $v = e_ i$ (where $e_ i$ is the $i$th unit vector) and try to make it orthogonal to all column vectors of $Q_1$ and the already generated $Q_2$.

    2. If the subroutine is successful, append w to $Q_2$; otherwise, try $v = e_{i+1}$.

In the following example, the $4 \times 3$ matrix $Q$ contains the unit vectors $e_1, e_3$, and $e_4$. The column vector v is linearly independent of the three columns of $Q$. As expected, the ORTVEC subroutine computes the vector w as the unit vector $e_2$, with $u = (1,1,1)^{\prime }$ and $\rho = 1$.

q = { 1  0  0,
      0  0  0,
      0  1  0,
      0  0  1 };
v = { 1, 1, 1, 1 };
call ortvec(w,u,rho,lindep,v,q);
print rho u w;

Figure 24.262: Matrix Orthogonalization

rho          u          w
  1          1          0
             1          1
             1          0
                        0


Stepwise QR Decomposition Example

You can perform the QR decomposition of the linearly independent columns of an $m \times n$ matrix $\mb {A}$ with the following statements:

a = {1  2  1,
     2  4  2,
     1  4 -1,
     1  0  3}; /* use any matrix A */
nind = 0;  ndep = 0;  dmax = 0.;
n = ncol(a);  m = nrow(a); ind = j(1,n,0);
free q;                                      /* start with an empty matrix Q  */
do j = 1 to n;
   v = a[ ,j];                               /* orthogonalize the jth column  */
   call ortvec(w,u,rho,lindep,v,q);
   aro = abs(rho);
   if aro > dmax then dmax = aro;
   if aro <= 1.e-10 * dmax then lindep = 1;  /* relative tolerance test       */
   if lindep = 0 then do;
      nind = nind + 1;
      q = q || w;                            /* append w as a column of Q     */
      if nind = n then r = r || (u // rho);
      else r = r || (u // rho // j(n-nind,1,0.));
   end;
   else do;
      print "Column " j " is linearly dependent.";
      ndep = ndep + 1; ind[ndep] = j;        /* remember the dependent column */
   end;
end;
print q r;

Figure 24.263: QR Decomposition of Independent Columns

            q                            r
0.3779645   0                2.6457513   5.2915026
0.7559289   0                0           2.8284271
0.3779645   0.7071068        0           0
0.3779645  -0.707107


Next, process the remaining (dependent) columns of $\mb {A}$:

do j = 1 to ndep;
   k = ind[ndep-j+1];       /* process the dependent columns in reverse order */
   v = a[ ,k];
   call ortvec(w,u,rho,lindep,v,q);
   if lindep = 0 then do;
      nind = nind + 1;
      q = q || w;
      if nind = n then r = r || (u // rho);
      else r = r || (u // rho // j(n-nind,1,0.));
   end;
end;
print q r;

Figure 24.264: QR Decomposition of Dependent Columns

                 q                                        r
0.3779645   0           -0.239046       2.6457513   5.2915026   2.6457513
0.7559289   0           -0.478091       0           2.8284271  -2.828427
0.3779645   0.7071068    0.5976143      0           0           1.327E-16
0.3779645  -0.707107     0.5976143


You can also use the ORTVEC subroutine to compute the null space in the last columns of $\mb {Q}$:

do i = 1 to m;
   if nind < m then do;
     v = j(m,1,0.); v[i] = 1.;              /* try the unit vector e_i     */
     call ortvec(w,u,rho,lindep,v,q);
     aro = abs(rho);
     if aro > dmax then dmax = aro;
     if aro <= 1.e-10 * dmax then lindep = 1;
     if lindep = 0 then do;
       nind = nind + 1;
       q = q || w;                          /* append a null-space column  */
     end;
     else print "Unit vector" i "linearly dependent.";
   end;
end;
if nind < m then do;
   print "This is theoretically not possible.";
end;
print q;

Figure 24.265: Final Orthogonal Matrix

                 q
0.3779645   0           -0.239046    0.8944272
0.7559289   0           -0.478091   -0.447214
0.3779645   0.7071068    0.5976143   0
0.3779645  -0.707107     0.5976143  -3.1E-17


In the example, if you define $\mb {Q}_2$ to be the last two columns of $\mb {Q}$, then $\mb {Q}_2^\prime \mb {A} = \mb {0}$, up to rounding error.
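You can verify this claim directly by using the matrices a and q computed previously (the names q2 and z below are arbitrary):

q2 = q[ ,3:4];          /* last two columns of Q                */
z  = q2`*a;             /* should be zero, up to rounding error */
print z;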