Language Reference

RZLIND Call

computes rank-deficient linear least squares solutions, complete orthogonal factorizations, and Moore-Penrose inverses

CALL RZLIND( lindep, rup, bup, r <, sing> <, b> );

The RZLIND subroutine returns the following values:


lindep
is a scalar giving the number of linear dependencies recognized in R (the number of zeroed rows in the n x n matrix rup).
rup
is the updated n x n upper triangular matrix R, which contains zero rows in place of the rows whose diagonal elements were recognized as zero in the original R.
bup
is the n x p matrix B of right-hand sides that is updated simultaneously with R. If b is not specified, bup is not accessible.

The inputs to the RZLIND subroutine are as follows:



r
specifies the n x n upper triangular matrix R. Only the upper triangle of r is used; the lower triangle can contain any information.
sing
is an optional scalar specifying a relative singularity criterion for the diagonal elements of R. The diagonal element r_{ii} is considered zero if r_{ii} \leq sing \|r_i\|, where \|r_i\| is the Euclidean norm of column r_i of R. If the value provided for sing is not positive, the default value sing = 1000\epsilon is used, where \epsilon is the relative machine precision.
b
specifies the optional n x p matrix B of right-hand sides that have to be updated or downdated simultaneously with R.

The singularity test used in the RZLIND subroutine is a relative test that uses the Euclidean norms of the columns r_i of R. The diagonal element r_{ii} is considered nearly zero (and the ith row is zeroed out) if the following test is true:
r_{ii} \leq sing \|r_i\|,    where \|r_i\| = \sqrt{r_i' r_i}
Providing an argument sing \leq 0 is the same as omitting the argument sing in the RZLIND call. In this case, the default is sing = 1000\epsilon, where \epsilon is the relative machine precision. If R is computed by the QR decomposition A = QR, then the Euclidean norm of column i of R is the same (except for rounding errors) as the Euclidean norm of column i of A.
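As a rough illustration outside of SAS/IML, the relative singularity test can be sketched in NumPy. The function name and the example matrix below are assumptions of this sketch, not part of the RZLIND interface:

```python
import numpy as np

def nearly_zero_diagonals(R, sing=None):
    """Flag diagonal elements r_ii with |r_ii| <= sing * ||r_i||,
    mirroring the relative singularity test described above."""
    if sing is None or sing <= 0:
        sing = 1000 * np.finfo(float).eps   # default: 1000 * machine precision
    col_norms = np.sqrt((R * R).sum(axis=0))  # Euclidean norm of each column
    return np.abs(np.diag(R)) <= sing * col_norms

# A 3 x 3 upper triangular R whose second diagonal is numerically zero
R = np.array([[2.0, 2.0, 1.0],
              [0.0, 1e-16, 3.0],
              [0.0, 0.0, 1.0]])
print(nearly_zero_diagonals(R))   # only the second diagonal is flagged
```

Note that the test is relative: a diagonal element is compared against the norm of its whole column, so rescaling a column does not change the outcome.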

Consider the following possible application of the RZLIND subroutine. Assume that you want to compute the upper triangular Cholesky factor R of the n x n positive semidefinite matrix A'A,
A'A = R'R    where A \in \mathbb{R}^{m \times n},  rank(A) = r,  r \leq n \leq m
The Cholesky factor R of a positive definite matrix A'A is unique (except for the signs of its rows). However, the Cholesky factor of a positive semidefinite (singular) matrix A'A can have many different forms.

In the following example, A is a 12 x 8 matrix with linearly dependent columns a_1 = a_2 + a_3 + a_4 and a_1 = a_5 + a_6 + a_7, with r=6, n=8, and m=12.
  
    a = {1 1 0 0 1 0 0, 
         1 1 0 0 1 0 0, 
         1 1 0 0 0 1 0, 
         1 1 0 0 0 0 1, 
         1 0 1 0 1 0 0, 
         1 0 1 0 0 1 0, 
         1 0 1 0 0 1 0, 
         1 0 1 0 0 0 1, 
         1 0 0 1 1 0 0, 
         1 0 0 1 0 1 0, 
         1 0 0 1 0 0 1, 
         1 0 0 1 0 0 1}; 
    a = a || uniform(j(12,1,1)); 
    aa = a` * a; 
    m = nrow(a); n = ncol(a);
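As a cross-check of the stated rank, the following NumPy sketch (an illustration, not part of the IML example) builds the same 12 x 7 design matrix, appends a random eighth column as the IML code does, and confirms rank(A) = 6:

```python
import numpy as np

rng = np.random.default_rng(1)
blocks = np.array([[1,1,0,0,1,0,0],
                   [1,1,0,0,1,0,0],
                   [1,1,0,0,0,1,0],
                   [1,1,0,0,0,0,1],
                   [1,0,1,0,1,0,0],
                   [1,0,1,0,0,1,0],
                   [1,0,1,0,0,1,0],
                   [1,0,1,0,0,0,1],
                   [1,0,0,1,1,0,0],
                   [1,0,0,1,0,1,0],
                   [1,0,0,1,0,0,1],
                   [1,0,0,1,0,0,1]], dtype=float)
A = np.hstack([blocks, rng.random((12, 1))])  # analogue of a || uniform(j(12,1,1))
# a1 = a2 + a3 + a4 and a1 = a5 + a6 + a7: two dependencies among 7 columns,
# so rank(blocks) = 5, and the independent random column raises it to 6
print(np.linalg.matrix_rank(A))
```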
 
Applying the ROOT function to the coefficient matrix A'A of the normal equations generates an upper triangular matrix R_1 in which linearly dependent rows are zeroed out. You can verify that A'A = R_1' R_1. Here is the code:

  
    r1 = root(aa); 
    ss1 = ssq(aa - r1` * r1); 
    print ss1 r1 [format=best6.];
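The behavior that ROOT exhibits here, a Cholesky factorization that zeroes the rows belonging to dependent columns, can be sketched in NumPy. This outer-product Cholesky and its small example matrix are assumptions of this rewrite, not SAS source code:

```python
import numpy as np

def semidefinite_chol(M, tol=1e-10):
    """Upper triangular R with M = R.T @ R for positive semidefinite M.
    Rows whose pivot is (numerically) zero are left as zero rows."""
    M = M.astype(float)
    n = M.shape[0]
    R = np.zeros((n, n))
    for i in range(n):
        pivot = M[i, i] - R[:i, i] @ R[:i, i]
        if pivot > tol:
            R[i, i] = np.sqrt(pivot)
            R[i, i+1:] = (M[i, i+1:] - R[:i, i] @ R[:i, i+1:]) / R[i, i]
        # else: leave row i zero (linear dependency)
    return R

A = np.array([[1., 1., 2.],   # third column = first + second
              [1., 2., 3.],
              [2., 1., 3.]])
AA = A.T @ A                  # positive semidefinite, rank 2
R1 = semidefinite_chol(AA)
print(np.sum((AA - R1.T @ R1) ** 2))  # analogue of ss1, essentially zero
```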
 


Applying the QR subroutine with column pivoting to the original matrix A yields a different result, but you can also verify A'A = R_2' R_2 after pivoting the rows and columns of A'A. Here is the code:

  
    ord = j(n,1,0); 
    call qr(q,r2,pivqr,lindqr,a,ord); 
    ss2 = ssq(aa[pivqr,pivqr] - r2` * r2); 
    print ss2 r2 [format=best6.];
 


Using the RUPDT subroutine for stepwise updating of R by the m rows of A finally results in an upper triangular matrix R_3 with n-r nearly zero diagonal elements. However, other elements in rows with nearly zero diagonal elements can have significant values. The following statements verify that A'A = R_3' R_3:
  
    r3 = shape(0,n,n); 
    call rupdt(rup,bup,sup,r3,a); 
    r3 = rup; 
    ss3 = ssq(aa - r3` * r3); 
    print ss3 r3 [format=best6.];
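The stepwise updating that RUPDT performs can be sketched in NumPy: starting from R = 0, each row of A is rotated into R with Givens rotations so that R'R accumulates A'A. This is an illustration assumed for this rewrite, not the RUPDT source:

```python
import numpy as np

def rupdt_sketch(A):
    """Update an upper triangular R by the rows of A with Givens
    rotations, so that on return A.T @ A == R.T @ R (up to rounding)."""
    m, n = A.shape
    R = np.zeros((n, n))
    for w in A.astype(float):          # one observation row at a time
        w = w.copy()
        for i in range(n):
            r = np.hypot(R[i, i], w[i])
            if r == 0.0:
                continue
            c, s = R[i, i] / r, w[i] / r
            Ri = R[i, i:].copy()
            R[i, i:] = c * Ri + s * w[i:]   # rotate w[i] into R[i, i]
            w[i:] = -s * Ri + c * w[i:]     # w[i] is now zero
    return R

# Rank-2 example: third column = 2 * second - first
A = np.arange(12, dtype=float).reshape(4, 3) + 1.0
R3 = rupdt_sketch(A)
print(np.sum((A.T @ A - R3.T @ R3) ** 2))  # analogue of ss3, essentially zero
```

As the text notes, the dependent column leaves a nearly zero diagonal element in R3, while off-diagonal elements in that row can remain significant.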
 




The result R_3 of the RUPDT subroutine can be transformed into the result R_1 of the ROOT function by left applications of Givens rotations that zero out the remaining significant elements of rows with small diagonal elements. Applying the RZLIND subroutine to the upper triangular result R_3 of the RUPDT subroutine generates a Cholesky factor R_4 with zero rows corresponding to small diagonal elements. If its singularity criterion recognizes the same linear dependencies, this gives the same result as the ROOT function (except for the signs of rows). Here is the code:
  
    call rzlind(lind,r4,bup,r3); 
    ss4 = ssq(aa - r4` * r4); 
    print ss4 r4 [format=best6.];
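The transformation that RZLIND applies, zeroing a row with a nearly zero diagonal by rotating its remaining elements into the rows below it, can be sketched in NumPy. The function and its example are assumptions of this rewrite, not the RZLIND implementation:

```python
import numpy as np

def rzlind_sketch(R, sing=None):
    """Zero out rows of the upper triangular R whose diagonal fails the
    relative singularity test, keeping R.T @ R invariant via Givens
    rotations; returns (lindep, Rup) analogous to the RZLIND call."""
    R = R.astype(float).copy()
    n = R.shape[0]
    if sing is None or sing <= 0:
        sing = 1000 * np.finfo(float).eps
    col_norms = np.sqrt((R * R).sum(axis=0))
    lindep = 0
    for i in range(n):
        if abs(R[i, i]) > sing * col_norms[i]:
            continue
        lindep += 1
        R[i, i] = 0.0
        for j in range(i + 1, n):       # chase row i into the rows below
            r = np.hypot(R[j, j], R[i, j])
            if r == 0.0:
                continue
            c, s = R[j, j] / r, R[i, j] / r
            Rj = R[j, j:].copy()
            R[j, j:] = c * Rj + s * R[i, j:]
            R[i, j:] = -s * Rj + c * R[i, j:]   # zeroes R[i, j]
    return lindep, R

A = np.array([[1., 1., 0.],
              [1., 1., 1.],
              [0., 0., 1.],
              [2., 2., 1.]])   # second column equals the first
Q, R3 = np.linalg.qr(A)        # R3 has a (nearly) zero diagonal in row 1
lind, R4 = rzlind_sketch(R3, sing=1e-8)
print(lind)                                 # one linear dependency
print(np.sum((A.T @ A - R4.T @ R4) ** 2))   # analogue of ss4, near zero
```

Because each step is an orthogonal rotation of two rows, R'R is unchanged, so R_4 is still a Cholesky factor of A'A.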
 




Consider the rank-deficient linear least squares problem:
\min_x \|Ax - b\|^2    where A \in \mathbb{R}^{m \times n},  rank(A) = r,  r \leq n \leq m
For r = n, the optimal solution \hat{x} is unique; however, for r < n, the rank-deficient linear least squares problem has many optimal solutions, each of which has the same least squares residual sum of squares:
ss = (A\hat{x} - b)'(A\hat{x} - b)
The solution of the full-rank problem, r = n, is illustrated in the QR call. The following list shows several solutions to the singular problem. The following example uses the 12 x 8 matrix from the preceding section and generates a new column vector b. The vector b and the matrix A are shown in the output.
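The non-uniqueness can be seen directly: adding any null-space vector of A to a solution leaves the residual unchanged. The following NumPy fragment (an illustration with a small assumed matrix, not the IML example) compares the minimum-norm solution with another optimal solution:

```python
import numpy as np

# Rank-deficient A: third column = first + second (r = 2 < n = 3)
A = np.array([[1., 0., 1.],
              [0., 1., 1.],
              [1., 1., 2.],
              [2., 0., 2.]])
b = np.array([1., 2., 3., 4.])

# Minimum-norm least squares solution
x_min, *_ = np.linalg.lstsq(A, b, rcond=None)

# Another optimal solution: shift by a null-space direction of A
z = np.array([1., 1., -1.])            # A @ z = 0
x_other = x_min + 5.0 * z

ss1 = np.sum((A @ x_min - b) ** 2)
ss2 = np.sum((A @ x_other - b) ** 2)
print(abs(ss1 - ss2))                   # same residual sum of squares
print(np.linalg.norm(x_min) < np.linalg.norm(x_other))  # True
```

Both vectors minimize the residual, but only x_min has minimum Euclidean length, which is the distinction the list below draws among the methods.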
  
    b = uniform(j(12,1,1)); 
    ab = a` * b; 
    print b a [format=best6.];
 


Each entry in the following list solves the rank-deficient linear least squares problem. Note that while each method minimizes the residual sum of squares, not all of the given solutions are of minimum Euclidean length.



You can use various methods to compute the Moore-Penrose inverse A^- of a rectangular matrix A using orthogonal methods. The entries in the following list find the Moore-Penrose inverse of the matrix A shown on this page.
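The verification used in these entries (the quantity ss5 below) checks the four Penrose conditions: AGA = A, GAG = G, and symmetry of AG and GA. The same check can be sketched with NumPy's built-in pseudoinverse; the random test matrix is an assumption of this sketch, not the IML example:

```python
import numpy as np

rng = np.random.default_rng(7)
A = rng.random((12, 6)) @ rng.random((6, 8))   # a 12 x 8 matrix of rank 6
ga = np.linalg.pinv(A)                          # Moore-Penrose inverse G = A^-

t1 = A @ ga        # AG, must be symmetric
t3 = ga @ A        # GA, must be symmetric
ss5 = (np.sum((t1 - t1.T) ** 2) + np.sum((t3 - t3.T) ** 2)
       + np.sum((t1 @ A - A) ** 2)      # AGA = A
       + np.sum((t3 @ ga - ga) ** 2))   # GAG = G
print(ss5)         # essentially zero
```

Any matrix G that drives all four terms of ss5 to zero is the (unique) Moore-Penrose inverse of A.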

  • Use complete QR decomposition with RUPDT and LUPDT:
      
              r3 = shape(0,n,n); 
              y = i(m); qtb = shape(0,n,m); 
              call rupdt(rup,bup,sup,r3,a,qtb,y); 
              r3 = rup; qtb = bup; 
              call rzlind(lind,r4,bup,r3,,qtb); 
              nr = n - lind; qtb = bup; 
              r = r4[piv[1:nr],piv[1:nr]]`; 
              z = r4[piv[1:nr],piv[nr+1:n]]`; 
              call lupdt(lup,bup,sup,r,z); 
              rd = trisolv(3,lup,r4[piv[1:nr],]); 
              rd = trisolv(4,lup,rd); 
              ga = shape(0,n,m); 
              ga = rd` * qtb[piv[1:nr],]; 
              t1 = a * ga; t2 = t1`; 
              t3 = ga * a; t4 = t3`; 
              ss5 = ssq(t1 - t2) + ssq(t3 - t4) + 
                    ssq(t1 * a - a) + ssq(t3 * ga - ga); 
              print ss5;
     
