Important Linear Algebra Concepts

A matrix $\bA $ is a rectangular array of numbers. The order of a matrix with n rows and k columns is $(n \times k)$. The element in row i, column j of $\bA $ is denoted as $a_{ij}$, and the notation $\left[a_{ij}\right]$ is sometimes used to refer to the two-dimensional row-column array

\[  \bA = \left[ \begin{array}{ccccc} a_{11} &  a_{12} &  a_{13} &  \cdots &  a_{1k} \cr a_{21} &  a_{22} &  a_{23} &  \cdots &  a_{2k} \cr a_{31} &  a_{32} &  a_{33} &  \cdots &  a_{3k} \cr \vdots &  \vdots &  \vdots &  \ddots &  \vdots \cr a_{n1} &  a_{n2} &  a_{n3} &  \cdots &  a_{nk} \end{array}\right] = \left[a_{ij}\right]  \]

A vector is a one-dimensional array of numbers. A column vector has a single column ($k = 1$). A row vector has a single row ($n = 1$). A scalar is a matrix of order $(1 \times 1)$—that is, a single number. A square matrix has the same row and column order, $n = k$. A diagonal matrix is a square matrix where all off-diagonal elements are zero, $a_{ij} = 0$ if $i \not= j$. The identity matrix $\bI $ is a diagonal matrix with $a_{ii} = 1$ for all i. The unit vector $\mb {1}$ is a vector where all elements are 1. The unit matrix $\bJ $ is a matrix of all 1s. Similarly, the elements of the null vector and the null matrix are all 0.

Basic matrix operations are as follows:

Addition

If $\bA $ and $\bB $ are of the same order, then $\bA + \bB $ is the matrix of elementwise sums,

\[  \bA + \mb {B} = \left[ a_{ij} + b_{ij} \right]  \]
Subtraction

If $\bA $ and $\bB $ are of the same order, then $\bA - \bB $ is the matrix of elementwise differences,

\[  \bA - \mb {B} = \left[ a_{ij} - b_{ij} \right]  \]
Dot product

The dot product of two n-vectors $\mb {a}$ and $\mb {b}$ is the sum of their elementwise products,

\[  \mb {a \cdot b} = \sum _{i=1}^ n a_ i b_ i  \]

The dot product is also known as the inner product of $\mb {a}$ and $\mb {b}$. Two vectors are said to be orthogonal if their dot product is zero.

Multiplication

Matrices $\bA $ and $\bB $ are said to be conformable for $\mb {AB}$ multiplication if the number of columns in $\bA $ equals the number of rows in $\mb {B}$. Suppose that $\bA $ is of order $(n \times k)$ and that $\mb {B}$ is of order $(k \times p)$. The product $\mb {AB}$ is then defined as the $(n \times p)$ matrix of the dot products of the ith row of $\bA $ and the jth column of $\bB $,

\[  \mb {AB} = \left[ \mb {a_ i \cdot b_ j} \right]_{n \times p}  \]
Transposition

The transpose of the $(n \times k)$ matrix $\bA $ is denoted as $\bA ^\prime $ and is obtained by interchanging the rows and columns,

\[  \bA ’ = \left[ \begin{array}{ccccc} a_{11} &  a_{21} &  a_{31} &  \cdots &  a_{n1} \cr a_{12} &  a_{22} &  a_{32} &  \cdots &  a_{n2} \cr a_{13} &  a_{23} &  a_{33} &  \cdots &  a_{n3} \cr \vdots &  \vdots &  \vdots &  \ddots &  \vdots \cr a_{1k} &  a_{2k} &  a_{3k} &  \cdots &  a_{nk} \end{array}\right] = \left[ a_{ji} \right]  \]

A symmetric matrix is equal to its transpose, $\bA =\bA ’$. The inner product of two $(n \times 1)$ column vectors $\mb {a}$ and $\mb {b}$ is $\mb {a \cdot b} = \mb {a}’\mb {b}$.
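
These definitions map directly onto array arithmetic in numerical software. The following NumPy sketch is purely illustrative (the matrices and vectors are arbitrary example values, not part of this documentation) and demonstrates elementwise addition, the dot product, matrix multiplication of conformable matrices, and transposition:

import numpy as np

A = np.array([[1.0, 2.0, 3.0],
              [4.0, 5.0, 6.0]])        # (2 x 3)
B = np.array([[1.0, 0.0, 1.0],
              [0.0, 1.0, 0.0]])        # (2 x 3), conformable with A for addition
C = np.array([[1.0, 2.0],
              [3.0, 4.0],
              [5.0, 6.0]])             # (3 x 2), conformable for AC multiplication

S  = A + B                 # elementwise sums
D  = A - B                 # elementwise differences
P  = A @ C                 # (2 x 2) product: dot products of the rows of A with the columns of C
At = A.T                   # transpose: rows and columns interchanged

a = np.array([1.0, -2.0, 1.0])
b = np.array([2.0, 1.0, 0.0])
print(a @ b)               # inner (dot) product: 1*2 + (-2)*1 + 1*0 = 0, so a and b are orthogonal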

Matrix Inversion

Regular Inverses

The right inverse of a matrix $\bA $ is the matrix that yields the identity when $\bA $ is postmultiplied by it. Similarly, the left inverse of $\bA $ yields the identity if $\bA $ is premultiplied by it. $\bA $ is said to be invertible, and $\bB $ is said to be the inverse of $\bA $, if $\bB $ is both its right and left inverse, $\mb {BA} = \mb {AB} = \bI $. This requires $\bA $ to be square and nonsingular. The inverse of a matrix $\bA $ is commonly denoted as $\bA ^{-1}$. The following results are useful in manipulating inverse matrices (assuming both $\bA $ and $\bC $ are invertible):

$\displaystyle  \bA \bA ^{-1} = $
$\displaystyle  \, \, \bA ^{-1}\bA = \mb {I}  $
$\displaystyle \left(\bA ’\right)^{-1} = $
$\displaystyle  \left(\bA ^{-1}\right)^\prime  $
$\displaystyle \left(\bA ^{-1}\right)^{-1} = $
$\displaystyle  \, \, \bA  $
$\displaystyle \left(\mb {AC}\right)^{-1} = $
$\displaystyle  \, \, \bC ^{-1}\bA ^{-1}  $
$\displaystyle \mr {rank}(\bA ) = $
$\displaystyle  \, \, \mr {rank}\left(\bA ^{-1}\right)  $

If $\bD $ is a diagonal matrix with nonzero entries on the diagonal—that is, $\bD = \mr {diag}\left(d_1,\cdots ,d_ n\right)$—then $\bD ^{-1} = \mr {diag}\left(1/d_1,\cdots ,1/d_ n\right)$. If $\bD $ is a block-diagonal matrix whose blocks are invertible, then

\[  \mb {D} = \left[\begin{array}{lllll} \mb {D}_1 &  \mb {0} &  \mb {0} &  \cdots &  \mb {0} \cr \mb {0} &  \mb {D}_2 &  \mb {0} &  \cdots &  \mb {0} \cr \mb {0} &  \mb {0} &  \mb {D}_3 &  \cdots &  \mb {0} \cr \vdots &  \vdots &  \vdots &  \ddots &  \vdots \cr \mb {0} &  \mb {0} &  \mb {0} &  \cdots &  \mb {D}_ n \end{array}\right] \quad \quad \mb {D}^{-1} = \left[\begin{array}{lllll} \mb {D}^{-1}_1 &  \mb {0} &  \mb {0} &  \cdots &  \mb {0} \cr \mb {0} &  \mb {D}^{-1}_2 &  \mb {0} &  \cdots &  \mb {0} \cr \mb {0} &  \mb {0} &  \mb {D}^{-1}_3 &  \cdots &  \mb {0} \cr \vdots &  \vdots &  \vdots &  \ddots &  \vdots \cr \mb {0} &  \mb {0} &  \mb {0} &  \cdots &  \mb {D}^{-1}_ n \end{array}\right]  \]
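
These identities are easy to confirm numerically. The following NumPy sketch (the matrices are arbitrary example values; NumPy is used here only for illustration) checks the transpose and product identities and the blockwise inversion of a block-diagonal matrix:

import numpy as np

A = np.array([[2.0, 1.0], [1.0, 3.0]])
C = np.array([[1.0, 0.5], [0.0, 2.0]])

# (A')^{-1} = (A^{-1})'  and  (AC)^{-1} = C^{-1} A^{-1}
print(np.allclose(np.linalg.inv(A.T), np.linalg.inv(A).T))                      # True
print(np.allclose(np.linalg.inv(A @ C), np.linalg.inv(C) @ np.linalg.inv(A)))   # True

# A block-diagonal matrix is inverted block by block
D1 = np.array([[4.0, 1.0], [1.0, 2.0]])
D2 = np.array([[3.0, 0.0], [0.0, 5.0]])
Z  = np.zeros((2, 2))
D  = np.block([[D1, Z], [Z, D2]])
D_inv_blockwise = np.block([[np.linalg.inv(D1), Z], [Z, np.linalg.inv(D2)]])
print(np.allclose(np.linalg.inv(D), D_inv_blockwise))                           # True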

In statistical applications the following two results are particularly important, because they can significantly reduce the computational burden in working with inverse matrices.

Partitioned Matrix

Suppose $\bA $ is a nonsingular matrix that is partitioned as

\[  \bA = \left[ \begin{array}{cc} \bA _{11} &  \bA _{12} \cr \bA _{21} &  \bA _{22} \end{array}\right]  \]

Then, provided that all the inverses exist, the inverse of $\bA $ is given by

\[  \bA ^{-1} = \left[ \begin{array}{cc} \mb {B}_{11} &  \mb {B}_{12} \cr \mb {B}_{21} &  \mb {B}_{22} \end{array}\right]  \]

where $\bB _{11} = \left(\bA _{11}-\bA _{12}\bA _{22}^{-1}\bA _{21}\right)^{-1}$, $\bB _{12} = -\bB _{11}\bA _{12}\bA _{22}^{-1}$, $\bB _{21} = -\bA _{22}^{-1}\bA _{21}\bB _{11}$, and $\bB _{22} = \left(\bA _{22}-\bA _{21}\bA _{11}^{-1}\bA _{12}\right)^{-1}$.
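
The partitioned-inverse formula is straightforward to verify numerically. The NumPy sketch below (the blocks are arbitrary example values) rebuilds $\bA ^{-1}$ from $\bB _{11}$, $\bB _{12}$, $\bB _{21}$, and $\bB _{22}$:

import numpy as np

# Partition a nonsingular matrix A into 2x2 blocks and reassemble its inverse
# from the block formulas given above.
A11 = np.array([[4.0, 1.0], [1.0, 3.0]])
A12 = np.array([[0.5, 0.0], [0.0, 0.5]])
A21 = A12.T
A22 = np.array([[2.0, 0.3], [0.3, 1.0]])
A   = np.block([[A11, A12], [A21, A22]])

A11i = np.linalg.inv(A11)
A22i = np.linalg.inv(A22)
B11  = np.linalg.inv(A11 - A12 @ A22i @ A21)
B12  = -B11 @ A12 @ A22i
B21  = -A22i @ A21 @ B11
B22  = np.linalg.inv(A22 - A21 @ A11i @ A12)

print(np.allclose(np.block([[B11, B12], [B21, B22]]), np.linalg.inv(A)))   # True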

Patterned Sum

Suppose $\bR $ is $(n \times n)$ nonsingular, $\bG $ is $(k \times k)$ nonsingular, and $\bB $ and $\bC $ are $(n \times k)$ and $(k \times n)$ matrices, respectively. Then the inverse of $\bR +\mb {BGC}$ is given by

\[  \left(\bR +\mb {BGC}\right)^{-1} = \bR ^{-1} - \bR ^{-1}\bB \left(\bG ^{-1} + \mb {CR}^{-1}\bB \right)^{-1} \bC \bR ^{-1}  \]

This formula is particularly useful if $k \ll n$ and $\bR $ has a simple form that is easy to invert. This case arises, for example, in mixed models where $\bR $ might be a diagonal or block-diagonal matrix, and $\bB =\bC ^\prime $.
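
The following NumPy sketch verifies the identity numerically; the data are randomly generated for illustration, with a diagonal $\bR $ and $\bB =\bC ^\prime $ to mirror the mixed-model case just described:

import numpy as np

# Check (R + BGC)^{-1} = R^{-1} - R^{-1} B (G^{-1} + C R^{-1} B)^{-1} C R^{-1}
# with k << n and a diagonal R that is trivial to invert.
rng = np.random.default_rng(1)
n, k = 8, 2
R = np.diag(rng.uniform(1.0, 2.0, size=n))       # diagonal, so R^{-1} is cheap
G = np.eye(k) + 0.1 * np.ones((k, k))            # (k x k) nonsingular
B = rng.standard_normal((n, k))
C = B.T                                          # B = C', as in the mixed-model case

Ri  = np.diag(1.0 / np.diag(R))
lhs = np.linalg.inv(R + B @ G @ C)
rhs = Ri - Ri @ B @ np.linalg.inv(np.linalg.inv(G) + C @ Ri @ B) @ C @ Ri
print(np.allclose(lhs, rhs))                     # True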

Another situation where this formula plays a critical role is in the computation of regression diagnostics, such as in determining the effect of removing an observation from the analysis. Suppose that $\bA = \bX ’\bX $ represents the crossproduct matrix in the linear model $\mr {E}[\bY ] = \bX \bbeta $. If $\mb {x}_{i}^\prime $ is the ith row of the $\bX $ matrix, then $(\bX ’\bX - \mb {x}_ i\mb {x}_ i^\prime )$ is the crossproduct matrix in the same model with the ith observation removed. Identifying $\bB = -\mb {x}_ i$, $\bC =\mb {x}_ i^\prime $, and $\bG =\bI $ in the preceding inversion formula, you can obtain the expression for the inverse of the crossproduct matrix:

\[  \left(\bX ’\bX - \mb {x}_ i\mb {x}_ i^\prime \right)^{-1} = \left(\bX ’\bX \right)^{-1} + \frac{\left(\bX ’\bX \right)^{-1}\mb {x}_ i\mb {x}_ i^\prime \left(\bX ’\bX \right)^{-1}}{1-\mb {x}_ i^\prime \left(\bX ’\bX \right)^{-1}\mb {x}_ i}  \]

This expression for the inverse of the reduced data crossproduct matrix enables you to compute leave-one-out deletion diagnostics in linear models without refitting the model.
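
The following NumPy sketch (with a simulated design matrix used only for illustration) confirms that the rank-one update reproduces the inverse of the reduced-data crossproduct matrix without refitting:

import numpy as np

rng  = np.random.default_rng(7)
n, p = 20, 3
X = np.column_stack([np.ones(n), rng.standard_normal((n, p - 1))])
XtX_inv = np.linalg.inv(X.T @ X)

i  = 4                                   # observation to delete
xi = X[i]                                # ith row of X as a vector
h  = xi @ XtX_inv @ xi                   # x_i'(X'X)^{-1}x_i

downdated = XtX_inv + np.outer(XtX_inv @ xi, xi @ XtX_inv) / (1.0 - h)
X_red     = np.delete(X, i, axis=0)      # X with the ith observation removed
print(np.allclose(downdated, np.linalg.inv(X_red.T @ X_red)))    # True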

Generalized Inverse Matrices

If $\bA $ is rectangular (not square) or singular, then it is not invertible and the matrix $\bA ^{-1}$ does not exist. Suppose you want to find a solution to simultaneous linear equations of the form

\[  \mb {Ab} = \mb {c}  \]

If $\bA $ is square and nonsingular, then the unique solution is $\mb {b} = \bA ^{-1}\mb {c}$. In statistical applications, the case where $\bA $ is $(n \times k)$ rectangular is less important than the case where $\bA $ is a $(k \times k)$ square matrix of rank less than k. For example, the normal equations in ordinary least squares (OLS) estimation in the model $\bY = \bX \bbeta + \bepsilon $ are

\[  \left(\bX ’\bX \right)\bbeta = \bX ’\bY  \]

A generalized inverse matrix is a matrix $\bA ^-$ such that $\bA ^{-}\mb {c}$ is a solution to the linear system whenever the system is consistent. In the OLS example, a solution can be found as $\left(\bX ’\bX \right)^{-}\bX ’\bY $, where $\left(\bX ’\bX \right)^{-}$ is a generalized inverse of $\bX ’\bX $.

The following four conditions are often associated with generalized inverses. For the square or rectangular matrix $\bA $ there exist matrices $\bG $ that satisfy

\[  \begin{array}{llcl} \mr {(i)} &  \mb {AGA} &  = &  \bA \cr \mr {(ii)} &  \mb {GAG} &  = &  \mb {G} \cr \mr {(iii)} &  (\mb {AG})’ &  = &  \mb {AG} \cr \mr {(iv)} &  (\mb {GA})’ &  = &  \mb {GA} \end{array}  \]

The matrix $\bG $ that satisfies all four conditions is unique and is called the Moore-Penrose inverse, after the first published work on generalized inverses by Moore (1920) and the subsequent definition by Penrose (1955). Only the first condition is required, however, to provide a solution to the linear system above.

Pringle and Rayner (1971) introduced a numbering system to distinguish between different types of generalized inverses. A matrix that satisfies only condition (i) is a $g_1$-inverse. The $g_2$-inverse satisfies conditions (i) and (ii). It is also called a reflexive generalized inverse. Matrices satisfying conditions (i)–(iii) or conditions (i), (ii), and (iv) are $g_3$-inverses. Note that a matrix that satisfies the first three conditions is a right generalized inverse, and a matrix that satisfies conditions (i), (ii), and (iv) is a left generalized inverse. For example, if $\bB $ is $(n \times k)$ of rank k, then $\left(\bB ’\bB \right)^{-1}\bB ’$ is a left generalized inverse of $\bB $. The notation $g_4$-inverse for the Moore-Penrose inverse, satisfying conditions (i)–(iv), is often used by extension, but note that Pringle and Rayner (1971) do not use it; rather, they call such a matrix the generalized inverse.
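
As a brief numerical illustration, the Moore-Penrose inverse returned by NumPy's np.linalg.pinv satisfies all four conditions; the rectangular, rank-deficient matrix below is an arbitrary example:

import numpy as np

A = np.array([[1.0, 2.0, 3.0],
              [2.0, 4.0, 6.0]])                  # (2 x 3), rank 1: row 2 is twice row 1
G = np.linalg.pinv(A)                            # Moore-Penrose (g4) inverse

print(np.allclose(A @ G @ A, A))                 # (i)   AGA   = A
print(np.allclose(G @ A @ G, G))                 # (ii)  GAG   = G
print(np.allclose((A @ G).T, A @ G))             # (iii) (AG)' = AG
print(np.allclose((G @ A).T, G @ A))             # (iv)  (GA)' = GA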

If the $(n \times k)$ matrix $\bX $ is rank-deficient—that is, $\mr {rank}(\bX ) < \min \{ n,k\} $—then the system of equations

\[  \left(\bX ’\bX \right)\bbeta = \bX ’\bY  \]

does not have a unique solution. A particular solution depends on the choice of the generalized inverse. However, some aspects of the statistical inference are invariant to the choice of the generalized inverse. If $\bG $ is a generalized inverse of $\bX ’\bX $, then $\bX \bG \bX ’$ is invariant to the choice of $\bG $. This result comes into play, for example, when you are computing predictions in an OLS model with a rank-deficient $\bX $ matrix, since it implies that the predicted values

\[  \bX \widehat{\bbeta } = \bX \left(\bX ’\bX \right)^{-}\bX ’\mb {y}  \]

are invariant to the choice of $\left(\bX ’\bX \right)^-$.
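
The following NumPy sketch illustrates this invariance with a deliberately rank-deficient design; the simulated data and the particular $g_1$-inverse are illustrative choices. The coefficient solutions differ across generalized inverses, but the predicted values agree:

import numpy as np

# The last column of X duplicates the second, so X'X is singular.
rng = np.random.default_rng(3)
n = 15
x = rng.standard_normal(n)
X = np.column_stack([np.ones(n), x, x])          # three columns, rank 2
y = 1.0 + 2.0 * x + rng.standard_normal(n)
XtX = X.T @ X

G1 = np.linalg.pinv(XtX)                         # Moore-Penrose (g4) inverse

# A g1-inverse: invert a nonsingular 2x2 principal submatrix and pad with zeros
G2 = np.zeros((3, 3))
G2[:2, :2] = np.linalg.inv(XtX[:2, :2])
print(np.allclose(XtX @ G2 @ XtX, XtX))          # True: condition (i) holds

b1, b2 = G1 @ X.T @ y, G2 @ X.T @ y
print(np.allclose(b1, b2))                       # False: the solutions differ
print(np.allclose(X @ b1, X @ b2))               # True: the predicted values agree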

Matrix Differentiation

Taking the derivative of expressions involving matrices is a frequent task in statistical estimation. Objective functions that are to be minimized or maximized are usually written in terms of model matrices and/or vectors whose elements depend on the unknowns of the estimation problem. Suppose that $\bA $ and $\bB $ are real matrices whose elements depend on the scalar quantities $\beta $ and $\theta $—that is, $\bA = \left[ a_{ij}(\beta ,\theta )\right]$, and similarly for $\bB $.

The following are useful results in finding the derivative of elements of a matrix and of functions involving a matrix. For more in-depth discussion of matrix differentiation and matrix calculus, see, for example, Magnus and Neudecker (1999) and Harville (1997).

The derivative of $\bA $ with respect to $\beta $ is denoted $\dot{\bA }_\beta $ and is the matrix of the first derivatives of the elements of $\bA $:

\[  \dot{\bA }_\beta = \frac{\partial }{\partial \beta }\bA = \left[\frac{\partial a_{ij}(\beta ,\theta )}{\partial \beta }\right]  \]

Similarly, the second derivative of $\bA $ with respect to $\beta $ and $\theta $ is the matrix of the second derivatives

\[  \ddot{\bA }_{\beta \theta } = \frac{\partial ^2}{\partial \beta \partial \theta } \bA = \left[\frac{\partial ^2 a_{ij}(\beta ,\theta )}{\partial \beta \partial \theta }\right]  \]

The following are some basic results involving sums, products, and traces of matrices:

$\displaystyle  \frac{\partial }{\partial \beta } c_1\bA = $
$\displaystyle  \, \, c_1\dot{\bA }_\beta  $
$\displaystyle \frac{\partial }{\partial \beta } (\bA + \bB ) = $
$\displaystyle  \, \, \dot{\bA }_\beta + \dot{\bB }_\beta  $
$\displaystyle \frac{\partial }{\partial \beta } (c_1\bA + c_2\bB ) = $
$\displaystyle  \, \, c_1\dot{\bA }_\beta + c_2\dot{\bB }_\beta  $
$\displaystyle \frac{\partial }{\partial \beta } \bA \bB = $
$\displaystyle  \, \, \bA \dot{\bB }_\beta + \dot{\bA }_\beta \bB  $
$\displaystyle \frac{\partial }{\partial \beta } \mr {trace}(\bA )= $
$\displaystyle  \, \, \mr {trace}\left(\dot{\bA }_\beta \right)  $
$\displaystyle \frac{\partial }{\partial \beta } \mr {trace}(\mb {AB})= $
$\displaystyle  \, \, \mr {trace}\left(\bA \dot{\bB }_\beta \right) + \mr {trace}\left(\dot{\bA }_\beta \bB \right)  $

The next set of results is useful in finding the derivative of elements of $\bA $ and of functions of $\bA $, if $\bA $ is a nonsingular matrix:

$\displaystyle  \frac{\partial }{\partial \beta } \mb {x}’\bA ^{-1}\mb {x} = $
$\displaystyle  -\mb {x}’\bA ^{-1}\dot{\bA }_\beta \bA ^{-1}\mb {x}  $
$\displaystyle \frac{\partial }{\partial \beta }\bA ^{-1} = $
$\displaystyle  -\bA ^{-1}\dot{\bA }_\beta \bA ^{-1}  $
$\displaystyle \frac{\partial }{\partial \beta } |\bA | =  $
$\displaystyle  \, \, |\bA | \, \mr {trace}\left( \bA ^{-1}\dot{\bA }_\beta \right)  $
$\displaystyle \frac{\partial }{\partial \beta } \log \left\{ |\bA |\right\}  = $
$\displaystyle  \, \, \frac{1}{|\bA |}\,  \frac{\partial }{\partial \beta }|\bA | = \mr {trace}\left( \bA ^{-1}\dot{\bA }_\beta \right)  $
$\displaystyle \frac{\partial ^2}{\partial \beta \partial \theta } \bA ^{-1} = $
$\displaystyle  -\bA ^{-1}\ddot{\bA }_{\beta \theta }\bA ^{-1} + \bA ^{-1}\dot{\bA }_\beta \bA ^{-1}\dot{\bA }_\theta \bA ^{-1} + \bA ^{-1}\dot{\bA }_\theta \bA ^{-1}\dot{\bA }_\beta \bA ^{-1}  $
$\displaystyle \frac{\partial ^2}{\partial \beta \partial \theta } \log \left\{ |\bA |\right\}  = $
$\displaystyle  \, \, \mr {trace}\left(\bA ^{-1}\ddot{\bA }_{\beta \theta } \right) - \mr {trace}\left(\bA ^{-1}\dot{\bA }_\beta \bA ^{-1}\dot{\bA }_\theta \right)  $
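
The following NumPy sketch checks two of these results by central finite differences for the simple parametric matrix $\bA (\beta ) = \bS + \beta \, \bT $; the matrices $\bS $ and $\bT $ and the point $\beta $ are arbitrary illustrative values:

import numpy as np

S = np.array([[3.0, 0.5], [0.5, 2.0]])
T = np.array([[1.0, 0.2], [0.2, 0.5]])
A     = lambda b: S + b * T                      # A(beta)
A_dot = T                                        # elementwise derivative of A with respect to beta

beta, eps = 0.7, 1e-6
Abi = np.linalg.inv(A(beta))

# d/dbeta A^{-1} = -A^{-1} Adot A^{-1}
num = (np.linalg.inv(A(beta + eps)) - np.linalg.inv(A(beta - eps))) / (2 * eps)
print(np.allclose(num, -Abi @ A_dot @ Abi, atol=1e-5))                     # True

# d/dbeta log|A| = trace(A^{-1} Adot)
num = (np.log(np.linalg.det(A(beta + eps))) - np.log(np.linalg.det(A(beta - eps)))) / (2 * eps)
print(np.isclose(num, np.trace(Abi @ A_dot), atol=1e-5))                   # True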

Now suppose that $\mb {a}$ is a column vector and $\bB $ is a matrix, neither of which depends on the vector $\mb {x}$. The following results are useful for manipulating derivatives of linear and quadratic forms with respect to $\mb {x}$:

$\displaystyle  \frac{\partial }{\partial \mb {x}} \mb {a}’\mb {x} = $
$\displaystyle  \, \, \mb {a}  $
$\displaystyle  \frac{\partial }{\partial \mb {x}}\mb {Bx} = $
$\displaystyle  \, \, \bB ’  $
$\displaystyle  \frac{\partial }{\partial \mb {x}}\mb {x}’\bB \mb {x} = $
$\displaystyle  \left(\bB +\bB ’\right) \mb {x}  $
$\displaystyle  \frac{\partial ^2}{\partial \mb {x}\partial \mb {x}’} \mb {x}’\bB \mb {x} = $
$\displaystyle  \, \, \bB + \bB ’  $
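
A finite-difference check of the quadratic-form result is shown below; the matrix $\bB $ (not required to be symmetric) and the point $\mb {x}$ are arbitrary illustrative values:

import numpy as np

B = np.array([[2.0, 1.0, 0.0],
              [0.5, 3.0, 1.0],
              [0.0, 0.2, 1.5]])
x = np.array([1.0, -1.0, 2.0])
f = lambda v: v @ B @ v                               # the quadratic form x'Bx

eps  = 1e-6
grad = np.array([(f(x + eps * e) - f(x - eps * e)) / (2 * eps) for e in np.eye(3)])
print(np.allclose(grad, (B + B.T) @ x, atol=1e-5))    # True: gradient equals (B + B')x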

Matrix Decompositions

To decompose a matrix is to express it as a function—typically a product—of other matrices that have particular properties such as orthogonality, diagonality, or triangularity. For example, the Cholesky decomposition of a symmetric positive definite matrix $\bA $ is $\bC \bC ’ = \bA $, where $\bC $ is a lower-triangular matrix. The spectral decomposition of a symmetric matrix is $\bA = \bP \bD \bP ’$, where $\bD $ is a diagonal matrix and $\bP $ is an orthogonal matrix.

Matrix decompositions play an important role in statistical theory as well as in statistical computations. Calculations in terms of decompositions can have greater numerical stability. Decompositions are often necessary to extract information about matrices, such as matrix rank, eigenvalues, or eigenvectors. Decompositions are also used to form special transformations of matrices, such as to form a square-root matrix. This section briefly mentions several decompositions that are particularly prevalent and important.

LDU, LU, and Cholesky Decomposition

Every square matrix $\bA $ whose leading principal submatrices are nonsingular—whether $\bA $ is positive definite or not—can be expressed in the form $\bA = \bL \bD \bU $, where $\bL $ is a unit lower-triangular matrix, $\bD $ is a diagonal matrix, and $\bU $ is a unit upper-triangular matrix. (The diagonal elements of a unit triangular matrix are 1.) Because of the arrangement of the matrices, the decomposition is called the LDU decomposition. Since you can absorb the diagonal matrix into the triangular matrices, the decomposition

\[  \bA = \bL \bD ^{1/2} \bD ^{1/2}\bU = \bL ^*\bU ^*  \]

is also referred to as the LU decomposition of $\bA $.

If the matrix $\bA $ is positive definite, then the diagonal elements of $\bD $ are positive and the LDU decomposition is unique. More specifically, for a symmetric positive definite matrix there is a unique decomposition $\bA = \bU ’\bD \bU $, where $\bU $ is unit upper-triangular and $\bD $ is diagonal with positive elements. Absorbing the square root of $\bD $ into $\bU $, $\bC = \bD ^{1/2}\bU $, yields the Cholesky decomposition of a positive definite matrix:

\[  \bA = \bU ’\bD ^{1/2} \,  \bD ^{1/2}\bU = \bC ’\bC  \]

where $\bC $ is upper triangular.

If $\bB $ is $(n \times n)$ symmetric nonnegative definite of rank k, then the Cholesky decomposition extends as follows: there exists a lower-triangular matrix $\bC ^*$ with exactly k positive diagonal elements and $n-k$ null columns such that

\[  \bB = \bC ^*\bC ^{*\prime }  \]
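
In NumPy, for example, np.linalg.cholesky returns the lower-triangular factor $\bL $ with $\bA = \bL \bL ’$, so the upper-triangular $\bC $ of the preceding discussion is simply $\bL ’$; the matrix below is an arbitrary positive definite example:

import numpy as np

A = np.array([[4.0, 2.0, 0.6],
              [2.0, 5.0, 1.0],
              [0.6, 1.0, 3.0]])        # symmetric positive definite
L = np.linalg.cholesky(A)              # lower-triangular factor, A = L L'
C = L.T                                # upper-triangular factor, A = C'C
print(np.allclose(C.T @ C, A))         # True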

Spectral Decomposition

Suppose that $\bA $ is an $(n \times n)$ symmetric matrix. Then there exists an orthogonal matrix $\bQ $ and a diagonal matrix $\bD $ such that $\bA = \bQ \bD \bQ ’$. Of particular importance is the case where the orthogonal matrix is also orthonormal—that is, its column vectors have unit norm. Denote this orthonormal matrix as $\bP $. Then the corresponding diagonal matrix—$\bLambda = \mr {diag}(\lambda _1,\cdots ,\lambda _ n)$, say—contains the eigenvalues of $\bA $. The spectral decomposition of $\bA $ can be written as

\[  \bA = \bP \bLambda \bP ’ = \sum _{i=1}^ n\lambda _ i \mb {p}_ i\mb {p}_ i’  \]

where $\mb {p}_ i$ denotes the ith column vector of $\bP $. The right-side expression decomposes $\bA $ into a sum of rank-1 matrices, and the weight of each contribution is equal to the eigenvalue associated with the ith eigenvector. The sum furthermore emphasizes that the rank of $\bA $ is equal to the number of nonzero eigenvalues.

Harville (1997, p. 538) refers to the spectral decomposition of $\bA $ as the decomposition that takes the previous sum one step further and accumulates contributions associated with the distinct eigenvalues. If $\lambda _1^*,\cdots ,\lambda _ k^*$ are the distinct eigenvalues and $\bE _ j = \sum \mb {p}_ i\mb {p}_ i’$, where the sum is taken over the set of columns for which $\lambda _ i = \lambda _ j^*$, then

\[  \bA = \sum _{j=1}^{k} \lambda _ j^*\bE _ j  \]

You can employ the spectral decomposition of a nonnegative definite symmetric matrix to form a square-root matrix of $\bA $. Suppose that $\bLambda ^{1/2}$ is the diagonal matrix containing the square roots of the $\lambda _ i$. Then $\bB = \bP \bLambda ^{1/2}\bP ’$ is a square-root matrix of $\bA $ in the sense that $\bB \bB = \bA $, because

\[  \bB \bB = \bP \bLambda ^{1/2}\bP ’\bP \bLambda ^{1/2}\bP ’ = \bP \bLambda ^{1/2}\bLambda ^{1/2}\bP ’ = \bP \bLambda \bP ’  \]

Generating the Moore-Penrose inverse of a matrix based on the spectral decomposition is also simple. Denote as $\bDelta $ the diagonal matrix with typical element

\[  \delta _ i = \left\{  \begin{array}{ll} 1/\lambda _ i &  \lambda _ i \not= 0 \cr 0 &  \lambda _ i = 0 \end{array} \right.  \]

Then the matrix $\bP \bDelta \bP ’ = \sum \delta _ i\mb {p}_ i\mb {p}_ i’$ is the Moore-Penrose ($g_4$-generalized) inverse of $\bA $.
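
The following NumPy sketch carries out the spectral decomposition with np.linalg.eigh for a rank-2 example matrix and uses it to form a square-root matrix and the Moore-Penrose inverse as described above:

import numpy as np

A = np.array([[2.0, 1.0, 0.0],
              [1.0, 2.0, 0.0],
              [0.0, 0.0, 0.0]])                  # symmetric, nonnegative definite, rank 2

lam, P = np.linalg.eigh(A)                       # eigenvalues (ascending) and orthonormal eigenvectors
print(np.allclose(P @ np.diag(lam) @ P.T, A))    # True: A = P Lambda P'

B = P @ np.diag(np.sqrt(np.clip(lam, 0.0, None))) @ P.T
print(np.allclose(B @ B, A))                     # True: B is a square-root matrix of A

delta = np.array([1.0 / l if abs(l) > 1e-12 else 0.0 for l in lam])
print(np.allclose(P @ np.diag(delta) @ P.T, np.linalg.pinv(A)))   # True: Moore-Penrose inverse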

Singular-Value Decomposition

The singular-value decomposition is related to the spectral decomposition of a matrix, but it is more general: it can be applied to any matrix. Let $\bB $ be an $(n \times p)$ matrix of rank k. Then there exist orthogonal matrices $\bP $ and $\bQ $ of order $(n \times n)$ and $(p \times p)$, respectively, and a diagonal matrix $\bD $ such that

\[  \bP ’\bB \bQ = \bD = \left[ \begin{array}{cc} \bD _1 &  \mb {0} \cr \mb {0} &  \mb {0} \end{array}\right]  \]

where $\bD _1$ is a diagonal matrix of order k. The diagonal elements of $\bD _1$ are strictly positive. As with the spectral decomposition, this result can be written as a decomposition of $\bB $ into a weighted sum of rank-1 matrices

\[  \bB = \bP \bD \bQ ’ = \sum _{i=1}^ k d_ i \mb {p}_ i\mb {q}_ i’  \]

The scalars $d_1,\cdots ,d_ k$ are called the singular values of the matrix $\bB $. They are the positive square roots of the nonzero eigenvalues of the matrix $\bB ’\bB $. If the singular-value decomposition is applied to a symmetric, nonnegative definite matrix $\bA $, then the singular values $d_1,\cdots ,d_ k$ are the nonzero eigenvalues of $\bA $ and the singular-value decomposition is the same as the spectral decomposition.

As with the spectral decomposition, you can use the results of the singular-value decomposition to generate the Moore-Penrose inverse of a matrix. If $\bB $ is $(n \times p)$ with singular-value decomposition $\bP \bD \bQ ’$, and if $\bDelta $ is a diagonal matrix with typical element

\[  \delta _ i = \left\{  \begin{array}{ll} 1/d_ i &  d_ i \not= 0 \cr 0 &  d_ i = 0 \end{array} \right.  \]

then $\bQ \bDelta \bP ’$ is the $g_4$-generalized inverse of $\bB $.
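
The following NumPy sketch computes the singular-value decomposition of an arbitrary rank-deficient example matrix with np.linalg.svd and reconstructs the Moore-Penrose inverse from its factors:

import numpy as np

B = np.array([[1.0, 2.0, 1.0, 2.0],
              [0.0, 1.0, 0.0, 1.0],
              [1.0, 0.0, 1.0, 0.0]])             # (3 x 4), rank 2: columns 3 and 4 duplicate 1 and 2

P, d, Qt = np.linalg.svd(B, full_matrices=True)  # B = P D Q', with d holding the singular values
D = np.zeros(B.shape)
np.fill_diagonal(D, d)
print(np.allclose(P @ D @ Qt, B))                # True

# g4 (Moore-Penrose) inverse: Q Delta P', with reciprocals of the nonzero singular values
delta = np.array([1.0 / s if s > 1e-12 else 0.0 for s in d])
Delta = np.zeros((B.shape[1], B.shape[0]))
np.fill_diagonal(Delta, delta)
print(np.allclose(Qt.T @ Delta @ P.T, np.linalg.pinv(B)))   # True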