
6.5.3.2. Determinant and Eigenstructure

A matrix determinant is difficult to define but a very useful number

Unfortunately, not every square matrix has an inverse (although most do). Associated with any square matrix is a single number that represents a unique function of the numbers in the matrix. This scalar function of a square matrix is called the determinant. The determinant of a matrix \({\bf A}\) is denoted by \(|{\bf A}|\). A formal definition for the determinant of a square matrix \({\bf A} = (a_{ij})\) is somewhat beyond the scope of this Handbook. Consult any good linear algebra textbook if you are interested in the mathematical details.
Singular matrix

As is the case with inversion of a square matrix, calculation of the determinant is tedious and computer assistance is needed for practical calculations. If the determinant of the (square) matrix is exactly zero, the matrix is said to be singular and it has no inverse.
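As a numerical illustration (the matrices below are made-up examples, not taken from the Handbook), the following Python/NumPy sketch computes the determinant of an invertible matrix and of a singular one:

```python
import numpy as np

# An invertible 2x2 matrix: its determinant is nonzero.
A = np.array([[4.0, 2.0],
              [1.0, 3.0]])
print(np.linalg.det(A))                   # 4*3 - 2*1 = 10

# A singular matrix: the second row is twice the first,
# so the determinant is zero and no inverse exists.
B = np.array([[1.0, 2.0],
              [2.0, 4.0]])
print(np.isclose(np.linalg.det(B), 0.0))  # True: B is singular
```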
Determinant of variance-covariance matrix

Of great interest in statistics is the determinant of a square symmetric matrix \({\bf D}\) whose diagonal elements are sample variances and whose off-diagonal elements are sample covariances. Symmetry means that the matrix and its transpose are identical (i.e., \({\bf D} = {\bf D}'\)). An example is $$ {\bf D} = \left[ \begin{array}{cccc} s_1^2 & s_1 s_2 r_{12} & \cdots & s_1 s_p r_{1p} \\ s_2 s_1 r_{21} & s_2^2 & \cdots & s_2 s_p r_{2p} \\ \vdots & \vdots & & \vdots \\ s_p s_1 r_{p1} & s_p s_2 r_{p2} & \cdots & s_p^2 \end{array} \right] \, , $$ where \(s_i\) is the sample standard deviation of the \(i\)th variable and \(r_{ij}\) is the sample correlation between the \(i\)th and \(j\)th variables.

\({\bf D}\) is the sample variance-covariance matrix for observations of a multivariate vector of \(p\) elements. The determinant of \({\bf D}\), in this case, is sometimes called the generalized variance.
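The generalized variance can be computed directly from data. The sketch below, using a small hypothetical sample, forms \({\bf D}\) with NumPy's np.cov and takes its determinant:

```python
import numpy as np

# Hypothetical sample: n = 5 observations of a p = 2 element vector.
X = np.array([[4.0, 2.0],
              [4.2, 2.1],
              [3.9, 2.0],
              [4.3, 2.1],
              [4.1, 2.2]])

# Sample variance-covariance matrix D (rowvar=False: columns are variables).
D = np.cov(X, rowvar=False)

# D is symmetric: it equals its transpose.
assert np.allclose(D, D.T)

# The generalized variance is the determinant of D.
print(np.linalg.det(D))
```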

Characteristic equation

In addition to a determinant and possibly an inverse, every square matrix has associated with it a characteristic equation. The characteristic equation of a matrix is formed by subtracting some particular value, usually denoted by the Greek letter \(\lambda\) (lambda), from each diagonal element of the matrix, such that the determinant of the resulting matrix is equal to zero. For example, the characteristic equation of a second order (\(2 \times 2\)) matrix \({\bf A}\) may be written as
Definition of the characteristic equation for \(2 \times 2\) matrix

$$ |{\bf A}-\lambda {\bf I}| = \left| \begin{array}{cc} a_{11} - \lambda & a_{12} \\ a_{21} & a_{22} - \lambda \end{array} \right| = 0 \, . $$
Eigenvalues of a matrix

For a matrix of order \(p\), there may be as many as \(p\) different values for \(\lambda\) that will satisfy the equation. These different values are called the eigenvalues of the matrix.
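For a \(2 \times 2\) matrix, the determinant above expands to the polynomial \(\lambda^2 - (a_{11} + a_{22})\lambda + (a_{11}a_{22} - a_{12}a_{21}) = 0\). The following sketch (using the same illustrative matrix as above, not a Handbook example) solves this polynomial directly and checks the roots against a library eigenvalue routine:

```python
import numpy as np

A = np.array([[4.0, 2.0],
              [1.0, 3.0]])

# |A - lambda*I| = 0 expands to
#   lambda^2 - (a11 + a22)*lambda + (a11*a22 - a12*a21) = 0.
trace = A[0, 0] + A[1, 1]                 # 7
det = A[0, 0]*A[1, 1] - A[0, 1]*A[1, 0]   # 10
roots = np.roots([1.0, -trace, det])      # roots of the polynomial

# The same eigenvalues from a library routine:
eigvals = np.linalg.eigvals(A)
print(sorted(roots), sorted(eigvals))     # both give 2.0 and 5.0
```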
Eigenvectors of a matrix

Associated with each eigenvalue is a nonzero vector, \({\bf v}\), called the eigenvector. The eigenvector satisfies the equation $$ {\bf Av} = \lambda {\bf v} \, . $$
Eigenstructure of a matrix

If the complete set of eigenvalues is arranged in the diagonal positions of a diagonal matrix \({\bf L}\), and the corresponding eigenvectors are arranged as the columns of a matrix \({\bf V}\), the following relationship holds. $$ {\bf AV} = {\bf VL} $$ This equation specifies the complete eigenstructure of \({\bf A}\). Eigenstructures and the associated theory figure heavily in multivariate procedures, and the numerical evaluation of \({\bf L}\) and \({\bf V}\) is a central computing problem.
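In practice, \({\bf V}\) and \({\bf L}\) are obtained with a numerical eigenvalue routine. The sketch below (same illustrative matrix as above) verifies both \({\bf Av} = \lambda {\bf v}\) for each eigenpair and the matrix relation \({\bf AV} = {\bf VL}\):

```python
import numpy as np

A = np.array([[4.0, 2.0],
              [1.0, 3.0]])

# np.linalg.eig returns the eigenvalues and a matrix V whose
# columns are the corresponding eigenvectors.
eigvals, V = np.linalg.eig(A)

# Check A v = lambda v for each eigenpair.
for lam, v in zip(eigvals, V.T):
    assert np.allclose(A @ v, lam * v)

# Arrange the eigenvalues on the diagonal of L; then A V = V L.
L = np.diag(eigvals)
assert np.allclose(A @ V, V @ L)
```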