2.6 Matrices
Matrices are mathematical entities consisting of specifically ordered groups of numbers. An example of a matrix is shown in expression 2.262.

$$A = \begin{bmatrix} a_{11} & a_{12} & a_{13} \\ a_{21} & a_{22} & a_{23} \\ a_{31} & a_{32} & a_{33} \end{bmatrix} \tag{2.262}$$

Matrix 2.262 is considered a 3 x 3 matrix because it has 3 rows and 3 columns. Please note that when specifying the size of a matrix, the number of rows always comes first (rows x columns). This convention extends to the subscripts of the individual matrix terms; at row 2, column 1 we have $a_{21}$.
A number of specific operations can be performed on matrices. Whether a given operation can be performed typically depends on the size of the matrix (or matrices) involved. We will cover the most common matrix operations in the following sections.
Matrix Addition
Matrices can be added as shown in equation 2.263.

$$\begin{bmatrix} a_{11} & a_{12} \\ a_{21} & a_{22} \end{bmatrix} + \begin{bmatrix} b_{11} & b_{12} \\ b_{21} & b_{22} \end{bmatrix} = \begin{bmatrix} a_{11} + b_{11} & a_{12} + b_{12} \\ a_{21} + b_{21} & a_{22} + b_{22} \end{bmatrix} \tag{2.263}$$

Only matrices of the same size can be added; the sum is computed term by term.
Scalar Multiplication
A matrix can be multiplied by a scalar $c$ as shown in equation 2.264.

$$c \begin{bmatrix} a_{11} & a_{12} \\ a_{21} & a_{22} \end{bmatrix} = \begin{bmatrix} c\,a_{11} & c\,a_{12} \\ c\,a_{21} & c\,a_{22} \end{bmatrix} \tag{2.264}$$

There are no limitations on the size of matrices that can be multiplied by a scalar.
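As a quick numerical check, the following minimal NumPy sketch applies equations 2.263 and 2.264 to arbitrary example matrices; the values are illustrative only.

```python
import numpy as np

A = np.array([[1.0, 2.0],
              [3.0, 4.0]])
B = np.array([[5.0, 6.0],
              [7.0, 8.0]])
c = 3.0

# Matrix addition (equation 2.263): term-by-term sum of same-size matrices.
print(A + B)   # [[ 6.  8.] [10. 12.]]

# Scalar multiplication (equation 2.264): every term is scaled by c.
print(c * A)   # [[ 3.  6.] [ 9. 12.]]
```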
Matrix Multiplication
Matrix multiplication derives its logic from coordinate system transformations. However, since this is not a mathematics book, we will not cover the motivation behind the matrix multiplication convention. Rather, only the methodology will be presented.
Unlike scalar multiplication, the order in which two matrices are multiplied matters. Each term of the output matrix is calculated by taking the dot product of a row in the first matrix with a column in the second matrix. As can be deduced from the previous sentence, the number of columns in the first matrix must match the number of rows in the second matrix. Generalized matrix multiplication is shown in equation 2.265.
$$C = AB, \qquad c_{ij} = \sum_{k=1}^{n} a_{ik}\, b_{kj} \tag{2.265}$$

As can be observed, we multiplied an $m$ x $n$ matrix $A$ with a $p$ x $q$ matrix $B$. Given that $n$ must equal $p$, our result is an $m$ x $q$ matrix.
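The shape rule can be verified numerically. The sketch below (with arbitrary example shapes) multiplies a 3 x 2 matrix by a 2 x 4 matrix and confirms that each output term is a row-column dot product.

```python
import numpy as np

# A is 3 x 2 (m x n) and B is 2 x 4 (p x q); since n = p = 2,
# the product C = AB exists and has shape 3 x 4 (m x q).
A = np.arange(6.0).reshape(3, 2)
B = np.arange(8.0).reshape(2, 4)
C = A @ B
print(C.shape)   # (3, 4)

# Each c_ij is the dot product of row i of A with column j of B.
print(np.isclose(C[1, 2], A[1, :] @ B[:, 2]))   # True
```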
Transposition
The transpose of a matrix is obtained by switching its rows and columns, as shown in equation 2.266.

$$A = \begin{bmatrix} a_{11} & a_{12} \\ a_{21} & a_{22} \\ a_{31} & a_{32} \end{bmatrix}, \qquad A^{T} = \begin{bmatrix} a_{11} & a_{21} & a_{31} \\ a_{12} & a_{22} & a_{32} \end{bmatrix} \tag{2.266}$$

As can be deduced, a matrix of any size can be transposed.
Some properties associated with transposed matrices are provided in equations 2.267, 2.268, 2.269, and 2.270.

$$(A^{T})^{T} = A \tag{2.267}$$

$$(A + B)^{T} = A^{T} + B^{T} \tag{2.268}$$

$$(cA)^{T} = c\,A^{T} \tag{2.269}$$

$$(AB)^{T} = B^{T} A^{T} \tag{2.270}$$
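A short NumPy sketch can confirm these properties on random example matrices; note in particular the reversed order in equation 2.270.

```python
import numpy as np

rng = np.random.default_rng(0)
A = rng.random((3, 2))
B = rng.random((2, 4))

# (A^T)^T == A        (equation 2.267)
print(np.allclose(A.T.T, A))               # True

# (AB)^T == B^T A^T   (equation 2.270); the order of the factors reverses.
print(np.allclose((A @ B).T, B.T @ A.T))   # True
```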
Determinant
The determinant of a matrix is a single number calculated from the terms of a square matrix. The determinant of a 2 x 2 matrix is calculated as shown in equation 2.271.

$$\det \begin{bmatrix} a_{11} & a_{12} \\ a_{21} & a_{22} \end{bmatrix} = a_{11} a_{22} - a_{12} a_{21} \tag{2.271}$$

The determinant of a 3 x 3 matrix is calculated as shown in equation 2.272.

$$\det \begin{bmatrix} a_{11} & a_{12} & a_{13} \\ a_{21} & a_{22} & a_{23} \\ a_{31} & a_{32} & a_{33} \end{bmatrix} = a_{11}(a_{22} a_{33} - a_{23} a_{32}) - a_{12}(a_{21} a_{33} - a_{23} a_{31}) + a_{13}(a_{21} a_{32} - a_{22} a_{31}) \tag{2.272}$$

It is possible to find the determinant of larger square matrices. However, the methodology for this is not covered in this book.
As will be reviewed in the Inverse section, a matrix can only be inverted if it has a nonzero determinant.
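The following sketch evaluates equation 2.271 by hand for an arbitrary 2 x 2 example and compares the result against NumPy's determinant routine.

```python
import numpy as np

A = np.array([[1.0, 2.0],
              [3.0, 4.0]])

# 2 x 2 determinant by equation 2.271 ...
det_manual = A[0, 0] * A[1, 1] - A[0, 1] * A[1, 0]
# ... and by NumPy's built-in routine.
print(det_manual, np.linalg.det(A))   # -2.0 -2.0 (to floating point accuracy)
```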
Cofactor Matrix
The cofactor matrix is a new matrix that can be calculated from a given square matrix. Within this section, we will review how to calculate a cofactor matrix only for a 3 x 3 matrix. A sample 3 x 3 matrix is provided in equation 2.273.

$$A = \begin{bmatrix} a_{11} & a_{12} & a_{13} \\ a_{21} & a_{22} & a_{23} \\ a_{31} & a_{32} & a_{33} \end{bmatrix} \tag{2.273}$$

The cofactor matrix of matrix 2.273 is symbolized by matrix 2.274.

$$C = \begin{bmatrix} C_{11} & C_{12} & C_{13} \\ C_{21} & C_{22} & C_{23} \\ C_{31} & C_{32} & C_{33} \end{bmatrix} \tag{2.274}$$

Each corresponding $C_{ij}$ term is calculated by finding the determinant of the 2 x 2 matrix formed by the 4 terms not included in row $i$ or column $j$ of the target term. The following equations show this for all $C_{ij}$ values in matrix 2.274.

$$C_{11} = +(a_{22} a_{33} - a_{23} a_{32}) \qquad C_{12} = -(a_{21} a_{33} - a_{23} a_{31}) \qquad C_{13} = +(a_{21} a_{32} - a_{22} a_{31})$$

$$C_{21} = -(a_{12} a_{33} - a_{13} a_{32}) \qquad C_{22} = +(a_{11} a_{33} - a_{13} a_{31}) \qquad C_{23} = -(a_{11} a_{32} - a_{12} a_{31})$$

$$C_{31} = +(a_{12} a_{23} - a_{13} a_{22}) \qquad C_{32} = -(a_{11} a_{23} - a_{13} a_{21}) \qquad C_{33} = +(a_{11} a_{22} - a_{12} a_{21})$$

Please note that some of the cofactor matrix terms carry a negative coefficient; the sign alternates with position according to the factor $(-1)^{i+j}$.
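The pattern above can be captured in a few lines of Python. The helper name `cofactor_matrix` below is chosen purely for illustration; it forms each minor by deleting row $i$ and column $j$, then applies the $(-1)^{i+j}$ sign.

```python
import numpy as np

def cofactor_matrix(A):
    """Cofactor matrix of square A: C_ij = (-1)**(i+j) * det(minor_ij)."""
    n = A.shape[0]
    C = np.zeros_like(A, dtype=float)
    for i in range(n):
        for j in range(n):
            # Minor: A with row i and column j removed.
            minor = np.delete(np.delete(A, i, axis=0), j, axis=1)
            C[i, j] = (-1) ** (i + j) * np.linalg.det(minor)
    return C

# Arbitrary example values.
A = np.array([[2.0, 0.0, 1.0],
              [1.0, 3.0, 0.0],
              [0.0, 1.0, 4.0]])
print(cofactor_matrix(A))
```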
Inverse
The inverse of an example matrix $A$ is referred to as $A^{-1}$. Its definition is shown in equation 2.275.

$$A A^{-1} = A^{-1} A = I \tag{2.275}$$

Not all matrices are invertible. The matrix on the right hand side of equation 2.275 is called the identity matrix, and within this book it will be symbolized as $I$. The identity matrix is always square ($n$ x $n$) and has a value of 1 in the diagonal indices, along with 0 in all other indices. As can be deduced, for a matrix to have an inverse it must be square. Two common methods for finding the inverse of a matrix are the Gauss-Jordan elimination method and the adjoint method. They will be presented using an example 3 x 3 matrix.
The Gauss-Jordan elimination method involves starting with the setup shown in expression 2.276, where the matrix $A$ is augmented with the identity matrix $I$.

$$\left[\begin{array}{ccc|ccc} a_{11} & a_{12} & a_{13} & 1 & 0 & 0 \\ a_{21} & a_{22} & a_{23} & 0 & 1 & 0 \\ a_{31} & a_{32} & a_{33} & 0 & 0 & 1 \end{array}\right] \tag{2.276}$$

The rows are then multiplied by scalars and added to each other with the goal of making the left hand side of expression 2.276 become the identity matrix $I$. When this occurs, the right hand side of expression 2.276 will be $A^{-1}$.
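The elimination procedure can be sketched in NumPy as shown below. The helper name `gauss_jordan_inverse` is illustrative, and partial pivoting is added for numerical robustness even though the hand method described above does not require it.

```python
import numpy as np

def gauss_jordan_inverse(A):
    """Invert square A by row-reducing [A | I] to [I | A^-1] (expression 2.276)."""
    n = A.shape[0]
    aug = np.hstack([A.astype(float), np.eye(n)])
    for col in range(n):
        # Partial pivoting: swap in the row with the largest pivot magnitude.
        pivot = col + np.argmax(np.abs(aug[col:, col]))
        if np.isclose(aug[pivot, col], 0.0):
            raise ValueError("matrix is singular (zero determinant)")
        aug[[col, pivot]] = aug[[pivot, col]]
        aug[col] /= aug[col, col]             # scale the pivot row so the pivot is 1
        for row in range(n):
            if row != col:                    # zero out the column everywhere else
                aug[row] -= aug[row, col] * aug[col]
    return aug[:, n:]                         # right half is now A^-1

A = np.array([[2.0, 0.0, 1.0],
              [1.0, 3.0, 0.0],
              [0.0, 1.0, 4.0]])
print(np.allclose(gauss_jordan_inverse(A), np.linalg.inv(A)))   # True
```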
The adjoint method involves calculating the inverse of a matrix using equation 2.277.

$$A^{-1} = \frac{1}{\det(A)}\, C^{T} \tag{2.277}$$

In equation 2.277, $C$ is the cofactor matrix (the method of calculating this matrix was explained previously) of the original matrix $A$.
The method of finding the cofactor matrix was only presented in this book for a 3 x 3 matrix. For a 2 x 2 matrix, equation 2.277 can be rewritten as shown in equation 2.278.

$$A^{-1} = \frac{1}{a_{11} a_{22} - a_{12} a_{21}} \begin{bmatrix} a_{22} & -a_{12} \\ -a_{21} & a_{11} \end{bmatrix} \tag{2.278}$$
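The sketch below applies the 2 x 2 adjoint formula of equation 2.278 to an arbitrary example and checks it against NumPy's built-in inverse and the definition in equation 2.275.

```python
import numpy as np

A = np.array([[4.0, 7.0],
              [2.0, 6.0]])

# Adjoint method for a 2 x 2 matrix (equation 2.278).
det = A[0, 0] * A[1, 1] - A[0, 1] * A[1, 0]
adj = np.array([[ A[1, 1], -A[0, 1]],
                [-A[1, 0],  A[0, 0]]])
A_inv = adj / det

print(np.allclose(A_inv, np.linalg.inv(A)))   # True
print(np.allclose(A @ A_inv, np.eye(2)))      # A A^-1 = I (equation 2.275)
```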
2.6.1 Eigenvalues and Eigenvectors
A vector (section 2.3) of the form $\vec{v} = v_1 \hat{n}_1 + v_2 \hat{n}_2 + v_3 \hat{n}_3$ can be expressed as a matrix as shown in equation 2.279.

$$\vec{v} = v_1 \hat{n}_1 + v_2 \hat{n}_2 + v_3 \hat{n}_3 = \begin{bmatrix} v_1 \\ v_2 \\ v_3 \end{bmatrix} \tag{2.279}$$

Essentially, vectors can be thought of as column matrices with dimensions of ($n$, 1), where $n$ is the number of basis vectors in the given vector space ($n$ should not be confused with $\hat{n}$, which is simply a basis vector as specified in section 2.3). If we multiply an input vector $\vec{v}_{in}$ (in matrix form) of dimension ($n$, 1) by a matrix $A$ of dimension ($n$, $n$), we will get an output vector $\vec{v}_{out}$ of dimension ($n$, 1). This is shown in equation 2.280.

$$A\, \vec{v}_{in} = \vec{v}_{out} \tag{2.280}$$
For this reason, matrices can be thought of as operating on vectors, and they are commonly referred to as operators. In most cases, the input vector $\vec{v}_{in}$ will be linearly independent of the output vector $\vec{v}_{out}$ for a given matrix, as shown in Figure 2.10a. However, depending on the matrix, a specific $\vec{v}_{in}$ can produce a $\vec{v}_{out}$ which is a scalar multiple of $\vec{v}_{in}$ (i.e. $\vec{v}_{out}$ is proportional to $\vec{v}_{in}$ and acts in the same direction as $\vec{v}_{in}$). This is shown in Figure 2.10b. When this happens, $\vec{v}_{in}$ is referred to as an eigenvector. The scaling ratio between $\vec{v}_{out}$ and $\vec{v}_{in}$ is referred to as an eigenvalue.
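The distinction can be seen numerically. In the minimal sketch below (arbitrary example values), a generic input vector is turned away from its original direction, while an eigenvector of the same matrix is merely scaled.

```python
import numpy as np

A = np.array([[2.0, 1.0],
              [1.0, 2.0]])

# A generic (n, 1) input vector: the output points in a new direction.
v_generic = np.array([[1.0], [0.0]])
print(A @ v_generic)   # [[2.] [1.]] -- not a multiple of v_generic

# An eigenvector of A: the output is a scalar multiple of the input.
v_eig = np.array([[1.0], [1.0]])
print(A @ v_eig)       # [[3.] [3.]] -- equals 3 * v_eig, so lambda = 3
```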
Our task is to find the eigenvectors and their corresponding eigenvalues for a given matrix $A$. This problem is stated mathematically in equation 2.281.

$$A \vec{v} = \lambda \vec{v} \tag{2.281}$$

In equation 2.281, $\vec{v}$ is the eigenvector and $\lambda$ is the eigenvalue. Subtracting $\lambda \vec{v}$ from both sides, we get equation 2.282.

$$A \vec{v} - \lambda \vec{v} = \vec{0} \tag{2.282}$$
However, following the size rules of matrix addition, the dimensions of $A$ (an $n$ x $n$ matrix) and $\lambda$ (a scalar) do not match, meaning that we cannot subtract one from the other to factor out $\vec{v}$. Luckily for us, we can give the $\lambda \vec{v}$ term matching dimensions by multiplying $\lambda$ by the identity matrix $I$ (explained in the Inverse section), since $I \vec{v} = \vec{v}$.

$$A \vec{v} - \lambda I \vec{v} = \vec{0} \tag{2.283}$$

Factoring out $\vec{v}$, we get equation 2.284.

$$(A - \lambda I)\, \vec{v} = \vec{0} \tag{2.284}$$

Multiplying both sides by $(A - \lambda I)^{-1}$, we get equation 2.285.

$$\vec{v} = (A - \lambda I)^{-1}\, \vec{0} \tag{2.285}$$

From the zero product property, equation 2.284 tells us that either $(A - \lambda I)$ or $\vec{v}$ must vanish. Since any matrix multiplied by the zero vector gives the zero vector, equation 2.285 would force $\vec{v} = \vec{0}$ whenever the inverse $(A - \lambda I)^{-1}$ exists. However, we are looking for non-zero eigenvectors! So, we can state that we are interested in eigenvectors where $\vec{v} \neq \vec{0}$, which means we are looking for solutions where $(A - \lambda I)$ is not invertible. As reviewed in the Inverse section, a matrix is not invertible if its determinant (see the Determinant section) is equal to 0. Therefore, we can solve for the eigenvalues $\lambda$ using equation 2.286.

$$\det(A - \lambda I) = 0 \tag{2.286}$$
It is possible to get anywhere between $n$ and 0 real eigenvalues for a matrix of dimensions ($n$ x $n$).
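For a 2 x 2 matrix, $\det(A - \lambda I)$ expands to the quadratic $\lambda^2 - \operatorname{tr}(A)\,\lambda + \det(A)$, so the eigenvalues can be found as polynomial roots. The sketch below (arbitrary example values) compares this route against NumPy's eigenvalue routine.

```python
import numpy as np

A = np.array([[2.0, 1.0],
              [1.0, 2.0]])

# Characteristic polynomial of a 2 x 2 matrix (equation 2.286):
# lambda^2 - tr(A)*lambda + det(A) = 0.
coeffs = [1.0, -np.trace(A), np.linalg.det(A)]
print(np.roots(coeffs))        # [3. 1.] -- roots of the characteristic polynomial
print(np.linalg.eigvals(A))    # [3. 1.] -- the same eigenvalues from LAPACK
```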
The eigenvector $\vec{v}_i$ can be found by solving equation 2.287 for each eigenvalue $\lambda_i$.

$$(A - \lambda_i I)\, \vec{v}_i = \vec{0} \tag{2.287}$$

In equation 2.287, the $i$ subscript in $\lambda_i$ and $\vec{v}_i$ indicates that the eigenvector $\vec{v}_i$ corresponds to the eigenvalue $\lambda_i$.
The eigenvalues for a matrix can be expressed in diagonal form as shown in equation 2.288.

$$\Lambda = \begin{bmatrix} \lambda_1 & 0 & 0 \\ 0 & \lambda_2 & 0 \\ 0 & 0 & \lambda_3 \end{bmatrix} \tag{2.288}$$

The eigenvectors for a given matrix can be expressed in matrix form by stacking the eigenvectors horizontally as columns, as shown in equation 2.289.

$$Q = \begin{bmatrix} \vec{v}_1 & \vec{v}_2 & \vec{v}_3 \end{bmatrix} \tag{2.289}$$
An interesting property of eigenvalues and eigenvectors lies in the fact that the original matrix $A$ multiplied by its eigenvector matrix $Q$ is equal to $Q$ multiplied by the diagonal eigenvalue matrix $\Lambda$. This is shown in equation 2.290.

$$A Q = Q \Lambda \tag{2.290}$$

We can multiply both sides of equation 2.290 on the right by $Q^{-1}$ to obtain equation 2.291.

$$A = Q \Lambda Q^{-1} \tag{2.291}$$

Expressing $A$ in the form shown in equation 2.291 is referred to as diagonalizing the matrix.
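Equations 2.288 through 2.291 can be verified numerically. `np.linalg.eig` returns the eigenvalues along with the eigenvectors stacked as columns, matching the construction of $Q$ in equation 2.289; the example matrix is arbitrary.

```python
import numpy as np

A = np.array([[2.0, 1.0],
              [1.0, 2.0]])

# eig returns the eigenvalues and the eigenvector matrix Q
# (eigenvectors stacked as columns, as in equation 2.289).
eigvals, Q = np.linalg.eig(A)
Lam = np.diag(eigvals)                              # diagonal form, equation 2.288

print(np.allclose(A @ Q, Q @ Lam))                  # A Q = Q Lambda (equation 2.290)
print(np.allclose(A, Q @ Lam @ np.linalg.inv(Q)))   # A = Q Lambda Q^-1 (equation 2.291)
```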
The concept of eigenvalues and eigenvectors is critical to understanding the engineering concepts of principal moments of inertia (section ??) and principal stress directions (section ??).