The previous section introduced vectors and linear combinations and demonstrated how they provide a way to think about linear systems geometrically. In particular, we saw that the vector \(\bvec\) is a linear combination of the vectors \(\vvec_1,\vvec_2,\ldots,\vvec_n\) precisely when the linear system corresponding to the augmented matrix
\begin{equation*}
\left[\begin{array}{rrrrr}
\vvec_1 \amp \vvec_2 \amp \cdots \amp \vvec_n \amp \bvec
\end{array}\right]
\end{equation*}
is consistent.
Our goal in this section is to introduce matrix multiplication, another algebraic operation that deepens the connection between linear systems and linear combinations.
Subsection 2.2.1 Scalar multiplication and addition of matrices
We first thought of a matrix as a rectangular array of numbers. If we say that the shape of a matrix is \(m\times
n\text{,}\) we mean that it has \(m\) rows and \(n\) columns. For instance, the shape of the matrix below is \(3\times4\text{:}\)
\begin{equation*}
\left[
\begin{array}{rrrr}
0 \amp 4 \amp 3 \amp 1 \\
3 \amp 1 \amp 2 \amp 0 \\
2 \amp 0 \amp 1 \amp 1 \\
\end{array}
\right]\text{.}
\end{equation*}
We may also think of the columns of a matrix as a set of vectors. For instance, the matrix above may be represented as
\begin{equation*}
\left[
\begin{array}{rrrr}
\vvec_1 \amp \vvec_2 \amp \vvec_3 \amp \vvec_4
\end{array}
\right]
\end{equation*}
where
\begin{equation*}
\vvec_1=\left[\begin{array}{r}0\\3\\2\\ \end{array}\right],
\vvec_2=\left[\begin{array}{r}4\\1\\0\\ \end{array}\right],
\vvec_3=\left[\begin{array}{r}3\\2\\1\\ \end{array}\right],
\vvec_4=\left[\begin{array}{r}1\\0\\1\\ \end{array}\right]\text{.}
\end{equation*}
In this way, we see that the \(3\times 4\) matrix is equivalent to an ordered set of 4 vectors in \(\real^3\text{.}\)
This means that we may define scalar multiplication and matrix addition operations using the corresponding columnwise vector operations. For instance,
\begin{equation*}
\begin{aligned}
c\left[\begin{array}{rrrr}
\vvec_1 \amp \vvec_2 \amp \cdots \amp \vvec_n
\end{array}
\right]
{}={} \amp
\left[\begin{array}{rrrr}
c\vvec_1 \amp c\vvec_2 \amp \cdots \amp c\vvec_n
\end{array}
\right] \\
\left[\begin{array}{rrrr}
\vvec_1 \amp \vvec_2 \amp \cdots \amp \vvec_n
\end{array}
\right]
{}+{} \amp
\left[\begin{array}{rrrr}
\wvec_1 \amp \wvec_2 \amp \cdots \amp \wvec_n
\end{array}
\right]
\\
{}={} \amp
\left[\begin{array}{rrrr}
\vvec_1+\wvec_1 \amp \vvec_2+\wvec_2 \amp \cdots \amp
\vvec_n+\wvec_n
\end{array}
\right]. \\
\end{aligned}
\end{equation*}
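Since these operations act column by column (equivalently, entry by entry), they are easy to carry out in code. Below is a minimal Python sketch, not part of the text, that stores a matrix as a list of rows; the helper names `scalar_mult` and `matrix_add` are our own.

```python
# Matrices as lists of rows; scalar multiplication and addition act
# entry by entry, which is the same as acting on each column vector.
# (Illustrative sketch; the function names are our own choices.)

def scalar_mult(c, A):
    """Multiply every entry of the matrix A by the scalar c."""
    return [[c * entry for entry in row] for row in A]

def matrix_add(A, B):
    """Add two matrices of the same shape entry by entry."""
    if len(A) != len(B) or len(A[0]) != len(B[0]):
        raise ValueError("matrices must have the same shape")
    return [[a + b for a, b in zip(row_a, row_b)]
            for row_a, row_b in zip(A, B)]

# The 3 x 4 matrix from the text.
A = [[0, 4, 3, 1],
     [3, 1, 2, 0],
     [2, 0, 1, 1]]

print(scalar_mult(2, A))
```

Note the shape check in `matrix_add`: as the next activity emphasizes, the sum is only defined when the two matrices have the same shape.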
Preview Activity 2.2.1. Matrix operations.
Compute the scalar multiple
\begin{equation*}
3\left[
\begin{array}{rrr}
3 \amp 1 \amp 0 \\
4 \amp 3 \amp 1 \\
\end{array}
\right]\text{.}
\end{equation*}
Find the sum
\begin{equation*}
\left[
\begin{array}{rr}
0 \amp 3 \\
1 \amp 2 \\
3 \amp 4 \\
\end{array}
\right]
+
\left[
\begin{array}{rr}
4 \amp 1 \\
2 \amp 2 \\
1 \amp 1 \\
\end{array}
\right]\text{.}
\end{equation*}
Suppose that \(A\) and \(B\) are two matrices. What do we need to know about their shapes before we can form the sum \(A+B\text{?}\)
The matrix \(I_n\text{,}\) which we call the identity matrix, is the \(n\times n\) matrix whose entries are zero except for the diagonal entries, all of which are 1. For instance,
\begin{equation*}
I_3 =
\left[
\begin{array}{rrr}
1 \amp 0 \amp 0 \\
0 \amp 1 \amp 0 \\
0 \amp 0 \amp 1 \\
\end{array}
\right]\text{.}
\end{equation*}
If we can form the sum \(A+I_n\text{,}\) what must be true about the matrix \(A\text{?}\)
Find the matrix \(A - 2I_3\) where
\begin{equation*}
A =
\left[
\begin{array}{rrr}
1 \amp 2 \amp 2 \\
2 \amp 3 \amp 3 \\
2 \amp 3 \amp 4 \\
\end{array}
\right]\text{.}
\end{equation*}
As this preview activity shows, the operations of scalar multiplication and addition of matrices are natural extensions of their vector counterparts. Some care, however, is required when adding matrices. Since we need the same number of vectors to add and since those vectors must be of the same dimension, two matrices must have the same shape if we wish to form their sum.
Subsection 2.2.2 Matrix-vector multiplication and linear combinations
A more important operation is matrix-vector multiplication, as it allows us to express linear systems compactly. We now introduce the product of a matrix and a vector with an example.
Example 2.2.1. Matrix-vector multiplication.
Suppose we have the matrix \(A\) and vector \(\xvec\text{:}\)
\begin{equation*}
A = \left[\begin{array}{rr}
-2 \amp 3 \\
0 \amp 2 \\
3 \amp 1 \\
\end{array}\right],~~~
\xvec = \left[\begin{array}{r}
2 \\ 3 \\ \end{array}\right]\text{.}
\end{equation*}
Their product will be defined to be the linear combination of the columns of \(A\) using the components of \(\xvec\) as weights. This means that
\begin{equation*}
\begin{aligned}
A\xvec =
\left[\begin{array}{rr}
-2 \amp 3 \\
0 \amp 2 \\
3 \amp 1 \\
\end{array}\right]
\left[\begin{array}{r} 2 \\ 3 \\ \end{array}\right]
{}={} \amp
2 \left[\begin{array}{r} -2 \\ 0 \\ 3 \\ \end{array}\right] +
3 \left[\begin{array}{r} 3 \\ 2 \\ 1 \\ \end{array}\right]
\\ \\
{}={} \amp
\left[\begin{array}{r} -4 \\ 0 \\ 6 \\ \end{array}\right] +
\left[\begin{array}{r} 9 \\ 6 \\ 3 \\ \end{array}\right] \\ \\
{}={} \amp
\left[\begin{array}{r} 5 \\ 6 \\ 9 \\ \end{array}\right]. \\
\end{aligned}
\end{equation*}
Because \(A\) has two columns, we need two weights to form a linear combination of those columns, which means that \(\xvec\) must have two components. In other words, the number of columns of \(A\) must equal the dimension of the vector \(\xvec\text{.}\)
Similarly, the columns of \(A\) are 3-dimensional, so any linear combination of them is 3-dimensional as well. Therefore, \(A\xvec\) will be 3-dimensional.
We then see that if \(A\) is a \(3\times2\) matrix, \(\xvec\) must be a 2-dimensional vector and \(A\xvec\) will be 3-dimensional.
More generally, we have the following definition.
Definition 2.2.2. Matrix-vector multiplication.
The product of a matrix \(A\) by a vector \(\xvec\) will be the linear combination of the columns of \(A\) using the components of \(\xvec\) as weights. More specifically, if
\begin{equation*}
A=\left[\begin{array}{rrrr}
\vvec_1 \amp \vvec_2 \amp \ldots \amp \vvec_n
\end{array}\right],~~~
\xvec = \left[\begin{array}{r}
c_1 \\ c_2 \\ \vdots \\ c_n \end{array}\right],
\end{equation*}
then
\begin{equation*}
A\xvec = c_1\vvec_1 + c_2\vvec_2 + \ldots + c_n\vvec_n\text{.}
\end{equation*}
If \(A\) is an \(m\times n\) matrix, then \(\xvec\) must be an \(n\)-dimensional vector, and the product \(A\xvec\) will be an \(m\)-dimensional vector.
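Definition 2.2.2 translates directly into code: form the linear combination of the columns of \(A\) using the components of \(\xvec\) as weights. The following is a small Python sketch of our own (the example matrix is made up, not from the text).

```python
def matrix_vector(A, x):
    """Compute Ax as x[0]*(column 0) + ... + x[n-1]*(column n-1)."""
    m, n = len(A), len(A[0])
    if len(x) != n:
        raise ValueError("x must have one component per column of A")
    result = [0] * m
    for j in range(n):                  # for each column of A ...
        for i in range(m):
            result[i] += x[j] * A[i][j]  # ... add x_j times that column
    return result

# A 3 x 2 matrix times a 2-dimensional vector gives a 3-dimensional vector.
A = [[1, 2],
     [3, 4],
     [5, 6]]
x = [2, 1]
print(matrix_vector(A, x))   # 2*(1,3,5) + 1*(2,4,6) = [4, 10, 16]
```

The shape requirement in the definition shows up as the explicit check that \(\xvec\) has one component per column of \(A\text{.}\)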
The next activity explores some properties of matrix multiplication.
Activity 2.2.2. Matrix-vector multiplication.
Find the matrix product
\begin{equation*}
\left[
\begin{array}{rrrr}
1 \amp 2 \amp 0 \amp 1 \\
2 \amp 4 \amp 3 \amp 2 \\
1 \amp 2 \amp 6 \amp 1 \\
\end{array}
\right]
\left[
\begin{array}{r}
3 \\ 1 \\ 1 \\ 1 \\
\end{array}
\right]\text{.}
\end{equation*}
Suppose that \(A\) is the matrix
\begin{equation*}
\left[
\begin{array}{rrr}
3 \amp 1 \amp 0 \\
0 \amp 2 \amp 4 \\
2 \amp 1 \amp 5 \\
1 \amp 0 \amp 3 \\
\end{array}
\right]\text{.}
\end{equation*}
If \(A\xvec\) is defined, what is the dimension of the vector \(\xvec\) and what is the dimension of \(A\xvec\text{?}\)
A vector whose entries are all zero is denoted by \(\zerovec\text{.}\) If \(A\) is a matrix, what is the product \(A\zerovec\text{?}\)
Suppose that \(I = \left[\begin{array}{rrr}
1 \amp 0 \amp 0 \\
0 \amp 1 \amp 0 \\
0 \amp 0 \amp 1 \\
\end{array}\right]\) is the identity matrix and \(\xvec=\threevec{x_1}{x_2}{x_3}\text{.}\) Find the product \(I\xvec\) and explain why \(I\) is called the identity matrix.
Suppose we write the matrix \(A\) in terms of its columns as
\begin{equation*}
A = \left[
\begin{array}{rrrr}
\vvec_1 \amp \vvec_2 \amp \cdots \amp \vvec_n \\
\end{array}
\right]\text{.}
\end{equation*}
If the vector \(\evec_1 = \left[\begin{array}{c} 1 \\ 0 \\
\vdots \\ 0 \end{array}\right]\text{,}\) what is the product \(A\evec_1\text{?}\)
Suppose that
\begin{equation*}
A = \left[
\begin{array}{rrrr}
1 \amp 2 \\
1 \amp 1 \\
\end{array}
\right],
\bvec = \left[
\begin{array}{r}
6 \\ 0
\end{array}
\right]\text{.}
\end{equation*}
Is there a vector \(\xvec\) such that \(A\xvec = \bvec\text{?}\)
Multiplication of a matrix \(A\) and a vector is defined as a linear combination of the columns of \(A\text{.}\) However, there is a shortcut for computing such a product. Let’s look at our previous example and focus on the first row of the product.
\begin{equation*}
\left[\begin{array}{rr}
-2 \amp 3 \\
0 \amp 2 \\
3 \amp 1 \\
\end{array}\right]
\left[\begin{array}{r} 2 \\ 3 \\ \end{array}\right]
=
2 \left[\begin{array}{r} -2 \\ * \\ * \\ \end{array}\right] +
3 \left[\begin{array}{r} 3 \\ * \\ * \\ \end{array}\right]
=
\left[\begin{array}{c} 2(-2)+3(3) \\ * \\ * \\ \end{array}\right]
=
\left[\begin{array}{r} 5 \\ * \\ * \\ \end{array}\right]\text{.}
\end{equation*}
To find the first component of the product, we consider the first row of the matrix. We then multiply the first entry in that row by the first component of the vector, the second entry by the second component of the vector, and so on, and add the results. In this way, we see that the third component of the product would be obtained from the third row of the matrix by computing \(2(3) + 3(1) = 9\text{.}\)
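The row-wise shortcut computes each component of the product as the dot product of a row of the matrix with the vector, and it always agrees with the column definition. Here is a Python sketch of our own comparing the two on a small made-up example.

```python
def matrix_vector_rowwise(A, x):
    """Entry i of Ax is the dot product of row i of A with x."""
    return [sum(a * xi for a, xi in zip(row, x)) for row in A]

def matrix_vector_columns(A, x):
    """Ax as a linear combination of the columns of A."""
    result = [0] * len(A)
    for j, weight in enumerate(x):
        for i in range(len(A)):
            result[i] += weight * A[i][j]
    return result

# A made-up 3 x 2 example; both methods give the same answer.
A = [[2, 0],
     [1, 3],
     [4, 1]]
x = [3, 2]
print(matrix_vector_rowwise(A, x))   # [6, 9, 14]
```

The two functions compute the same sums, only grouped differently: by rows in the shortcut, by columns in the definition.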
You are encouraged to evaluate the product in Item a of the previous activity using this shortcut and compare the result to what you found while completing that activity.
Activity 2.2.3.
Sage can find the product of a matrix and a vector using the * operator.
Use Sage to evaluate the product
\begin{equation*}
\left[
\begin{array}{rrrr}
1 \amp 2 \amp 0 \amp 1 \\
2 \amp 4 \amp 3 \amp 2 \\
1 \amp 2 \amp 6 \amp 1 \\
\end{array}
\right]
\left[
\begin{array}{r}
3 \\ 1 \\ 1 \\ 1 \\
\end{array}
\right]
\end{equation*}
from
Item a of the previous activity.
In Sage, define the matrix and vectors
\begin{equation*}
A = \left[
\begin{array}{rrr}
2 \amp 0 \\
3 \amp 1 \\
4 \amp 2 \\
\end{array}
\right],
\zerovec = \left[
\begin{array}{r} 0 \\ 0 \end{array}
\right],
\vvec = \left[
\begin{array}{r} 2 \\ 3 \end{array}
\right],
\wvec = \left[
\begin{array}{r} 1 \\ 2 \end{array}
\right]\text{.}
\end{equation*}
What do you find when you evaluate \(A\zerovec\text{?}\)
What do you find when you evaluate \(A(3\vvec)\) and \(3(A\vvec)\) and compare your results?
What do you find when you evaluate \(A(\vvec+\wvec)\) and \(A\vvec + A\wvec\) and compare your results?
This activity demonstrates several general properties satisfied by matrix multiplication that we record here.
Proposition 2.2.3. Linearity of matrix multiplication.
If \(A\) is a matrix, \(\vvec\) and \(\wvec\) vectors of the appropriate dimensions, and \(c\) a scalar, then
\(A\zerovec = \zerovec\text{.}\)
\(A(c\vvec) = cA\vvec\text{.}\)
\(A(\vvec+\wvec) = A\vvec + A\wvec\text{.}\)
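These linearity properties can be confirmed numerically. Below is a Python sketch of our own, using the matrix and vectors displayed in Activity 2.2.3, that checks each statement of Proposition 2.2.3 on one example.

```python
def matrix_vector(A, x):
    """Ax as a linear combination of the columns of A."""
    result = [0] * len(A)
    for j, weight in enumerate(x):
        for i in range(len(A)):
            result[i] += weight * A[i][j]
    return result

# Matrix and vectors as displayed in Activity 2.2.3.
A = [[2, 0],
     [3, 1],
     [4, 2]]
zero = [0, 0]
v = [2, 3]
w = [1, 2]

print(matrix_vector(A, zero))                       # A(0) = 0
print(matrix_vector(A, [3 * vi for vi in v]))       # A(3v) ...
print([3 * e for e in matrix_vector(A, v)])         # ... equals 3(Av)
print(matrix_vector(A, [vi + wi for vi, wi in zip(v, w)]))  # A(v + w)
```

One example does not prove the proposition, of course, but it is a useful sanity check and mirrors what the Sage computations in the activity show.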
Subsection 2.2.3 Matrix-vector multiplication and linear systems
So far, we have begun with a matrix \(A\) and a vector \(\xvec\) and formed their product \(A\xvec = \bvec\text{.}\) We would now like to turn this around: beginning with a matrix \(A\) and a vector \(\bvec\text{,}\) we will ask if we can find a vector \(\xvec\) such that \(A\xvec = \bvec\text{.}\) This will naturally lead back to linear systems.
To see the connection between the matrix equation \(A\xvec =
\bvec\) and linear systems, let’s write the matrix \(A\) in terms of its columns \(\vvec_i\) and \(\xvec\) in terms of its components.
\begin{equation*}
A = \left[
\begin{array}{rrrr}
\vvec_1 \amp \vvec_2 \amp \ldots \amp \vvec_n
\end{array}
\right],
\xvec = \left[
\begin{array}{c}
c_1 \\ c_2 \\ \vdots \\ c_n \\
\end{array}
\right]\text{.}
\end{equation*}
We know that the matrix product \(A\xvec\) forms a linear combination of the columns of \(A\text{.}\) Therefore, the equation \(A\xvec = \bvec\) is merely a compact way of writing the equation for the weights \(c_i\text{:}\)
\begin{equation*}
c_1\vvec_1 + c_2\vvec_2 + \ldots + c_n\vvec_n = \bvec\text{.}
\end{equation*}
We have seen this equation before: Remember that
Proposition 2.1.12 says that the solutions of this equation are the same as the solutions to the linear system whose augmented matrix is
\begin{equation*}
\left[\begin{array}{rrrrr}
\vvec_1 \amp \vvec_2 \amp \ldots \amp \vvec_n \amp \bvec
\end{array}\right]\text{.}
\end{equation*}
This gives us three different ways of looking at the same solution space.
Proposition 2.2.4.
If \(A=\left[\begin{array}{rrrr}
\vvec_1\amp\vvec_2\amp\ldots\amp\vvec_n
\end{array}\right]\) and \(\xvec=\left[
\begin{array}{c}
x_1 \\ x_2 \\ \vdots \\ x_n \\
\end{array}\right]
\text{,}\) then the following statements are equivalent.
The vector \(\xvec\) is a solution of the equation \(A\xvec = \bvec\text{.}\)
The vector \(\bvec\) is a linear combination of the columns of \(A\) with weights given by the components of \(\xvec\text{:}\)
\begin{equation*}
x_1\vvec_1 + x_2\vvec_2 + \ldots + x_n\vvec_n = \bvec\text{.}
\end{equation*}
The components of \(\xvec\) form a solution to the linear system corresponding to the augmented matrix
\begin{equation*}
\left[\begin{array}{rrrrr}
\vvec_1 \amp \vvec_2 \amp \ldots \amp \vvec_n \amp \bvec
\end{array}\right]\text{.}
\end{equation*}
When the matrix \(A = \left[\begin{array}{rrrr}
\vvec_1\amp\vvec_2\amp\cdots\amp\vvec_n\end{array}\right]\text{,}\) we will frequently write
\begin{equation*}
\left[\begin{array}{rrrrr}
\vvec_1\amp\vvec_2\amp\cdots\amp\vvec_n\amp\bvec\end{array}\right]
= \left[ \begin{array}{rr} A \amp \bvec \end{array}\right]
\end{equation*}
and say that the matrix \(A\) is augmented by the vector \(\bvec\text{.}\)
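Forming the augmented matrix \([\,A~\bvec\,]\) amounts to appending \(\bvec\) to \(A\) as a final column. A one-function Python sketch of our own, applied to the matrix and vector displayed in the last part of Activity 2.2.2:

```python
def augment(A, b):
    """Form the augmented matrix [ A | b ] by appending b as a final column."""
    if len(A) != len(b):
        raise ValueError("b needs one component per row of A")
    return [row + [entry] for row, entry in zip(A, b)]

# Matrix and vector as displayed in Activity 2.2.2.
A = [[1, 2],
     [1, 1]]
b = [6, 0]
print(augment(A, b))   # [[1, 2, 6], [1, 1, 0]]
```

Row-reducing this augmented matrix then answers whether a vector \(\xvec\) with \(A\xvec = \bvec\) exists.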
The equation \(A\xvec = \bvec\) gives a notationally compact way to write a linear system. Moreover, this notation will allow us to focus on important features of the system that determine its solution space.
Example 2.2.5.
We will describe the solution space of the equation
\begin{equation*}
\left[\begin{array}{rrr}
2 \amp 0 \amp 2 \\
-4 \amp 1 \amp -6 \\
1 \amp 3 \amp -5 \\
\end{array}\right]
\xvec
=
\left[\begin{array}{r}
0 \\ 5 \\ 15
\end{array}\right].
\end{equation*}
By the definition of matrix-vector multiplication, this equation asks for weights \(x_1\text{,}\) \(x_2\text{,}\) and \(x_3\) such that
\begin{equation*}
x_1\left[\begin{array}{r}2\\-4\\1\end{array}\right] +
x_2\left[\begin{array}{r}0\\1\\3\end{array}\right]+
x_3\left[\begin{array}{r}2\\-6\\-5\end{array}\right]=
\left[\begin{array}{r}0\\5\\15\end{array}\right]\text{,}
\end{equation*}
which is the linear system corresponding to the augmented matrix
\begin{equation*}
\left[\begin{array}{rrrr}
2 \amp 0 \amp 2 \amp 0 \\
-4 \amp 1 \amp -6 \amp 5 \\
1 \amp 3 \amp -5 \amp 15 \\
\end{array} \right]\text{.}
\end{equation*}
The reduced row echelon form of the augmented matrix is
\begin{equation*}
\left[\begin{array}{rrrr}
2 \amp 0 \amp 2 \amp 0 \\
-4 \amp 1 \amp -6 \amp 5 \\
1 \amp 3 \amp -5 \amp 15 \\
\end{array} \right]
\sim
\left[\begin{array}{rrrr}
1 \amp 0 \amp 1 \amp 0 \\
0 \amp 1 \amp -2 \amp 5 \\
0 \amp 0 \amp 0 \amp 0 \\
\end{array} \right],
\end{equation*}
which corresponds to the linear system
\begin{equation*}
\begin{alignedat}{4}
x_1 \amp \amp \amp {}+{} \amp x_3 \amp {}={} \amp 0 \\
\amp \amp x_2 \amp {}-{} \amp 2x_3 \amp {}={} \amp 5. \\
\end{alignedat}
\end{equation*}
The variable \(x_3\) is free so we may write the solution space parametrically as
\begin{equation*}
\begin{aligned}
x_1 \amp {}={} -x_3 \\
x_2 \amp {}={} 5+2x_3. \\
\end{aligned}
\end{equation*}
Since we originally asked to describe the solutions to the equation \(A\xvec = \bvec\text{,}\) we will express the solution in terms of the vector \(\xvec\text{:}\)
\begin{equation*}
\xvec
=\left[
\begin{array}{r}
x_1 \\ x_2 \\ x_3
\end{array}
\right]
=
\left[
\begin{array}{r}
-x_3 \\ 5 + 2x_3 \\ x_3
\end{array}
\right]
=\left[\begin{array}{r}0\\5\\0\end{array}\right]
+x_3\left[\begin{array}{r}-1\\2\\1\end{array}\right]\text{.}
\end{equation*}
As before, we call this a parametric description of the solution space.
This shows that the solutions \(\xvec\) may be written in the form \(\vvec + x_3\wvec\text{,}\) for appropriate vectors \(\vvec\) and \(\wvec\text{.}\) Geometrically, the solution space is a line in \(\real^3\) through \(\vvec\) moving parallel to \(\wvec\text{.}\)
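A parametric description like this can be spot-checked in code: substitute several values of the free variable and confirm that each resulting vector satisfies \(A\xvec = \bvec\text{.}\) The sketch below uses a small hypothetical system of our own (not the example above), whose solutions are \(\xvec = (3-t,\, 5-2t,\, t)\text{.}\)

```python
def matrix_vector(A, x):
    """Ax as a linear combination of the columns of A."""
    result = [0] * len(A)
    for j, weight in enumerate(x):
        for i in range(len(A)):
            result[i] += weight * A[i][j]
    return result

# Hypothetical system A x = b with free variable t: x = (3 - t, 5 - 2t, t).
A = [[1, 0, 1],
     [0, 1, 2]]
b = [3, 5]

for t in [-2, 0, 1, 10]:
    x = [3 - t, 5 - 2 * t, t]
    assert matrix_vector(A, x) == b   # every choice of t gives a solution
print("all checks passed")
```

Each value of the parameter picks out one point on the line of solutions, which matches the geometric picture of a line moving parallel to a fixed vector.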
Activity 2.2.4. The equation \(A\xvec = \bvec\).
Consider the linear system
\begin{equation*}
\begin{alignedat}{4}
2x \amp {}+{} \amp y \amp {}-{} \amp 3z \amp {}={} \amp 4 \\
x \amp {}+{} \amp 2y \amp {}+{} \amp z \amp {}={} \amp 3 \\
3x \amp {}-{} \amp y \amp \amp \amp {}={} \amp 4. \\
\end{alignedat}
\end{equation*}
Identify the matrix \(A\) and vector \(\bvec\) to express this system in the form \(A\xvec = \bvec\text{.}\)
If \(A\) and \(\bvec\) are as below, write the linear system corresponding to the equation \(A\xvec=\bvec\) and describe its solution space, using a parametric description if appropriate:
\begin{equation*}
A = \left[\begin{array}{rrr}
3 \amp 1 \amp 0 \\
2 \amp 0 \amp 6
\end{array}
\right],~~~
\bvec = \left[\begin{array}{r}
6 \\ 2
\end{array}
\right].
\end{equation*}
Describe the solution space of the equation
\begin{equation*}
\left[
\begin{array}{rrrr}
1 \amp 2 \amp 0 \amp 1 \\
2 \amp 4 \amp 3 \amp 2 \\
1 \amp 2 \amp 6 \amp 1 \\
\end{array}
\right]
\xvec
=
\left[\begin{array}{r}
1 \\ 1 \\ 5
\end{array}
\right]\text{.}
\end{equation*}
Suppose \(A\) is an \(m\times n\) matrix. What can you guarantee about the solution space of the equation \(A\xvec
= \zerovec\text{?}\)
Subsection 2.2.4 Matrix-matrix products
In this section, we have developed some algebraic operations on matrices with the aim of simplifying our description of linear systems. We now introduce a final operation, the product of two matrices, that will become important when we study linear transformations in
Section 2.5.
Definition 2.2.6. Matrix-matrix multiplication.
Given matrices \(A\) and \(B\text{,}\) we form their product \(AB\) by first writing \(B\) in terms of its columns
\begin{equation*}
B = \left[\begin{array}{rrrr}
\vvec_1 \amp \vvec_2 \amp \cdots \amp \vvec_p
\end{array}\right]
\end{equation*}
and then defining
\begin{equation*}
AB = \left[\begin{array}{rrrr}
A\vvec_1 \amp A\vvec_2 \amp \cdots \amp A\vvec_p
\end{array}\right].
\end{equation*}
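Definition 2.2.6 says to multiply \(A\) by each column of \(B\) and assemble the results as the columns of \(AB\text{.}\) A direct Python sketch of our own, with a made-up \(2\times2\) example:

```python
def matrix_vector(A, x):
    """Ax as a linear combination of the columns of A."""
    result = [0] * len(A)
    for j, weight in enumerate(x):
        for i in range(len(A)):
            result[i] += weight * A[i][j]
    return result

def matrix_matrix(A, B):
    """Column j of AB is A times column j of B."""
    m, p = len(A), len(B[0])
    cols = [matrix_vector(A, [row[j] for row in B]) for j in range(p)]
    # Reassemble the computed columns into a list of rows.
    return [[cols[j][i] for j in range(p)] for i in range(m)]

A = [[1, 2],
     [3, 4]]
B = [[5, 6],
     [7, 8]]
print(matrix_matrix(A, B))   # [[19, 22], [43, 50]]
```

Because each column of \(AB\) is a product \(A\vvec_j\text{,}\) the number of columns of \(A\) must equal the number of rows of \(B\text{,}\) exactly the shape condition explored in the next activity.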
Example 2.2.7.
Given the matrices
\begin{equation*}
A = \left[\begin{array}{rr}
4 \amp 2 \\
0 \amp 1 \\
-3 \amp 4 \\
2 \amp 0 \\
\end{array}\right],~~~
B = \left[\begin{array}{rrr}
-2 \amp 3 \amp 0 \\
1 \amp 2 \amp -2 \\
\end{array}\right]\text{,}
\end{equation*}
we have
\begin{equation*}
AB = \left[\begin{array}{rrr}
A \twovec{-2}{1} \amp
A \twovec{3}{2} \amp
A \twovec{0}{-2}
\end{array}\right]
= \left[\begin{array}{rrr}
-6 \amp 16 \amp -4 \\
1 \amp 2 \amp -2 \\
10 \amp -1 \amp -8 \\
-4 \amp 6 \amp 0
\end{array}\right]\text{.}
\end{equation*}
Activity 2.2.5.
Consider the matrices
\begin{equation*}
A = \left[\begin{array}{rrr}
1 \amp 3 \amp 2 \\
3 \amp 4 \amp 1 \\
\end{array}\right],~~~
B = \left[\begin{array}{rr}
3 \amp 0 \\
1 \amp 2 \\
2 \amp 1 \\
\end{array}\right]\text{.}
\end{equation*}
Before computing, first explain why the shapes of \(A\) and \(B\) enable us to form the product \(AB\text{.}\) Then describe the shape of \(AB\text{.}\)
Compute the product \(AB\text{.}\)
Sage can multiply matrices using the * operator. Define the matrices \(A\) and \(B\) in the Sage cell below and check your work by computing \(AB\text{.}\)
Are we able to form the matrix product \(BA\text{?}\) If so, use the Sage cell above to find \(BA\text{.}\) Is it generally true that \(AB = BA\text{?}\)
Suppose we form the three matrices
\begin{equation*}
A = \left[\begin{array}{rr}
1 \amp 2 \\
3 \amp 2 \\
\end{array}\right],
B = \left[\begin{array}{rr}
0 \amp 4 \\
2 \amp 1 \\
\end{array}\right],
C = \left[\begin{array}{rr}
1 \amp 3 \\
4 \amp 3 \\
\end{array}\right]\text{.}
\end{equation*}
Compare what happens when you compute
\(A(B+C)\) and
\(AB
+ AC\text{.}\) State your finding as a general principle.
Compare the results of evaluating \(A(BC)\) and \((AB)C\) and state your finding as a general principle.
When we are dealing with real numbers, we know if \(a\neq 0\) and \(ab = ac\text{,}\) then \(b=c\text{.}\) Define matrices
\begin{equation*}
A = \left[\begin{array}{rr}
1 \amp 2 \\
2 \amp 4 \\
\end{array}\right],
B = \left[\begin{array}{rr}
3 \amp 0 \\
1 \amp 3 \\
\end{array}\right],
C = \left[\begin{array}{rr}
1 \amp 2 \\
2 \amp 2 \\
\end{array}\right]
\end{equation*}
and compute
\(AB\) and
\(AC\text{.}\) If
\(AB = AC\text{,}\) is it necessarily true that
\(B = C\text{?}\)
Again, with real numbers, we know that if \(ab =
0\text{,}\) then either \(a = 0\) or \(b=0\text{.}\) Define
\begin{equation*}
A = \left[\begin{array}{rr}
1 \amp 2 \\
2 \amp 4 \\
\end{array}\right],
B = \left[\begin{array}{rr}
2 \amp -4 \\
-1 \amp 2 \\
\end{array}\right]
\end{equation*}
and compute
\(AB\text{.}\) If
\(AB = 0\text{,}\) is it necessarily true that either
\(A=0\) or
\(B=0\text{?}\)
This activity demonstrated some general properties about products of matrices, which mirror some properties about operations with real numbers.
Properties of Matrix-matrix Multiplication.
If \(A\text{,}\) \(B\text{,}\) and \(C\) are matrices such that the following operations are defined, it follows that
Associativity: \(A(BC) = (AB)C\text{.}\)
Distributivity: \(A(B+C) = AB+AC\) and \((A+B)C = AC+BC\text{.}\)
At the same time, there are a few properties that hold for real numbers that do not hold for matrices.
Caution.
The following properties hold for real numbers but not for matrices.
Commutativity: It is not generally true that \(AB = BA\text{.}\)
Cancellation: It is not generally true that \(AB = AC\) implies that \(B = C\text{.}\)
Zero divisors: It is not generally true that \(AB = 0\) implies that either \(A=0\) or \(B=0\text{.}\)
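All three cautions are easy to confirm numerically. The Python sketch below (our own helper) uses the matrices from the cancellation part of the activity, plus a zero-divisor pair of the same flavor as the activity's final part.

```python
def matmul(A, B):
    """Product AB for small matrices stored as lists of rows."""
    return [[sum(A[i][k] * B[k][j] for k in range(len(B)))
             for j in range(len(B[0]))]
            for i in range(len(A))]

# Matrices from the cancellation part of the activity.
A = [[1, 2], [2, 4]]
B = [[3, 0], [1, 3]]
C = [[1, 2], [2, 2]]

print(matmul(A, B) == matmul(B, A))   # False: AB != BA in general
print(matmul(A, B) == matmul(A, C))   # True, even though B != C
D = [[2, -4], [-1, 2]]                # a zero-divisor partner for A
print(matmul(A, D))                   # the zero matrix, with A != 0, D != 0
```

Here \(AB = AC = \left[\begin{array}{rr} 5 \amp 6 \\ 10 \amp 12 \end{array}\right]\) even though \(B \neq C\text{,}\) so cancellation fails, and \(AD\) is the zero matrix even though neither factor is zero.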