Linear algebra owes its prominence as a powerful scientific tool to the ever-growing power of computers. Carl Cowen, a former president of the Mathematical Association of America, has said, “No serious application of linear algebra happens without a computer.” Indeed, Cowen notes that, in the 1950s, working with a system of 100 equations in 100 variables was difficult. Today, scientists and mathematicians routinely work on problems that are vastly larger. This is only possible because of today’s computing power.
It is therefore important for any student of linear algebra to become comfortable solving linear algebraic problems on a computer. This section will introduce you to a program called Sage that can help. While you may be able to do much of this work on a graphing calculator, you are encouraged to become comfortable with Sage, as we will rely on increasingly powerful features of it as the need arises.
Subsection 1.3.2 Sage and matrices
When we encounter a matrix,
Theorem 1.2.6 tells us that there is exactly one reduced row echelon matrix that is row equivalent to it.
In fact, the uniqueness of this reduced row echelon matrix is what motivates us to define this particular form. When solving a system of linear equations using Gaussian elimination, there are other row equivalent matrices that reveal the structure of the solution space. The reduced row echelon matrix is simply a convenience: it is an agreement we make with one another to seek the same matrix.
An added benefit is that we can ask a computer program, like Sage, to find reduced row echelon matrices for us. We will learn how to do this now that we have a little familiarity with Sage.
First, notice that a matrix has a certain number of rows and columns. For instance, the matrix
\begin{equation*}
\left[
\begin{array}{rrrrr}
* \amp * \amp * \amp * \amp * \\
* \amp * \amp * \amp * \amp * \\
* \amp * \amp * \amp * \amp * \\
\end{array}
\right]
\end{equation*}
has three rows and five columns. We consequently refer to this as a \(3\times 5\) matrix.
We may ask Sage to create the \(2\times4\) matrix
\begin{equation*}
\left[
\begin{array}{rrrr}
1 \amp 0 \amp 2 \amp 7 \\
2 \amp 1 \amp 3 \amp 1 \\
\end{array}
\right]
\end{equation*}
by entering

matrix(2, 4, [ 1, 0, 2, 7,
               2, 1, 3, 1 ])
When evaluated, Sage will confirm the matrix by writing out the rows of the matrix, each inside square brackets.
Notice that there are three separate things (we call them arguments) inside the parentheses: the number of rows, the number of columns, and the entries of the matrix listed by row inside square brackets. These three arguments are separated by commas. Notice that there is no way of specifying whether this is an augmented or coefficient matrix so it will be up to us to interpret our results appropriately.
Sage syntax.
Some common mistakes are
to forget the square brackets around the list of entries,
to omit an entry from the list or to add an extra one,
to forget to separate the rows, columns, and entries by commas, and
to forget the parentheses around the arguments after matrix.
If you see an error message, carefully proofread your input and try again.
Alternatively, you can create a matrix by simply listing its rows, like this:
matrix([ [1, 0, 2, 7],
         [2, 1, 3, 1] ])
Activity 1.3.2. Using Sage to find reduced row echelon matrices.
Enter the following matrix into Sage.
\begin{equation*}
\left[
\begin{array}{rrrr}
1 \amp 2 \amp 2 \amp 1 \\
2 \amp 4 \amp 1 \amp 5 \\
1 \amp 2 \amp 0 \amp 3
\end{array}
\right]
\end{equation*}

Give the matrix the name \(A\) by entering
A = matrix( ..., ..., [ ... ])
We may then find its reduced row echelon form by entering
A = matrix( ..., ..., [ ... ])
A.rref()
A common mistake is to forget the parentheses after rref.
Use Sage to find the reduced row echelon form of the matrix from
Item a of this activity.
Use Sage to describe the solution space of the system of linear equations
\begin{equation*}
\begin{alignedat}{5}
x_1 \amp \amp \amp \amp \amp {}+{} \amp 2x_4 \amp
{}={} \amp 4 \\
\amp \amp 3x_2 \amp {}+{} \amp x_3 \amp {}+{} \amp 2x_4
\amp {}={} \amp 3 \\
4x_1 \amp {}-{} \amp 3x_2 \amp \amp \amp {}+{} \amp
x_4 \amp {}={} \amp 14 \\
\amp \amp 2x_2 \amp {}+{} \amp 2x_3 \amp {}+{} \amp x_4
\amp {}={} \amp 1 \\
\end{alignedat}
\end{equation*}

Consider the two matrices:
\begin{equation*}
\begin{array}{rcl}
A \amp = \amp \left[
\begin{array}{rrrr}
1 \amp 2 \amp 1 \amp 3 \\
2 \amp 4 \amp 1 \amp 1 \\
4 \amp 8 \amp 1 \amp 7 \\
\end{array}\right] \\
B \amp = \amp \left[
\begin{array}{rrrrrr}
1 \amp 2 \amp 1 \amp 3 \amp 0 \amp 3 \\
2 \amp 4 \amp 1 \amp 1 \amp 1 \amp 1 \\
4 \amp 8 \amp 1 \amp 7 \amp 3 \amp 2 \\
\end{array}\right] \\
\end{array}
\end{equation*}
We say that \(B\) is an augmentation of \(A\) because it is obtained from \(A\) by adding some more columns.
Using Sage, define the matrices and compare their reduced row echelon forms. What do you notice about the relationship between the two reduced row echelon forms?

Using the system of equations in
Item c, write the augmented matrix corresponding to the system of equations. What did you find for the reduced row echelon form of the augmented matrix?
Now write the coefficient matrix of this system of equations. What does
Item d of this activity tell you about its reduced row echelon form?
Sage practices.
Here are some practices that you may find helpful when working with matrices in Sage.
For better readability, break the matrix entries across lines, one line for each row, by pressing Enter between rows.
A = matrix(2, 4, [ 1, 2, 1, 0,
3, 0, 4, 3 ])
Print your original matrix to check that you have entered it correctly. You may want to also print a dividing line to separate matrices.
A = matrix(2, 2, [ 1, 2,
2, 2])
print(A)
print("")
A.rref()
The last part of the previous activity,
Item d, demonstrates something that will be helpful for us in the future. In that activity, we started with a matrix
\(A\text{,}\) which we augmented by adding some columns to obtain a matrix
\(B\text{.}\) We then noticed that the reduced row echelon form of
\(B\) is itself an augmentation of the reduced row echelon form of
\(A\text{.}\)
To illustrate, we can consider the reduced row echelon form of the augmented matrix:
\begin{equation*}
\left[
\begin{array}{cccc}
2 \amp 3 \amp 0 \amp 2 \\
1 \amp 4 \amp 1 \amp 3 \\
3 \amp 0 \amp 2 \amp 2 \\
1 \amp 5 \amp 3 \amp 7 \\
\end{array}
\right]
\sim
\left[
\begin{array}{cccc}
1 \amp 0 \amp 0 \amp 4 \\
0 \amp 1 \amp 0 \amp 2 \\
0 \amp 0 \amp 1 \amp 7 \\
0 \amp 0 \amp 0 \amp 0 \\
\end{array}
\right]
\end{equation*}
We can then determine the reduced row echelon form of the coefficient matrix by looking inside the augmented matrix.
\begin{equation*}
\left[
\begin{array}{ccc}
2 \amp 3 \amp 0 \\
1 \amp 4 \amp 1 \\
3 \amp 0 \amp 2 \\
1 \amp 5 \amp 3 \\
\end{array}
\right]
\sim
\left[
\begin{array}{ccc}
1 \amp 0 \amp 0 \\
0 \amp 1 \amp 0 \\
0 \amp 0 \amp 1 \\
0 \amp 0 \amp 0 \\
\end{array}
\right]
\end{equation*}
If we trace through the steps in the Gaussian elimination algorithm carefully, we see that this is a general principle, which we now state.
Proposition 1.3.1. Augmentation Principle.
If matrix \(B\) is an augmentation of matrix \(A\text{,}\) then the reduced row echelon form of \(B\) is an augmentation of the reduced row echelon form of \(A\text{.}\)
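This principle is easy to check numerically outside of Sage. The following is a minimal sketch in plain Python, using only the standard library's fractions module for exact arithmetic; the rref function here is a hypothetical stand-in for Sage's A.rref(), not Sage's implementation. The matrix \(B\) reuses the \(2\times4\) example entered earlier in this section, and \(A\) is taken to be its first three columns.

```python
from fractions import Fraction

def rref(rows):
    """Return the reduced row echelon form of a matrix given as a list of rows."""
    m = [[Fraction(x) for x in row] for row in rows]
    nrows, ncols = len(m), len(m[0])
    pivot = 0
    for col in range(ncols):
        if pivot == nrows:
            break
        # find a row at or below the pivot position with a nonzero entry in this column
        r = next((i for i in range(pivot, nrows) if m[i][col] != 0), None)
        if r is None:
            continue
        m[pivot], m[r] = m[r], m[pivot]           # interchange
        p = m[pivot][col]
        m[pivot] = [x / p for x in m[pivot]]      # scaling: make the pivot entry 1
        for i in range(nrows):                    # replacements: clear the column
            if i != pivot and m[i][col] != 0:
                f = m[i][col]
                m[i] = [x - f * y for x, y in zip(m[i], m[pivot])]
        pivot += 1
    return m

# B is an augmentation of A: it adds one more column
A = [[1, 0, 2], [2, 1, 3]]
B = [[1, 0, 2, 7], [2, 1, 3, 1]]

RA, RB = rref(A), rref(B)
# the first three columns of rref(B) agree with rref(A)
assert all(RB[i][:3] == RA[i] for i in range(2))
```

The final assertion is exactly the Augmentation Principle for this example: the reduced row echelon form of the augmentation is an augmentation of the reduced row echelon form.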
Subsection 1.3.3 Computational effort
At the beginning of this section, we indicated that linear algebra has become more prominent as computers have grown more powerful. Computers, however, still have limits. Let’s consider how much effort is expended when we ask to find the reduced row echelon form of a matrix. We will measure, very roughly, the effort by the number of times the algorithm requires us to multiply or add two numbers.
We will assume that our matrix has the same number of rows as columns, which we call \(n\text{.}\) We are mainly interested in the case when \(n\) is very large, which is when we need to worry about how much effort is required.
Let’s first consider the effort required for each of our row operations.
Scaling a row multiplies each of the \(n\) entries in a row by some number, which requires \(n\) operations.
Interchanging two rows requires no multiplications or additions so we won’t worry about the effort required by an interchange.
A replacement requires us to multiply each entry in a row by some number, which takes \(n\) operations, and then add the resulting entries to another row, which requires another \(n\) operations. The total number of operations is \(2n\text{.}\)
Our goal is to transform a matrix to its reduced row echelon form, which looks something like this:
\begin{equation*}
\left[
\begin{array}{cccc}
1 \amp 0 \amp \ldots \amp 0 \\
0 \amp 1 \amp \ldots \amp 0 \\
\vdots \amp \vdots \amp \ddots \amp \vdots \\
0 \amp 0 \amp \ldots \amp 1 \\
\end{array}
\right]\text{.}
\end{equation*}
We roughly perform one replacement operation for every 0 entry in the reduced row echelon matrix. When \(n\) is very large, most of the \(n^2\) entries in the reduced row echelon form are 0 so we need roughly \(n^2\) replacements. Since each replacement operation requires \(2n\) operations, the number of operations resulting from the needed replacements is roughly \(n^2(2n) =
2n^3\text{.}\)
Each row is scaled roughly one time so there are roughly \(n\) scaling operations, each of which requires \(n\) operations. The number of operations due to scaling is roughly \(n^2\text{.}\)
Therefore, the total number of operations is roughly
\begin{equation*}
2n^3 + n^2\text{.}
\end{equation*}
When \(n\) is very large, the \(n^2\) term is much smaller than the \(n^3\) term. We therefore state that the number of operations required is roughly \(2n^3\text{.}\)
This is a very rough measure of the effort required to find the reduced row echelon form; a more careful accounting shows that the number of arithmetic operations is roughly \(\frac23
n^3\text{.}\) As we have seen, some matrices require more effort than others, but the upshot of this observation is that the effort is proportional to \(n^3\text{.}\) We can think of this in the following way: If the size of the matrix grows by a factor of 10, then the effort required grows by a factor of \(10^3 = 1000\text{.}\)
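This scaling claim can be checked with a quick plain-Python sketch; the function below simply evaluates the rough total \(2n^3 + n^2\) from above, not an exact accounting of any particular algorithm.

```python
def rough_operations(n):
    # rough count from the text: about 2n^3 for replacements plus n^2 for scalings
    return 2 * n**3 + n**2

# growing the matrix size by a factor of 10 grows the effort
# by roughly a factor of 10^3 = 1000
ratio = rough_operations(1000) / rough_operations(100)
print(ratio)   # close to 1000
```

The ratio is not exactly 1000 because of the lower-order \(n^2\) term, but it approaches 1000 as \(n\) grows, which is the sense in which the effort is proportional to \(n^3\).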
While today’s computers are powerful, they cannot handle every problem we might ask of them. Eventually, we would like to be able to consider matrices that have \(n=10^{12}\) (a trillion) rows and columns. In very broad terms, the effort required to find the reduced row echelon matrix will be roughly \((10^{12})^3 = 10^{36}\) operations.
To put this into context, imagine we need to solve a linear system with a trillion equations and a trillion variables and that we have a computer that can perform a trillion, \(10^{12}\text{,}\) operations every second. Finding the reduced row echelon form would take about \(10^{16}\) years. At this time, the universe is estimated to be approximately \(10^{10}\) years old. If we started the calculation when the universe was born, we’d be about one-millionth of the way through.
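The arithmetic behind this estimate can be sketched directly in plain Python; the operation count and machine speed below are the rough figures assumed in the text.

```python
operations = 10**36            # roughly (10**12)**3 operations for an n = 10**12 matrix
ops_per_second = 10**12        # a computer performing a trillion operations each second
seconds = operations // ops_per_second       # 10**24 seconds of computing
seconds_per_year = 60 * 60 * 24 * 365        # about 3.15 * 10**7 seconds in a year
years = seconds / seconds_per_year
print(years)   # on the order of 10**16 years
```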
This may seem like an absurd situation, but we’ll see in
Subsection 4.5.3 how we use the results of such a computation every day. Clearly, we will need some better tools to deal with
really big problems like this one.