In the previous section, questions about the existence of solutions of a linear system led to the concept of the span of a set of vectors. In particular, the span of a set of vectors \(\vvec_1,\vvec_2,\ldots,\vvec_n\) is the set of vectors \(\bvec\) for which a solution to the linear system \(\left[\begin{array}{rrrr} \vvec_1\amp\vvec_2\amp\ldots\amp\vvec_n \end{array}\right]~\xvec = \bvec\) exists.
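To make this membership test concrete, here is a minimal computational sketch using SymPy; the vectors and \(\bvec\) are made-up examples, not ones from the text. We row reduce the augmented matrix and check for a pivot in its rightmost column.

```python
# A sketch with made-up vectors: b lies in the span of v1, v2, v3
# exactly when the system [v1 v2 v3] x = b is consistent.
from sympy import Matrix

v1, v2, v3 = Matrix([1, 0, 2]), Matrix([0, 1, 1]), Matrix([1, 1, 3])
b = Matrix([2, 1, 5])

augmented = Matrix.hstack(v1, v2, v3, b)   # the augmented matrix [A | b]
rref, pivots = augmented.rref()

# Consistent exactly when the rightmost column (index 3) has no pivot.
print("b is in the span:", 3 not in pivots)   # True for this choice of b
```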
In this section, we turn to the uniqueness of solutions of a linear system, the second of our two fundamental questions. This will lead us to the concept of linear independence.
Let's begin by looking at some sets of vectors in \(\real^3\text{.}\) As we saw in the previous section, the span of a set of vectors in \(\real^3\) will be either a line, a plane, or \(\real^3\) itself.
We have seen examples where the span of a set of three vectors in \(\real^3\) is \(\real^3\) and other examples where the span of three vectors is a plane. We would like to understand the difference between these two situations.
In other words, any linear combination of \(\wvec_1\text{,}\) \(\wvec_2\text{,}\) and \(\wvec_3\) may be written as a linear combination using only the vectors \(\wvec_1\) and \(\wvec_2\text{.}\) Since the span of a set of vectors is simply the set of their linear combinations, this shows that
Before exploring this type of behavior more generally, let's think about it from a geometric point of view. Suppose that we begin with the two vectors \(\vvec_1\) and \(\vvec_2\) in Example 2.4.1. The span of these two vectors is a plane in \(\real^3\text{,}\) as seen on the left of Figure 2.4.3.
Because the vector \(\vvec_3\) is not a linear combination of \(\vvec_1\) and \(\vvec_2\text{,}\) it provides a direction to move that is independent of \(\vvec_1\) and \(\vvec_2\text{.}\) Adding this third vector \(\vvec_3\) therefore forms a set whose span is \(\real^3\text{,}\) as seen on the right of Figure 2.4.3.
Similarly, the span of the vectors \(\wvec_1\) and \(\wvec_2\) in Example 2.4.2 is also a plane. However, the third vector \(\wvec_3\) is a linear combination of \(\wvec_1\) and \(\wvec_2\text{,}\) which means that it already lies in the plane formed by \(\wvec_1\) and \(\wvec_2\text{,}\) as seen in Figure 2.4.4. Since we can already move in this direction using just \(\wvec_1\) and \(\wvec_2\text{,}\) adding \(\wvec_3\) to the set does not change the span. As a result, it remains a plane.
What distinguishes these two examples is whether one of the vectors is a linear combination of the others, an observation that leads to the following definition.
A set of vectors is called linearly dependent if one of the vectors is a linear combination of the others. Otherwise, the set of vectors is called linearly independent.
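The definition can be tested directly by computer. Below is a hedged sketch, again using SymPy with hypothetical vectors: for each vector in the set, it asks whether that vector is a linear combination of the others.

```python
# Sketch: a set is linearly dependent if some vector in it is a
# linear combination of the others, i.e. if [others] x = vector
# is a consistent system.  The vectors here are made up.
from sympy import Matrix

vectors = [Matrix([1, 0, 2]), Matrix([0, 1, 1]), Matrix([1, 1, 3])]

def is_dependent(vecs):
    for i, v in enumerate(vecs):
        others = [w for j, w in enumerate(vecs) if j != i]
        aug = Matrix.hstack(*others, v)
        rref, pivots = aug.rref()
        if len(others) not in pivots:   # no pivot in rightmost column
            return True                 # v is a combination of the others
    return False

print("linearly dependent:", is_dependent(vectors))   # True: v3 = v1 + v2
```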
Is it possible to write one of the vectors \(\vvec_1,\vvec_2,\ldots,\vvec_5\) as a linear combination of the others? If so, show explicitly how one vector appears as a linear combination of some of the other vectors. Is this set of vectors linearly dependent or independent?
Is it possible to write one of these vectors \(\wvec_1\text{,}\) \(\wvec_2\text{,}\) \(\wvec_3\) as a linear combination of the others? If so, show explicitly how one vector appears as a linear combination of some of the other vectors. Is this set of vectors linearly dependent or independent?
By now, we should expect that the pivot positions play an important role in determining whether the columns of a matrix are linearly dependent. For instance, suppose we have four vectors and their associated matrix
More generally, the same reasoning implies that a set of vectors is linearly dependent if the associated matrix has a column without a pivot position. Indeed, as illustrated here, a vector corresponding to a column without a pivot position can be expressed as a linear combination of the vectors whose columns do contain pivot positions.
Viewing this as an augmented matrix again, we see that the linear system is inconsistent since there is a pivot in the rightmost column, which means that \(\wvec_4\) cannot be expressed as a linear combination of the other vectors. Similarly, \(\wvec_3\) cannot be expressed as a linear combination of \(\wvec_1\) and \(\wvec_2\text{.}\) In fact, none of the vectors can be written as a linear combination of the others, so this set of vectors is linearly independent.
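SymPy's rref() conveniently reports the pivot columns, so this criterion can be checked in one call. The matrix below is a hypothetical example, not the one built from the \(\wvec\) vectors above.

```python
# Sketch: the pivot positions of the reduced row echelon form settle
# the question.  This is a made-up 3x4 matrix; its columns are vectors.
from sympy import Matrix

A = Matrix([[1, 0, 2, 1],
            [0, 1, 1, 0],
            [1, 1, 3, 2]])

rref, pivots = A.rref()
print(pivots)                            # (0, 1, 3): column 2 has no pivot
print("independent:", len(pivots) == A.cols)   # False: column 2 is a
                                               # combination of columns 0, 1
```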
This condition imposes a constraint on how many vectors we can have in a linearly independent set. Here is an example of the reduced row echelon form of a matrix whose columns form a set of three linearly independent vectors in \(\real^5\text{:}\)
More generally, if \(\vvec_1,\vvec_2,\ldots,\vvec_n\) is a linearly independent set of vectors in \(\real^m\text{,}\) the associated matrix must have a pivot position in every column. Since every row contains at most one pivot position, the number of columns can be no greater than the number of rows. This means that the number of vectors in a linearly independent set can be no greater than the number of dimensions.
This says, for instance, that any linearly independent set of vectors in \(\real^3\) can contain no more than three vectors. We usually imagine three independent directions, such as up/down, front/back, left/right, in our three-dimensional world. This proposition tells us that there can be no more independent directions.
The proposition above says that a set of vectors in \(\real^m\) that is linearly independent has at most \(m\) vectors. By comparison, Proposition 2.3.15 says that a set of vectors whose span is \(\real^m\) has at least \(m\) vectors.
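This counting argument is easy to probe experimentally. Here is a sketch using SymPy's randMatrix to generate made-up examples: a \(3\times4\) matrix has at most three pivot positions, so its four columns can never be linearly independent.

```python
# Sketch: four random vectors in R^3 are always linearly dependent,
# since a 3x4 matrix has at most 3 pivots but 4 columns.
from sympy import randMatrix

A = randMatrix(3, 4)          # a random 3x4 integer matrix
rref, pivots = A.rref()
print(len(pivots) <= 3)       # always True: at most one pivot per row
print("independent:", len(pivots) == A.cols)   # always False
```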
If \(A\) is a matrix, we call the equation \(A\xvec = \zerovec\) a homogeneous equation. As we'll see, the uniqueness of solutions to this equation determines whether the columns of \(A\) are linearly independent.
whose columns we denote by \(\vvec_1\text{,}\) \(\vvec_2\text{,}\) and \(\vvec_3\text{.}\) Describe the solution space of the homogeneous equation \(A\xvec = \zerovec\) using a parametric description, if appropriate.
This activity shows how the solution space of the homogeneous equation \(A\xvec = \zerovec\) indicates whether the columns of \(A\) are linearly dependent or independent. First, we know that the equation \(A\xvec = \zerovec\) always has at least one solution, the vector \(\xvec = \zerovec\text{.}\) Any other solution is a nonzero solution.
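As a general computational aside, SymPy can produce the full solution space of a homogeneous equation directly: nullspace() returns a basis for it. The matrix below is a made-up example, not the one from the activity.

```python
# Sketch (made-up matrix): nullspace() gives a basis for the solutions
# of A x = 0.
from sympy import Matrix

A = Matrix([[1, 0, 1],
            [0, 1, 2],
            [1, 1, 3]])

null_basis = A.nullspace()
print(null_basis)             # [Matrix([-1, -2, 1])]: a nonzero solution

# A nonempty basis means a nonzero solution exists, so the columns
# of A are linearly dependent.
print("columns independent:", null_basis == [])   # False
```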
Therefore, \(A\) has a column without a pivot position, which tells us that the vectors \(\vvec_1\text{,}\) \(\vvec_2\text{,}\) and \(\vvec_3\) are linearly dependent. However, we can also see this fact in another way.
The reduced row echelon matrix tells us that the homogeneous equation has a free variable, so there must be infinitely many solutions. In particular, we have
If we choose \(x_3=1\text{,}\) then we obtain the nonzero solution to the homogeneous equation \(\xvec = \threevec{-1}{-1}1\text{,}\) which implies that
As this example demonstrates, there are many ways we can view the question of linear independence, some of which are recorded in the following proposition.
For a matrix \(A = \left[\begin{array}{rrrr} \vvec_1\amp\vvec_2\amp\ldots\amp\vvec_n \end{array}\right]\text{,}\) the following statements are equivalent:
A set of vectors \(\vvec_1,\vvec_2,\ldots,\vvec_n\) is linearly dependent if there are weights \(c_1,c_2,\ldots,c_n\text{,}\) not all of which are zero, such that
\begin{equation*}
c_1\vvec_1 + c_2\vvec_2 + \cdots + c_n\vvec_n = \zerovec\text{.}
\end{equation*}
At the beginning of the section, we said that this concept addressed the second of our two fundamental questions concerning the uniqueness of solutions to a linear system. It is worth comparing the results of this section with those of the previous one so that the parallels between them become clear.
The vectors \(\vvec_1, \vvec_2, \ldots, \vvec_n\) are linearly independent if \(\xvec=\zerovec\) is the unique solution to \(A\xvec = \zerovec\text{.}\)
Suppose \(A=\left[\begin{array}{rrrr} \vvec_1\amp\vvec_2\amp\vvec_3\amp\vvec_4 \end{array}\right]\text{.}\) Find a nonzero solution to the homogeneous equation \(A\xvec = \zerovec\text{.}\)
Suppose that \(\bvec\) is a vector in \(\real^3\text{.}\) Explain why we can guarantee that \(\bvec\) may be written as a linear combination of \(\vvec_1\text{,}\) \(\vvec_2\text{,}\) and \(\vvec_3\text{.}\)
Suppose that \(\bvec\) is a vector in \(\real^3\text{.}\) In how many ways can \(\bvec\) be written as a linear combination of \(\vvec_1\text{,}\) \(\vvec_2\text{,}\) and \(\vvec_3\text{?}\)
Suppose that \(\vvec_1,\vvec_2,\ldots,\vvec_n\) is a linearly independent set of vectors. What can you say about the linear independence or dependence of a subset of these vectors?
Suppose \(\vvec_1,\vvec_2,\ldots,\vvec_n\) is a linearly independent set of vectors that form the columns of a matrix \(A\text{.}\) If the equation \(A\xvec = \bvec\) is inconsistent, what can you say about the linear independence or dependence of the set of vectors \(\vvec_1,\vvec_2,\ldots,\vvec_n,\bvec\text{?}\)
Suppose we have a set of vectors \(\vvec_1,\vvec_2,\ldots,\vvec_n\) and that \(\vvec_2\) is a scalar multiple of \(\vvec_1\text{.}\) Then the set is linearly dependent.
Suppose that \(\vvec_1,\vvec_2,\ldots,\vvec_n\) are linearly independent and form the columns of a matrix \(A\text{.}\) If \(A\xvec = \bvec\) is consistent, then there is exactly one solution.
Assume that the vectors are both linearly independent and span \(\real^{27}\text{.}\) Given a vector \(\bvec\) in \(\real^{27}\text{,}\) what can you say about the solution space to the equation \(A\xvec = \bvec\text{?}\)
Given below are some descriptions of sets of vectors that form the columns of a matrix \(A\text{.}\) For each description, give a possible reduced row echelon form for \(A\text{,}\) or explain why the required reduced row echelon matrix, and hence such a set of vectors, cannot exist.
A set of 4 linearly independent vectors in \(\real^5\text{.}\)
When we explored matrix multiplication in Section 2.2, we saw that some properties that are true for real numbers are not true for matrices. This exercise investigates that in more depth.
Suppose that \(A\) and \(B\) are two matrices and that \(AB = 0\text{.}\) If \(B \neq 0\text{,}\) what can you say about the linear independence of the columns of \(A\text{?}\)
Suppose that we have matrices \(A\text{,}\) \(B\text{,}\) and \(C\) such that \(AB = AC\text{.}\) We have seen that we cannot generally conclude that \(B=C\text{.}\) If we assume additionally that \(A\) is a matrix whose columns are linearly independent, explain why \(B = C\text{.}\) You may wish to begin by rewriting the equation \(AB = AC\) as \(AB-AC = A(B-C) = 0\text{.}\)
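As a hedged illustration of why the independence hypothesis matters (made-up matrices, again in SymPy): with dependent columns the cancellation can fail, while independent columns force every column of \(B-C\) to be the zero solution of \(A\xvec = \zerovec\text{.}\)

```python
# Sketch (hypothetical matrices): cancellation in AB = AC fails for
# dependent columns but holds for independent ones.
from sympy import Matrix

A_dep = Matrix([[1, 1]])                  # its two columns are equal
B = Matrix([1, 0])
C = Matrix([0, 1])                        # B != C
print(A_dep * B == A_dep * C)             # True: AB = AC although B != C

A_ind = Matrix([[1, 0], [0, 1], [1, 1]])  # linearly independent columns
print(A_ind * (B - C))                    # nonzero, so A_ind*B != A_ind*C
```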
Given a set of linearly dependent vectors, we can eliminate some of the vectors to create a smaller, linearly independent set of vectors.
Suppose that \(\wvec\) is a linear combination of the vectors \(\vvec_1\) and \(\vvec_2\text{.}\) Explain why \(\laspan{\vvec_1,\vvec_2, \wvec} = \laspan{\vvec_1,\vvec_2}\text{.}\)
Write one of the vectors as a linear combination of the others. Find a set of three vectors whose span is the same as \(\laspan{\vvec_1,\vvec_2,\vvec_3,\vvec_4}\text{.}\)
Are the three vectors you are left with linearly independent? If not, express one of the vectors as a linear combination of the others and find a set of two vectors whose span is the same as \(\laspan{\vvec_1,\vvec_2,\vvec_3,\vvec_4}\text{.}\)