Linearly independent vectors with examples


A set of vectors is linearly independent when none of the vectors can be written as a linear combination of the others. This applies to vectors in \(\mathbb{R}^n\) for any \(n\), as well as to other vector spaces such as the polynomial spaces. The more formal definition, along with some examples, is reviewed below. We will see how to determine whether a set of vectors is linearly independent or dependent using the definition or theorems.


The formal definition of linear independence

A set of vectors is linearly independent if and only if the equation:

\(c_1\vec{v}_1 + c_2\vec{v}_2 + \cdots + c_k\vec{v}_k = \vec{0}\)

has only the trivial solution. In other words, the vectors are linearly independent when \(c_1 = c_2 = \cdots = c_k = 0\) is the only possible solution to that vector equation.

If a set of vectors is not linearly independent, we say that they are linearly dependent. Then, you can write a linear dependence relation showing how one vector is a combination of the others.
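
For example, if \(\vec{v}_3 = 2\vec{v}_1 - \vec{v}_2\) for some set of three vectors, then \(2\vec{v}_1 - \vec{v}_2 - \vec{v}_3 = \vec{0}\) is a linear dependence relation for that set, since not all of the constants in the equation are zero.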

Examples of determining when vectors are linearly independent

Let’s stick to vectors in \(\mathbb{R}^n\) for now and look at how to determine whether a set of vectors is linearly independent or not. Let’s get into the first example!

Example

Are the vectors \(\vec{v}_1 = \left[\begin{array}{c} 1\\ 4\\ 0\\\end{array}\right], \vec{v}_2 = \left[\begin{array}{c} 10\\ 2\\ 1\\\end{array}\right], \vec{v}_3 = \left[\begin{array}{c} -5\\ 0\\ 6\\\end{array}\right]\) linearly independent?

Solution

This is asking “does the following equation have only the trivial solution (all 0’s)?”

\(c_1\left[\begin{array}{c} 1\\ 4\\ 0\\\end{array}\right] + c_2\left[\begin{array}{c} 10\\ 2\\ 1\\\end{array}\right] + c_3\left[\begin{array}{c} -5\\ 0\\ 6\\\end{array}\right] = \left[\begin{array}{c} 0\\ 0 \\ 0 \\ \end{array}\right]\)

Every vector equation in \(\mathbb{R}^n\) is equivalent to a matrix equation, which we can represent with an augmented matrix. An augmented matrix is the most useful form here, so we will set one up and row reduce it (by hand or with a calculator).

\(\left[\begin{array}{ccc|c} 1 & 10 & -5 & 0\\ 4 & 2 & 0 & 0\\ 0 & 1 & 6 & 0\\ \end{array}\right] \sim \left[\begin{array}{ccc|c} 1 & 0 & 0 & 0\\ 0 & 1 & 0 & 0\\ 0 & 0 & 1 & 0\\ \end{array}\right] \)

As you can see from the reduced matrix, the solution to the (equivalent) vector equation is \(c_1 = c_2 = c_3 = 0\), the trivial solution. We know this is the only solution because there are no free variables: every column on the coefficient side is a pivot column. Since the only solution to the vector equation is the trivial solution, these vectors are linearly independent.
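
If you would rather check the row reduction with software instead of a calculator, here is a minimal sketch using Python with the sympy library (just one option; any computer algebra system that can compute a reduced row echelon form will work the same way):

    from sympy import Matrix

    # Augmented matrix for c1*v1 + c2*v2 + c3*v3 = 0
    A = Matrix([
        [1, 10, -5, 0],
        [4,  2,  0, 0],
        [0,  1,  6, 0],
    ])

    rref_matrix, pivot_columns = A.rref()
    print(rref_matrix)    # the reduced matrix shown above
    print(pivot_columns)  # (0, 1, 2): every coefficient column is a pivot column,
                          # so the only solution is the trivial one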

Example

Are the vectors \(\vec{v}_1 = \left[\begin{array}{c} 6\\ 2\\ 9\\ 1 \\\end{array}\right], \vec{v}_2 = \left[\begin{array}{c} 12\\ 4\\ 18\\ 2\\\end{array}\right], \vec{v}_3 = \left[\begin{array}{c} 8\\ 0\\ 0\\ 0\\\end{array}\right], \vec{v}_4 = \left[\begin{array}{c} -2\\ 1\\ 1\\ 5\\\end{array}\right]\) linearly independent?

Solution

You may notice that \(\vec{v}_2\) is a multiple of \(\vec{v}_1\) (in fact, \(\vec{v}_2 = 2\vec{v}_1\)). This means the vectors are not linearly independent but instead are linearly dependent. But how would this look in the matrix? Let’s take a look.

\(\left[\begin{array}{cccc|c} 6 & 12 & 8 & -2 & 0\\ 2 & 4 & 0 & 1 & 0\\ 9 & 18 & 0 & 1 & 0 \\ 1 & 2 & 0 & 5 & 0 \\ \end{array}\right] \sim \left[\begin{array}{cccc|c} 1 & 2 & 0 & 0 & 0\\ 0 & 0 & 1 & 0 & 0\\ 0 & 0 & 0 & 1 & 0 \\ 0 & 0 & 0 & 0 & 0 \\ \end{array}\right]\)

We have a column (column 2) which is not a pivot column. This means that the corresponding variable, \(c_2\), is a free variable in the vector equation underlying this matrix, so there are infinitely many solutions, including nontrivial solutions (solutions where the constants are not all zero). Therefore, these vectors are linearly dependent.
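
As a quick follow-up, the reduced matrix also tells you exactly what a linear dependence relation looks like here. Reading off the rows gives \(c_1 + 2c_2 = 0\), \(c_3 = 0\), and \(c_4 = 0\), with \(c_2\) free. Choosing \(c_2 = 1\) gives \(c_1 = -2\), so \(-2\vec{v}_1 + \vec{v}_2 = \vec{0}\). This is just another way of writing \(\vec{v}_2 = 2\vec{v}_1\), the relationship we noticed at the start.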

Properties of linearly independent vectors

While you can always use an augmented matrix in the real spaces, you can also use several properties of linearly independent vectors to decide more quickly. We will state these without proof; the proofs can be found in most linear algebra textbooks.

  • A set containing a single nonzero vector is linearly independent.
  • A set of two vectors is linearly dependent if and only if one vector is a multiple of the other.

    \(\left[\begin{array}{c} 1 \\ 4\\ \end{array}\right]\) and \(\left[\begin{array}{c} -2 \\ -8\\ \end{array}\right]\) are linearly dependent since the second vector is \(-2\) times the first.

    \(\left[\begin{array}{c} 9 \\ -1\\ \end{array}\right]\) and \(\left[\begin{array}{c} 18 \\ 6\\ \end{array}\right]\) are linearly independent since neither vector is a multiple of the other.

  • Any set containing the zero vector is a linearly dependent set.
    This is because you can put any nonzero constant in front of the zero vector (and zeros in front of the other vectors) to get a nontrivial solution to the vector equation from the definition.

    The vectors:

    \(\left[\begin{array}{c} 5 \\ 0\\ \end{array}\right]\), \(\left[\begin{array}{c} 2 \\ 6\\ \end{array}\right]\), \(\left[\begin{array}{c} 0 \\ 0\\ \end{array}\right]\)

    are linearly dependent because the set contains the zero vector.

  • Any set where one vector is a linear combination of the others is linearly dependent.
    We saw this in the second example above; it is true for a set of any size.

  • A set with more vectors than entries in each vector is linearly dependent. In other words, any set of more than \(n\) vectors in \(\mathbb{R}^n\) is linearly dependent (a quick computational check appears after this list).
    The vectors:

    \(\left[\begin{array}{c} 2 \\ 6\\ \end{array}\right]\), \(\left[\begin{array}{c} 1 \\ 1\\ \end{array}\right]\), \(\left[\begin{array}{c} -1 \\ 5\\ \end{array}\right]\)

    are linearly dependent because there are 3 vectors but each vector has 2 entries.
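
Here is a minimal sketch of how you could verify this last property computationally, again using Python with the sympy library (an optional check; the property itself requires no computation):

    from sympy import Matrix

    # Columns are the three vectors from the last bullet: 3 vectors in R^2
    A = Matrix([
        [2, 1, -1],
        [6, 1,  5],
    ])

    # The rank is the number of pivot columns, which can never exceed the
    # number of rows (2 here), so with 3 columns there must be a free variable.
    print(A.rank())           # 2
    print(A.rank() < A.cols)  # True, so the columns are linearly dependent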

Where do linearly independent vectors come into play?

One big area where you will work with linearly independent vectors is when talking about the basis of a vector space. This is a linearly independent set that spans the vector space.

Continue your study of linear algebra topics: https://www.mathbootcamps.com/linear-algebra-guides-and-articles/