Did you know there are three ways to view a linear combination of vectors?
Linear combinations of vectors can be seen as:
- System of Linear Equations
- Vector Equation
- Matrix Equation
But before we can see how all three of these are related, we need a vital definition that allows us to write our system more compactly.
Consider an \(m \times n\) matrix \(\mathrm{A}\), with columns \(\overrightarrow{a_{1}}, \ldots, \overrightarrow{a_{n}}\), and \(\vec{x}\) in \(\mathbb{R}^{n}\).
Matrix Multiplication and Linear Combinations
The product of \(\mathrm{A}\) and \(\vec{x}\), denoted \(A \vec{x}\), is the linear combination of the columns of \(\mathrm{A}\) using corresponding entries in \(\vec{x}\) as weights.
\begin{align*}
A \vec{x}=\left[\begin{array}{llll}
\overrightarrow{a_{1}} & \overrightarrow{a_{2}} & \cdots & \overrightarrow{a_{n}}
\end{array}\right]\left[\begin{array}{c}
x_{1} \\
\vdots \\
x_{n}
\end{array}\right]=x_{1} \overrightarrow{a_{1}}+x_{2} \overrightarrow{a_{2}}+\cdots+x_{n} \overrightarrow{a_{n}}
\end{align*}
Note that \(A \vec{x}\) is defined only if the number of columns of \(\mathrm{A}\) equals the number of entries in \(\vec{x}\).
Why?
Because the product of two matrices is defined only when the number of columns of the first matrix (A) equals the number of rows of the second; here, \(\vec{x}\) acts as an \(n \times 1\) matrix.
We will review matrix operations in a future lesson, but for the time being, all you need to know is that the middle values must be the same to multiply:
\begin{equation}
(m \times n) \cdot(n \times r)=m \times r
\end{equation}
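If you like to check definitions numerically, here is a minimal Python/NumPy sketch (the matrix and vector are randomly generated example values of my own, not from the lesson) confirming that \(A\vec{x}\) matches the column-combination definition:

```python
import numpy as np

rng = np.random.default_rng(0)
m, n = 4, 3
A = rng.integers(-5, 5, size=(m, n))   # an m x n matrix
x = rng.integers(-5, 5, size=n)        # x must have n entries

# Definition: Ax = x1*a1 + x2*a2 + ... + xn*an (weights on the columns)
combo = sum(x[j] * A[:, j] for j in range(n))

print(np.array_equal(A @ x, combo))    # True
```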
Let’s compute the following matrix-vector product using our definition above.
\begin{align*}
\left[\begin{array}{ccc}
8 & 3 & -4 \\
5 & 1 & 2
\end{array}\right]\left[\begin{array}{l}
1 \\
2 \\
3
\end{array}\right]
\end{align*}
First, let’s expand the product using our definition: each entry of the vector becomes the weight on the corresponding column of \(\mathrm{A}\).
\begin{equation}
\left[\begin{array}{ccc}
8 & 3 & -4 \\
5 & 1 & 2
\end{array}\right]\left[\begin{array}{l}
1 \\
2 \\
3
\end{array}\right]=1\left[\begin{array}{l}
8 \\
5
\end{array}\right]+2\left[\begin{array}{l}
3 \\
1
\end{array}\right]+3\left[\begin{array}{c}
-4 \\
2
\end{array}\right]
\end{equation}
Now, we have to distribute, add, and simplify.
\begin{aligned}
\overrightarrow{v} &= 1\left[\begin{array}{l}
8 \\
5
\end{array}\right]+2\left[\begin{array}{l}
3 \\
1
\end{array}\right]+3\left[\begin{array}{c}
-4 \\
2
\end{array}\right] \\
&= \left[\begin{array}{l}
8 \\
5
\end{array}\right]+\left[\begin{array}{l}
6 \\
2
\end{array}\right]+\left[\begin{array}{c}
-12 \\
6
\end{array}\right] \\
&= \left[\begin{array}{c}
8+6-12 \\
5+2+6
\end{array}\right]=\left[\begin{array}{c}
2 \\
13
\end{array}\right]
\end{aligned}
See, it’s easy!
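As a quick sanity check (purely optional), NumPy reproduces the same answer for this example:

```python
import numpy as np

A = np.array([[8, 3, -4],
              [5, 1,  2]])
x = np.array([1, 2, 3])

print(A @ x)   # [ 2 13], matching the hand computation above
```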
Okay, but while it’s great how we can transform a product into a linear combination of column vectors, how does this help us solve a system of equations?
Connecting Systems of Equations with Matrix Equations
Well, this definition does more than just convert products into linear combinations. It allows us to represent systems of equations in three ways, as seen in the example below.
\begin{equation}
\begin{array}{c|c|c}
\text { System of Equations } & \text { Vector Equation } & \text { Matrix Equation } \\
\hline\left\{\begin{array}{l}
x+2 y=5 \\
2 x-y=3
\end{array}\right. & x\left(\begin{array}{l}
1 \\
2
\end{array}\right)+y\left(\begin{array}{c}
2 \\
-1
\end{array}\right)=\left(\begin{array}{l}
5 \\
3
\end{array}\right) & \left(\begin{array}{cc}
1 & 2 \\
2 & -1
\end{array}\right)\left(\begin{array}{l}
x \\
y
\end{array}\right)=\left(\begin{array}{l}
5 \\
3
\end{array}\right)
\end{array}
\end{equation}
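To see the payoff numerically, here is a short NumPy sketch (my own check, not part of the lesson) that solves the matrix-equation form of this system and confirms the vector-equation view gives back \(\vec{b}\):

```python
import numpy as np

A = np.array([[1,  2],
              [2, -1]])
b = np.array([5, 3])

x = np.linalg.solve(A, b)   # solves A @ x = b for a square, invertible A
print(x)                    # [2.2 1.4], i.e. x = 11/5, y = 7/5

# The vector-equation view: x*a1 + y*a2 should reproduce b
print(np.allclose(x[0] * A[:, 0] + x[1] * A[:, 1], b))  # True
```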
But here’s what’s so cool about translating a system of linear equations into a linear combination of vectors or a matrix equation: it simplifies our ability to solve the system.
Row Reduction and Consistency
More importantly, we can use row reduction (RREF) to help us solve the system!
In fact, consider \(\mathrm{A}\), an \(m \times n\) matrix, with columns \(\vec{a}_{1}, \ldots, \overrightarrow{a_{n}}\). Let \(\vec{b}\) be in \(\mathbb{R}^{m}\).
The matrix equation \(A \vec{x}=\vec{b}\) has the same solution set as the vector equation:
\[x_{1} \vec{a}_{1}+x_{2} \vec{a}_{2}+\cdots+x_{n} \overrightarrow{a_{n}}=\vec{b}\]
This vector equation, in turn, has the same solution set as the system of linear equations whose augmented matrix is:
\[\left[\begin{array}{lllll}\overrightarrow{a_{1}} & \overrightarrow{a_{2}} & \cdots & \overrightarrow{a_{n}} & \vec{b}\end{array}\right]\]
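For instance, a brief SymPy sketch (reusing the \(2 \times 2\) system from the table above) shows that row reducing this augmented matrix recovers the system's solution:

```python
from sympy import Matrix

# Augmented matrix [a1 a2 | b] for x + 2y = 5, 2x - y = 3
aug = Matrix([[1,  2, 5],
              [2, -1, 3]])

R, pivots = aug.rref()
print(R)   # Matrix([[1, 0, 11/5], [0, 1, 7/5]]) -> x = 11/5, y = 7/5
```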
Let’s look at an example to see how incredible this definition and theorem are by writing the following system of equations as a vector equation and a matrix equation.
\begin{align*}
\left\{\begin{array}{c}
8 x_{1}+x_{2}=2 \\
5 x_{1}-4 x_{2}=4 \\
x_{1}+3 x_{2}=1
\end{array}\right.
\end{align*}
\begin{equation}
\begin{array}{c|c|c}
\text { System of Equations } & \text { Vector Equation } & \text { Matrix Equation } \\
\hline\left\{\begin{array}{c}
8 x_1+x_2=2 \\
5 x_1-4 x_2=4 \\
x_1+3 x_2=1
\end{array}\right. & x_1\left(\begin{array}{c}
8 \\
5 \\
1
\end{array}\right)+x_2\left(\begin{array}{c}
1 \\
-4 \\
3
\end{array}\right)=\left(\begin{array}{c}
2 \\
4 \\
1
\end{array}\right) & \left(\begin{array}{cc}
8 & 1 \\
5 & -4 \\
1 & 3
\end{array}\right)\left(\begin{array}{l}
x_1 \\
x_2
\end{array}\right)=\left(\begin{array}{l}
2 \\
4 \\
1
\end{array}\right)
\end{array}
\end{equation}
Great. Now let’s create an augmented matrix and solve the system using row reduction.
\begin{equation}
\left[\begin{array}{ccc}
8 & 1 & 2 \\
5 & -4 & 4 \\
1 & 3 & 1
\end{array}\right] \underset{R R E F}{\sim}\left[\begin{array}{lll}
1 & 0 & 0 \\
0 & 1 & 0 \\
0 & 0 & 1
\end{array}\right]
\end{equation}
Uh oh! Because a pivot (the 1 in the bottom-right entry) lands in the augmented column, the \(b\)'s, this system is inconsistent and has no solution.
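If you want to double-check the row reduction, SymPy's rref() (one way to compute the RREF, shown here as a sketch) confirms both the reduced matrix and the pivot in the augmented column:

```python
from sympy import Matrix

aug = Matrix([[8,  1, 2],
              [5, -4, 4],
              [1,  3, 1]])

R, pivots = aug.rref()
print(R)        # Matrix([[1, 0, 0], [0, 1, 0], [0, 0, 1]])
print(pivots)   # (0, 1, 2) -- a pivot in the augmented column => inconsistent
```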
Vector Span and Matrix Equations
So, this means that the matrix equation \(A \vec{x}=\vec{b}\) has a solution if and only if \(\vec{b}\) is a linear combination of the columns of \(\mathrm{A}\).
And the existence of solutions theorem states that the equation \(A \vec{x}=\vec{b}\) is consistent for every \(\vec{b}\) in \(\mathbb{R}^{m}\) if and only if the columns of \(\mathrm{A}\) span \(\mathbb{R}^{m}\).
In other words, every \(\vec{b}\) in \(\mathbb{R}^{m}\) is a linear combination of the columns of \(\mathrm{A}\) exactly when the coefficient matrix \(\mathrm{A}\) has a pivot position in every row.
So, if we are asked to determine whether \(\vec{b}\) is in the span of the columns of \(\mathrm{A}\), all we have to do is verify that \(A \vec{x}=\vec{b}\) is consistent.
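Here is a hypothetical helper (the name in_span and the rank-comparison approach are my own, not from the lesson) that performs exactly this consistency check: \(A\vec{x}=\vec{b}\) is consistent precisely when \(\operatorname{rank}(A)=\operatorname{rank}([A \mid \vec{b}])\).

```python
import numpy as np

def in_span(A, b):
    """Return True if b is a linear combination of the columns of A."""
    A = np.asarray(A, dtype=float)
    b = np.asarray(b, dtype=float).reshape(-1, 1)
    augmented = np.hstack([A, b])   # the augmented matrix [A | b]
    return np.linalg.matrix_rank(A) == np.linalg.matrix_rank(augmented)

A = [[8, 1], [5, -4], [1, 3]]
print(in_span(A, [2, 4, 1]))    # False -- the inconsistent system above
print(in_span(A, [9, 1, 4]))    # True  -- here b = a1 + a2 is in the span
```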
Together we will learn how to write a matrix equation as a vector equation and vice versa, solve a matrix equation and write the solution as a vector, determine whether a vector is in a span, and discover how to ‘fix’ an equation so that it describes a set yielding a consistent solution.
Solving matrix equations is fun – let’s jump right in!