Eigenvalues and Eigenvectors are useful throughout pure and applied mathematics and are used to study differential equations and continuous dynamical systems.
In fact, eigenvalues provide essential information in engineering design and machine learning, and they arise naturally in fields such as physics and chemistry.
We can see their uses in the resonant frequencies of electrical systems, the oscillations of structures such as bridges and buildings, and the tuning of instruments to produce more pleasant sounds or to dampen noise and vibrations.
So, what is an eigenvalue and an eigenvector anyway?
Before we get to the formal definition, let’s do some exploring.
Understanding Eigenvalues and Eigenvectors with an Example
Suppose you live at the origin \((0,0)\) in a two-dimensional world, and your friend, Liam, lives at the ordered pair \((4,4)\) and your other friend, Evelyn, lives at the ordered pair \((8,8)\).
To get to Liam’s house, you must travel 4 units to the right along the \(x\)-axis and 4 units up along the \(y\)-axis. This can be represented as the 2D vector \(\langle 4,4\rangle\).
Now, if you and Liam then want to visit Evelyn, you must travel another 4 units to the right along the \(x\)-axis and 4 units up along the \(y\)-axis. This 2D vector can be represented as \(\langle 4,4\rangle\) as well.
But observe how these vectors are related.
The vector from your house to Evelyn’s house is found by scaling the vector from your house to Liam’s house by 2, since \(2\langle 4,4\rangle=\langle 8,8\rangle\).
Notice that all three houses are collinear (on the same line), and their distances are just scalar multiples of each other. Hence, after a transformation (i.e., walking/translating from one house to another), the direction of the vector remains unchanged. Just the length (magnitude) is different.
The scale factor is the eigenvalue, and a vector whose direction is preserved under the transformation is called an eigenvector.
Formal Definitions of Eigenvalues and Eigenvectors
So, an eigenvector of an \(n \times n\) matrix \(A\) is a nonzero vector \(\vec{x}\) such that \(A \vec{x}=\lambda \vec{x}\) for some scalar \(\lambda\). A scalar \(\lambda\) is called an eigenvalue of \(A\) if there is a nontrivial solution \(\vec{x}\) of \(A \vec{x}=\lambda \vec{x}\); such an \(\vec{x}\) is called an eigenvector corresponding to \(\lambda\).
Key Takeaways and Fun Facts
Here are three big takeaways from this important definition.
First, eigenvalues and eigenvectors only apply to square matrices.
Second, eigenvectors cannot be the zero vector, as they are defined to be nonzero, but eigenvalues, which are scalars, can be any real number, including zero.
Third, an eigenvector of \(A\) is a vector \(\vec{x}\) such that \(A \vec{x}\) is collinear with \(\vec{x}\) and the origin. Since \(A \vec{x}=\lambda \vec{x}\) can be written as \(A \vec{x}-\lambda \vec{x}=\overrightarrow{0}\), the vectors \(A \vec{x}\) and \(\vec{x}\) must lie on the same line through the origin, which indicates that \(A \vec{x}\) is a scalar multiple of \(\vec{x}\) with scale factor \(\lambda\).
And here’s a fun fact: in German, the prefix “eigen” means “self” or “characteristic.” Thus, an eigenvector is a vector that the transformation \(A\) maps to a multiple of itself. It is also called a characteristic vector, as it represents a “characteristic” of \(A\).
Practical Examples: Verifying Eigenvectors and Finding Eigenvalues
Okay, so let’s put all this information into action with a few examples.
Let \(A=\left[\begin{array}{cc}3 & -2 \\ 1 & 0\end{array}\right]\), \(\vec{u}=\left[\begin{array}{c}2 \\ 1\end{array}\right]\), and \(\vec{v}=\left[\begin{array}{c}-1 \\ 1\end{array}\right]\). Are \(\vec{u}\) and \(\vec{v}\) eigenvectors of \(A\)? Well, knowing that an eigenvector satisfies the equation \(A \vec{x}=\lambda \vec{x}\), all we have to do is compute \(A\vec{u}\) and \(A\vec{v}\) separately and see whether the results are scalar multiples of \(\vec{u}\) and \(\vec{v}\).
\begin{aligned}
A u & \stackrel{?}{=} \lambda u \\
\underbrace{\left[\begin{array}{cc}
3 & -2 \\
1 & 0
\end{array}\right]}_{A} \underbrace{\left[\begin{array}{l}
2 \\
1
\end{array}\right]}_{u} & =\left[\begin{array}{l}
4 \\
2
\end{array}\right]=\underbrace{2}_{\lambda} \underbrace{\left[\begin{array}{l}
2 \\
1
\end{array}\right]}_{u}
\end{aligned}
Yes, this means that \(u\) is an eigenvector corresponding to an eigenvalue of 2.
Now it’s time to check if \(\vec{v}\) is an eigenvector of matrix \(A\).
\begin{aligned}
A v & \stackrel{?}{=} \lambda v \\
\underbrace{\left[\begin{array}{cc}
3 & -2 \\
1 & 0
\end{array}\right]}_{A} \underbrace{\left[\begin{array}{c}
-1 \\
1
\end{array}\right]}_{v} & =\left[\begin{array}{l}
-5 \\
-1
\end{array}\right]
\end{aligned}
Sadly, \(A\vec{v}\) does not yield a multiple of \(\vec{v}\), so we can conclude that \(\vec{v}\) is not an eigenvector of \(A\).
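The two checks above are easy to reproduce numerically. Here is a minimal sketch using NumPy (NumPy is not part of the lesson itself, just a quick way to verify the arithmetic):

```python
import numpy as np

A = np.array([[3, -2],
              [1, 0]])
u = np.array([2, 1])
v = np.array([-1, 1])

Au = A @ u   # [4, 2] = 2 * u, so u is an eigenvector with eigenvalue 2
Av = A @ v   # [-5, -1], which is not a scalar multiple of [-1, 1]

print(Au / u)   # [2. 2.] -> same ratio in every component
print(Av / v)   # [5. -1.] -> ratios differ, so v is not an eigenvector
```

Componentwise ratios are a handy shortcut here: a vector is mapped to a multiple of itself exactly when every component is scaled by the same factor.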
Alright, so when we’re given a vector, we can verify whether or not it’s an eigenvector and find its eigenvalue, as the example above nicely shows.
Finding Eigenvectors Given an Eigenvalue
But what if we aren’t given an eigenvalue and are asked to find the eigenvector?
For example, let’s show that 7 is an eigenvalue of \(A=\left[\begin{array}{ll}1 & 6 \\ 5 & 2\end{array}\right]\) and find its corresponding eigenvector.
First, we will transform our formula \(A \vec{x}=\lambda \vec{x}\):
\begin{aligned}
A \vec{x} &= \lambda \vec{x} \\
A \vec{x} - \lambda \vec{x} &= \overrightarrow{0} \\
(A-\lambda I) \vec{x} &= \overrightarrow{0}
\end{aligned}
Now let’s make our substitutions for \(A\) and \(\lambda\):
\begin{aligned}
A-\lambda I &= \left[\begin{array}{ll}
1 & 6 \\
5 & 2
\end{array}\right]-7\left[\begin{array}{ll}
1 & 0 \\
0 & 1
\end{array}\right] \\
&= \left[\begin{array}{ll}
1 & 6 \\
5 & 2
\end{array}\right]-\left[\begin{array}{ll}
7 & 0 \\
0 & 7
\end{array}\right] \\
&= \left[\begin{array}{cc}
-6 & 6 \\
5 & -5
\end{array}\right]
\end{aligned}
Next, we solve the equation for \(\vec{x}\) by augmenting with the zero vector and row reducing.
\begin{aligned}
\left[\begin{array}{ccc}
-6 & 6 & 0 \\
5 & -5 & 0
\end{array}\right] & \sim \left[\begin{array}{ccc}
1 & -1 & 0 \\
0 & 0 & 0
\end{array}\right] \\
x_{1} & = x_{2} \\
x_{2} & = x_{2} \\
\vec{x} & = x_{2}\left[\begin{array}{l}
1 \\
1
\end{array}\right]
\end{aligned}
So, every vector of the form \(\vec{x}=x_{2}\left[\begin{array}{l}1 \\ 1\end{array}\right]\), where \(x_{2} \neq 0\), is an eigenvector corresponding to the eigenvalue 7.
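We can double-check this eigenpair with NumPy’s built-in eigensolver. A sketch (note that `np.linalg.eig` returns unit-length eigenvectors, so the answer comes back as a scaled copy of \(\langle 1,1\rangle\)):

```python
import numpy as np

A = np.array([[1, 6],
              [5, 2]])

# np.linalg.eig returns the eigenvalues and unit-length eigenvectors (as columns)
vals, vecs = np.linalg.eig(A)

i = np.argmax(np.isclose(vals, 7))   # locate the eigenvalue 7
x = vecs[:, i]

print(np.sort(vals))   # [-4.  7.] -> 7 is indeed an eigenvalue of A
print(x[0] / x[1])     # approximately 1.0 -> x is a multiple of [1, 1]
```

The solver also reveals the second eigenvalue, \(-4\), which the hand computation above did not ask for.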
Introduction to Eigenspaces
In fact, what we have just found is the basis for the corresponding eigenspace.
We’ve just observed that finding the eigenvectors for a given eigenvalue involves solving a homogeneous system of equations. And the subspace \(\operatorname{Nul}(A-\lambda I)\) is the eigenspace of \(\mathrm{A}\) corresponding to \(\lambda\).
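Since the eigenspace is exactly \(\operatorname{Nul}(A-\lambda I)\), we can also compute a basis for it directly. A sketch using SciPy’s `null_space` (which returns an orthonormal basis, so the basis vector is a scaled copy of \(\langle 1,1\rangle\)):

```python
import numpy as np
from scipy.linalg import null_space

A = np.array([[1, 6],
              [5, 2]])

# Basis for Nul(A - 7I): the eigenspace of A corresponding to lambda = 7
basis = null_space(A - 7 * np.eye(2))

print(basis.shape)                # (2, 1) -> the eigenspace is one-dimensional
print(basis[0, 0] / basis[1, 0])  # approximately 1.0 -> a multiple of [1, 1]
```

A one-column result confirms the eigenspace is a line through the origin, matching the row reduction above.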
Cool, right?
Next Steps
In this lesson, you will:
- Examine definitions, theorems, and properties
- Learn how to find corresponding eigenvalues given an eigenvector
- Discover how to find an eigenvector given its corresponding eigenvalue
- Find a basis for the corresponding eigenspace
Jump right in and start exploring!