If you have tried to imagine a linear map of a plane, you usually imagine that it stretches or squeezes the plane along some directions. It would be nice to know whether a given linear map $\varphi$ (a map whose domain and range coincide is called an endomorphism) is actually of this type. In other words, we would like to know whether there exists a non-zero vector $v$ and a scalar $\lambda$ such that $\varphi$ simply multiplies $v$ by $\lambda$ (so it stretches or squeezes the space in the direction of $v$), so:

$$\varphi(v) = \lambda \cdot v.$$
If $v$ and $\lambda$ have such properties, then $v$ is said to be an eigenvector of $\varphi$ and $\lambda$ an eigenvalue.
Notice that if $\lambda$ is an eigenvalue of a map $\varphi$ and $v$ is its eigenvector, then $\varphi(v) - \lambda \cdot v = 0$. Therefore, if $M$ is the matrix of $\varphi$ (in the standard basis), then

$$(M - \lambda I) \cdot v = 0,$$

where $I$ is the identity matrix.
Since multiplication of a matrix by a vector gives a linear combination of its columns, and $v$ is a non-zero vector, we see that the columns of $M - \lambda I$ can be non-trivially combined to get the zero vector! This is possible if and only if $\det(M - \lambda I) = 0$.
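This criterion is easy to check numerically. A minimal NumPy sketch, using a hypothetical 2×2 matrix (not the matrix of the text's example), could look as follows:

```python
import numpy as np

# Hypothetical matrix chosen only for illustration.
M = np.array([[2.0, 1.0],
              [1.0, 2.0]])
I = np.eye(2)

lam = 3.0  # 3 is an eigenvalue of this M (eigenvector (1, 1))
print(np.linalg.det(M - lam * I))   # ~0: columns are linearly dependent

lam = 2.5  # not an eigenvalue of this M
print(np.linalg.det(M - lam * I))   # clearly non-zero
```

The determinant vanishes exactly for those values of $\lambda$ that are eigenvalues.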
How does one find the eigenvalues of a map? One simply needs to solve the equation $\det(M - \lambda I) = 0$. E.g., let $M$ be the matrix of $\varphi$. Then:
So we have to solve the following:
And therefore the eigenvalues are $\lambda_1$ and $\lambda_2$.
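As a sketch of how this works in practice (in NumPy, with a hypothetical 2×2 matrix, since the concrete matrix of the example is not restated here): for a 2×2 matrix the characteristic equation expands to $\lambda^2 - \operatorname{tr}(M)\,\lambda + \det(M) = 0$, and its roots are the eigenvalues.

```python
import numpy as np

# Hypothetical matrix for illustration.
M = np.array([[2.0, 1.0],
              [1.0, 2.0]])

# Coefficients of the characteristic polynomial of a 2x2 matrix:
# lambda^2 - tr(M)*lambda + det(M)
coeffs = [1.0, -np.trace(M), np.linalg.det(M)]

print(sorted(np.roots(coeffs)))      # eigenvalues from the polynomial
print(sorted(np.linalg.eigvals(M)))  # the library routine agrees
```

Both ways give the same set of eigenvalues of the sample matrix.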
Now let’s find the eigenvectors related to the subsequent eigenvalues. Notice that since $\varphi$ is a linear map, if $v, w$ are eigenvectors for an eigenvalue $\lambda$, then for any scalar $a$ also $v + w$ and $a \cdot v$ are eigenvectors for $\lambda$. Therefore, the set of all eigenvectors for $\lambda$ forms a linear subspace. Notice that an eigenvector $v$ satisfies the equation

$$(M - \lambda I) \cdot v = 0,$$

so the space of eigenvectors (i.e. the eigenspace) for $\lambda$ (denoted $V_\lambda$) is given by the following system of equations:
and we can easily find its basis.
In our example, let us find a basis of $V_{\lambda_1}$, so let $\lambda = \lambda_1$. Then:
Therefore, we have the following system of equations:
The space of solutions is the eigenspace $V_{\lambda_1}$, and we can read off its basis. Indeed, multiplying $M$ by each of the basis vectors gives that vector multiplied by $\lambda_1$.
Let’s find a basis of $V_{\lambda_2}$, so let $\lambda = \lambda_2$. Then:
The system of equations:
In the reduced "stair-like" (row echelon) form:
The space of solutions is the eigenspace $V_{\lambda_2}$, and we can read off its basis. Indeed, multiplying $M$ by the basis vector gives that vector multiplied by $\lambda_2$.
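The computations above can be mirrored numerically: the eigenspace $V_\lambda$ is the kernel of $M - \lambda I$, which can be extracted, for instance, from the singular value decomposition. A NumPy sketch, again with a hypothetical matrix in place of the example's:

```python
import numpy as np

# Hypothetical matrix for illustration.
M = np.array([[2.0, 1.0],
              [1.0, 2.0]])

def eigenspace_basis(M, lam, tol=1e-10):
    """Basis of the kernel of M - lam*I, i.e. of the eigenspace V_lam."""
    A = M - lam * np.eye(M.shape[0])
    _, s, vt = np.linalg.svd(A)
    # Right-singular vectors whose singular value is ~0 span the kernel.
    return vt[s < tol]

basis = eigenspace_basis(M, 3.0)
v = basis[0]                          # here: one basis vector, along (1, 1)
print(np.allclose(M @ v, 3.0 * v))    # True: M stretches v by the eigenvalue
```

Each returned row is an eigenvector, and together they form a basis of the eigenspace.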
If the sum of the dimensions of the eigenspaces related to the eigenvalues of a given map equals the dimension of the whole space (as in our example), then the basis of the whole space which consists of the vectors from the bases of the eigenspaces is called an eigenvector basis.
If a map has an eigenvector basis, then it can actually be described by means of squeezing and stretching in the directions of the eigenvectors. Notice that the matrix of such a map in an eigenvector basis is a diagonal matrix (it has non-zero elements only on its diagonal), with the eigenvalues related to the subsequent eigenvectors on its diagonal. In our example:
It may happen that a map has no eigenvectors (e.g. a rotation of the plane), or that the eigenspaces are too small (e.g. a 10-degree rotation of three-dimensional space around an axis has only a one-dimensional space of eigenvectors).
Diagonalization of a matrix
A matrix $M$ is diagonalizable if there exists an invertible matrix $C$ such that:

$$C^{-1} M C = D,$$

where $D$ is a diagonal matrix.
How does one check this, and diagonalize a matrix if it is possible? Simply consider the linear map $\varphi$ such that $M$ is its matrix in the standard basis. The matrix $M$ is diagonalizable if and only if $\varphi$ has an eigenvector basis $\mathcal{A}$. Then $D$ is the matrix of $\varphi$ in the basis $\mathcal{A}$ (so it is diagonal, with the eigenvalues on its diagonal), and $C$ is the change-of-basis matrix whose columns are the vectors of $\mathcal{A}$, since $C^{-1} M C$ is exactly the matrix of $\varphi$ after this change of basis.
E.g., we know that the matrix $M$ from our example is diagonalizable, since the map $\varphi$ related to this matrix has an eigenvector basis. Furthermore, in this case:
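As a numerical sanity check (in NumPy, with a hypothetical matrix standing in for the example's): taking $C$ to be the matrix whose columns are eigenvectors, the product $C^{-1} M C$ indeed comes out diagonal.

```python
import numpy as np

# Hypothetical matrix for illustration.
M = np.array([[2.0, 1.0],
              [1.0, 2.0]])

eigvals, C = np.linalg.eig(M)        # columns of C are eigenvectors
D = np.linalg.inv(C) @ M @ C         # change of basis to the eigenbasis

print(np.round(D, 10))                   # diagonal, eigenvalues on the diagonal
print(np.allclose(D, np.diag(eigvals)))  # True
```

The order of the eigenvalues on the diagonal of $D$ matches the order of the eigenvector columns in $C$.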
Calculating powers of diagonalizable matrices
Diagonalization of a matrix has the following interesting application. We can use it to calculate a power of a matrix, if the matrix is diagonalizable. Notice that if $M = C D C^{-1}$, then:

$$M^n = (C D C^{-1})^n = C D C^{-1} \cdot C D C^{-1} \cdots C D C^{-1} = C D^n C^{-1},$$

and if $\mathcal{A}$ is a basis of eigenvectors, then $D$ is a diagonal matrix, so calculating its power is simply calculating the powers of the elements on the diagonal.
Let us illustrate this with an example. Let us calculate:
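Since the worked computation depends on the example matrix, here is the same idea sketched in NumPy with a hypothetical matrix: $M^n = C D^n C^{-1}$, and $D^n$ only requires raising the diagonal entries to the $n$-th power.

```python
import numpy as np

# Hypothetical matrix for illustration.
M = np.array([[2.0, 1.0],
              [1.0, 2.0]])
n = 10

eigvals, C = np.linalg.eig(M)              # columns of C are eigenvectors
Dn = np.diag(eigvals ** n)                 # power of a diagonal matrix
Mn = C @ Dn @ np.linalg.inv(C)             # M^n = C D^n C^{-1}

print(np.round(Mn))
print(np.allclose(Mn, np.linalg.matrix_power(M, n)))  # True
```

For large $n$ this replaces repeated matrix multiplication by $n$-th powers of scalars plus two fixed matrix products.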