2. Jordan Theorem


Jordan form

Not every matrix can be diagonalized. The Jordan form of a matrix is a sensible generalization of a diagonal matrix, such that (nearly) every matrix can be transformed to a matrix in this form.

Instead of single numbers on the diagonal, in the Jordan form we allow small square matrices called Jordan blocks. A Jordan block is a matrix which has a fixed number on the diagonal, and ones immediately above the diagonal, e.g.

    \[\left[\begin{array}{ccc}a&1&0\\0&a&1\\0&0&a\end{array}\right].\]

So this is an example of a matrix in Jordan form:

    \[\left[\begin{array}{ccccc}2&1&0&0&0\\0&2&0&0&0\\0&0&3&0&0\\0&0&0&5&1\\0&0&0&0&5\end{array}\right].\]

Jordan theorem

The Jordan theorem states that for every matrix whose characteristic polynomial can be decomposed into factors of degree one, there exists a similar matrix which is in Jordan form. In particular, this holds over an algebraically closed field, e.g. over \mathbb{C}.

Translating this to the language of linear mappings \varphi\colon V\to V: for every mapping whose characteristic polynomial can be decomposed into factors of degree one, there exists a basis \mathcal{A}=\{v_1,v_2,\ldots,v_n\} (called a Jordan basis) such that the matrix of this mapping with respect to this basis is in Jordan form.

In particular, this means that if i_1,\ldots, i_r are the numbers of the columns in which the Jordan blocks start, then the vectors v_{i_1}, \ldots, v_{i_r} are eigenvectors for the eigenvalues a_1,\ldots, a_r, and the remaining vectors (those v_i with i>i_j but i<i_{j+1}, or i\leq n in the last block) satisfy \varphi(v_i)=a_jv_i+ v_{i-1} (this can be read off the matrix: the one immediately above the diagonal is responsible for adding the previous vector). The vectors v_{i_1}, \ldots, v_{i_r} are the first in their blocks, and v_{i_2-1}, \ldots, v_{i_r-1}, v_n are the last in their blocks.
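These relations can be checked computationally. Below is a minimal sketch using sympy, on an illustrative 2\times 2 matrix of my own choosing; sympy's jordan_form returns P and J with A=PJP^{-1}, so the columns of P form a Jordan basis.

```python
from sympy import Matrix

# An illustrative matrix (an assumption of this sketch); its characteristic
# polynomial is (lambda - 1)^2, so the Jordan theorem applies.
A = Matrix([[-1, 4],
            [-1, 3]])

# A = P * J * P**-1; the columns of P form a Jordan basis.
P, J = A.jordan_form()

v1, v2 = P[:, 0], P[:, 1]
print(J)                        # one 2x2 Jordan block for eigenvalue 1
print(A * v1 == 1 * v1)         # block-starting column: an eigenvector
print(A * v2 == 1 * v2 + v1)    # next column: phi(v_i) = a_j*v_i + v_{i-1}
```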

Jordan basis

Let us think how to find a Jordan basis (and prove Jordan Theorem). We are going to do it recursively.

  1. Find at least one eigenvalue a of the mapping \varphi.
  2. Consider the mapping \psi=\varphi-a\cdot id. Notice that:
    • a Jordan basis for \psi is also a Jordan basis for \varphi. Indeed, if \psi(v_i) =a_j v_i, then \varphi(v_i)= (a_j+a) v_i, but if \psi(v_i) =a_j v_i+v_{i-1}, then \varphi(v_i)= (a_j+a) v_i+v_{i-1}, thus instead of looking for a basis for \varphi we may look for a basis for \psi.
    • zero is an eigenvalue of \psi, because if v is an eigenvector of \varphi for eigenvalue a, then \psi(v)= av-av=0. Thus, the dimension of the kernel of \psi is greater than zero.
    • Therefore, the dimension of the image is less than n, where n is the dimension of the whole space.
  3. Find (recursively) a Jordan basis of \psi|_{\text{im}\,\psi}\colon \text{im}\,\psi\to \text{im}\,\psi, a mapping defined on a space of lower dimension than the original one (obviously, if the space has dimension 1, then any non-zero vector is an eigenvector, so it is a Jordan basis). Let this basis be v_1,\ldots, v_k. This requires finding the image of \psi first.
  4. Now, in subsequent steps we are going to add new vectors to this basis. Some of the vectors v_1,\ldots, v_k are eigenvectors for eigenvalue zero. Let us denote the indices of those vectors by i_1, \ldots, i_s. Those vectors are important, because:
    • they are in the kernel, which is the eigenspace for eigenvalue zero, so they are a basis of the kernel of \psi|_{im \psi}, i.e. ker\psi\cap im\psi, in particular the dimension of ker\psi\cap im\psi equals s,
    • they are the starting vectors of some Jordan blocks for \psi|_{\text{im}\, \psi}. Let the ending vectors of those blocks be v_{j_1},\ldots, v_{j_s}.
  5. The dimension of the kernel is obviously n-k. Complete the basis v_{i_1}, \ldots, v_{i_s} of \ker\psi\cap \text{im}\,\psi to a basis of the whole kernel by adding vectors z_1,\ldots, z_{n-k-s}. Since those are vectors from the kernel, they are eigenvectors for 0, so we can take them into our Jordan basis (e.g. at its end).
  6. Thus, we already have k+(n-k-s)=n-s vectors. We need s more vectors. The idea is to extend the s blocks starting with v_{i_1},\ldots, v_{i_s} and ending with v_{j_1},\ldots, v_{j_s} by one vector each. Namely, after each v_{j_l} we insert a vector w_{l}, for l=1,\ldots, s, such that \psi (w_l)=v_{j_l} (such w_l exists because v_{j_l}\in\text{im}\,\psi). This indeed extends each of those blocks, since \psi (w_l)=v_{j_l}+0\cdot w_l, where v_{j_l} is the previous vector and 0 is the eigenvalue of those blocks. This is how we get the Jordan basis which we are looking for.

        \[v_1,\ldots, v_{i_1},\ldots, v_{j_1}, w_1,v_{j_1+1},\ldots, v_{i_s},\ldots, v_{j_s}, w_s, v_{j_s+1},\ldots, v_{k}, z_1,\ldots, z_{n-k-s}.\]

  7. Indeed, it is a basis, because these vectors turn out to be linearly independent. If

        \[a_1v_1+\ldots+ a_{i_1}v_{i_1}+\ldots+ a_{j_1}v_{j_1}+ b_1w_1+a_{j_1+1}v_{j_1+1}+\ldots+ a_{i_s}v_{i_s}+\ldots+ a_{j_s}v_{j_s}+ b_sw_s+ a_{j_s+1}v_{j_s+1}+\]

        \[\ldots+ a_{k}v_{k}+ c_{1}z_1+\ldots+ c_{n-k-s}z_{n-k-s}=0,\]

    then, applying \psi to both sides, we get
        \[\psi(a_1v_1+\ldots+ a_{i_1}v_{i_1}+\ldots+ a_{j_1}v_{j_1}+ b_1w_1+a_{j_1+1}v_{j_1+1}+\ldots+ a_{i_s}v_{i_s}+\ldots+ a_{j_s}v_{j_s}+ b_sw_s+ a_{j_s+1}v_{j_s+1}+\ldots+ a_{k}v_{k}+\]

        \[+ c_{1}z_1+\ldots+ c_{n-k-s}z_{n-k-s})=0,\]

    But \psi(c_{1}z_1+\ldots+ c_{n-k-s}z_{n-k-s})=0, and this implies that b_1=\ldots = b_s=0, because

        \[\psi(a_1v_1+\ldots+ a_{i_1}v_{i_1}+\ldots+ a_{j_1}v_{j_1}+a_{j_1+1}v_{j_1+1}+\]

        \[\ldots+ a_{i_s}v_{i_s}+\ldots+ a_{j_s}v_{j_s}+  a_{j_s+1}v_{j_s+1}+\ldots+ a_{k}v_{k})\]

    is a linear combination of \{v_1,\ldots, v_k\}\setminus \{v_{j_1},\ldots, v_{j_s}\}, while
    \psi(b_1w_1+\ldots +b_s w_s)
    is a linear combination of \{v_{j_1},\ldots, v_{j_s}\}, and v_1,\ldots, v_k are linearly independent. Therefore, the initial equality reduces to

        \[a_1v_1+\ldots+ a_{i_1}v_{i_1}+\ldots+ a_{j_1}v_{j_1}+a_{j_1+1}v_{j_1+1}+\]

        \[\ldots+ a_{i_s}v_{i_s}+\ldots+ a_{j_s}v_{j_s}+  a_{j_s+1}v_{j_s+1}+\ldots+ a_{k}v_{k}+ c_{1}z_1+\ldots+ c_{n-k-s}z_{n-k-s}=0,\]

    which implies that c_1=\ldots=c_{n-k-s}=0: indeed, it follows that c_{1}z_1+\ldots+ c_{n-k-s}z_{n-k-s} is in the intersection of the kernel and the image, and thus it has to be the zero vector, because z_1,\ldots, z_{n-k-s} extend a basis of \ker\psi\cap\text{im}\,\psi to a basis of the kernel, so their span intersects \ker\psi\cap\text{im}\,\psi trivially. But then a_1=\ldots=a_k=0, because v_1,\ldots, v_k are linearly independent, which means that the system we are considering is indeed linearly independent.
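The recursion above can be sketched in code for the special case \psi^2=0, where \text{im}\,\psi\subseteq\ker\psi: then every vector of the image is already an eigenvector for zero (each block of \psi|_{\text{im}\,\psi} has size one), and step 6 extends each of them by one preimage. This is only a sketch under that assumption, using sympy; the function name and the particular solutions picked are my own choices.

```python
from sympy import Matrix

def jordan_basis_square_zero(N):
    """Jordan basis for N with N**2 == 0 (special case of steps 1-6)."""
    assert (N**2).is_zero_matrix
    basis, seen = [], Matrix(N.rows, 0, [])
    for v in N.columnspace():                  # steps 3-4: v is both first and last in its block
        sol, params = N.gauss_jordan_solve(v)  # step 6: solve N*w = v
        w = sol.subs({p: 0 for p in params})   # pick one particular solution
        basis += [v, w]                        # the block (v, w): N*v = 0, N*w = v
        seen = seen.row_join(v).row_join(w)
    for z in N.nullspace():                    # step 5: complete to a basis of ker N
        if seen.row_join(z).rank() > seen.rank():
            seen = seen.row_join(z)
            basis.append(z)                    # an extra 1x1 block for eigenvalue 0
    return basis

# The mapping psi from the worked example below satisfies psi**2 = 0:
N = Matrix([[1, -1, 0, 0], [1, -1, 0, 0], [3, 0, 3, -3], [4, -1, 3, -3]])
B = jordan_basis_square_zero(N)
P = Matrix.hstack(*B)
print(P.inv() * N * P)   # two 2x2 Jordan blocks for eigenvalue 0
```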

E.g. let us find a Jordan basis for

    \[\left[\begin{array}{cccc}3&-1&0&0\\1&1&0&0\\3&0&5&-3\\4&-1&3&-1\end{array}\right].\]

Let \varphi be a mapping defined by this matrix. We check its eigenvalues. w(\lambda)=(\lambda-2)^4, so the only eigenvalue is 2. Thus let

    \[\psi=\varphi -2 id.\]


    \[\psi((x,y,z,t))=(x-y,x-y,3x+3z-3t, 4x-y+3z-3t).\]

Consider \text{im} \psi. We get that

    \[\text{im}\, \psi = \text{lin}((1,1,3,4),(-1,-1,0,-1),(0,0,3,3),(0,0,-3,-3))=\text{lin}((1,1,0,1),(0,0,1,1)).\]

But \psi((1,1,0,1))=(0,0,0,0) and \psi((0,0,1,1))=(0,0,0,0), and since \dim \ker \psi =4-2=2, we get that \text{im}\,\psi = \ker \psi and (v_1=(1,1,0,1),v_2=(0,0,1,1)) is a Jordan basis for \psi |_{\text{im}\, \psi}, because those vectors are eigenvectors for eigenvalue zero. Both are starting and ending vectors of their blocks. We have to find w_1 and w_2 such that \psi(w_1)=(1,1,0,1) and \psi (w_2)=(0,0,1,1). We can take e.g. w_1=(0,-1,0,0) and w_2=(0,0,1/3,0). Then, \mathcal{A}=((1,1,0,1),(0,-1,0,0),(0,0,1,1),(0,0,1/3,0)) is a Jordan basis both for \psi and for \varphi, and

    \[M(\varphi)_{\mathcal{A}}^{\mathcal{A}}=\left[\begin{array}{cccc}2&1&0&0\\0&2&0&0\\0&0&2&1\\0&0&0&2\end{array}\right].\]

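This example can be double-checked with sympy. Here M is reconstructed as \psi + 2\cdot id from the formula for \psi above, and the columns of PA are the Jordan basis found in the example (with the exact fraction 1/3):

```python
from sympy import Matrix, Rational, eye

# psi read off its formula above; phi = psi + 2*id
Npsi = Matrix([[1, -1, 0, 0], [1, -1, 0, 0], [3, 0, 3, -3], [4, -1, 3, -3]])
M = Npsi + 2 * eye(4)

# the Jordan basis (1,1,0,1), (0,-1,0,0), (0,0,1,1), (0,0,1/3,0) as columns
PA = Matrix([[1,  0, 0, 0],
             [1, -1, 0, 0],
             [0,  0, 1, Rational(1, 3)],
             [1,  0, 1, 0]])

J = PA.inv() * M * PA
print(J)   # two Jordan blocks [[2,1],[0,2]] on the diagonal
```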
Powers of matrices in Jordan form

It is relatively easy to calculate powers of a matrix in Jordan form. The zeros under the diagonal and outside of the Jordan blocks stay where they are, and the n-th power of a matrix A in Jordan form simply has the n-th powers of its Jordan blocks on the diagonal.

You may notice that for a single Jordan block of size k\times k,

    \[\left[\begin{array}{cccccc}a&1&0&0&\ldots&0\\0&a&1&0&\ldots&0\\0&0&a&1&\ldots&0\\&\ldots&&\ldots&&\ldots\\0&0&0&0&\ldots&a\end{array}\right]^n=\left[\begin{array}{cccccc}a^n&\binom{n}{1} a^{n-1}&\binom{n}{2} a^{n-2}&\binom{n}{3} a^{n-3}&\ldots&\binom{n}{k-1} a^{n-k+1}\\0&a^n&\binom{n}{1} a^{n-1}&\binom{n}{2} a^{n-2}&\ldots&\binom{n}{k-2} a^{n-k+2}\\0&0&a^n&\binom{n}{1} a^{n-1}&\ldots&\binom{n}{k-3} a^{n-k+3}\\&\ldots&&\ldots&&\ldots\\0&0&0&0&\ldots&a^n\end{array}\right].\]
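This formula is easy to test on a small instance, e.g. a 3\times 3 block with a=2 and n=5, where it predicts the entries a^5=32, \binom{5}{1}a^4=80 and \binom{5}{2}a^3=80 (the sizes and values here are my own choices):

```python
from sympy import Matrix, binomial

a, n = 2, 5
J = Matrix([[a, 1, 0],
            [0, a, 1],
            [0, 0, a]])   # a single 3x3 Jordan block

# the right-hand side of the formula above
expected = Matrix([[a**n, binomial(n, 1) * a**(n - 1), binomial(n, 2) * a**(n - 2)],
                   [0,    a**n,                        binomial(n, 1) * a**(n - 1)],
                   [0,    0,                           a**n]])

print(J**n == expected)   # True
```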

In particular, this becomes easy if a=0, because then the n-th power of a k\times k Jordan block has zeros everywhere except the places n positions above the diagonal, where there are ones (if n<k); for n\geq k it is the zero matrix.

Generally, the rank of the n-th power of such a block equals k-n for n<k and zero otherwise. On the other hand, the rank of any power of a Jordan block with a non-zero number on the diagonal is always equal to its size.
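A quick illustration of these rank claims for k=4 (the size and the value a=2 are my own choices):

```python
from sympy import eye, zeros

k = 4
N = zeros(k)
for i in range(k - 1):
    N[i, i + 1] = 1       # the Jordan block for eigenvalue 0

# rank of N**n is k - n for n < k, then zero
print([(N**n).rank() for n in range(1, k + 1)])    # [3, 2, 1, 0]

# a block with a nonzero diagonal is invertible, so every power has full rank
J2 = N + 2 * eye(k)
print([(J2**n).rank() for n in range(1, k + 1)])   # [4, 4, 4, 4]
```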

Finding Jordan form using powers

Notice that this means that the rank of a Jordan block for eigenvalue zero decreases by one each time we take a higher power, until it reaches zero. This observation (and the fact that the rank of a matrix does not depend on the basis) can be used to determine a similar matrix in Jordan form without calculating a Jordan basis.

Namely, the value
    \[q_{n}=r((A-aI)^{n-1})- r((A-aI)^{n})\]

equals the number of Jordan blocks for eigenvalue a of size \geq n (because exactly those blocks decrease their rank by one when the power changes from n-1 to n).
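For instance, for the mapping \varphi from the example above (reconstructed here as \psi+2\cdot id from the formula for \psi), these rank differences recover the block structure without computing any basis:

```python
from sympy import Matrix, eye

Npsi = Matrix([[1, -1, 0, 0], [1, -1, 0, 0], [3, 0, 3, -3], [4, -1, 3, -3]])
A = Npsi + 2 * eye(4)    # phi from the worked example
a = 2

def q(n):
    # number of Jordan blocks for eigenvalue a of size >= n
    return ((A - a * eye(4))**(n - 1)).rank() - ((A - a * eye(4))**n).rank()

print([q(n) for n in range(1, 5)])   # [2, 2, 0, 0]: two blocks, both of size exactly 2
```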

Jordan form and similar matrices

Every matrix over an algebraically closed field is similar to a matrix in Jordan form. Such a matrix is unique up to the order of blocks. This can be used to determine whether two matrices are similar: they are similar if and only if they have the same Jordan form (up to the order of blocks).
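For example, the 2\times 2 matrix with ones on the diagonal and a one above it has the same characteristic polynomial (\lambda-1)^2 as the identity, but a different Jordan form, so the two are not similar. A quick sympy check:

```python
from sympy import Matrix, eye

A = Matrix([[1, 1],
            [0, 1]])
B = eye(2)

print(A.charpoly() == B.charpoly())              # True: both (lambda - 1)**2
print(A.jordan_form()[1] == B.jordan_form()[1])  # False: different Jordan forms
```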

Calculating powers of matrices

Similarly as in the case of diagonalizable matrices, we can use the Jordan form to calculate a power of a matrix whose characteristic polynomial can be decomposed into factors of degree one.

E.g. let us calculate

    \[\left[\begin{array}{cc}-1&4\\-1&3\end{array}\right]^{122}.\]

We have

    \[w(\lambda)=(-1-\lambda)(3-\lambda)+4=\lambda^2-2\lambda+1=(\lambda-1)^2,\]

so the only eigenvalue is 1.
So the matrix in Jordan form is

    \[\left[\begin{array}{cc}1&1\\0&1\end{array}\right],\]

because this is the only possible Jordan form for eigenvalue 1 apart from the identity matrix, and our matrix is not the identity.
It is easy to notice that

    \[\left[\begin{array}{cc}1&1\\0&1\end{array}\right]^{122}=\left[\begin{array}{cc}1&122\\0&1\end{array}\right].\]

It is clear that an eigenvector for eigenvalue 1 is (2,1), and (\varphi-id)(x,y)=(-2x+4y, -x+2y). Let us find v such that (\varphi-id)(v)=(2,1). It is e.g. v=(5,3). Thus, \mathcal{A}=((2,1),(5,3)) is a Jordan basis.
We get M(id)_{\mathcal{A}}^{st}=\left[\begin{array}{cc}2&5\\1&3\end{array}\right]
and M(id)_{st}^{\mathcal{A}}=\left[\begin{array}{cc}3&-5\\-1&2\end{array}\right].

Therefore,
    \[\left[\begin{array}{cc}-1&4\\-1&3\end{array}\right]^{122}=\left[\begin{array}{cc}2&5\\1&3\end{array}\right]\cdot \left[\begin{array}{cc}1&122\\0&1\end{array}\right]\cdot \left[\begin{array}{cc}3&-5\\-1&2\end{array}\right]=\left[\begin{array}{cc}-243&488\\-122&245\end{array}\right].\]
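As a sanity check, sympy confirms the computation (and, since (A-I)^2=0 here, the same matrix also comes out of A^{122}=I+122(A-I)):

```python
from sympy import Matrix, eye

A = Matrix([[-1, 4],
            [-1, 3]])
S = Matrix([[2, 5], [1, 3]])   # the change-of-basis matrix from above
J = Matrix([[1, 1], [0, 1]])

print(A == S * J * S.inv())    # True
print(A**122)                  # equals S * J**122 * S.inv()
print(A**122 == eye(2) + 122 * (A - eye(2)))   # True, because (A - eye(2))**2 == 0
```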