10. Isometries, self-adjoint mappings and unitary spaces


Isometries of linear euclidean spaces

A linear mapping \varphi\colon V_1\to V_2, where \langle V_1, \langle\cdot,\cdot \rangle_1\rangle and \langle V_2, \langle\cdot,\cdot \rangle_2\rangle are linear euclidean spaces, is an isometry if one of the following equivalent conditions holds:

  • \varphi is a linear isomorphism and preserves the inner product, i.e.

        \[\forall_{v,w\in V_1} \langle v,w\rangle_1=\langle \varphi(v),\varphi(w)\rangle_2,\]

  • \varphi is a linear isomorphism and preserves the lengths of vectors, i.e.

        \[\forall_{v\in V_1} \|v\|_1=\|\varphi(v)\|_2,\]

  • \varphi takes some orthonormal basis of V_1 onto an orthonormal basis of V_2,
  • \varphi takes any orthonormal basis of V_1 onto an orthonormal basis of V_2.
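The equivalence of the first two conditions follows from the polarization identity, which recovers the inner product from the norm:

    \[\langle v,w\rangle_1=\tfrac{1}{2}\left(\|v+w\|_1^2-\|v\|_1^2-\|w\|_1^2\right),\]

so a linear isomorphism that preserves lengths also preserves inner products; the converse is immediate, since \|v\|_1^2=\langle v,v\rangle_1.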

Orthogonal matrices

A square matrix A is orthogonal if A\cdot A^T=I.

In other words, a matrix is orthogonal if its rows (equivalently columns) form an orthonormal basis with respect to the standard inner product.

Orthogonal matrices are invertible and A^T=A^{-1} for such matrices.
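As a quick numerical illustration (a sketch only; the helper names below are made up for this example), orthogonality can be tested by checking A\cdot A^T=I directly:

```python
import math

def mat_mul(A, B):
    # product of two square matrices given as lists of rows
    n = len(A)
    return [[sum(A[i][k] * B[k][j] for k in range(n)) for j in range(n)]
            for i in range(n)]

def is_orthogonal(A, eps=1e-9):
    # A is orthogonal iff A * A^T is the identity matrix,
    # i.e. iff the rows of A form an orthonormal system
    n = len(A)
    AT = [list(col) for col in zip(*A)]
    P = mat_mul(A, AT)
    return all(abs(P[i][j] - (1.0 if i == j else 0.0)) < eps
               for i in range(n) for j in range(n))

a = math.pi / 6
R = [[math.cos(a), -math.sin(a)],
     [math.sin(a),  math.cos(a)]]
print(is_orthogonal(R))   # a rotation matrix is orthogonal
```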

Notice that \varphi\colon V_1\to V_2 is an isometry if and only if M(\varphi)_{\mathcal{A}}^{\mathcal{B}} is orthogonal for some (equivalently, any) orthonormal bases \mathcal{A}, \mathcal{B} of the spaces V_1 and V_2, respectively.

Rotations and perpendicular reflections

If \varphi\colon V\to V is a perpendicular reflection across W, and if \mathcal{A} consists of an orthonormal basis of W followed by an orthonormal basis of W^\bot, then we get

    \[M(\varphi)_{\mathcal{A}}^{\mathcal{A}}=\left[\begin{array}{cccccccc}1&0&\ldots &0&0&0&\ldots&0\\0&1&\ldots &0&0&0&\ldots&0\\&\ldots &&\ldots &&\ldots&&\ldots \\0&0&\ldots&1&0&0&\ldots&0\\0&0&\ldots&0&-1&0&\ldots&0\\ 0&0&\ldots&0&0&-1&\ldots&0\\&\ldots &&\ldots &&\ldots&&\ldots \\0&0&\ldots&0&0&0&\ldots&-1\end{array}\right].\]

Obviously a perpendicular reflection is an isometry.

If W is a two-dimensional subspace of V, then the rotation around W^{\bot} by \alpha has the following matrix

    \[M(\varphi)_{\mathcal{A}}^{\mathcal{A}}=\left[\begin{array}{ccccc}\cos \alpha&-\sin \alpha&0&\ldots &0\\\sin\alpha&\cos \alpha&0&\ldots&0\\0&0&1&\ldots&0\\&&\ldots&&\ldots\\0&0&0&\ldots&1\end{array}\right]\]

where \mathcal{A} consists of an orthonormal basis of W followed by an orthonormal basis of W^\bot. A rotation is an isometry.

It is easy to show that every isometry of a two-dimensional linear euclidean space is either a perpendicular reflection or a rotation.
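Numerically, the two cases are separated by the determinant: a 2×2 orthogonal matrix has determinant +1 for a rotation and -1 for a reflection. A minimal sketch (the function name is made up):

```python
import math

def classify_isometry_2d(A, eps=1e-9):
    # assumes A is the matrix of a 2D isometry in an orthonormal basis,
    # i.e. a 2x2 orthogonal matrix; its determinant is +1 or -1
    d = A[0][0] * A[1][1] - A[0][1] * A[1][0]
    if abs(d - 1.0) < eps:
        return "rotation"
    if abs(d + 1.0) < eps:
        return "reflection"
    raise ValueError("matrix is not orthogonal")

a = math.pi / 4
rotation = [[math.cos(a), -math.sin(a)],
            [math.sin(a),  math.cos(a)]]
reflection = [[math.cos(a),  math.sin(a)],
              [math.sin(a), -math.cos(a)]]   # reflection across a line
print(classify_isometry_2d(rotation), classify_isometry_2d(reflection))
```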

Isometries of affine euclidean spaces

A mapping f\colon H\to M between affine euclidean spaces is called an isometry if one of the following equivalent conditions holds:

  • f is an affine isomorphism and f'\colon T(H)\to T(M) is an isometry of linear euclidean spaces,
  • f is an affine mapping and the matrix of f' with respect to orthonormal bases of T(H) and T(M) is orthogonal.

One can prove that f\colon H\to H is an isometry if and only if it preserves the distance between points, i.e. d(p,q)=d(f(p),f(q)) for any p,q\in H.

Mappings preserving the measure of parallelepipeds

Notice that if R\subseteq H is an n-dimensional parallelepiped and f\colon H\to M is an affine isomorphism, then f[R] is also an n-dimensional parallelepiped.

Moreover, if R is an n-dimensional parallelepiped in an n-dimensional space, then \mu_n(f[R])=|\det f'|\cdot \mu_n(R). Indeed, if R=R(p_0;v_1,\ldots, v_n), then v_1,\ldots,v_n form a basis \mathcal{A}, and f'(v_1),\ldots, f'(v_n) form a basis \mathcal{A}'. Then for an orthonormal basis \mathcal{B}, we get

    \[\mu_n(f[R])=\sqrt{W(f'(v_1),\ldots, f'(v_n))}=|\det M(id)_{\mathcal{A}'}^{\mathcal{B}}|=|\det M(f)_{\mathcal{A}}^{\mathcal{B}}|=\]

    \[=|\det M(f)_{\mathcal{B}}^{\mathcal{B}} M(id)_{\mathcal{A}}^{\mathcal{B}}|=|\det f'|\sqrt{W(v_1,\ldots, v_n)}=|\det f'|\cdot\mu_n(R).\]

Thus, an affine isomorphism f between n-dimensional affine euclidean spaces preserves the measure of n-dimensional parallelepipeds if and only if \det f'=\pm 1.
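The scaling rule \mu_n(f[R])=|\det f'|\cdot \mu_n(R) can be checked numerically in dimension 2, where the measure of a parallelogram is the absolute value of a determinant (a sketch; the helper names are made up):

```python
def det2(A):
    # determinant of a 2x2 matrix
    return A[0][0] * A[1][1] - A[0][1] * A[1][0]

def apply2(A, v):
    # apply a 2x2 matrix (the linear part f') to a vector
    return [A[0][0] * v[0] + A[0][1] * v[1],
            A[1][0] * v[0] + A[1][1] * v[1]]

def parallelogram_area(v1, v2):
    # mu_2 of the parallelogram spanned by v1, v2 is |det [v1 v2]|
    return abs(det2([[v1[0], v2[0]], [v1[1], v2[1]]]))

v1, v2 = [2.0, 0.0], [1.0, 3.0]
F = [[1.0, 2.0], [0.0, 2.0]]        # linear part f', det f' = 2

area = parallelogram_area(v1, v2)   # 6.0
image_area = parallelogram_area(apply2(F, v1), apply2(F, v2))
print(image_area, abs(det2(F)) * area)   # both equal 12.0
```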

Self-adjoint mappings

An endomorphism of a linear Euclidean space \varphi\colon V\to V is self-adjoint, if for any v,w\in V, \langle v,\varphi(w)\rangle = \langle \varphi(v),w\rangle.

Notice that if \mathcal{A}=(v_1,\ldots,v_n) is an orthonormal basis of V, then the matrix M(\varphi)_{\mathcal{A}}^{\mathcal{A}} is symmetric! Indeed, this is because \langle v_i, \varphi(v_j)\rangle is the i-th coordinate of \varphi(v_j) with respect to \mathcal{A}. Conversely, if for some orthonormal basis \mathcal{A} the matrix M(\varphi)_{\mathcal{A}}^{\mathcal{A}} is symmetric, then \varphi is self-adjoint.

It is easy to notice that if v is an eigenvector of \varphi and w\bot v, then \varphi(w)\bot v. Moreover, if v,w are eigenvectors for different eigenvalues, then v\bot w. Indeed, if those eigenvalues are a\neq b, then

    \[b\langle v,w\rangle =\langle v,bw\rangle=\langle v,\varphi(w)\rangle=\langle \varphi(v),w\rangle=\langle av,w\rangle = a\langle v,w\rangle,\]

so \langle v,w\rangle= 0, because a\neq b.

It can also be proved that every eigenvalue of a symmetric matrix is real! Moreover, by an easy inductive argument using the fact that perpendicularity to an eigenvector is preserved, we can prove that for every symmetric matrix (and thus for every self-adjoint mapping) there is a basis consisting of eigenvectors. This basis is orthogonal, and can be made orthonormal. In other words, for every symmetric matrix A there exists an orthogonal matrix C such that C^TAC is diagonal!
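For a 2×2 symmetric matrix this orthonormal diagonalization can be written in closed form: the eigenvalues come from the characteristic polynomial, and the two eigenvectors are automatically perpendicular. A sketch (the function name is made up):

```python
import math

def symmetric_eig_2x2(a, b, c):
    # eigen-decomposition of the symmetric matrix [[a, b], [b, c]]:
    # returns the (always real) eigenvalues and orthonormal eigenvectors
    m = (a + c) / 2.0
    r = math.sqrt(((a - c) / 2.0) ** 2 + b ** 2)   # discriminant is non-negative
    l1, l2 = m + r, m - r
    if abs(b) > 1e-12:
        u, v = [b, l1 - a], [b, l2 - a]            # eigenvectors; their dot product is 0
    else:
        u, v = [1.0, 0.0], [0.0, 1.0]              # already diagonal
    u = [x / math.hypot(*u) for x in u]
    v = [x / math.hypot(*v) for x in v]
    return (l1, l2), (u, v)

(l1, l2), (u, v) = symmetric_eig_2x2(2.0, 1.0, 2.0)
print(l1, l2)                        # eigenvalues 3.0 and 1.0
print(u[0] * v[0] + u[1] * v[1])     # the eigenvectors are perpendicular
```

The columns u, v form the orthogonal matrix C from the statement above, and C^TAC is then the diagonal matrix of eigenvalues.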

Sesquilinear and hermitian forms

The definition of the inner product introduced so far considers only spaces over the real numbers. Let us generalize it to the case of complex numbers. Let every space considered here be a space over the complex numbers. We have to notice the following. The square of the distance of a real number a from zero equals a^2, while the square of the distance of a complex number a from zero equals a\cdot \overline{a}. This is the idea behind the following definition and all subsequent definitions.

We shall say that a form \xi\colon V\times V\to\mathbb{C} is sesquilinear, if

    \[\xi(au+bv,w)=a\xi(u,w)+b\xi(v,w)\]

and

    \[\xi(w,au+bv)=\overline{a}\xi(w,u)+\overline{b}\xi(w,v),\]

for any u,v,w\in V, a,b\in\mathbb{C}.

The matrix of a sesquilinear form \xi\colon V\times V\to\mathbb{C} with respect to a basis (v_1,\ldots,v_n) of V is the matrix [a_{i,j}]_{1\leq i,j\leq n}=A\in M_{n\times n}(\mathbb{C}) such that

    \[\xi(v,w)=\sum_{i,j=1}^n a_{ij}x_i\overline{y}_j,\]

where v,w\in V are any vectors and (x_1,\ldots, x_n) and (y_1,\ldots, y_n) are their coordinates in the basis (v_1,\ldots,v_n).

A sesquilinear form \xi\colon V\times V\to\mathbb{C} is hermitian if

    \[\xi(v,w)=\overline{\xi(w,v)},\]

for any v,w\in V.
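The matrix formula above can be evaluated directly from the coordinates, and for a matrix satisfying a_{ij}=\overline{a_{ji}} one observes the hermitian symmetry \xi(v,w)=\overline{\xi(w,v)}. A sketch (the function name is made up):

```python
def sesq_eval(A, x, y):
    # xi(v, w) = sum_{i,j} a_ij * x_i * conj(y_j),
    # where x, y are the coordinates of v, w in the fixed basis
    n = len(A)
    return sum(A[i][j] * x[i] * y[j].conjugate()
               for i in range(n) for j in range(n))

# a hermitian matrix: a_ij = conj(a_ji)
A = [[2.0 + 0j, 1.0 + 1.0j],
     [1.0 - 1.0j, 3.0 + 0j]]
x, y = [1.0 + 2.0j, 0.5j], [1.0 + 0j, 2.0 - 1.0j]

# the form is hermitian: xi(v, w) = conj(xi(w, v))
print(sesq_eval(A, x, y), sesq_eval(A, y, x).conjugate())
```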

Hermitian inner products and unitary spaces

A hermitian form \xi\colon V\times V\to\mathbb{C} is a hermitian inner product if

    \[\xi(v,v)>0\]

for any non-zero v\in V (observe that \xi(v,v) is always a real number).

A space V over \mathbb{C} along with a fixed hermitian inner product is called a unitary space.

The standard hermitian inner product in \mathbb{C}^n is defined by the following formula

    \[\langle (a_1,\ldots, a_n),(b_1,\ldots, b_n)\rangle = a_1\overline{b_1}+\ldots+ a_n\overline{b_n}.\]
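A minimal sketch of this formula in code, using Python's built-in complex numbers (the function name is made up):

```python
def herm_inner(u, v):
    # standard hermitian inner product on C^n: sum_i a_i * conj(b_i)
    return sum(a, b = pair) if False else sum(a * b.conjugate() for a, b in zip(u, v))

u = [1.0 + 2.0j, 3.0j]
v = [2.0 - 1.0j, 1.0 + 1.0j]

print(herm_inner(u, v))   # in general a complex number
print(herm_inner(u, u))   # always real and positive for u != 0
```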

Similarly as in the case of euclidean spaces, one can define the norm, perpendicularity, and orthogonal and orthonormal bases.

Hermitian matrices

A matrix A\in M_{n\times n}(\mathbb{C}) is hermitian if A=\overline{A}^T. One can notice that a matrix of a sesquilinear form is hermitian if and only if the form is hermitian.

Mappings of unitary spaces and unitary matrices

The counterpart of an isometry is an isomorphism of unitary spaces, i.e. a linear isomorphism which preserves the hermitian inner product (a unitary isomorphism).

It is easy to prove that a linear isomorphism of unitary spaces is unitary if and only if its matrix A with respect to some orthonormal basis is such that A^T\cdot \overline{A}=I. Such matrices are called unitary matrices.
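The condition A^T\cdot \overline{A}=I says exactly that the columns of A are orthonormal with respect to the standard hermitian inner product, which can be checked entry by entry (a sketch; the function name is made up):

```python
import math

def is_unitary(A, eps=1e-9):
    # check A^T * conj(A) = I, i.e. the columns of A are orthonormal
    # with respect to the standard hermitian inner product
    n = len(A)
    for i in range(n):
        for j in range(n):
            s = sum(A[k][i] * A[k][j].conjugate() for k in range(n))
            if abs(s - (1.0 if i == j else 0.0)) > eps:
                return False
    return True

s = 1.0 / math.sqrt(2.0)
U = [[s,       s],
     [s * 1j, -s * 1j]]
print(is_unitary(U))   # True
```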

Diagonalization of endomorphisms of a unitary space

Similarly as in the case of a space over the reals, we may notice that every self-adjoint endomorphism of a unitary space has an orthonormal basis consisting of eigenvectors, and moreover the eigenvalues are real.

Another example of a mapping which is diagonalizable in an orthonormal way is a unitary automorphism. Indeed, let us prove it by induction. For a one-dimensional space it is obvious. Let us assume that the claim holds for (n-1)-dimensional spaces. Let \dim V=n and let \varphi be a unitary automorphism. There exists an eigenvector v of \varphi for an eigenvalue a (a\neq 0, since \varphi is an automorphism), thus the space (\lin(v))^{\bot} is (n-1)-dimensional and invariant under \varphi, because if w\bot v, then

    \[\langle v,\varphi(w)\rangle = \frac{1}{a}\langle \varphi(v),\varphi(w)\rangle=\frac{\langle v,w\rangle }{a}=0.\]

We then take the orthonormal basis of (\lin(v))^{\bot} given by the induction hypothesis and add v/\|v\|.