11. Bilinear spaces

Part 1: problems, solutions.
Part 2: problems, solutions.
Part 3: problems, solutions.

Bilinear forms

A bilinear form is a function h\colon V\times V\to K, where V is a vector space over K, such that for any u,v,w\in V and a,b\in K,

    \[h(av+bw, u)=ah(v,u)+bh(w,u)\]

and

    \[h(u,av+bw)=ah(u,v)+bh(u,w).\]
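
For example, the function h\colon \mathbb{R}^2\times \mathbb{R}^2\to\mathbb{R} given by

    \[h((x_1,x_2),(y_1,y_2))=x_1y_1+2x_1y_2+2x_2y_1-x_2y_2\]

is a bilinear form, since every summand is linear in the first argument and in the second argument separately. We will use this form as a running illustration below.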

Matrix of a bilinear form

If \mathcal{A}=\{v_1,\ldots, v_n\} is a basis of V, then

    \[G(h,\mathcal{A})=[h(v_i,v_j)]_{1\leq i,j\leq n}\in M_{n\times n}\]

is the matrix of h with respect to \mathcal{A}.

Notice that

    \[[x_1,\ldots, x_n]\cdot G(h,\mathcal{A})\cdot \left[\begin{array}{c}y_1\\\vdots\\y_n\end{array}\right]=h(v,w),\]

where (x_1,\ldots, x_n) and (y_1,\ldots, y_n) are the coordinates of v and w, respectively, in \mathcal{A}.
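
For example, for the form h((x_1,x_2),(y_1,y_2))=x_1y_1+2x_1y_2+2x_2y_1-x_2y_2 on \mathbb{R}^2 introduced above and the standard basis st we get

    \[G(h,st)=\left[\begin{array}{cc}1&2\\2&-1\end{array}\right],\]

and, e.g., for v=(1,1) and w=(2,0),

    \[[1,1]\cdot \left[\begin{array}{cc}1&2\\2&-1\end{array}\right]\cdot \left[\begin{array}{c}2\\0\end{array}\right]=[3,1]\cdot \left[\begin{array}{c}2\\0\end{array}\right]=6=h(v,w).\]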

Congruent matrices

Since (AB)^T=B^TA^T, one can easily notice that if \mathcal{A} and \mathcal{B} are bases and (x_1,\ldots, x_n) are the coordinates of v in \mathcal{B}, then

    \[[x_1,\ldots, x_n]\cdot \left(M(id)_{\mathcal{B}}^{\mathcal{A}}\right)^T\]

gives the coordinates of this vector in \mathcal{A} written horizontally. Thus,

    \[\left(M(id)_{\mathcal{B}}^{\mathcal{A}}\right)^T\cdot G(h,\mathcal{A})\cdot M(id)_{\mathcal{B}}^{\mathcal{A}}= G(h,\mathcal{B}).\]

This motivates the following definition: matrices A and B are congruent if and only if there exists an invertible matrix C such that B=C^T A C. In other words, A and B are congruent if they are matrices of the same bilinear form in two different bases.
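
For example, for A=\left[\begin{array}{cc}1&2\\2&-1\end{array}\right] (the matrix of our example form in the standard basis) and the basis \mathcal{B}=\{(1,0),(-2,1)\}, we have C=M(id)_{\mathcal{B}}^{st}=\left[\begin{array}{cc}1&-2\\0&1\end{array}\right] and

    \[C^TAC=\left[\begin{array}{cc}1&0\\-2&1\end{array}\right]\cdot\left[\begin{array}{cc}1&2\\2&-1\end{array}\right]\cdot\left[\begin{array}{cc}1&-2\\0&1\end{array}\right]=\left[\begin{array}{cc}1&0\\0&-5\end{array}\right],\]

so A is congruent to the diagonal matrix with 1 and -5 on the diagonal.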

It is easy to notice that if two matrices are congruent, then they have the same rank, and one of them is invertible if and only if the other one is. A form h is nondegenerate if its matrix (in any basis) is invertible.

A form h is symmetric if and only if h(v,w)=h(w,v) for all vectors v,w. Notice that if A is the matrix of such a form in any basis, then A is a symmetric matrix, i.e. A=A^T.

Bilinear spaces

A vector space V over K along with a symmetric bilinear form h\colon V\times V\to K is called a bilinear (or orthogonal) space. We shall say that a space or a subspace is nondegenerate if the form on this space is nondegenerate. In other words, a subspace W is degenerate if and only if there exists a non-zero vector w\in W such that h(v,w)=0 for all v\in W. A subspace W is completely degenerate if h(v,w)=0 for all v,w\in W.
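
For example, consider \mathbb{R}^2 with the form h((x_1,x_2),(y_1,y_2))=x_1y_2+x_2y_1. The whole space is nondegenerate, because the matrix of h in the standard basis, \left[\begin{array}{cc}0&1\\1&0\end{array}\right], is invertible, but the subspace W=\lin((1,0)) is completely degenerate, since h((a,0),(b,0))=0 for all a,b.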

The rank of a bilinear space is the rank of its bilinear form.

Perpendicularity

We shall say that v\in V is perpendicular to w\in V (v\bot w), if h(v,w)=0. A vector perpendicular to itself is called a null vector.

If W is a subspace, then

    \[W^\bot=\{v\in V\colon \forall_{w\in W} v\bot w\}\]

is also a subspace.
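
For example, for the form h((x_1,x_2),(y_1,y_2))=x_1y_2+x_2y_1 on \mathbb{R}^2 and W=\lin((1,0)) we get

    \[W^\bot=\{(x_1,x_2)\colon h((x_1,x_2),(1,0))=x_2=0\}=\lin((1,0))=W,\]

so W^\bot can even coincide with W; this reflects the fact that W is completely degenerate here.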

The following theorem plays an important role: W is a nondegenerate subspace of a bilinear space (V,h) if and only if W\oplus W^\bot = V. Indeed, if W\oplus W^\bot = V, then in particular W\cap W^\bot =\{0\}, but if W were degenerate, there would exist a non-zero w\in W such that h(v,w)=0 for all v\in W, and then w\in W\cap W^\bot. Conversely, we have to prove that if W is nondegenerate, then for every v there exists a unique decomposition v=w+w', where w\in W and w'\in W^\bot. If w_1,\ldots, w_k is a basis of W, then the existence and uniqueness of such a decomposition is equivalent to the existence and uniqueness of coefficients x_1,\ldots, x_k such that v-(x_1w_1+\ldots+x_kw_k)\in W^\bot, which gives exactly a system of equations of the form

    \[h(w_i,w_1)x_1+\ldots+h(w_i,w_k)x_k=h(w_i,v),\qquad i=1,\ldots,k,\]

but the matrix of this system of equations is exactly the matrix of h|_W, and since it is invertible (which is exactly what nondegeneracy of W means), the system has a unique solution.
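
For example, take \mathbb{R}^2 with the form given by the matrix \left[\begin{array}{cc}1&2\\2&-1\end{array}\right] in the standard basis and W=\lin((1,0)). Then W is nondegenerate (the matrix of h|_W is [1]) and W^\bot=\{(y_1,y_2)\colon y_1+2y_2=0\}=\lin((-2,1)). To decompose, e.g., v=(0,1), the above system reads 1\cdot x_1=h((1,0),(0,1))=2, so x_1=2 and

    \[(0,1)=2\cdot (1,0)+(-2,1),\]

with 2\cdot(1,0)\in W and (-2,1)\in W^\bot.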

Orthogonal bases

Similarly to the case of Euclidean spaces, a basis v_1,\ldots, v_n of V is orthogonal (or perpendicular) if v_i\bot v_j for i\neq j.

If K is a field of characteristic not equal to 2, then an orthogonal basis exists (over \mathbb{Z}_2 it may not exist!). The proof is inductive and shows a procedure for finding such a basis. If there exists a non-null vector v, then we take it into the basis. By the above theorem, if W=\lin(v), then W\oplus W^\bot=V, because W is obviously nondegenerate (the matrix of h|_W is [h(v,v)] and h(v,v)\neq 0). So we can then recursively find an orthogonal basis of W^\bot. If, on the other hand, we encounter a space with no non-null vectors, then any basis is orthogonal (every two vectors are perpendicular). Indeed, 0=h(v+w,v+w)=h(v,v)+2h(v,w)+h(w,w)=0+2h(v,w)+0, so h(v,w)=0 (here we make use of the fact that the characteristic is not 2, so 2\neq 0).
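
For example, for the form on \mathbb{R}^2 with matrix \left[\begin{array}{cc}1&2\\2&-1\end{array}\right] in the standard basis, the vector v_1=(1,0) is non-null (h(v_1,v_1)=1), and \lin(v_1)^\bot=\lin((-2,1)) as computed above, so the procedure gives the orthogonal basis \{(1,0),(-2,1)\}, with h(v_1,v_1)=1 and h(v_2,v_2)=-5 for v_2=(-2,1).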

Diagonalization

Notice also that the matrix of a given form in an orthogonal basis is diagonal, so to find a matrix which is diagonal and congruent to a given symmetric matrix, it is enough to find an orthogonal basis of the form defined by this matrix. We then get zeros off the diagonal, and on the diagonal we get h(v_i,v_i) for the subsequent vectors of the orthogonal basis.

Notice also that if K=\mathbb{R}, then we can divide every non-null vector v_i of the orthogonal basis by \sqrt{|h(v_i,v_i)|}, i.e. we may consider the vectors w_i=v_i/\sqrt{|h(v_i,v_i)|} (for non-null v_i, and w_i=v_i if v_i is null). Then for non-null vectors we get:

    \[h(w_i,w_i)=\frac{h(v_i,v_i)}{|h(v_i,v_i)|}=\pm 1,\]

and the sign is the same as the sign of h(v_i,v_i). In other words, the matrix of h with respect to \{w_1,\ldots, w_n\} has only 1, -1 and 0 on the diagonal. This means that every real symmetric matrix is congruent over the reals to a diagonal matrix with only 1, -1 and 0 on the diagonal. If A is a symmetric matrix, then the number of 1s in such a congruent diagonal matrix is denoted by r_+(A), the number of -1s by r_-(A), and the signature of A is s(A)=r_+(A)-r_-(A). Notice that two symmetric matrices A and B are congruent over the reals if and only if r(A)=r(B) and s(A)=s(B).
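
For example, the matrix A=\left[\begin{array}{cc}1&2\\2&-1\end{array}\right] is congruent to the diagonal matrix with 1 and -5 on the diagonal (via the orthogonal basis \{(1,0),(-2,1)\} found above), and dividing the second basis vector by \sqrt{|-5|}=\sqrt{5} gives a basis in which the matrix of the form has 1 and -1 on the diagonal. Hence r_+(A)=1, r_-(A)=1, r(A)=2 and s(A)=0.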

Over the complex numbers it is even easier, because we can divide by \sqrt{h(v_i,v_i)} without the absolute value. Thus, for all non-null vectors we get h(w_i,w_i)=1. This means that the matrix of h with respect to \{w_1,\ldots, w_n\} has only 1 and 0 on the diagonal. In other words, every symmetric matrix is congruent over the complex numbers to a diagonal matrix with only 1 and 0 on the diagonal. Therefore, two symmetric matrices A and B are congruent over the complex numbers if and only if r(A)=r(B).
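
For example, over the complex numbers we may instead divide the vector (-2,1) by \sqrt{-5}=i\sqrt{5}, which gives h(w_2,w_2)=\frac{-5}{-5}=1, so the matrix \left[\begin{array}{cc}1&2\\2&-1\end{array}\right] is congruent over \mathbb{C} to the identity matrix.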

Congruent diagonal matrices and similar diagonal matrices

What is the relation between finding a diagonal congruent matrix (i.e. D=C^T A C) and finding a diagonal similar matrix (i.e. D=C^{-1}AC)? There is a relation!

Recall that a symmetric matrix A can be understood not only as the matrix of a bilinear form, but also as the matrix of a self-adjoint endomorphism \varphi (with respect to the standard inner product in \mathbb{R}^n). Moreover, we know that such an endomorphism has an orthonormal basis \mathcal{A} consisting of eigenvectors, i.e. such that C=M(id)_{\mathcal{A}}^{st} is orthogonal, meaning C^{-1}=C^T. Thus, the diagonal matrix D=M(\varphi)_{\mathcal{A}}^{\mathcal{A}} has the property that D=C^{-1}AC=C^TAC, so the matrices A and D are not only similar but also congruent! This means that the diagonal matrix consisting of the eigenvalues of A (with appropriate multiplicities) is congruent to A (notice that this is, up to the order of the elements on the diagonal, the only diagonal matrix similar to A, but not the only diagonal matrix congruent to it). On the other hand, such a basis, orthonormal with respect to the standard inner product, is also orthogonal with respect to the form defined by A, which yields a new method of finding an orthogonal basis for a given form.
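
For example, the eigenvalues of A=\left[\begin{array}{cc}1&2\\2&-1\end{array}\right] are the roots of \lambda^2-5=0, i.e. \sqrt{5} and -\sqrt{5}, so A is both similar and congruent to the diagonal matrix with \sqrt{5} and -\sqrt{5} on the diagonal; in particular one diagonal entry is positive and one is negative, which agrees with r_+(A)=1 and r_-(A)=1 computed before.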