# 7. Systems of linear differential equations and equations of higher order

Part 1.: Problems, solutions.
Part 2.: Problems.

### Systems of linear differential equations

A system of linear differential equations is a system of the following form

Additionally we may have boundary conditions: . A solution is a vector of functions which satisfies the system and the boundary conditions (then we shall get one such vector); if no boundary conditions are given, one should find all such functions.

Such a system can be written in a matrix form as:

If , the system is called homogeneous.

is thus an example of a system of linear differential equations, and

is homogeneous. are exemplary boundary conditions for this system.

### Eigenvectors and solutions to a homogeneous system of linear differential equations

Notice that if is an eigenvector for an eigenvalue of the matrix of a homogeneous system of linear differential equations, then is a solution of this system. Indeed,

by the properties of derivative, but also by the definition of an eigenvector,

This means that if the matrix can be diagonalized (i.e. there exists a basis consisting of eigenvectors), then every solution of such a system is a combination of the basic solutions obtained this way.

### Solving a homogeneous system of linear differential equations — an example

Let us solve

We are looking for the eigenvalues of the matrix

Its characteristic polynomial is . So we get eigenvalues and .

First we calculate a basis consisting of eigenvectors. For the eigenspace is described by , so it is spanned by . For it is described by , and thus spanned by .

Thus we get the basic solutions: and , and each solution is a combination of them:

so
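The recipe above can be checked numerically. The matrix below is a hypothetical stand-in (the notes' original matrix is not preserved in this copy); the point is only that each eigenpair gives a basic solution:

```python
import numpy as np

# Hypothetical 2x2 system matrix for x' = A x (stand-in for the notes' example).
A = np.array([[2.0, 1.0],
              [1.0, 2.0]])

# Each eigenpair (lam, v) gives a basic solution x(t) = v * exp(lam * t).
lams, V = np.linalg.eig(A)

# The defining property A v = lam v is exactly why x(t) = v e^(lam t)
# satisfies x'(t) = lam v e^(lam t) = A v e^(lam t) = A x(t).
for lam, v in zip(lams, V.T):
    assert np.allclose(A @ v, lam * v)

lams_sorted = sorted(round(float(l), 6) for l in lams)
print(lams_sorted)  # [1.0, 3.0]
```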

### Complex eigenvalues

The non-real eigenvalues of a real matrix come in conjugate pairs , . Their eigenvectors are also conjugate. This is the reason why in the general solution (after changing to ) it is possible to extract and , where are the constants of the combination of basic solutions related to and respectively. Since we are looking for real solutions, we can choose , in such a way that and (their imaginary parts are negatives of each other, and their real parts are equal). Thus we can introduce real constants and .

Equivalently, instead of basic solutions and one can take and .
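A minimal numerical sketch (with a hypothetical rotation-generator matrix, not the one from the notes) showing that non-real eigenvalues of a real matrix come in conjugate pairs, and that their real and imaginary parts give real solutions:

```python
import numpy as np

# Hypothetical real matrix with non-real eigenvalues.
A = np.array([[0.0, -1.0],
              [1.0,  0.0]])

lams = np.linalg.eigvals(A)
# The eigenvalues form a conjugate pair +-i.
pair = sorted(lams, key=lambda z: z.imag)
assert np.allclose(pair, [-1j, 1j])

# A real basic solution built from cos/sin: x1(t) = (cos t, sin t)
# satisfies x1' = (-sin t, cos t) = A x1.
t = 0.7
x1 = np.array([np.cos(t), np.sin(t)])
assert np.allclose([-np.sin(t), np.cos(t)], A @ x1)
print("conjugate pair and real solution verified")
```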

### Non-homogeneous systems of linear differential equations

The idea is similar to the idea of solving single non-homogeneous equation. We first solve the homogeneous version of the system and then change the constants to functions.

E.g. to solve

first we solve

which gives the solution

(see the previous section).

Changing the constants to functions we get:

so the derivatives are:

substituting we get:

which gives the following system of equations :

so the matrix of this system is

Thus,

Integrating we get

So finally:
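The variation-of-constants computation can be verified symbolically. Below, both the matrix and the forcing term are hypothetical stand-ins (the notes' data did not survive extraction); the check is that x_p = Φ ∫ Φ⁻¹ b dt really solves x' = A x + b:

```python
import sympy as sp

t = sp.symbols('t')
# Hypothetical system matrix and non-homogeneous term.
A = sp.Matrix([[2, 1], [1, 2]])
b = sp.Matrix([sp.exp(t), 0])

# Fundamental matrix of the homogeneous system from the eigen-decomposition.
P, D = A.diagonalize()
Phi = P * sp.diag(sp.exp(D[0, 0] * t), sp.exp(D[1, 1] * t))

# Variation of constants: the "constants" become the functions integral(Phi^-1 b).
c = sp.integrate(Phi.inv() * b, t)
x_p = sp.expand(Phi * c)

# x_p is a particular solution: x_p' - A x_p = b.
assert sp.simplify(x_p.diff(t) - A * x_p - b) == sp.zeros(2, 1)
print("particular solution verified")
```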

### What if the matrix is not diagonalizable?

Our considerations so far assume that the matrix of the system is diagonalizable (at least over the complex numbers). What if this is not the case? How do we get the missing basic solutions?

Notice first that if is an eigenvector for eigenvalue , and there exists such that , i.e. a vector satisfying the system

then
is such that

thus it is also a basic solution!

If we still need more basic solutions, we can find such that , i.e. a vector satisfying

and then is such that

thus it is also a basic solution! And so on!

It can be proved that if is a -fold eigenvalue with only a one-dimensional eigenspace with basis , then there exist vectors as described above, giving the missing basic solutions. These vectors are called Jordan vectors, or a chain of generalized eigenvectors.

E.g. let

We are looking for eigenvalues of

and there is only one: . Solving the system of equations

we get a one-dimensional eigenspace spanned by . Thus we find a Jordan vector satisfying

e.g. . We need one more:

e.g. . Thus we get basic solutions

and
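A symbolic sketch with a hypothetical non-diagonalizable 2×2 matrix (a single Jordan block; the notes' example was not preserved), checking that a Jordan vector really produces the extra basic solution:

```python
import sympy as sp

t = sp.symbols('t')
lam = 2
# Hypothetical matrix with a single eigenvalue and a 1-dimensional eigenspace.
A = sp.Matrix([[2, 1], [0, 2]])

v1 = sp.Matrix([1, 0])   # eigenvector: (A - lam I) v1 = 0
v2 = sp.Matrix([0, 1])   # Jordan vector: (A - lam I) v2 = v1
assert (A - lam * sp.eye(2)) * v1 == sp.zeros(2, 1)
assert (A - lam * sp.eye(2)) * v2 == v1

# The missing basic solution is x(t) = (v1 t + v2) e^(lam t).
x = (v1 * t + v2) * sp.exp(lam * t)
assert sp.simplify(x.diff(t) - A * x) == sp.zeros(2, 1)
print("Jordan chain gives a basic solution")
```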

### Linear differential equations of higher order

A linear differential equation of higher order is a linear equation involving derivatives of higher order, e.g.: .

Every such equation can be transformed into a system of first-order equations by stipulating , , etc. Then:

This is the system solved in the previous section. We get a solution for from it by looking at the first coordinate, so the general solution of the considered equation is
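The reduction to a first-order system can be sketched with a hypothetical equation y'' − 3y' + 2y = 0 (not the notes' example). Putting y1 = y, y2 = y' gives a system whose matrix is the companion matrix of the characteristic polynomial:

```python
import numpy as np

# y'' - 3 y' + 2 y = 0 with y1 = y, y2 = y' becomes (y1, y2)' = C (y1, y2):
C = np.array([[0.0, 1.0],     # y1' = y2
              [-2.0, 3.0]])   # y2' = -2 y1 + 3 y2

# The eigenvalues of C are the roots of r^2 - 3 r + 2, i.e. 1 and 2,
# so the general solution is y = c1 e^t + c2 e^(2 t).
roots = sorted(round(float(r), 6) for r in np.linalg.eigvals(C).real)
print(roots)  # [1.0, 2.0]
```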

# 14. Polynomials and hypersurfaces

Part 1.: Problems, solutions.
Part 2.: Problems.

### Polynomials and polynomial functions

A polynomial of degree over field with variables is an expression of form

with at least one for not equal to zero, and the space of all polynomials over with variables is denoted by . E.g. is a polynomial of degree in .

Given an affine -dimensional space and a basic system , a polynomial defines a function, called a polynomial function, given as

which obviously abuses the notation, since denotes both the polynomial itself and the polynomial function. But for infinite fields this is not a problem, because we have a bijection between these two sets.

Notice also that if can be written as a polynomial function in one basic system, it can be written in this form in any basic system. Moreover, the related polynomial has the same degree.

### Hypersurfaces and algebraic sets

is an algebraic set, if

where are polynomial functions. It is a hypersurface, if

where is a polynomial function.

It is easy to notice that over it is the same thing, since we can take .

### Equivalence relation on polynomial functions and hypersurfaces

Two polynomial functions are equivalent, if there exist basic systems and , such that in is described by the same polynomial as in .

Two hypersurfaces are equivalent, if the functions describing them are equivalent. It is so if and only if the second hypersurface is an image of the first one under an affine isomorphism on .

### Canonical form of a polynomial function: hypersurfaces of second degree

Fix a hypersurface of second degree, i.e. described by a polynomial function of second degree. We want to find an equivalent polynomial function of a simplest possible form, i.e. in a canonical form. In other words, we will be looking for a basic system in which the equation describing the hypersurface is as simple as possible.

Every polynomial function of second degree is a sum of a quadratic form and an affine function, i.e. if then , where and .

For every polynomial function of second degree (thus every equation describing a hypersurface of second degree) it is possible to find a basic system in which the function takes one of the following forms:

, or

, , where is the rank of .

How to transform the function to such a form? First we have to diagonalize the form and describe the function in the new basis. This changes the basis but not the origin of the basic system. Now, for each variable appearing with a non-zero coefficient in for which we have with a non-zero coefficient , we can introduce a new variable , because then . This changes the constant and moves the origin from to , where is the basic vector related to the variable . If there are no other variables in , we have already reached the first form. If there are other variables (for which there is no in ), then we make from them and the constant coefficient a new variable , which obviously changes both the basis and the origin of the basic system.

where , , so , and for the basic system we get

And so

where and , so the final basic system is .

In the second case, let , and then:

where , , , and the basic system is .

where , and , and since

so the final basic system is: .
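The completing-squares procedure can be illustrated with a small symbolic check; the polynomial below is hypothetical (the notes' example did not survive), but the mechanics are the same: absorb every term containing x into one square, then repeat for y:

```python
import sympy as sp

x, y = sp.symbols('x y')
# Hypothetical polynomial function of second degree.
f = x**2 + 4*x*y + y**2 + 2*x

# Completing the square first in x, then in y:
# f = (x + 2y + 1)^2 - 3 (y + 2/3)^2 + 1/3.
g = (x + 2*y + 1)**2 - 3*(y + sp.Rational(2, 3))**2 + sp.Rational(1, 3)
assert sp.expand(f - g) == 0

# In the new affine coordinates x' = x + 2y + 1, y' = y + 2/3 the function
# becomes x'^2 - 3 y'^2 + 1/3: a canonical form with one + and one - coefficient.
print("canonical form: x'^2 - 3 y'^2 + 1/3")
```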

### Canonical form of a polynomial function of second degree: hypersurfaces over or

Notice that additionally an equation can always be divided by its free term (if it is non-zero), changing it into , and if we are over , for every expression the basic vector related to can be divided by , which changes this expression to (and over even by , changing this expression to ).

Thus for every equation of second degree over , there is a basic system in which the equation takes one of the following forms:

, or

, or

.

Thus for every equation of second degree over , there is a basic system in which the equation takes one of the following forms:

, or

, or

.

E.g. transforming the equation further in basic system , we see that it is equivalent to , i.e. with equation , in basic system (because ). Meanwhile, over it is equivalent to , in basic system (because this time ).

### Centres of symmetry

A point is a centre of symmetry of a hypersurface described by equation , if for every we have if and only if . It is easy to prove that is a centre of symmetry if and only if it is a critical point of the function , i.e. all partial derivatives of are zero at this point.
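The critical-point characterization gives a direct way to compute centres of symmetry. A sketch with a hypothetical second-degree equation (not the notes' example):

```python
import sympy as sp

x, y = sp.symbols('x y')
# Hypothetical second-degree polynomial defining a hypersurface f = 0.
f = x**2 + 2*y**2 - 4*x + 4*y + 1

# A centre of symmetry is a critical point: solve grad f = 0.
centre = sp.solve([f.diff(x), f.diff(y)], [x, y])
print(centre)  # {x: 2, y: -1}
```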

Thus, if we consider the canonical forms of the equations, we see that a hypersurface described by

has a centre of symmetry, which lies on the hypersurface if and only if . On the other hand, a hypersurface described by

has no centre of symmetry.

### Affine types of hypersurfaces of second degree over

The above means that the type of the equation describing a given hypersurface is invariant, i.e. if in some basic system it takes one of the forms

, or

, or

, in any other basic system it cannot take any of the other forms.

Moreover, if is the number of variables with coefficient in the above equation, then if the equation is

then is the same for every basic system in which this equation is in the canonical form.
If the equation is of form

, or

, then we also get such a result but up to multiplication by , i.e. in every basic system in which the equation is in the canonical form, we get or variables with coefficient .

These possibilities are called the affine types of hypersurfaces.

We shall say that a hypersurface is proper if it is not included in any hyperplane.

### Affine types of proper curves of second degree in

Thus we have the following affine types of proper curves of second degree in :

two parallel lines

hyperbola

ellipse

a pair of intersecting lines

parabola

### Affine types of proper surfaces of second degree in

Thus we have the following affine types of proper surfaces of second degree in (images by Wikipedia):

a pair of parallel planes

hyperbolic cylinder

elliptic cylinder

hyperboloid of two sheets

hyperboloid of one sheet

ellipsoid

pair of intersecting planes

elliptic cone

parabolic cylinder

elliptic paraboloid

hyperbolic paraboloid

# 6. Linear algebra, continued

Part 1.: Problems, solutions.
Part 2.: Problems, solutions.
Part 3.: Problems, solutions.
Part 4.: Problems, solutions.
Part 5.: Problems, solutions.

### Idea

If you have tried to imagine a linear map of the plane, you have probably imagined it stretching or squeezing the plane along some directions. It would be nice to know whether a given linear map (a map whose domain and range coincide is called an endomorphism) is actually of this type. In other words, we would like to know whether there exists a non-zero vector and a scalar such that simply multiplies by (so it stretches or squeezes the space in the direction of ), i.e.:

If and have such properties, then is said to be an eigenvector of and an eigenvalue.

### Determining eigenvalues

Notice that if is an eigenvalue of a map and is its eigenvector, then . Therefore if is a matrix of (in standard basis), then

, where is the identity matrix.

Since multiplication of a matrix by a vector gives a linear combination of its columns and is a non-zero vector, we see that the columns of can be non-trivially combined to get the zero vector! It is possible if and only if .

How to find the eigenvalues of a map? One simply needs to solve the equation . E.g. let . Then:

Therefore:

So we have to solve the following:

And therefore the eigenvalues are: and .
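The computation above can be reproduced symbolically; since the notes' matrix did not survive extraction, the matrix below is a hypothetical stand-in with the same flavour (two distinct real eigenvalues):

```python
import sympy as sp

lam = sp.symbols('lam')
# Hypothetical matrix of an endomorphism in the standard basis.
M = sp.Matrix([[1, 2], [2, 1]])

# Eigenvalues are the roots of det(M - lam*I) = 0.
p = (M - lam * sp.eye(2)).det()
eigs = sorted(sp.solve(p, lam))
print(sp.expand(p), eigs)  # lam**2 - 2*lam - 3 [-1, 3]
```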

### Eigenspaces

Now let’s find the eigenvectors related to subsequent eigenvalues. Notice that since is a linear map, if are eigenvectors for an eigenvalue , then for any scalar both and are eigenvectors for . Therefore the set of all eigenvectors for forms a linear subspace. Notice that satisfies the equation

so the space of eigenvectors (i.e. eigenspace) for (denoted as ) is given by the following system of equations:

and we can easily find its basis.

In our example, let us find a basis of , so let . Then:

Therefore, we have the following system of equations:

The space of solutions is , and its basis is . Indeed, and .

Let’s find a basis of , so let . Then:

The system of equations:

In the reduced echelon form:

The space of solutions is , and its basis is . Indeed, .

### Eigenvector basis

If the sum of dimensions of spaces related to the eigenvalues of a given map equals the dimension of the whole space (as in our example: ), then the basis of the whole space which consists of the vectors from the bases of subspaces related to the eigenvalues is called an eigenvector basis (in our case: ).

If a map has an eigenvector basis, then it can indeed be described by means of squeezing and stretching in the directions of the eigenvectors. Notice that the matrix of such a map in an eigenvector basis is a diagonal matrix (it has non-zero elements only on its diagonal), with the eigenvalues related to subsequent eigenvectors on the diagonal. In our example:

It may happen that a map has no eigenvectors (e.g. a rotation of the plane), or that the subspaces of eigenvectors are too small (e.g. a 10-degree rotation of a three-dimensional space around an axis has only a one-dimensional space of eigenvectors).

### Diagonalization of a matrix

A matrix is diagonalizable, if there exists a matrix , such that:

where is a diagonal matrix.

How to check this and diagonalize a matrix when it is possible? Simply consider the linear map whose matrix in the standard basis is . Matrix is diagonalizable if and only if has an eigenvector basis . Then:

and , since .

E.g. we know that

is diagonalizable, since the map related to this matrix has an eigenvector basis. Furthermore, in this case:

and
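The identity M = P D P⁻¹ can be checked in a few lines; the matrix below is a hypothetical example, with the columns of P being eigenvectors:

```python
import sympy as sp

# Hypothetical diagonalizable matrix.
M = sp.Matrix([[1, 2], [2, 1]])

# P has eigenvectors as columns; D is diagonal with the eigenvalues.
P, D = M.diagonalize()
assert P.inv() * M * P == D          # change of basis to the eigenvector basis
assert P * D * P.inv() == M          # and back
print(sorted([D[0, 0], D[1, 1]]))    # [-1, 3]
```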

### Calculating powers of diagonalizable matrices

Diagonalization of a matrix has the following interesting application: we can use it to calculate powers of a matrix. Notice that if is a basis, then:

Therefore:

but if is a basis of eigenvectors, then is a diagonal matrix so calculating its power is simply calculating powers of the elements on the diagonal.

Let us show it on an example. Let us calculate:

We have:

Therefore:

So:

### Scalar product

To study angles between vectors it will be convenient to use the scalar product. The scalar product of two vectors , is . The standard scalar product in is the sum of products of corresponding coordinates, so e.g.: .

### Length of a vector and angles between vectors

By the Pythagorean Theorem it is easy to see that is the square of the length of a vector, e.g. . The length of a vector , also called the norm of , will be denoted by . We get that:

Assume now that we are given three vectors , and forming a triangle, so . Let be the angle between and . The law of cosines states that:

Therefore:

So:

So cosine of an angle between vectors is given by the following formula:

One more application of the scalar product is calculating the perpendicular projection of a vector onto the direction given by a second vector. Let be the perpendicular projection of onto the direction given by . It has the same direction as and length , where is the angle between and . Therefore:

because is the vector of length in the direction of .
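The projection formula from this paragraph, sketched as a small function:

```python
import numpy as np

def proj(v, w):
    # Perpendicular projection of v onto the direction of w:
    # proj_w(v) = (<v, w> / <w, w>) * w.
    return (np.dot(v, w) / np.dot(w, w)) * w

v = np.array([3.0, 4.0])
w = np.array([1.0, 0.0])
p = proj(v, w)
assert np.allclose(p, [3.0, 0.0])
assert np.isclose(np.dot(v - p, w), 0.0)  # the remainder is perpendicular to w
print(p)  # [3. 0.]
```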

### Perpendicularity, perpendicular spaces

We know that two vectors are perpendicular if cosine of the angle between them equals zero. Therefore if and only if .

Notice that if we would like to find all vectors perpendicular to , the above is the equation we have to solve. Moreover, this is a homogeneous linear equation. If we would like to find the set of vectors perpendicular to all vectors from a given list, we get a system of homogeneous linear equations. So given a linear subspace , the set (called the orthogonal complement of ) of all vectors perpendicular to all vectors from is also a linear subspace! It is the space of solutions of some system of linear equations.

For example, let . A vector is perpendicular to those vectors (and so also to every vector of ), if and , in other words, if it satisfies the following system of equations:

so:

So the general solution has the following form: , and therefore we have the following basis of : .

Notice that the coefficient vectors in a system of equations describing a given linear space span the perpendicular space! This gives a new insight into our method of finding a system of equations for a space given by its spanning vectors.
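The orthogonal complement computation from the example can be mirrored in code (the spanning vectors here are hypothetical, since the original ones were lost): V⊥ is the null space of the matrix whose rows are the spanning vectors:

```python
import sympy as sp

# Hypothetical spanning vectors of V.
v1 = sp.Matrix([1, 0, 1])
v2 = sp.Matrix([0, 1, 1])

# A vector x is in V-perp iff <x, v1> = 0 and <x, v2> = 0,
# i.e. x is in the null space of the matrix with rows v1, v2.
A = sp.Matrix.vstack(v1.T, v2.T)
basis = A.nullspace()
print([list(v) for v in basis])  # [[-1, -1, 1]]
```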

### Isometries of linear euclidean spaces

A linear mapping , where and are linear euclidean spaces, is an isometry if the following equivalent conditions hold:

• is a linear isomorphism and preserves the inner product

• is a linear isomorphism and preserves the lengths of vectors

### Linear isometries: projection onto a linear subspace and reflection across a linear subspace

Notice that the projection onto a linear subspace and the reflection across are linear mappings. Moreover, it is easy to see their eigenvectors:

• since projection does not change vectors in , they are eigenvectors with eigenvalue . On the other hand, vectors from are multiplied by zero, so they are eigenvectors with eigenvalue zero.
• since reflection does not change vectors in , they are eigenvectors with eigenvalue . On the other hand, vectors from are multiplied by , so they are eigenvectors with eigenvalue .

So a basis consisting of vectors from a basis of and vectors from a basis of is a basis of eigenvectors of both these maps. This makes it possible to calculate their formulas.

E.g. let . Therefore a basis is . So if is the projection onto and is the reflection across , then are eigenvectors with eigenvalue of both maps. Also, is an eigenvector with eigenvalue zero for , and with eigenvalue for . Therefore the basis is a basis of eigenvectors of both maps, and:

Let us calculate their formulas. We have:

So:

Therefore:

and

so:

and

### Gram’s determinant

Given a system of vectors in a euclidean space, the matrix is called the Gram matrix of this system of vectors, and its determinant is called the Gram determinant . One can immediately notice that the Gram matrix is symmetric.

Notice in particular that if the columns of a matrix contain the coordinates of those vectors in an orthonormal basis, then . Thus, if the number of vectors equals the dimension of the whole space (i.e. if is a square matrix), then .

In particular, always , and it is equal to zero if and only if the system of vectors is linearly dependent.

### Parallelepipeds and simplexes

Let and be linearly independent. Then the set

is called a -parallelepiped given by this system.

Given an affinely independent system of points , the set

is called a -dimensional simplex with vertices in . Obviously, a one-dimensional simplex is a line segment, a two-dimensional one is a triangle, and a three-dimensional one is a tetrahedron.

### -dimensional measure of parallelepipeds and simplexes

-dimensional measure is a generalization of length (), area () and volume ().

To explain how to calculate the -dimensional measure of a -dimensional simplex or parallelepiped, notice that if is a system of linearly independent vectors in , and is the projection of onto , then , where . Let . Thus,

because

and

Therefore,

But the first determinant is equal to zero, because the last column is a combination of the other columns, and the second is equal to (expansion along the last column). Thus

Since we know that the -dimensional measure of a parallelepiped should be the product of the -dimensional measure of its base and its height (the length of an appropriate projection), using induction you can easily come to the conclusion that

So
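A numerical sketch of the Gram-determinant formula for measure, with two hypothetical vectors in R³: the area of the parallelogram is the square root of the Gram determinant, and the triangle (2-simplex) has half of it:

```python
import numpy as np

# Hypothetical edge vectors in R^3.
v1 = np.array([1.0, 0.0, 0.0])
v2 = np.array([0.0, 2.0, 0.0])

A = np.column_stack([v1, v2])
G = A.T @ A                       # Gram matrix of (v1, v2)
area = np.sqrt(np.linalg.det(G))  # 2-dimensional measure of the parallelogram
assert np.isclose(area, 2.0)

# The 2-simplex (triangle) spanned by the same vectors has measure area / 2!.
print(area, area / 2)
```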

### Orientation of a space

We say that bases , of a vector space are consistently oriented if , and inconsistently oriented if . Being consistently oriented is thus an equivalence relation on the set of all bases which has two equivalence classes.

An orientation of a vector space is choosing one of those two equivalence classes (e.g. by choosing a basis). Then bases from this equivalence class are said to have positive orientation and all the rest have negative orientation.

# 13. Preparation for the second test

The colloquium is on 5th June. Take some time to prepare for it.

A quadratic form is a function which assigns a number to each vector in such a way that it is a sum of products of pairs of coordinates, e.g. . The square of the norm is also an example of a quadratic form ().

In other words, a quadratic form can be described as , where is a bilinear form. Over a field whose characteristic is not equal to one can always take a symmetric bilinear form, and we are going to make such an assumption from now on.

### Positive and negative definite forms

We can classify forms with respect to possible sign of results:

• form is positively definite, if for all , we get .
• form is negatively definite, if for all , we get .
• form is positively semidefinite, if for all , we get .
• form is negatively semidefinite, if for all , we get .

Obviously a form may not fall in any of those categories, if for some we have and . Such forms are called indefinite.

### Matrix of a form

The matrix of a quadratic form with respect to basis is the matrix , where is a symmetric bilinear form such that . E.g., let , then:

notice that outside the diagonal the coefficients are divided by , because the same expression is generated twice.

### Sylvester’s criterion

Sylvester’s criterion determines whether a form is positively definite or negatively definite. Notice that it tells nothing about semidefinite forms!

How does it work? We study the determinants of minors: let be the matrix of size in the upper left corner of the matrix of the form we study. Let be the size of the matrix of this form. Sylvester’s criterion consists of the two following facts:

• if for every we have , then the form is positively definite,
• if for every we have for even and for odd , then the form is negatively definite.

E.g. let , so its matrix is , so and , therefore and , so the form is positively definite.

E.g. let , so its matrix is , so and , and also , therefore , and , so the form is negatively definite.

Finally let , with matrix , so and , therefore and , so the form is neither positively nor negatively definite.
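Sylvester's criterion is easy to mechanize: compute the leading principal minors and inspect the signs. The matrix below is a hypothetical positive definite example:

```python
import numpy as np

# Hypothetical matrix of a quadratic form.
M = np.array([[2.0, 1.0],
              [1.0, 2.0]])

# Leading principal minors A_1, ..., A_n.
minors = [np.linalg.det(M[:k, :k]) for k in range(1, M.shape[0] + 1)]
assert np.allclose(minors, [2.0, 3.0])

# All minors positive => the form is positively definite.
print("all minors positive:", all(m > 0 for m in minors))
```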

But to check everything (including semidefiniteness), we have to diagonalize the form, i.e. find a basis in which its matrix is diagonal (a diagonal congruent matrix). Then, obviously, if:

• it has only positive entries on the diagonal, then is positive definite,
• it has only negative entries on the diagonal, then is negative definite,
• it has only nonnegative entries on the diagonal, then is positive semidefinite,
• it has only nonpositive entries on the diagonal, then is negative semidefinite,
• it has both a positive and a negative entry on the diagonal, then is indefinite.

This can be done with the three following methods.

### Diagonalization of a form: complementing to squares

We may complete the formula of a form to squares, making sure to use all expressions with the first variable first, then all with the second one, and so on.

E.g.

where , and , so the form is indefinite. The basis in which the formula is expressed is ,
because

### Diagonalization of a form: orthogonal basis

We may also find an orthogonal basis with respect to the symmetric bilinear form related to the considered quadratic form. Then the entries on the diagonal are the values of the form on the vectors of this basis.

### Diagonalization of a form: eigenvalues

Finally, recall that there exists a basis consisting of eigenvectors of the self-adjoint endomorphism described by the same matrix, which is orthogonal with respect to the symmetric bilinear form related to the considered quadratic form. Then the entries on the diagonal are the eigenvalues of the matrix.

E.g.: let , with matrix , so its characteristic polynomial has zeroes in and , so it has eigenvalues of both signs, and thus is indefinite.

# 5. Preparation for the second test

The test will take place on May 13th. You may find the following material helpful during your time of independent studies.

# 4. Basic linear algebra

Part 1.: Problems, solutions.
Part 2.: Problems, solutions.
Part 3.: Problems, solutions.
Part 4.: Problems, solutions.
Part 5.: Problems, solutions.
Part 6.: Problems, solutions.
Part 7.: Problems, solutions
Part 8.: Problems, solutions.
Part 9.: Problems, solutions.

A system of linear equations with variables is simply a set of equations, in which the variables appear without any powers or multiplication between them:

where are some real numbers. E.g. the following is a system of 3 linear equations with four variables:

A solution of such a system of linear equations is a tuple of four numbers (four is the number of variables in this case) which satisfies all the equations in the system (after substituting those numbers for the variables), e.g. (2,0,0,0) in our case. But a system can have more than one solution. For example, (22,-7,1,0) is also a solution of the above system. More precisely, a system of linear equations can have 0, 1, or infinitely many solutions.

A general solution of a system of linear equations is a description of the set of all its solutions (which sometimes may mean saying that there are no solutions at all, or pointing out the only one). How to find a general solution? We will use the so-called Gaussian elimination method. Less formally, we will call it transforming our system of equations into an echelon form.

The first step is to write down the matrix of the given system of equations. A matrix is simply a rectangular table of numbers. Matrices will play a more and more important role in this course, but for now we can just think of the matrix of a system of equations as an abbreviated notation for the system itself. We simply write down the coefficients, separating the column of free coefficients with a line:

Next we perform some operations on this matrix. These operations simplify the matrix, but we must make sure that they do not change the set of solutions of the corresponding system of equations. Therefore only three types of operations are permitted:

• subtracting from a row another row multiplied by a number (corresponds to subtracting an equation from another one),
• swapping two rows (corresponds to swapping two equations)
• multiplying a row by a non-zero number (corresponds to multiplying both sides of an equation by a non-zero number)

Our aim is to achieve a staircase-like echelon form of the matrix (meaning that you can draw a staircase through the matrix, below which there are only zeros). It means that each corresponding equation will contain fewer variables than the previous one.

What should we do to achieve such a form? The best method is to generate the echelon from the left side using the first row. Under the 1 in the upper left corner we would like to have zeros. To achieve this, we subtract the first row multiplied by 2 from the second one, and the first row multiplied by 4 from the third one. So:

So now we would like to have a zero under 1 in the second row. Therefore we subtract the second row from the third:

And so we have achieved an echelon form! We have leading coefficient in the first row on the first variable and a leading coefficient in the second row on the second variable.

The next step is to reduce the matrix (transform it into the reduced echelon form). We would like to have zeros also above the leading coefficients. In our case we only need to deal with the 3 in the second place of the first row. To get a zero there, we subtract the second row multiplied by 3 from the first one.

We also need to make sure that the leading coefficients are always ones (sometimes we have to multiply a row by a fraction). But in our case this is already done. We have achieved the reduced echelon form. To get the general solution it suffices to write down the corresponding equations, moving the free variables to the right-hand side:

We can also write it in parametrized form, substituting into a vector all we know from the general solution: . Notice that by substituting any real numbers for and we get a tuple (a sequence of four numbers) which is a solution of our system of equations (it has infinitely many solutions). E.g. substituting we get the solution , which was mentioned above.
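The whole elimination can be reproduced with sympy's `rref`. The augmented matrix below is a hypothetical system of 3 equations with 4 variables (the notes' coefficients were not preserved in this copy):

```python
import sympy as sp

# Hypothetical augmented matrix [coefficients | free coefficients].
M = sp.Matrix([[1,  3, 1, 0, 2],
               [2,  7, 3, 1, 4],
               [4, 13, 5, 1, 8]])

# rref performs Gaussian elimination to the reduced echelon form
# and reports the pivot (leading-coefficient) columns.
R, pivots = M.rref()
print(pivots)  # (0, 1)
print(R)
```

Here the pivots sit on the first two variables, and the last two variables are free, exactly the shape of general solution described above.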

Finally let us introduce some terminology. A system of linear equations is:

• homogeneous if all constant terms are zero,
• inconsistent if it has no solutions.

For example, the following system:

is not homogeneous and has exactly one solution, so it is not inconsistent. Meanwhile:

is homogeneous and has exactly one solution, so it is not inconsistent. On the other hand, the following system of equations is inconsistent:

Note that a system of linear equations can have exactly one, infinitely many or no solutions.

### Definition

In many applications we will use the notion of determinant of a matrix. The determinant of a matrix makes sense for square matrices only and is defined recursively:

•

where is the matrix with the -th row and -th column crossed out. So (the determinant is denoted by or by absolute-value-style brackets around the matrix):

therefore:

And so on. E.g.:
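The recursive definition can be written down directly; a sketch (with numpy's determinant as an independent check on a hypothetical matrix):

```python
import numpy as np

def det(M):
    # Determinant by expansion along the first row (Laplace expansion).
    n = len(M)
    if n == 1:
        return M[0][0]
    total = 0
    for j in range(n):
        # Minor: cross out row 0 and column j.
        minor = [row[:j] + row[j + 1:] for row in M[1:]]
        total += (-1) ** j * M[0][j] * det(minor)
    return total

M = [[1, 2, 3],
     [4, 5, 6],
     [7, 8, 10]]
assert det(M) == round(np.linalg.det(np.array(M)))
print(det(M))  # -3
```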

### Laplace expansion

The above definition is only a special case of a more general fact called the Laplace expansion. Instead of the first row we can use any row or column (always choose the one with the most zeros). So:

for any row . An analogous fact is true for any column.

E.g. for the matrix below it is easiest to use the third column:

### Determinant and operations on a matrix

Notice first that from the Laplace expansion we easily get that if a matrix has a row (or column) of zeros, its determinant equals zero.

Consider now the different operations on the rows of a matrix which we use to compute its echelon form. Using the Laplace expansion one can prove that swapping two rows multiplies the determinant by — indeed, calculating the determinant along the first column, we see that the signs in the sum may change, but also the rows in the minor matrices get swapped.

Next, we can notice that multiplying a row by a number multiplies the determinant by this number — you can see it easily by calculating the Laplace expansion along this row.

Therefore multiplying the whole matrix by a number multiplies the determinant by this number once for each row; precisely:

where is a matrix of size .

Notice also that the determinant of a matrix with two identical rows equals zero, because swapping those rows does not change the matrix but multiplies the determinant by , so , and therefore . Hence, by the row multiplication rule, if two rows of a matrix are linearly dependent, then its determinant equals .

The Laplace expansion also implies that if matrices , , differ only in the -th row, in such a way that this row in matrix is the sum of the -th rows of matrices and , then the determinant of is the sum of the determinants of and , e.g.:

But it can be easily seen that in general !

Finally, consider the most important operation: adding to a row another row multiplied by a number. Here we actually deal with the situation described above. The resulting matrix is a matrix which differs from and only in the row we add to. Matrix is the original matrix, and matrix is the matrix in which the row we add to is replaced by the added row multiplied by the number. Therefore , but has two linearly dependent rows, so , and hence . Therefore the operation of adding a row multiplied by a number to another row does not change the determinant of a matrix.

### Calculating the determinant via triangular form of matrix

If you look closely you will see that the Laplace expansion also implies that the determinant of a matrix in echelon form (usually called triangular for square matrices) equals the product of the elements on its diagonal, so e.g.:

Because we know how the elementary operations change the determinant, to calculate the determinant of a matrix we can compute a triangular form, calculate its determinant, and recover the determinant of the original matrix. This method is especially useful for large matrices, e.g.:

Therefore, the determinant of the last matrix is . On our way we have swapped rows once and we have multiplied one row by , therefore the determinant of the first matrix equals .
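The method can be sketched in code. The function below (my illustration, with a made-up sample matrix) reduces the matrix to triangular form, tracking row swaps and using only the operation of adding a multiple of one row to another, then multiplies the diagonal:

```python
from fractions import Fraction

def det_by_elimination(M):
    """Reduce to triangular form, tracking how each operation scales det."""
    M = [[Fraction(x) for x in row] for row in M]
    n = len(M)
    sign = 1
    for col in range(n):
        # find a pivot, swapping rows if needed (each swap flips the sign)
        pivot = next((r for r in range(col, n) if M[r][col] != 0), None)
        if pivot is None:
            return Fraction(0)          # a zero column -> determinant is 0
        if pivot != col:
            M[col], M[pivot] = M[pivot], M[col]
            sign = -sign
        # adding a multiple of the pivot row does not change the determinant
        for r in range(col + 1, n):
            factor = M[r][col] / M[col][col]
            M[r] = [a - factor * b for a, b in zip(M[r], M[col])]
    result = Fraction(sign)
    for i in range(n):
        result *= M[i][i]               # product of the diagonal entries
    return result

print(det_by_elimination([[2, 1, 3], [4, 2, 1], [6, 1, 2]]))   # -20
```

Exact `Fraction` arithmetic avoids the rounding errors that floating-point elimination would introduce.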

The above fact also implies how to calculate the determinant of a matrix in block form with a bottom-left block of zeros: the determinant of such a matrix equals the product of the determinants of the diagonal blocks, e.g.:

### Cramer’s rule

Given a system of $n$ linear equations with $n$ variables we may try to solve it with Cramer's rule. Let $A$ be the matrix of this system without the column of free coefficients. Let $A_i$ be the matrix $A$ in which the $i$-th column is replaced by the column of free coefficients. Then:

• if $\det A \neq 0$, the system has exactly one solution, given by the formula $x_i = \dfrac{\det A_i}{\det A}$,
• if $\det A = 0$ and at least one of $\det A_i$ is not equal to $0$, the system has no solutions,
• if $\det A = 0$ and $\det A_i = 0$ for every $i$, there can be zero or infinitely many solutions; Cramer's rule does not give a definite answer.

E.g. let us solve the following system of equations:

Therefore:

Since $\det A \neq 0$, this system has exactly one solution. To determine it we calculate the remaining determinants:

And so , , .
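Since the numbers of the worked example are not reproduced here, the sketch below applies Cramer's rule to a hypothetical 3x3 system of my own; the helper names `det3` and `replace_col` are illustrative, not from the notes:

```python
def det3(M):
    """3x3 determinant via the rule of Sarrus."""
    (a, b, c), (d, e, f), (g, h, i) = M
    return a*e*i + b*f*g + c*d*h - c*e*g - b*d*i - a*f*h

# Hypothetical example system (not the one from the notes):
#    x + y + z = 6
#    x - y + z = 2
#   2x + y - z = 1
A = [[1, 1, 1], [1, -1, 1], [2, 1, -1]]
b = [6, 2, 1]

def replace_col(M, j, col):
    """Matrix M with its j-th column replaced by the free coefficients."""
    return [row[:j] + [col[i]] + row[j+1:] for i, row in enumerate(M)]

W = det3(A)
assert W != 0   # non-zero main determinant -> exactly one solution
solution = [det3(replace_col(A, j, b)) / W for j in range(3)]
print(solution)   # [1.0, 2.0, 3.0]
```

Each coordinate is the ratio of a column-replaced determinant to the main one, exactly as in the rule above.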

### Vector spaces

Generally speaking, a vector space is a "space" consisting of vectors, which can be:

• added to one another, where addition is associative and commutative,
• multiplied by a number, where multiplication is distributive with respect to vector addition ($a(v + w) = av + aw$) and with respect to addition of numbers ($(a + b)v = av + bv$), and compatible with multiplication of numbers ($a(bv) = (ab)v$),

in which there exists a zero vector $0$ such that $v + 0 = v$ for any vector $v$, and for each vector $v$ there exists a vector $-v$ (inverse to $v$) such that $v + (-v) = 0$.

The plane is a classic example of a vector space. Vectors have the form of a pair of numbers $(x, y)$, can be added to one another, $(x_1, y_1) + (x_2, y_2) = (x_1 + x_2, y_1 + y_2)$, and multiplied by a number, $a(x, y) = (ax, ay)$. There exists a zero vector $(0, 0)$, and for each vector $(x, y)$ there is an inverse vector $(-x, -y)$.

But obviously a space consisting of longer sequences of numbers, e.g. quadruples, is also a vector space. Although it is not easy to imagine such a space geometrically, addition and multiplication work in the same way as in the two-dimensional case.

### Vector subspaces

A vector subspace is a subset $V$ of a vector space closed under the operations on vectors, meaning that if vectors $v, w$ are in $V$, then $v + w$ is also in $V$, and for any number $a$, the vector $av$ is also in $V$.

To prove that a subset is a vector subspace we have to show that for any two vectors from this subset their sum is also in the subset, and that for any vector in the subset and any number their product is in the subset. To prove that a subset is not a vector subspace it suffices to find two vectors in the subset whose sum is outside it, or a vector in the subset and a number whose product does not belong to the subset.

For example, a line through the origin is a vector subspace of the plane: if $v$ and $w$ belong to this line, then so does their sum $v + w$, and similarly, for every number $a$, the vector $av$ belongs to this line.

On the other hand the set is not a vector subspace, because vector , which is in the set, multiplied by gives , which is not in the set.

### Linear combinations

Given a finite set of vectors $v_1, \dots, v_n$, any vector of the form $a_1 v_1 + \dots + a_n v_n$, where $a_1, \dots, a_n$ are some numbers, is called their linear combination. For example, the vector $(-2, 1)$ is a linear combination of the given two vectors. How do we calculate the coefficients without simply guessing them? Notice that we look for numbers $a_1, a_2$ such that $a_1 v_1 + a_2 v_2 = (-2, 1)$. This is actually a system of two linear equations:

it suffices to solve it:

so we obtain the coefficients. If this system were inconsistent, it would mean that our vector is not a linear combination of those two vectors.

The set of all linear combinations of vectors is denoted as .
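The coefficient computation can be sketched in code. The vectors `v1`, `v2` below are hypothetical (not the pair from the example above); we express $(-2, 1)$ as their combination by solving the resulting 2x2 system:

```python
from fractions import Fraction

# Hypothetical vectors: express (-2, 1) as a*(1, 1) + b*(1, -1).
# Coordinatewise this is the system  a + b = -2,  a - b = 1.
v1, v2, target = (1, 1), (1, -1), (-2, 1)

# Solve the 2x2 system by Cramer's rule (basis vectors as columns):
W  = v1[0] * v2[1] - v2[0] * v1[1]
Wa = target[0] * v2[1] - v2[0] * target[1]
Wb = v1[0] * target[1] - target[0] * v1[1]
a, b = Fraction(Wa, W), Fraction(Wb, W)
print(a, b)   # -1/2 -3/2

# Check: a*v1 + b*v2 really equals the target vector.
assert all(a * x + b * y == t for x, y, t in zip(v1, v2, target))
```

An inconsistent system here would mean the target vector is not a combination of `v1` and `v2`.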

### Linear independence

A set of vectors is called linearly independent if none of them is a linear combination of the others. This definition is quite impractical: to check directly that a set of four vectors is linearly independent we would have to check that four systems of equations are inconsistent. We need a different but equivalent definition.

Notice that if a vector is a linear combination of the others, then the zero vector can be written as a combination of the vectors from our set in which at least one coefficient is non-zero. This suggests another, equivalent definition of linear independence: a set of vectors is linearly independent if in every combination of them giving the zero vector all the coefficients are zeros. In other words, the only way to get the zero vector is the trivial one: multiplying them all by zero.

How to check that a given set of vectors is linearly independent? Obviously we have to calculate the coefficients of their linear combination giving the zero vector (if the only solution is that all the coefficients are zero, then the set is linearly independent). So we need to check whether a system of linear equations has only one solution.

For example, let us check whether the given vectors are linearly independent. We solve a system of equations:

We already know that the system has only one solution because there is a stair (pivot) in every column. Therefore this set of vectors is linearly independent.

There is also a second method of checking whether a set of vectors is linearly independent. Notice that the operations on rows of a matrix simply create linear combinations of the vectors written in the rows. If we manage to get a row of zeros, we have a non-trivial linear combination of the rows giving the zero vector, and so the set of vectors written in the rows is not linearly independent. Let us check whether the given vectors are linearly independent.

We get a row of zeros, so the vectors are not linearly independent.
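Both checks can be automated. The sketch below (my own, with illustrative vectors) row-reduces the matrix whose rows are the given vectors and declares independence exactly when no zero row appears:

```python
from fractions import Fraction

def row_reduce(vectors):
    """Gaussian elimination on the rows; returns the reduced form."""
    M = [[Fraction(x) for x in v] for v in vectors]
    rows, cols, r = len(M), len(M[0]), 0
    for c in range(cols):
        pivot = next((i for i in range(r, rows) if M[i][c] != 0), None)
        if pivot is None:
            continue
        M[r], M[pivot] = M[pivot], M[r]
        for i in range(rows):
            if i != r and M[i][c] != 0:
                f = M[i][c] / M[r][c]
                M[i] = [a - f * b for a, b in zip(M[i], M[r])]
        r += 1
    return M

def independent(vectors):
    """Independent iff no zero row appears after row reduction."""
    return all(any(x != 0 for x in row) for row in row_reduce(vectors))

# (1, 1, 4) is NOT the sum of the first two vectors, so:
assert independent([(1, 0, 2), (0, 1, 1), (1, 1, 4)])
# ...but (1, 1, 3) IS their sum, so:
assert not independent([(1, 0, 2), (0, 1, 1), (1, 1, 3)])
```
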

### Describing a vector space

How can we describe a given vector subspace mathematically? There are a few ways:

1. list a set of vectors which spans it,
2. write out its basis,
3. give a homogeneous system of linear equations which describes it.

The above needs a few more words of explanation. We would also like to know how to pass from one description to another.

### A spanning system of vectors

It is easy to notice that the set of all linear combinations of a given set of vectors is always a vector space. Let us check it in the case of three vectors $u_1, u_2, u_3$: let $v$ and $w$ be their linear combinations, i.e. $v = a u_1 + b u_2 + c u_3$ and $w = d u_1 + f u_2 + g u_3$ for some numbers $a, b, c, d, f, g$. Then $v + w = (a + d)u_1 + (b + f)u_2 + (c + g)u_3$ and, for any number $k$, $kv = ka\,u_1 + kb\,u_2 + kc\,u_3$ are also linear combinations of $u_1, u_2, u_3$, so they belong to the set, and we are indeed dealing with a vector space.

Therefore, , or are examples of vector spaces, defined by listing the sets of vectors which span them.

### Basis

A system of vectors which spans a given vector space and additionally is linearly independent is called its basis. A basis of a vector space is not unique: e.g. $(1, 0), (0, 1)$ (the so-called standard basis) is a basis of the plane, but there are many others. However, every basis of a given vector space contains the same number of vectors. This number is called the dimension of the vector space and denoted as $\dim V$.

Obviously, a basis is in particular a system of vectors which spans the given space.

Every vector in the given space is a linear combination of vectors from the basis. Furthermore, the coefficients of this combination are uniquely determined and are called coordinates. For example, with respect to some basis the vector $(1,1)$ may have coordinates $1, 1$, while $(0,-1)$ with respect to the same basis has coordinates $-2, -1$. Obviously, we calculate the coordinates of a given vector with respect to a given basis by solving a system of equations; we have done this already when determining whether a vector is a linear combination of other vectors. This time the system will always have exactly one solution.

### A system of equations describing a space

Notice that given a homogeneous system of linear equations and two of its solutions, both their sum and any solution multiplied by a number are also solutions of the system. Let us check it in the following example. Vectors $(1,-1,0)$ and $(0,2,2)$ are solutions of the equation $x + y - z = 0$. Indeed, $1 - 1 - 0 = 0$ and $0 + 2 - 2 = 0$, so also their sum $(1, 1, 2)$ is a solution, $1 + 1 - 2 = 0$, as is e.g. $2 \cdot (1, -1, 0) = (2, -2, 0)$: $2 - 2 - 0 = 0$.

Therefore, a set of solutions to a homogeneous system of linear equations is always a vector space! And a homogeneous system itself is one of possible descriptions of a vector space.

Now let us learn how, given one form of description, to find another one.

### Finding a basis from a set of vectors which spans the space

Given a set of vectors we would like to find a linearly independent set which spans the same space. We have mentioned that carrying out row operations on a matrix does not change the set of possible linear combinations of the vectors written in the rows. Furthermore, notice that in an echelon form of a matrix the vectors in the non-zero rows are linearly independent. Indeed, to get the zero vector as a combination of them, we have to multiply the first one by zero because of the first stair position; therefore we have to multiply the second one by zero because of the second stair position, and so on. Thus, all we need to do is to write the given vectors in the rows of a matrix and compute its echelon form. The non-zero rows form a basis of the space.

For example, let us find a basis of the space .

So is a basis of the given space, and the space has dimension 2.
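A minimal sketch of this procedure, with a hypothetical spanning set whose third vector is the sum of the first two:

```python
from fractions import Fraction

def echelon(vectors):
    """Bring the matrix with the given rows to echelon form."""
    M = [[Fraction(x) for x in v] for v in vectors]
    rows, cols, r = len(M), len(M[0]), 0
    for c in range(cols):
        pivot = next((i for i in range(r, rows) if M[i][c] != 0), None)
        if pivot is None:
            continue
        M[r], M[pivot] = M[pivot], M[r]
        for i in range(r + 1, rows):
            f = M[i][c] / M[r][c]
            M[i] = [a - f * b for a, b in zip(M[i], M[r])]
        r += 1
    return M

span = [(1, 2, 0), (2, 4, 1), (3, 6, 1)]   # third = first + second
basis = [row for row in echelon(span) if any(x != 0 for x in row)]
print(basis)   # two non-zero rows -> the space has dimension 2
assert len(basis) == 2
```

The dependent vector collapses into a zero row, and the surviving non-zero rows are the basis.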

### Finding a basis of a space described by a system of equations

We start by finding a general solution to the given system in the parametrized form. E.g.

so the general solution can be written in a parametrized form, in which every vector of the space is expressed in terms of the parameters. Splitting such a vector into a combination, we see that every vector in the space is a linear combination of the vectors obtained by substituting $1$ for one parameter and zero for the others. It is also easy to see that the three vectors obtained this way are linearly independent, so they form a basis of the space, and the space has dimension $3$. Notice also that $3 = 5 - 2$, where $5$ is the number of variables and $2$ is the number of independent equations in the system.

### Finding a system of equations describing a vector space given by a system of vectors which spans it (or a basis)

Assume that we study a vector subspace of the four-dimensional space spanned by three given vectors. Our system of equations will thus have $4$ variables, and each equation in it will be of the form $ax + by + cz + dt = 0$. The three given vectors have to satisfy this equation, which actually gives a system of equations to be fulfilled by the coefficients $a, b, c, d$ of the system we are looking for. A system of equations describing the coefficients of a system of equations looks a bit confusing, but bear with me. Let us write this system down:

And solve it:

So the general solution tells us what the coefficients of the system we are looking for can be. Notice that the system we are trying to find is not unique; many systems of equations are equivalent. Actually, the set of possible coefficients of an equation which is satisfied by every vector of the given space forms a vector space itself. We write down its basis using the calculated general solution. If we take the two basis vectors and write down two equations with these coefficients, then any equation satisfied by every vector of our space can be obtained by combining them linearly. This means the system we are looking for is the system with coefficients taken from the calculated basis, namely:

And we are done!

### Matrix multiplication

To multiply matrices, the first has to have as many columns as the second has rows. The resulting matrix has as many rows as the first and as many columns as the second. We multiply rows of the first matrix by columns of the second: in the resulting matrix, the entry in the $i$-th row and $j$-th column is the product of the $i$-th row of the first matrix with the $j$-th column of the second, where the product of a row and a column means multiplying pairs of corresponding numbers and summing them up. E.g.:

Matrix multiplication is associative, which means that $(AB)C = A(BC)$ for any matrices $A$, $B$, $C$ of compatible sizes. But it is not commutative! Notice that if the matrices are not square, it is impossible to multiply them in the reverse order; even if they are square, the result may differ.
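The row-times-column rule can be written out directly; the example matrices below are my own and also show that reversing the order changes the result:

```python
def matmul(A, B):
    """Entry (i, j) is the product of row i of A with column j of B."""
    assert len(A[0]) == len(B), "columns of A must match rows of B"
    return [[sum(a * b for a, b in zip(row, col)) for col in zip(*B)]
            for row in A]

A = [[1, 2], [3, 4]]
B = [[0, 1], [1, 0]]
print(matmul(A, B))   # [[2, 1], [4, 3]]
print(matmul(B, A))   # [[3, 4], [1, 2]] -- not commutative!

# Associativity holds: (AB)C == A(BC)
C = [[2, 0], [0, 2]]
assert matmul(matmul(A, B), C) == matmul(A, matmul(B, C))
```
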

### Inverse matrix

A matrix $B$ is inverse to a matrix $A$ if $AB = BA = I$, where $I$ is the identity matrix (the matrix with ones on the diagonal and zeros everywhere else). The inverse matrix is denoted as $A^{-1}$. Since $\det(AB) = \det A \cdot \det B$ and $\det I = 1$, we see that $\det A^{-1} = \frac{1}{\det A}$. This implies that only matrices with non-zero determinant can have inverses. Such matrices are therefore called invertible.

How to calculate the inverse of a given matrix? We have mentioned recently that the row operations leading to the reduced echelon ("stair-like") form amount to multiplication by a matrix. Imagine that we transform the matrix $[A \mid I]$, consisting of matrix $A$ along with the identity matrix, into the reduced echelon form. Since $A$ is a square matrix with non-zero determinant, we will get the identity matrix on the left side: $[I \mid B]$. But notice that if $C$ is the matrix of the row operations, then $[I \mid B] = C[A \mid I] = [CA \mid C]$. Therefore $CA = I$ and $B = C$. The first equation implies that $C = A^{-1}$, and the second that $B = A^{-1}$. So after those operations we get the inverse matrix on the right!

E.g. let us calculate the inverse of the following matrix:

So:

And therefore:
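A sketch of the whole Gauss-Jordan procedure, with a hypothetical 2x2 matrix (exact arithmetic via `Fraction` avoids rounding):

```python
from fractions import Fraction

def inverse(A):
    """Gauss-Jordan: reduce [A | I] to [I | A^{-1}]."""
    n = len(A)
    # augment A with the identity matrix on the right
    M = [[Fraction(x) for x in row] + [Fraction(int(i == j)) for j in range(n)]
         for i, row in enumerate(A)]
    for c in range(n):
        pivot = next(r for r in range(c, n) if M[r][c] != 0)  # assumes det != 0
        M[c], M[pivot] = M[pivot], M[c]
        M[c] = [x / M[c][c] for x in M[c]]      # scale the pivot row to 1
        for r in range(n):
            if r != c and M[r][c] != 0:         # clear the rest of the column
                f = M[r][c]
                M[r] = [a - f * b for a, b in zip(M[r], M[c])]
    return [row[n:] for row in M]               # right half is the inverse

A = [[2, 1], [1, 1]]
assert inverse(A) == [[1, -1], [-1, 2]]   # Fraction == int works entrywise
```

For a singular matrix the pivot search fails, reflecting the fact that only matrices with non-zero determinant are invertible.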

### Linear maps

A linear map is a map $\varphi$ which maps vectors from a given space $V$ to vectors from another linear space $W$ and satisfies the linearity condition: for all vectors $v, w$ and numbers $a, b$ we have $\varphi(av + bw) = a\varphi(v) + b\varphi(w)$. E.g. a rotation of the plane around the origin is a linear map. Given two vectors and two scalars we will get the same vector regardless of whether we rotate the vectors first and then multiply them by the numbers and add them, or multiply by the numbers, add, and then rotate.

Therefore, to prove that a given map is linear, we need to show that for any two vectors and any two numbers it satisfies the linearity condition, by direct computation on the formula defining the map.

Meanwhile, to disprove that a map is linear, we need to find an example of two vectors along with two numbers for which the linearity condition fails.

Usually, linear maps will be given by their formulas, e.g. $\varphi(x, y) = (y, -x)$. Then, to see what $\varphi$ does to a given vector, e.g. $(1, 2)$, we substitute it into the formula: $\varphi(1, 2) = (2, -1)$. By the way, it is easy to see that this is simply the rotation around the origin by 90 degrees clockwise.

Sometimes we can define a linear map by giving its values only on the vectors of a given basis. This suffices to determine the map. To compute its formula, first calculate the coordinates of the standard basis vectors in the given basis (by solving a system of equations or by guessing), and then use linearity to express the values of the map on the standard basis vectors.

In the above example we can also find the values of $\varphi$ on the standard basis vectors by writing down a matrix with the vectors of the given basis on the left-hand side and their values on the right-hand side. Row operations preserve the principle that in each row the vector is on the left and its value on the right, because $\varphi$ is linear. So after transforming the matrix to the reduced echelon form, on the right-hand side one will find the values of $\varphi$ on the standard basis vectors,

so we get the same formula as before.

Given two linear maps $\varphi, \psi \colon V \to W$ and a number $a$, their sum $\varphi + \psi$ and multiple $a\varphi$ are also linear maps from $V$ to $W$. Obviously, we add values and multiply them by coefficients pointwise: $(\varphi + \psi)(v) = \varphi(v) + \psi(v)$ and $(a\varphi)(v) = a \cdot \varphi(v)$.

This means that the set of all linear maps between fixed vector spaces $V$ and $W$ is a vector space itself; it is usually denoted by $L(V, W)$.

### Kernel and image of a linear transformation

If $\varphi \colon V \to W$ is a linear transformation, then it is easy to see that

$$\ker \varphi = \{ v \in V : \varphi(v) = 0 \},$$

called the kernel of $\varphi$, and

$$\operatorname{im} \varphi = \{ \varphi(v) : v \in V \},$$

called the image of $\varphi$, are linear subspaces of $V$ and $W$ respectively. The dimension of the image of $\varphi$ is also called the rank of $\varphi$. Notice that if $v_1, \dots, v_n$ spans $V$, then $\varphi(v_1), \dots, \varphi(v_n)$ spans $\operatorname{im} \varphi$. Moreover, if $v_1, \dots, v_k$ is a basis of $\ker \varphi$ extended to a basis $v_1, \dots, v_n$ of $V$, then $\varphi(v_{k+1}), \dots, \varphi(v_n)$ is a basis of $\operatorname{im} \varphi$. Therefore,

$$\dim \ker \varphi + \dim \operatorname{im} \varphi = \dim V.$$

### Matrix of a linear map

Matrices will be important for us for yet one more reason. It turns out that multiplying a matrix by a vector is exactly the same as calculating the value of a linear map on that vector: the matrix whose rows contain the coefficients of the formula defining the map, multiplied by a (vertical) vector, gives the value of the map on this vector.

More precisely, given a linear map $\varphi \colon V \to W$, a basis $\mathcal{A}$ of $V$ and a basis $\mathcal{B}$ of $W$, the matrix which, multiplied from the right by the (vertical) vector of coordinates of a vector $v$ in basis $\mathcal{A}$, gives the coordinates of $\varphi(v)$ in $\mathcal{B}$, is called the matrix of $\varphi$ in bases $\mathcal{A}$ and $\mathcal{B}$.

In particular in the above example:

is the matrix of $\varphi$ in the standard bases, and it can easily be read off the formula defining $\varphi$, simply by writing its coefficients in rows.

### Changing bases of a matrix — first method: calculate coordinates.

Assume that we are given a formula defining $\varphi$, as above (or its matrix in the standard bases), together with bases $\mathcal{A}$ and $\mathcal{B}$. We would like to calculate the matrix of $\varphi$ in bases $\mathcal{A}$ and $\mathcal{B}$.

Notice that if we multiply this matrix from the right by the (vertical) vector $(1, 0, 0)$, we get simply its first column. On the other hand, the first vector of $\mathcal{A}$ has coordinates $1, 0, 0$ in this basis. Therefore the result of the multiplication is the vector of coordinates, in basis $\mathcal{B}$, of the image of the first basis vector, and this is the first column of the matrix we are looking for.

So: and we have to find the coordinates of this vector in : , so the coordinates are 4,8 and this is the first column of the matrix we would like to calculate.

Let us find the second column. We do the same as before, but with the second vector from basis $\mathcal{A}$: we compute its image and find the coordinates of that image in $\mathcal{B}$.

The third column: has in coordinates , so

Therefore:
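The column-by-column procedure can be sketched as follows; the map `phi` and the bases here are hypothetical stand-ins, since the example data from the notes is not reproduced:

```python
from fractions import Fraction

# Hypothetical data: phi(x, y) = (x + y, y), basis A = B = {(1, 0), (1, 1)}.
def phi(v):
    x, y = v
    return (x + y, y)

A_basis = [(1, 0), (1, 1)]
B_basis = [(1, 0), (1, 1)]

def coords(v, basis):
    """Coordinates of v in a 2-element basis, by Cramer's rule."""
    (p, r), (q, s) = basis          # basis vectors as the columns (p q / r s)
    W = Fraction(p * s - q * r)
    return ((v[0] * s - q * v[1]) / W, (p * v[1] - v[0] * r) / W)

# Column j of the matrix = coordinates in B of phi(j-th vector of A):
columns = [coords(phi(a), B_basis) for a in A_basis]
matrix = list(zip(*columns))        # transpose columns into rows
assert matrix == [(1, 1), (0, 1)]
```

Each column is obtained exactly as in the text: apply the map to a basis vector of $\mathcal{A}$, then solve for the coordinates of the result in $\mathcal{B}$.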

### Composing maps

Given two linear maps $\psi \colon V \to W$ and $\varphi \colon W \to U$, we can compose them and consider the map which transforms a vector from $V$ first via $\psi$, and then transforms the result via $\varphi$, getting a vector from $U$.

Such a map is denoted as $\varphi \circ \psi$. Given the formulas defining $\varphi$ and $\psi$, we can easily get the formula for $\varphi \circ \psi$ by substituting one formula into the other.

Now look at the matrices of those maps. If $\mathcal{A}$ is a basis of $V$, $\mathcal{B}$ is a basis of $W$ and $\mathcal{C}$ is a basis of $U$, then, given the coordinates of a vector in $\mathcal{A}$, to get the coordinates of its image under $\varphi \circ \psi$ in $\mathcal{C}$ we first multiply them by the matrix of $\psi$ (getting coordinates in $\mathcal{B}$), and then multiply the result by the matrix of $\varphi$. Therefore we have multiplied the coordinates by the product of the two matrices, which means that the matrix of the composition is the product of the matrices:

Notice which bases have to agree in this formula!

In particular in our example:

which is consistent with the formula we have calculated before.

### Change-of-coordinates matrix

There is a special linear map, called the identity, which does nothing: $\mathrm{id}(v) = v$, so for example in the plane $\mathrm{id}(x, y) = (x, y)$. Given two bases $\mathcal{A}$ and $\mathcal{B}$, if we multiply the matrix of the identity in bases $\mathcal{A}$, $\mathcal{B}$ from the right by the coordinates of a vector in basis $\mathcal{A}$, we get the coordinates of the same vector (as $\mathrm{id}(v) = v$), but in basis $\mathcal{B}$. So this matrix changes coordinates from basis $\mathcal{A}$ to basis $\mathcal{B}$.

In particular we will need matrices changing coordinates from the standard basis to a given one, and from a given basis to the standard one. Let us check how to calculate them.

It is easy to calculate the change-of-coordinates matrix from a given basis to the standard basis. After multiplying it from the right by $(1, 0, 0)$ we get its first column. On the other hand, the first vector of the basis has coordinates $1, 0, 0$ in that basis, so the result of the multiplication is the vector of its coordinates in the standard basis, i.e. simply the vector itself. So the $i$-th column of this matrix is simply the $i$-th vector of the basis. Therefore:

Now, the other case: from the standard basis to a given basis. It can easily be seen that in the columns we should put the coordinates of the standard basis vectors in the given basis. Let us calculate them and write the matrix down. Therefore:
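Both change-of-coordinates matrices can be computed for a small hypothetical basis of the plane (my example, not the one from the notes):

```python
from fractions import Fraction

# Hypothetical basis A = {(1, 2), (1, 3)} of the plane.
A_basis = [(1, 2), (1, 3)]

# From A to the standard basis: columns are just the basis vectors.
M_A_to_st = [list(col) for col in zip(*A_basis)]
assert M_A_to_st == [[1, 1], [2, 3]]

# From the standard basis to A: the inverse matrix (2x2 formula).
(a, b), (c, d) = M_A_to_st
W = Fraction(a * d - b * c)
M_st_to_A = [[d / W, -b / W], [-c / W, a / W]]
assert M_st_to_A == [[3, -1], [-2, 1]]

# Sanity check: the coordinates of e1 = (1, 0) in A (first column: 3, -2)
# combined with the basis vectors should give back e1.
x, y = M_st_to_A[0][0], M_st_to_A[1][0]
assert (x * A_basis[0][0] + y * A_basis[1][0],
        x * A_basis[0][1] + y * A_basis[1][1]) == (1, 0)
```

The two matrices are inverse to each other, which matches the fact that composing the two coordinate changes does nothing.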

### Changing the basis of a matrix — the second method: multiplication by a change-of-coordinates matrix

We now have a new tool to change the bases of the matrix of a linear map. Because $\varphi = \mathrm{id} \circ \varphi \circ \mathrm{id}$, the matrix of $\varphi$ in the new bases is the product of a change-of-coordinates matrix, the matrix of $\varphi$, and another change-of-coordinates matrix:

In particular:

and in this way, given the matrix of a linear map in the standard bases, we can calculate its matrix in bases $\mathcal{A}$ and $\mathcal{B}$. In our example:

Obviously we get the same result as calculated by the first method.

# 3. Preparation for the first test

The first test will take place on 8/04.