Generally speaking, a vector space over a field $K$ is a "space" consisting of vectors, which can be:
- added, and addition is associative ($(u+v)+w=u+(v+w)$) and commutative ($u+v=v+u$),
- multiplied by a number from $K$, and multiplication is distributive with respect to the addition of vectors ($a(u+v)=au+av$) and with respect to the addition of numbers ($(a+b)v=av+bv$), and compatible with multiplication of numbers ($a(bv)=(ab)v$, and $1\cdot v=v$),

in which there exists a zero vector $\theta$ such that $v+\theta=v$ for any vector $v$, and for each vector $v$ there exists a vector $-v$ (inverse to $v$) such that $v+(-v)=\theta$.
The plane $\mathbb{R}^2$ is a classic example of a vector space over the reals. Vectors have the form of a pair of numbers $(x,y)$, can be added to one another, $(x_1,y_1)+(x_2,y_2)=(x_1+x_2,y_1+y_2)$, and can be multiplied by a number, $a(x,y)=(ax,ay)$. There exists a zero vector $(0,0)$, and for each vector $(x,y)$ there is an inverse vector, $(-x,-y)$.
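The operations on the plane described above can be sketched in code; this is a minimal illustration with vectors represented as pairs (tuples) of numbers, spot-checking a few of the axioms on concrete vectors:

```python
# A minimal sketch of the vector-space operations on the plane R^2,
# with vectors represented as pairs (tuples) of numbers.

def add(u, v):
    """Componentwise addition: (x1, y1) + (x2, y2) = (x1 + x2, y1 + y2)."""
    return (u[0] + v[0], u[1] + v[1])

def scale(a, v):
    """Multiplication by a number: a * (x, y) = (a*x, a*y)."""
    return (a * v[0], a * v[1])

ZERO = (0, 0)

def neg(v):
    """The inverse vector: -(x, y) = (-x, -y)."""
    return (-v[0], -v[1])

# Spot-check the axioms on concrete vectors:
u, v = (1, 2), (3, -1)
assert add(u, v) == add(v, u) == (4, 1)                      # commutativity
assert add(u, ZERO) == u                                     # zero vector
assert add(u, neg(u)) == ZERO                                # inverse vector
assert scale(2, add(u, v)) == add(scale(2, u), scale(2, v))  # distributivity
```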
But obviously a space consisting of vectors in the form of a sequence of a larger number of elements of a given field $K$, e.g. $4$ of them, is also a vector space, called $K^4$ (in general, $K^n$). Although it is not easy to imagine this space geometrically, addition and multiplication work in the same way as in the two-dimensional case.
But vector spaces can also be constructed in many other ways. It is easy to check that $M_{n\times m}(K)$ — the set of all $n\times m$ matrices with coefficients in a field $K$, $K^{\infty}$ — the set of infinite sequences with values in $K$, $K[x]$ — the set of all polynomials over $K$, and $K^X$ — the set of all functions from a set $X$ into the field $K$ — are all vector spaces over $K$. Indeed, for each of these sets the operations of addition and multiplication by an element of the field can be easily defined in a natural way.
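For the least geometric of these examples, a short sketch may help: functions from a set into the reals form a vector space under pointwise operations. This is only an illustration of how the "natural" operations are defined, with arbitrarily chosen sample functions:

```python
# Functions from a set X into R form a vector space under pointwise
# addition and pointwise multiplication by a number.

def fadd(f, g):
    """(f + g)(x) = f(x) + g(x)  -- pointwise addition of functions."""
    return lambda x: f(x) + g(x)

def fscale(a, f):
    """(a * f)(x) = a * f(x)  -- pointwise multiplication by a number."""
    return lambda x: a * f(x)

f = lambda x: x * x   # sample "vector": the function x -> x^2
g = lambda x: 3 * x   # sample "vector": the function x -> 3x

h = fadd(f, fscale(2, g))   # the linear combination f + 2g
assert h(5) == 25 + 2 * 15  # (f + 2g)(5) = f(5) + 2*g(5) = 55
```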
A vector subspace is a subset $W$ of a vector space closed under the operations on vectors, meaning that if vectors $v, w$ are in $W$, then $v+w$ is also in $W$, and for any number $a$, $av$ is also in $W$.
To prove that a subset is a vector subspace we have to show that for any two vectors in the subset their sum is also in the subset, and that for any vector in the subset and any number, their product is in the subset. To prove that a subset is not a vector subspace, it suffices to find two vectors in the subset whose sum lies outside it, or a vector in the subset and a number whose product does not belong to the subset.
For example, the line $\{(x,2x): x\in\mathbb{R}\}$ is a vector subspace of the plane, since if two vectors belong to this line, then they are of the form $(x_1,2x_1)$ and $(x_2,2x_2)$, and their sum $(x_1+x_2,2(x_1+x_2))$ also belongs to the line. Similarly, for every number $a$, $a(x,2x)=(ax,2ax)$ belongs to this line.
On the other hand, the set $\{(x,y): x\geq 0, y\geq 0\}$ (the first quadrant) is not a vector subspace, because the vector $(1,1)$, which is in the set, multiplied by $-1$ gives $(-1,-1)$, which is not in the set.
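A counterexample of this kind is easy to verify numerically. The sketch below assumes the set in question is the first quadrant $\{(x,y): x\geq 0, y\geq 0\}$, a standard example of a subset that fails closure under multiplication by a number:

```python
# Assumed example: the first quadrant Q = {(x, y) : x >= 0 and y >= 0}.
# It contains (1, 1), but not -1 * (1, 1), so it is not a subspace.

def in_quadrant(v):
    """Membership test for the first quadrant."""
    return v[0] >= 0 and v[1] >= 0

v = (1, 1)
w = (-1 * v[0], -1 * v[1])   # multiply v by the number -1

assert in_quadrant(v)        # v lies in the set
assert not in_quadrant(w)    # but (-1, -1) does not: closure fails
```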
The smallest subspace of any vector space is the zero subspace $\{\theta\}$, consisting only of the zero vector.
$K_n[x]$ — the set of all polynomials over $K$ of degree less than or equal to $n$ — is an interesting example of a subspace of $K[x]$. Indeed, after adding two polynomials from this subset we get a polynomial still in the subset. The same happens with multiplication by an element of $K$.
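The closure property of this subspace can be seen concretely by representing polynomials as coefficient lists; adding or scaling such lists never makes them longer, which is exactly why the degree cannot go up:

```python
# Polynomials over R represented by coefficient lists [a0, a1, ..., an]
# (so degree <= n means length <= n + 1). Addition and scaling never
# produce a longer list: the closure property of the subspace K_n[x].

def padd(p, q):
    """Add polynomials coefficientwise, padding the shorter with zeros."""
    n = max(len(p), len(q))
    p = p + [0] * (n - len(p))
    q = q + [0] * (n - len(q))
    return [a + b for a, b in zip(p, q)]

def pscale(c, p):
    """Multiply a polynomial by the number c, coefficientwise."""
    return [c * a for a in p]

p = [1, 0, 2]      # 1 + 2x^2
q = [0, 3, -2]     # 3x - 2x^2
s = padd(p, q)     # (1 + 2x^2) + (3x - 2x^2) = 1 + 3x
assert s == [1, 3, 0]
assert len(s) <= max(len(p), len(q))   # the degree did not increase
```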
Given a finite set of vectors $v_1,\ldots,v_n$, any vector of the form $a_1v_1+\ldots+a_nv_n$, where $a_1,\ldots,a_n$ are some numbers, is called their linear combination. For example, the vector $(-2,1)$ is a linear combination of the vectors $(1,2)$ and $(1,1)$, because $(-2,1)=3\cdot(1,2)-5\cdot(1,1)$. How can we calculate those coefficients without simply guessing them? Notice that we look for numbers $a,b$ such that $a\cdot(1,2)+b\cdot(1,1)=(-2,1)$. This is actually a system of two linear equations:
$$a+b=-2,\qquad 2a+b=1,$$
and it suffices to solve it: subtracting the first equation from the second gives $a=3$, and substituting back gives $b=-2-a=-5$, so $(-2,1)=3\cdot(1,2)-5\cdot(1,1)$. If this system were contradictory, it would mean that our vector is not a linear combination of those two vectors.
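In the plane, the resulting $2\times 2$ system can be solved in one step. The sketch below uses the determinant (Cramer's rule) and, as an assumed concrete example, expresses $(-2,1)$ through the vectors $(1,2)$ and $(1,1)$; a zero determinant signals that the system has no unique solution:

```python
# Find (a, b) with a*v1 + b*v2 = t for plane vectors, via Cramer's rule.

def combination_coefficients(v1, v2, t):
    """Solve a*v1 + b*v2 = t; return None when the determinant is zero
    (the system is contradictory or has infinitely many solutions)."""
    det = v1[0] * v2[1] - v2[0] * v1[1]
    if det == 0:
        return None
    a = (t[0] * v2[1] - v2[0] * t[1]) / det
    b = (v1[0] * t[1] - t[0] * v1[1]) / det
    return (a, b)

a, b = combination_coefficients((1, 2), (1, 1), (-2, 1))
assert (a, b) == (3, -5)   # (-2, 1) = 3*(1, 2) - 5*(1, 1)
```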
The set of all linear combinations of vectors $v_1,\ldots,v_n$ is denoted by $\operatorname{lin}(v_1,\ldots,v_n)$.
A set of vectors will be called linearly independent if none of them is a linear combination of the others. It is quite an impractical definition, because to check directly from it whether a set of four vectors is linearly independent we would have to check that four systems of equations are contradictory. We need a different but equivalent definition.
Notice that if a vector is a linear combination of the others, e.g. $v_1=a_2v_2+\ldots+a_nv_n$, then the zero vector can be written as $\theta=v_1-a_2v_2-\ldots-a_nv_n$, so the zero vector is a linear combination of vectors from our set in which at least one coefficient (here the coefficient $1$ of $v_1$) is non-zero. It seems that we are close to another definition of linear independence: a set of vectors is linearly independent if in every combination of them giving the zero vector all the coefficients are zero. In other words, the only way to get the zero vector is the trivial way: multiplying them all by zero.
How do we check that a given set of vectors is linearly independent? We have to calculate the coefficients of a linear combination of them giving the zero vector: if the only solution is that all the coefficients are zero, then the set is linearly independent. So we need to check whether a certain system of linear equations has exactly one solution.
For example, let us check whether $(1,1,0)$, $(0,1,1)$, $(1,0,1)$ are linearly independent. We look for numbers $a,b,c$ with $a(1,1,0)+b(0,1,1)+c(1,0,1)=(0,0,0)$, i.e. we solve the system of equations
$$a+c=0,\qquad a+b=0,\qquad b+c=0.$$
After elimination ($R_2-R_1$, then $R_3-R_2$) the system becomes $a+c=0$, $b-c=0$, $2c=0$. We already know that such a system has only one solution, $a=b=c=0$, because we have a stair in every row. Therefore this set of vectors is linearly independent.
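This check can be mechanized: row-reduce the matrix whose rows are the given vectors and count the pivots ("stairs"). If every row keeps a pivot, the only combination giving the zero vector is the trivial one. The vectors $(1,1,0)$, $(0,1,1)$, $(1,0,1)$ below are an assumed example:

```python
from fractions import Fraction

# Count pivots ("stairs") after Gaussian elimination on the matrix whose
# rows are the given vectors. Pivots in every row <=> linear independence.

def pivot_count(rows):
    m = [[Fraction(x) for x in row] for row in rows]
    pivots, r = 0, 0
    for c in range(len(m[0])):
        # find a row at or below position r with a nonzero entry in column c
        piv = next((i for i in range(r, len(m)) if m[i][c] != 0), None)
        if piv is None:
            continue
        m[r], m[piv] = m[piv], m[r]          # swap the pivot row up
        for i in range(r + 1, len(m)):       # eliminate below the pivot
            f = m[i][c] / m[r][c]
            m[i] = [a - f * b for a, b in zip(m[i], m[r])]
        pivots += 1
        r += 1
    return pivots

vs = [(1, 1, 0), (0, 1, 1), (1, 0, 1)]
assert pivot_count(vs) == len(vs)   # a stair in every row: independent
```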
There is also a second method of checking whether a set of vectors is linearly independent. Notice that the operations on the rows of a matrix simply create linear combinations of the vectors written in the rows. If we manage to get a row of zeros, it means that we have found a non-trivial linear combination of the rows giving the zero vector, and so the set of vectors written in the rows is not linearly independent. Let us check whether $(1,1,0)$, $(0,1,1)$, $(1,2,1)$ are linearly independent. Writing them in the rows of a matrix and performing $R_3-R_1$ and then $R_3-R_2$,
$$\begin{pmatrix}1&1&0\\0&1&1\\1&2&1\end{pmatrix}\to\begin{pmatrix}1&1&0\\0&1&1\\0&1&1\end{pmatrix}\to\begin{pmatrix}1&1&0\\0&1&1\\0&0&0\end{pmatrix},$$
we get a row of zeros, so $(1,1,0)$, $(0,1,1)$, $(1,2,1)$ are not linearly independent (indeed, $(1,2,1)=(1,1,0)+(0,1,1)$).
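The same row operations can be carried out in code; reaching a row of zeros exhibits the non-trivial combination explicitly. The rows $(1,1,0)$, $(0,1,1)$, $(1,2,1)$ are an assumed example (the third is the sum of the first two):

```python
# Row operations on the matrix whose rows are the vectors: reaching a row
# of zeros exhibits a non-trivial linear combination of the rows equal to
# the zero vector, i.e. the vectors are linearly dependent.

rows = [[1, 1, 0], [0, 1, 1], [1, 2, 1]]

rows[2] = [a - b for a, b in zip(rows[2], rows[0])]  # R3 := R3 - R1
rows[2] = [a - b for a, b in zip(rows[2], rows[1])]  # R3 := R3 - R2

assert rows[2] == [0, 0, 0]  # a zero row: the rows are linearly dependent
# The operations performed record the dependence: v3 - v1 - v2 = 0.
```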
An important property of linearly independent systems of vectors is described by the Steinitz theorem: if a system of vectors $v_1,\ldots,v_k$ is linearly independent and $v_1,\ldots,v_k\in\operatorname{lin}(w_1,\ldots,w_n)$, then $k\leq n$, and moreover we can choose $n-k$ of the vectors $w_i$, say $w_{i_1},\ldots,w_{i_{n-k}}$, such that $\operatorname{lin}(v_1,\ldots,v_k,w_{i_1},\ldots,w_{i_{n-k}})=\operatorname{lin}(w_1,\ldots,w_n)$.