Definition and examples
A relation $\le$ on a set $X$ will be called a partial order if it satisfies the following three conditions:
- reflexivity: $\forall_{x \in X}\ x \le x$,
- antisymmetry: $\forall_{x, y \in X}\ (x \le y \wedge y \le x) \Rightarrow x = y$,
- transitivity: $\forall_{x, y, z \in X}\ (x \le y \wedge y \le z) \Rightarrow x \le z$.
Obviously, $\le$ on the reals is an example of a partial order. Indeed, it is reflexive, antisymmetric and transitive.
The relation of divisibility on the natural numbers is another example. It is a partial order, because $n \mid n$ (reflexivity); if $n \mid m$ and $m \mid n$, then $n = m$ (antisymmetry); and finally, if $n \mid m$ and $m \mid k$, then $n \mid k$ (transitivity).
Furthermore, the inclusion $\subseteq$ on $P(X)$, for any set $X$, is also a partial order.
On the other hand, the relation on $\mathbb{Z}$ defined as $x \preceq y \iff |x| \le |y|$ is not a partial order: although it is reflexive and transitive, it is not antisymmetric, since $1 \preceq -1$ and $-1 \preceq 1$, but $1 \ne -1$.
For any partial order $\le$, we can consider its strict version (which is formally not a partial order, because it is not reflexive), i.e. the order $x < y \iff x \le y \wedge x \ne y$.
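The three axioms can be checked mechanically on a finite example. Below is a minimal sketch in Python, using the illustrative set $\{1, \ldots, 12\}$ ordered by divisibility (the particular set is our assumption, not taken from the text):

```python
# Check the three partial-order axioms for divisibility on an
# illustrative finite set {1, ..., 12}.
X = range(1, 13)

def leq(a, b):
    return b % a == 0  # "a divides b"

reflexive = all(leq(x, x) for x in X)
antisymmetric = all(x == y or not (leq(x, y) and leq(y, x))
                    for x in X for y in X)
transitive = all(leq(x, z) or not (leq(x, y) and leq(y, z))
                 for x in X for y in X for z in X)

print(reflexive, antisymmetric, transitive)  # True True True
```

The same checks, with a different `leq`, confirm (or refute) the axioms for any finite relation.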
Maximal, minimal, greatest and least elements
Simple orders on finite sets can easily be presented graphically in the form of a Hasse diagram, in which greater elements are placed above lesser ones, and neighbouring elements in the order are connected with lines. E.g. the following diagram shows the order of divisibility on a finite set.
Let $\le$ be a partial order on a set $X$. We shall say that $a \in X$ is:
- a maximal element, if $\neg \exists_{x \in X}\ a < x$,
- a minimal element, if $\neg \exists_{x \in X}\ x < a$,
- the greatest element, if $\forall_{x \in X}\ x \le a$,
- the least element, if $\forall_{x \in X}\ a \le x$.
Notice that in a set there can be many (or no) maximal (or minimal) elements, but there can be at most one greatest (or least) element. In particular, if there is more than one maximal (respectively, minimal) element, they are pairwise incomparable, so the greatest (respectively, least) element cannot exist (the reverse implication does not hold).
In the above example, we have three minimal elements and three maximal elements. Therefore the least and the greatest elements do not exist. On the other hand, in the order $(\mathbb{N}, \le)$ there is one minimal element, and it is also the least element. There are no maximal elements, and the greatest element does not exist.
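For a finite order all four notions are easy to compute directly from the definitions. A sketch, using a hypothetical divisibility example (the set $\{2, 3, 4, 5, 6, 9, 20\}$ is our choice, picked so that it has three minimal and three maximal elements):

```python
# Minimal, maximal, least and greatest elements of an assumed finite
# set ordered by divisibility.
X = [2, 3, 4, 5, 6, 9, 20]

def leq(a, b):
    return b % a == 0  # "a divides b"

minimal  = [x for x in X if not any(leq(y, x) for y in X if y != x)]
maximal  = [x for x in X if not any(leq(x, y) for y in X if y != x)]
least    = [x for x in X if all(leq(x, y) for y in X)]   # at most one
greatest = [x for x in X if all(leq(y, x) for y in X)]   # at most one

print(sorted(minimal), sorted(maximal), least, greatest)
# [2, 3, 5] [6, 9, 20] [] []
```

With several incomparable minimal (and maximal) elements, the least and greatest elements indeed do not exist, matching the remark above.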
Bounds, infimum and supremum
Given an order $\le$ on a set $X$, we now consider a subset $A \subseteq X$. We shall say that $b \in X$ is:
- a lower bound of $A$, if $\forall_{a \in A}\ b \le a$,
- an upper bound of $A$, if $\forall_{a \in A}\ a \le b$,
- the infimum of $A$ (denoted by $\inf A$), if it is the greatest element of the set of all lower bounds of $A$,
- the supremum of $A$ (denoted by $\sup A$), if it is the least element of the set of all upper bounds of $A$.
Notice that the infimum or the supremum of a set need not be an element of the considered subset $A$. But if $A$ has a greatest element (respectively, a least element), then it is its supremum (respectively, infimum).
E.g. consider the order $(\mathbb{R}, \le)$, and let $A = [0, 1)$. The set of its lower bounds, $(-\infty, 0]$, has a greatest element, so $\inf A = 0$, and it is the least element of $A$. The set of all upper bounds is $[1, \infty)$, so $\sup A = 1$. This time it is not an element of $A$. On the other hand, let $B = (-\infty, 0]$. This set has a greatest element, so it is its supremum, $\sup B = 0$. Meanwhile, the set of its lower bounds is empty, so the infimum of $B$ does not exist.
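In the divisibility order on the positive integers these notions have a familiar meaning: the infimum of a finite set is its greatest common divisor (the greatest lower bound) and the supremum is its least common multiple (the least upper bound). A small sketch, with $\{12, 18, 30\}$ as an assumed example:

```python
import math
from functools import reduce

# In the divisibility order, inf = gcd and sup = lcm;
# illustrated on the assumed set A = {12, 18, 30}.
A = [12, 18, 30]

inf_A = reduce(math.gcd, A)                              # gcd of all of A
sup_A = reduce(lambda a, b: a * b // math.gcd(a, b), A)  # lcm of all of A

assert all(a % inf_A == 0 for a in A)  # inf_A is a lower bound (divides each a)
assert all(sup_A % a == 0 for a in A)  # sup_A is an upper bound (each a divides it)
print(inf_A, sup_A)  # 6 180
```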
Linear, dense, well founded and well orders
A partial order is dense if for all $x, y$ such that $x \le y$ and $x \ne y$, there exists $z$ such that $x \le z \le y$, but $z \ne x$ and $z \ne y$. E.g. the order $(\mathbb{Q}, \le)$ is dense, because if $x < y$, then $x < \frac{x + y}{2} < y$. On the other hand, $(\mathbb{N}, \le)$ is not dense, because there does not exist such $z$ for $x = 1$ and $y = 2$.
A partial order is linear if any two elements are comparable, i.e. if $\forall_{x, y}\ x \le y \vee y \le x$. The order $(\mathbb{R}, \le)$ is an example of such an order. On the other hand, divisibility on the natural numbers is not linear, because neither $2 \mid 3$, nor $3 \mid 2$.
A partial order on a set $X$ is well founded if every non-empty subset of $X$ has a minimal element (or, equivalently, if there is no infinite strictly decreasing sequence of elements of $X$). Therefore, for example, $(\mathbb{N}, \le)$ is a well founded order. The same is true for divisibility on the natural numbers. But neither $(\mathbb{Z}, \le)$ nor $(\mathbb{R}, \le)$ is well founded.
A partial order is a well order if it is well founded and linear. The order $(\mathbb{N}, \le)$ is a well order, but divisibility on the natural numbers is not, because it is not linear.
Given two partial orders $(X, \le_X)$ and $(Y, \le_Y)$, it is easy to define an order on $X \times Y$. Namely:
Similarly, one can define an order on or even on .
Given two partially ordered sets $(X, \le_X)$ and $(Y, \le_Y)$, a function $f \colon X \to Y$ will be called an order isomorphism if it is a bijection, and if for any $x_1, x_2 \in X$, $x_1 \le_X x_2$ iff $f(x_1) \le_Y f(x_2)$.
Orders between which there exists an order isomorphism will be called isomorphic. It means that those orders are identical from our point of view. For example, $(\mathbb{N}, \le)$ is isomorphic to the set of even natural numbers with the usual $\le$. The isomorphism is $f(n) = 2n$. Indeed, it is a bijection, and $n \le m$ iff $2n \le 2m$.
To prove that two partially ordered sets are isomorphic, one needs to exhibit an order isomorphism between them. But how to prove that two orders are not isomorphic? We will use invariants to do this. An invariant is a property of a partially ordered set which stays unchanged under order isomorphisms. An invariant can be any property defined using only the notions of order and cardinality, e.g. "there exist exactly $n$ minimal elements". If there are exactly $n$ minimal elements in one partially ordered set, but not in a second one, then those two orders cannot be isomorphic.
E.g. an order given by divisibility and an order given by inclusion of subsets can be shown not to be isomorphic. An invariant which distinguishes them is e.g. the following sentence: there exists an element which has exactly $n$ elements less than or equal to it. In the first ordered set such an element may exist (a number with exactly $n$ divisors), while in the second there is no such element whenever $n$ is not a power of $2$, because the number of subsets of a finite set is always a power of $2$.
There are a few important statements which are equivalent to the axiom of choice. One of them is Zermelo's Theorem, which states that on every set one can define a well order. Another such statement is Zorn's Lemma. It states that if $(X, \le)$ is a partially ordered set with $X \ne \emptyset$, such that every chain $C \subseteq X$ ($C$ is a chain if any two elements of $C$ are comparable with respect to $\le$) has an upper bound, then there is a maximal element in $X$.
Zorn's Lemma is an important tool in mathematics which can be used to prove the existence of many important objects. E.g. one can prove, using Zorn's Lemma, that every vector space has a basis (a set of linearly independent vectors which is maximal with respect to inclusion). Indeed, let $V$ be a vector space, and let $X$ be the family of all linearly independent subsets of $V$, ordered by inclusion. $X$ is non-empty, because $\emptyset \in X$. Now let $C$ be a chain in $X$. We will prove that $\bigcup C \in X$, i.e. that a union of a chain of linearly independent sets is linearly independent. Let $v_1, \ldots, v_n \in \bigcup C$. Then there exist $A_1, \ldots, A_n \in C$ such that $v_i \in A_i$. Since $C$ is a chain, among those sets we can find one which contains all the others as its subsets. Let it be $A_k$. But then $v_1, \ldots, v_n$ are elements of a single linearly independent set, so they are linearly independent. Since those were arbitrary vectors from $\bigcup C$, it is also linearly independent. Thus, $\bigcup C$ is an upper bound of $C$ in $X$. Hence, there exists a maximal element in $X$, i.e. a maximal set of linearly independent vectors, in other words a basis.
Given a function $f$, we try to find a function $F$ such that $F' = f$, called an antiderivative of $f$. Such a function may not exist, but it certainly exists if $f$ is continuous. If an antiderivative exists, there exist infinitely many of them. Indeed, if $F$ is an antiderivative of $f$, then $F + c$, where $c$ is an arbitrary constant, is also an antiderivative, because $(F + c)' = F' = f$.
The set of all antiderivatives of $f$ will be called its indefinite integral and denoted by $\int f(x)\,dx$. We know how to calculate derivatives, so we can also easily guess integrals of some simple functions. E.g. $\int x^2\,dx = \frac{x^3}{3} + C$, because $\left(\frac{x^3}{3}\right)' = x^2$.
It is worth noticing that $\int a f(x)\,dx = a \int f(x)\,dx$ if $a$ is a constant, and $\int (f(x) + g(x))\,dx = \int f(x)\,dx + \int g(x)\,dx$. It is also clear that $\int f'(x)\,dx = f(x) + C$.
If we additionally assume that we are looking for an antiderivative with a given value at some point, then the constant $C$ is determined uniquely.
Integration by parts
But sometimes it is hard to guess a function such that the function we would like to integrate is its derivative. There are two methods which may make it easier, but even those methods require some guessing.
The first one is called integration by parts. Recall that $(fg)' = f'g + fg'$, therefore $fg' = (fg)' - f'g$. And so: $\int f(x) g'(x)\,dx = f(x) g(x) - \int f'(x) g(x)\,dx$
— and this is the theorem of integration by parts.
How to use it? It may happen that we are not able to guess the left-hand side integral, but the right-hand side integral is easy. Usually the hard part is to guess what the functions $f$ and $g$ are.
E.g., let us calculate $\int x e^x\,dx$. We need to write $x e^x$ as $f(x) g'(x)$. Let therefore $f(x) = x$ (so $f'(x) = 1$) and $g(x) = e^x$. Now we use the theorem and get: $\int x e^x\,dx = x e^x - \int e^x\,dx = x e^x - e^x + C$.
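The text's own worked example did not survive extraction, so as an assumed illustration one can verify the parts formula numerically for $\int_0^1 x e^x\,dx$ (the choice of $x e^x$ is ours):

```python
import math

# Verify integration by parts numerically on [0, 1] for the assumed
# example: the integral of x e^x equals [x e^x]_0^1 minus the integral
# of e^x. Both sides should come out equal (to 1, in fact).
def trapezoid(f, a, b, n=100_000):
    h = (b - a) / n
    return h * (f(a) / 2 + sum(f(a + i * h) for i in range(1, n)) + f(b) / 2)

lhs = trapezoid(lambda x: x * math.exp(x), 0.0, 1.0)
rhs = 1.0 * math.exp(1.0) - trapezoid(math.exp, 0.0, 1.0)

print(abs(lhs - rhs) < 1e-6)  # True
```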
Integration by substitution
The second method is called integration by substitution. This time we make use of the formula for differentiating a composition of functions. Recall that $(F \circ g)'(x) = F'(g(x)) \cdot g'(x)$, so if $F$ is an antiderivative of $f$, then $(F \circ g)'(x) = f(g(x)) g'(x)$. Therefore, $\int f(g(x)) g'(x)\,dx = F(g(x)) + C$. So finally: $\int f(g(x)) g'(x)\,dx = \int f(t)\,dt$, where $t = g(x)$.
It looks quite complicated but it is easy to use. E.g. let us calculate $\int 3x^2 \sin x^3\,dx$; it is easy to see what substitution we should use. Simply let $t = g(x) = x^3$ and $f(t) = \sin t$; then $g'(x) = 3x^2$. It may even be convenient to use the traditional notation of the derivative: $dt = 3x^2\,dx$. Therefore (remember to substitute back at the end):
\[\int 3x^2 \sin x^3\,dx = \int \sin t\,dt = -\cos t + C = -\cos x^3 + C.\]
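A quick numerical sanity check of this result: the derivative of $-\cos x^3$ should equal $3x^2 \sin x^3$. A sketch comparing a central difference quotient with the integrand:

```python
import math

# Check that F(x) = -cos(x^3) is an antiderivative of
# f(x) = 3 x^2 sin(x^3), by numerical differentiation of F.
def F(x):
    return -math.cos(x ** 3)

def f(x):
    return 3 * x ** 2 * math.sin(x ** 3)

h = 1e-6
ok = all(abs((F(x + h) - F(x - h)) / (2 * h) - f(x)) < 1e-5
         for x in (0.3, 0.7, 1.1))
print(ok)  # True
```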
Integrating rational functions
Rational functions can be integrated by the following method. First we have to put the given rational function into the form of a sum of simple fractions, i.e. functions of the form $\frac{A}{(x - a)^n}$ or $\frac{Bx + C}{(x^2 + bx + c)^n}$, where the quadratic polynomial in the denominator has no real roots ($\Delta < 0$). Then we only need to know how to integrate simple fractions.
How to calculate the simple fractions for a given rational function? Those fractions are determined by the factored form of the polynomial in the denominator. E.g. if:
first we should notice that , and then we know that:
and we can calculate the unknown coefficients by summing the right side of the equation and comparing it with the left side.
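As a hypothetical example of computing such coefficients (the text's own example was lost in extraction), one can decompose $\frac{1}{(x-1)(x+2)} = \frac{A}{x-1} + \frac{B}{x+2}$ exactly with rational arithmetic, and check the result by recombining:

```python
from fractions import Fraction

# Hypothetical example: 1/((x - 1)(x + 2)) = A/(x - 1) + B/(x + 2).
# The "cover-up" rule gives A by evaluating 1/(x + 2) at x = 1, and
# B by evaluating 1/(x - 1) at x = -2.
A = Fraction(1, 1 + 2)    # 1/3
B = Fraction(1, -2 - 1)   # -1/3

# Verify by recombining at a few sample points outside the poles:
for x in (Fraction(2), Fraction(5), Fraction(-4)):
    lhs = Fraction(1) / ((x - 1) * (x + 2))
    rhs = A / (x - 1) + B / (x + 2)
    assert lhs == rhs

print(A, B)  # 1/3 -1/3
```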
So now we shall calculate the integrals of those simple fractions. Integrals of fractions of the first type, $\int \frac{A}{(x - a)^n}\,dx$, are easy, because we know that $\int \frac{dx}{x - a} = \ln|x - a| + C$ and, for $n > 1$, $\int \frac{dx}{(x - a)^n} = \frac{(x - a)^{1 - n}}{1 - n} + C$. Simple fractions of the second type, $\frac{Bx + C}{(x^2 + bx + c)^n}$, are more complicated.
and again the first integral is easy, because:
As for the second one, we will need the following tricky substitution: after the substitution the denominator takes the form $(t^2 + 1)^n$, and so:
Finally, $\int \frac{dt}{(t^2 + 1)^n}$ can be easily calculated for $n = 1$, because $\int \frac{dt}{t^2 + 1} = \arctan t + C$, and for larger $n$ we have the following formula, which can be deduced from integration by parts:
Examples of integration of some rational functions can be found in the second part of exercises.
Substitutions leading to a rational function
Many functions can be changed to a rational function by some simple substitutions. The following substitutions can be used:
- if we deal with a fraction with $x$ and expressions of the form $\sqrt[n]{x}$, we can substitute $t = \sqrt[n]{x}$,
- if we deal with a fraction with $e^x$ to any powers, we can use the substitution $t = e^x$,
- if we deal with a fraction with $\sin x$ and $\cos x$, it makes sense to substitute $t = \tan\frac{x}{2}$. Then $\sin x = \frac{2t}{1 + t^2}$ and $\cos x = \frac{1 - t^2}{1 + t^2}$ (by trigonometric transformations).
we substitute and get:
Using Lagrange theorem to approximate values
Recall that Lagrange's Theorem states that for every function $f$ continuous on an interval $[x, x + h]$ and differentiable inside it, there exists a point $c$ in this interval such that $f(x + h) - f(x) = f'(c) \cdot h$;
but for small $h$ we can approximate $f'(c) \approx f'(x)$ and get the value $f(x + h)$ approximated by the values of $f$ and $f'$ at $x$, so: $f(x + h) \approx f(x) + f'(x) \cdot h$.
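A sketch of this approximation in Python, for the assumed example $f(x) = \sqrt{x}$ near $x = 4$ (the function and the point are our choice, not from the text):

```python
import math

# Linear approximation f(x + h) ≈ f(x) + f'(x) h for f = sqrt at x = 4:
# estimate sqrt(4.1) using only values computed at x = 4.
x, h = 4.0, 0.1
fx = math.sqrt(x)                 # f(4) = 2
fprime = 1 / (2 * math.sqrt(x))   # f'(4) = 1/4

approx = fx + fprime * h          # 2 + 0.025 = 2.025
exact = math.sqrt(x + h)          # 2.02484...

print(approx, exact, abs(approx - exact))
```

The error is of order $h^2$, which is exactly what the remainder term in Taylor's Theorem below quantifies.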
Taylor's Theorem is a very powerful generalization of Lagrange's Theorem. It gives us the possibility to approximate a function with polynomials.
Let $f \colon [a, b] \to \mathbb{R}$ have all derivatives up to the $(n + 1)$-th. Then:
\[f(x) = \sum_{k=0}^{n} \frac{f^{(k)}(x_0)}{k!}(x - x_0)^k + R_n(x),\]
where $R_n(x) = \frac{f^{(n+1)}(c)}{(n+1)!}(x - x_0)^{n+1}$ and $c$ is some number between $x_0$ and $x$.
The sum $\sum_{k=0}^{n} \frac{f^{(k)}(x_0)}{k!}(x - x_0)^k$ will be called the $n$-th order Taylor polynomial of $f$ at the point $x_0$, and $R_n$ is the $n$-th remainder term.
Taylor's Theorem enables calculating approximate values of a function at a given point: we evaluate the Taylor polynomial there, and the result is correct up to the remainder term, which bounds the possible error.
Therefore, if on a given interval $R_n \to 0$ as $n \to \infty$, then we can write $f$ as the sum of a series. E.g. for $e^x$ the remainder converges to zero on the whole real line, so the Taylor series of $e^x$ at the point $0$ (it is easy to compute because $(e^x)' = e^x$) is:
\[e^x = \sum_{n=0}^{\infty} \frac{x^n}{n!}.\]
Such a series, with $x_0 = 0$, is called a Maclaurin series.
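The partial sums of this Maclaurin series converge quickly. A sketch evaluating them at the assumed point $x = 1$ (where the series sums to $e$):

```python
import math

# Sum the first 20 terms of the Maclaurin series of e^x at x = 1
# and compare with math.exp(1).
x = 1.0
partial, term = 0.0, 1.0   # term holds x^n / n!
for n in range(20):
    partial += term
    term *= x / (n + 1)    # next term: x^(n+1) / (n+1)!

print(partial, math.exp(x))  # both approximately 2.718281828...
```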
A relation $\sim$ on a set $X$ will be called an equivalence relation if it has the following three properties:
- reflexive: for all $x \in X$, $x \sim x$,
- symmetric: for all $x, y \in X$, if $x \sim y$, then $y \sim x$,
- transitive: for all $x, y, z \in X$, if $x \sim y$ and $y \sim z$, then $x \sim z$.
E.g., the relation $\sim$ on natural numbers such that $m \sim n$ if and only if $2 \mid m - n$ is an equivalence relation. Indeed, for all $m$ we get $2 \mid m - m$, so it is reflexive. It is also symmetric, because if $2 \mid m - n$, then $2 \mid n - m$. Finally, it is transitive, since if $m \sim n$ and $n \sim k$, so $2 \mid m - n$ and $2 \mid n - k$, then $2 \mid (m - n) + (n - k) = m - k$, so $m \sim k$.
Meanwhile, the relation $\le$ on natural numbers is not an equivalence relation, because it is not symmetric: e.g. $1 \le 2$, but not $2 \le 1$.
Equivalence relations often appear in the real world. E.g. the following relation is an equivalence relation: two cars are in the relation if and only if they are of the same colour. Another example is the relation of being siblings.
Fundamental theorem of equivalence relations
Given an equivalence relation $\sim$ on a set $X$, the set of all elements which are in relation with a given $x$ is called the equivalence class of $x$ and denoted by $[x]_\sim$. So $[x]_\sim = \{y \in X : x \sim y\}$. The family of all equivalence classes is denoted by $X/\sim$ and called the quotient set.
It is easy to notice that if $x \sim y$, then $[x]_\sim = [y]_\sim$ (the key role in this fact is played by transitivity). On the other hand, if $x$ and $y$ are not in relation $\sim$, then $[x]_\sim \cap [y]_\sim = \emptyset$. This observation implies the fundamental theorem of equivalence relations, which states that an equivalence relation on a set is actually the same as a partition of this set.
A family $\mathcal{A} \subseteq P(X)$ will be called a partition of $X$ if every $A \in \mathcal{A}$ is non-empty, $A \cap B = \emptyset$ whenever $A, B \in \mathcal{A}$ and $A \ne B$, and $\bigcup \mathcal{A} = X$.
The fundamental theorem states that every equivalence relation on a set generates a partition of the set, precisely the partition into its equivalence classes. On the other hand, every partition of a set generates an equivalence relation on this set: the relation in which two elements are related if and only if they lie in the same element of the partition.
E.g. the relation on the set of cars in which two cars are related if and only if they are of the same colour generates a partition of the set of cars into equivalence classes, one for each car colour.
E.g. relation on such that , if and only if generates the partition of into equivalence classes of this relation, e.g. , which are related to each number from the interval .
The partition of into generates the following equivalence relation: .
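The theorem can be illustrated computationally: starting from a relation, build the equivalence classes and check the three partition conditions. A sketch with the assumed relation "same remainder mod 3" on $\{0, \ldots, 9\}$:

```python
# Build the quotient set of the relation m ~ n iff m % 3 == n % 3
# on X = {0, ..., 9}, and verify that the classes form a partition.
X = set(range(10))

def related(m, n):
    return m % 3 == n % 3

classes = []
for x in sorted(X):
    if not any(x in c for c in classes):
        classes.append({y for y in X if related(x, y)})

# Partition checks: non-empty, pairwise disjoint, union is X.
assert all(c for c in classes)
assert all(c.isdisjoint(d) for c in classes for d in classes if c is not d)
assert set().union(*classes) == X

print(classes)  # [{0, 3, 6, 9}, {1, 4, 7}, {2, 5, 8}]
```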
Cardinality of equivalence classes and of quotient set
Often we have to calculate the cardinality of each equivalence class of a given relation and the cardinality of its quotient set. E.g., if in a given relation no two distinct numbers of some set of cardinality $\mathfrak{c}$ are related, then the function $x \mapsto [x]$ is one-to-one on that set, and therefore the quotient set has cardinality at least $\mathfrak{c}$. On the other hand, any partition of the reals is of cardinality at most $\mathfrak{c}$, so for such a relation on $\mathbb{R}$ the quotient set has cardinality exactly $\mathfrak{c}$.
Analysis of a real function
We will find and study:
- the domain and zeroes
- continuity, limits at the points where the function is not continuous and at the ends of intervals,
- differentiability and derivatives,
- intervals of monotonicity and extrema
- second derivative, convexity, inflection points
- the table of the function,
- parity, periodicity,
- sketch of the graph,
We will study .
the domain and zeroes
The denominator , if . Therefore, . Moreover, iff .
Continuity, limits at the points where the function is not continuous and at the ends of intervals
The function is continuous on and .
Therefore we have two vertical asymptotes. There are no horizontal asymptotes.
We check oblique asymptotes:
Therefore, is the right oblique asymptote.
Therefore, is also the left oblique asymptote.
Differentiability and derivatives
The function is differentiable on the whole domain and:
Intervals of monotonicity and extrema
if or . Therefore:
- on we have , so decreases,
- on we have , so increases,
- on we have , so increases,
- on we have , so increases,
- on we have , so increases,
- on we have , so decreases.
Therefore, at one of these points the function has its local minimum, and at the other its local maximum.
Second derivative, convexity, inflection points
- on we have , so is convex,
- on we have , so is concave,
- on we have , so is convex,
- on we have , so is concave.
Therefore is an inflection point.
The table of the function

Parity, periodicity

The function is odd, since $f(-x) = -f(x)$. Therefore it is not even, because it is not the constant zero function. Obviously, it is not a periodic function.
Sketch of the graph
We say that a function $f$ is differentiable at a point $x_0$ if there exists the limit:
\[\lim_{h \to 0} \frac{f(x_0 + h) - f(x_0)}{h}\]
and it is finite. Then this limit is called the derivative of $f$ at $x_0$ and denoted by $f'(x_0)$.
E.g. the function $f(x) = x^2$ is differentiable at $1$, because:
\[\lim_{h \to 0} \frac{(1 + h)^2 - 1}{h} = \lim_{h \to 0} \frac{2h + h^2}{h} = 2.\]
The function $f(x) = \sqrt[3]{x}$ is not differentiable at $0$, because:
\[\lim_{h \to 0} \frac{\sqrt[3]{h} - 0}{h} = \lim_{h \to 0} h^{-2/3} = \infty,\]
so this limit is infinite.
The function $f(x) = |x|$ is not differentiable at $0$, because:
\[\lim_{h \to 0^+} \frac{|h|}{h} = 1 \ne -1 = \lim_{h \to 0^-} \frac{|h|}{h},\]
so the limit we need does not exist.
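The failure of differentiability for the standard example $f(x) = |x|$ at $0$ (assumed here, since the text's example was garbled) is visible in the difference quotients themselves, which equal $1$ from the right but $-1$ from the left:

```python
# Difference quotients of f(x) = |x| at 0: from the right they equal 1,
# from the left -1, so the two-sided limit (the derivative) cannot exist.
f = abs
right = [(f(h) - f(0)) / h for h in (0.1, 0.01, 0.001)]
left = [(f(-h) - f(0)) / (-h) for h in (0.1, 0.01, 0.001)]

print(right, left)  # [1.0, 1.0, 1.0] [-1.0, -1.0, -1.0]
```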
Derivatives of some simple functions
It is easy to calculate the following facts, which are useful for calculating derivatives of more complicated functions:
Arithmetic of derivatives
The arithmetic theorem of limits of functions immediately implies that: $(f \pm g)'(x) = f'(x) \pm g'(x)$
if respective derivatives exist.
It is also easy to notice that the case of multiplication is not so simple: multiplying the expressions in the definition does not give the expression for the derivative of the product. Nevertheless, it is easy to check that: $(f \cdot g)'(x) = f'(x) g(x) + f(x) g'(x)$
if respective derivatives exist.
E.g. let . Then: .
Composition of functions
We have the following theorem. Given functions $f$ and $g$, we have $(f \circ g)'(x) = f'(g(x)) \cdot g'(x)$, if the derivatives of $g$ at $x$ and of $f$ at $g(x)$ exist.
Therefore e.g. .
Local extrema and intervals of monotonicity
A local minimum (respectively, a local maximum) of a function $f$ is an argument $x_0$ such that there exists an interval $(x_0 - \varepsilon, x_0 + \varepsilon)$ on which $f(x_0)$ is the least (respectively, the greatest) value of the function.
We have the following theorem: if a function $f$ has a local extremum at $x_0$ and has a derivative there, then $f'(x_0) = 0$. Attention: the reverse implication does not hold, so the points at which the derivative equals zero are merely candidates for extrema.
E.g. $f(x) = x^2$ has a minimum at $0$ and indeed $f'(x) = 2x$, so $f'(0) = 0$. On the other hand, for $f(x) = x^3$ we get $f'(x) = 3x^2$, so again $f'(0) = 0$, but $x^3$ does not have a local extremum at $0$.
We say that a function $f$ is strictly increasing (respectively: non-decreasing, strictly decreasing, non-increasing) on an interval if for any $x_1, x_2$ from this interval such that $x_1 < x_2$, we have $f(x_1) < f(x_2)$ (respectively: $f(x_1) \le f(x_2)$, $f(x_1) > f(x_2)$, $f(x_1) \ge f(x_2)$).
The following theorem holds: if $f'(x) > 0$ (respectively: $f'(x) \ge 0$, $f'(x) < 0$, $f'(x) \le 0$) for every $x$ in an interval, then $f$ is strictly increasing (respectively: non-decreasing, strictly decreasing, non-increasing) on it.
E.g. for $f(x) = x^2$ we have $f'(x) = 2x$, so $f'(x) < 0$ for $x < 0$ and $f'(x) > 0$ for $x > 0$. So $f$ is strictly decreasing on $(-\infty, 0)$ and strictly increasing on $(0, \infty)$.
Therefore, a function continuous at $x_0$ has a local maximum (respectively, minimum) at $x_0$ if there exists an interval $(x_0 - \varepsilon, x_0 + \varepsilon)$ such that $f$ is differentiable at every point of it except possibly $x_0$, and $f'(x) > 0$ (respectively, $f'(x) < 0$) for $x < x_0$ and $f'(x) < 0$ (respectively, $f'(x) > 0$) for $x > x_0$.
E.g. $0$ is a local minimum of $x^2$, given what we calculated above about its derivative.
Derivative of the inverse function
If $f$ is strictly monotone and continuous, then $(f^{-1})'(y) = \frac{1}{f'(f^{-1}(y))}$, if the derivative of $f$ exists at $f^{-1}(y)$ and is non-zero.
E.g. let . Then . Therefore:
Geometric interpretation — tangent line
It is easy to see that the derivative of a function at $x_0$ is the limit of the slopes (tangents of the angles) of secant lines intersecting the graph of the function at $x_0$ and $x_0 + h$. Therefore, it is the slope of the tangent line to the graph at $x_0$. Hence, if $f$ is differentiable at $x_0$, then $y = f(x_0) + f'(x_0)(x - x_0)$ is the equation of the tangent line to $f$ at the point $x_0$.
E.g.: . Therefore the tangent line to this parabola in is .
Rolle and Lagrange theorems
Rolle's theorem states that if $f$ is continuous on $[a, b]$ and differentiable on $(a, b)$, and $f(a) = f(b)$, then there exists $c \in (a, b)$ such that $f'(c) = 0$.
Hence, Lagrange's theorem holds: if $f$ is continuous on $[a, b]$ and differentiable on $(a, b)$, then there exists $c \in (a, b)$ such that $f'(c) = \frac{f(b) - f(a)}{b - a}$.
L'Hôpital's rule makes it possible to calculate some difficult cases in the arithmetic of limits of functions using derivatives. If $\lim_{x \to x_0} f(x) = \lim_{x \to x_0} g(x) = 0$ or $\pm\infty$, and $f'$ and $g'$ are defined on $(x_0 - \varepsilon, x_0 + \varepsilon)$ for some $\varepsilon > 0$ (except possibly at $x_0$ itself), then $\lim_{x \to x_0} \frac{f(x)}{g(x)} = \lim_{x \to x_0} \frac{f'(x)}{g'(x)}$, if the second limit exists.
E.g. we calculate $\lim_{x \to 0} \frac{\sin x}{x}$. We have $\lim_{x \to 0} \sin x = 0$ and $\lim_{x \to 0} x = 0$. Also $(\sin x)' = \cos x$ and $(x)' = 1$. Therefore, $\lim_{x \to 0} \frac{\sin x}{x} = \lim_{x \to 0} \frac{\cos x}{1} = 1$.
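A numeric illustration, assuming the classic example $\lim_{x \to 0} \frac{\sin x}{x}$: the ratio indeed approaches $1$, the value the rule predicts.

```python
import math

# sin(x)/x for x shrinking toward 0; the error satisfies
# |sin(x)/x - 1| <= x^2 / 6, so the values approach 1 quickly.
ratios = [math.sin(x) / x for x in (0.1, 0.01, 0.001)]
print(ratios)  # values approaching 1.0
```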
Obviously, if the derivative of a function is itself differentiable, we can calculate its derivative, the second derivative $f''$, which describes how the first derivative changes. Further, we can calculate the third, fourth, etc. derivatives. Generally, the $n$-th derivative will be denoted by $f^{(n)}$, and $f^{(0)} = f$.
E.g., for , we get:
and for any ,
Condition for the existence of a local extremum
The following theorem holds. If for some $n$:
- $f$ is differentiable at $x_0$ up to at least the $2n$-th derivative,
- $f'(x_0) = f''(x_0) = \ldots = f^{(2n-1)}(x_0) = 0$,
- $f^{(2n)}(x_0) > 0$ (respectively: $f^{(2n)}(x_0) < 0$),
then $f$ has a minimum (respectively, maximum) at $x_0$.
E.g. has maximum in , because:
Convex and concave functions, inflection points
We shall say that $f$ is convex on an interval $(a, b)$ if its graph between any two points lies below the secant line determined by those points. In other words, if for any $x_1, x_2 \in (a, b)$ and $t \in [0, 1]$:
\[f(t x_1 + (1 - t) x_2) \le t f(x_1) + (1 - t) f(x_2).\]
If the reverse inequality holds, the function is concave. If $f$ is convex on $(a, x_0)$ and concave on $(x_0, b)$ (or conversely) and continuous at $x_0$, we will say that $x_0$ is an inflection point.
The following theorems hold:
- if $f'$ exists on a given interval and is increasing (respectively, decreasing), then $f$ is convex (respectively, concave) on this interval,
- if $f''$ exists on a given interval and is always positive (respectively, negative), then $f$ is convex (respectively, concave) on this interval.
E.g.: $f(x) = x^3$. We get $f''(x) = 6x$, which is positive on $(0, \infty)$, so $f$ is convex on this interval, and negative on $(-\infty, 0)$, so $f$ is concave on it. The point $0$ is an inflection point.
Limit of a function at a point
A very similar notion to the notion of the limit of a sequence is the notion of the limit of a function at a point. It describes the behaviour of the values of the function when the arguments get nearer and nearer to a given number. There are two equivalent definitions of this notion: due to Heine and due to Cauchy.
We will say that a function $f$ has at a point $x_0$ the limit $g$ (denoted as $\lim_{x \to x_0} f(x) = g$) if for any sequence $(x_n)$ which converges to $x_0$ and such that $x_n \ne x_0$ for any $n$, the limit of the sequence $(f(x_n))$ exists and equals $g$.
Simple example: the function $f(x) = x^2$ has at $2$ the limit $4$, because from the arithmetic of limits of sequences we know that if $x_n$ converges to $2$, then $x_n^2$ converges to $4$.
On the other hand, function
has no limit at , because for sequence , , and for , .
The equivalent definition is the following. We will say that a function $f$ has the limit $g$ at a point $x_0$ if:
\[\forall_{\varepsilon > 0}\ \exists_{\delta > 0}\ \forall_x\ 0 < |x - x_0| < \delta \Rightarrow |f(x) - g| < \varepsilon,\]
which means that for an arbitrarily small positive number $\varepsilon$ there exists a small interval around $x_0$ such that the values of the function on this interval differ from the limit by no more than $\varepsilon$.
Let us check this definition on an example: given $\varepsilon$, it suffices to choose a suitable $\delta$; then, whenever $0 < |x - x_0| < \delta$, indeed $|f(x) - g| < \varepsilon$.
Infinite limits and limits at infinity
Using Heine's definition we see that we can also easily consider limits of a function as $x \to \infty$ or $x \to -\infty$. We simply take all sequences $(x_n)$ which diverge respectively to $\infty$ or $-\infty$ and look at the limits of the sequences $(f(x_n))$.
It can also happen that the limit of a function at a given point is $\infty$ or $-\infty$: this is the case if for all such sequences $(x_n)$ the limit of $(f(x_n))$ is respectively $\infty$ or $-\infty$.
Arithmetic of limits
Heine's version of the definition immediately implies that the arithmetic of limits of functions works in the same way as the arithmetic of limits of sequences.
The left limit (from below) of a function at $x_0$ (denoted as $\lim_{x \to x_0^-} f(x)$) equals $g$ if for any sequence $(x_n)$ which converges to $x_0$ and such that $x_n < x_0$ for any $n$, we get $\lim f(x_n) = g$. Similarly, the right limit (from above) (denoted as $\lim_{x \to x_0^+} f(x)$) equals $g$ if for any sequence $(x_n)$ which converges to $x_0$ and such that $x_n > x_0$ for any $n$, we get $\lim f(x_n) = g$.
E.g. and .
A function has a limit at a point if and only if both one-sided limits at this point exist and are equal.
We have the following theorem: if $\lim_{x \to x_0} g(x) = y_0$ and $\lim_{y \to y_0} f(y) = a$, then $\lim_{x \to x_0} f(g(x)) = a$, provided that for some neighbourhood of $x_0$ we have $g(x) \ne y_0$ for $x \ne x_0$.
It sounds a bit complicated but is very convenient. E.g. let us calculate the limit of function for . Let . If , then , and if only . Therefore:
Asymptotes are lines to which the graph of a function converges. Asymptotes can be vertical, horizontal or oblique.
If $\lim_{x \to a^+} f(x) = \pm\infty$, then the line $x = a$ is a right vertical asymptote. Analogously, if $\lim_{x \to a^-} f(x) = \pm\infty$, this line is a left vertical asymptote. E.g. for $f(x) = \frac{1}{x}$ we have $\lim_{x \to 0^+} \frac{1}{x} = \infty$ and $\lim_{x \to 0^-} \frac{1}{x} = -\infty$, so the line $x = 0$ is a vertical asymptote of this function.
If $\lim_{x \to \infty} f(x) = b$ or $\lim_{x \to -\infty} f(x) = b$, then the line $y = b$ is respectively a right or left horizontal asymptote of this function. Since $\lim_{x \to \infty} \frac{1}{x} = 0$, the line $y = 0$ is a horizontal asymptote of $\frac{1}{x}$.
A line $y = ax + b$ is an oblique asymptote (respectively left or right) if $\lim_{x \to -\infty} (f(x) - ax - b) = 0$ or $\lim_{x \to \infty} (f(x) - ax - b) = 0$. If such a line is an asymptote (assume it is a right asymptote), then $a = \lim_{x \to \infty} \frac{f(x)}{x}$ and $b = \lim_{x \to \infty} (f(x) - ax)$.
E.g.: let , then and . Therefore, is an oblique asymptote of this function.
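These limit formulas for $a$ and $b$ are easy to evaluate numerically. A sketch for the assumed example $f(x) = \frac{x^2 + 1}{x}$ (whose oblique asymptote is $y = x$; the function is our choice):

```python
# Estimate the oblique-asymptote coefficients a = lim f(x)/x and
# b = lim (f(x) - a x) by evaluating at a large argument.
def f(x):
    return (x * x + 1) / x

x = 1e8
a = f(x) / x        # approximately 1
b = f(x) - a * x    # approximately 0

print(a, b)
```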
A function $f$ is continuous at a point $x_0$ if the limit $\lim_{x \to x_0} f(x)$ exists and equals $f(x_0)$. Obviously, all simple arithmetic functions are continuous at every point of their domains.
A function which is continuous at every point of its domain will be simply called continuous.
The function is continuous: obviously it is continuous at all non-zero points, and it is also continuous at $0$, because the one-sided limits there exist and equal its value.
Let be a function defined in the following way:
This function is continuous at all points except one, at which it is not continuous because the limit there differs from the function's value.
Let function be defined as follows:
This function is continuous elsewhere, but is not continuous at one point, because the one-sided limits there differ, and therefore it has no limit at that point.
Function defined in the following way:
is an example of a function which has no limit at any point. Therefore it is also not continuous at any point.
Continuous functions on an interval have the following (intuitively obvious) property: if $f(a) < y < f(b)$, then there exists $c \in (a, b)$ such that $f(c) = y$. Therefore, for example, a continuous function which takes a negative value at one end of an interval and a positive value at the other has at least one root in this interval.
If we are able to choose, for each $\varepsilon$, a $\delta$ in such a way that in any interval of length $\delta$ the values of the function do not differ by more than $\varepsilon$, universally, regardless of the place $x$, then we say that the function is uniformly continuous. More formally, a function $f$ is uniformly continuous if:
\[\forall_{\varepsilon > 0}\ \exists_{\delta > 0}\ \forall_{x, y}\ |x - y| < \delta \Rightarrow |f(x) - f(y)| < \varepsilon.\]
The function $f(x) = x$ is a very simple example. It is uniformly continuous, because for any $\varepsilon$ we can set $\delta = \varepsilon$. Then for all $x, y$ such that $|x - y| < \delta$, obviously $|f(x) - f(y)| = |x - y| < \varepsilon$.
On the other hand, the function $f(x) = \frac{1}{x}$ on the interval $(0, 1)$ is continuous but is not uniformly continuous, because for $\varepsilon = 1$, regardless of how small a $\delta$ we choose, we can find $x, y$ with $|x - y| < \delta$ such that $|f(x) - f(y)| \ge 1$.
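This failure can be demonstrated concretely for the standard example $f(x) = \frac{1}{x}$ on $(0, 1)$ (assumed here): taking $x = \delta/2$ and $y = \delta$ gives points closer than $\delta$ whose values stay at least $1$ apart, no matter how small $\delta$ is.

```python
# For f(x) = 1/x on (0, 1): the points x = delta/2 and y = delta satisfy
# |x - y| < delta, yet |f(x) - f(y)| = 1/delta, which never drops below 1,
# so no single delta works for eps = 1.
def f(x):
    return 1 / x

gaps = []
for delta in (0.1, 0.01, 0.001):
    x, y = delta / 2, delta
    assert abs(x - y) < delta
    gaps.append(abs(f(x) - f(y)))

print(gaps)  # roughly [10, 100, 1000]: the gap grows as delta shrinks
```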
During our next meeting we are going to read a chapter from the book “Set Theory” concerning the axioms along with problems from this book.
The test is on Thursday, 26/11, 17:00. A lot of past papers can be found here.