All posts by m_korch

10. Partial orders

Part 1.: Problems.

Definition and examples

A relation \preceq on a set A will be called a partial order if it satisfies the following three conditions:

  • reflexivity: \forall_{a\in A} a\preceq a,
  • antisymmetry: \forall_{a,b\in A} a\preceq b \land b\preceq a\rightarrow a=b,
  • transitivity: \forall_{a,b,c\in A} a\preceq b\land b\preceq c\rightarrow a\preceq c.

Obviously, \leq on the reals is an example of a partial order. Indeed, it is reflexive, antisymmetric and transitive.

The relation of divisibility on the natural numbers is another example. It is a partial order, because n|n (reflexivity); if n|m and m|n, then n=m (antisymmetry); and finally, if n|m and m|k, then n|k (transitivity).

Furthermore, \subseteq on \mathcal{P}(X), for any set X, is also a partial order.

On the other hand, a relation r on \mathcal{P}(\mathbb{N})\setminus\{\varnothing\} defined as A r B\leftrightarrow \min A\leq \min B, is not a partial order, although it is reflexive and transitive, because it is not antisymmetric: \{0\} r \{0,1\} and \{0,1\} r \{0\}, but \{0\}\neq \{0,1\}.
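On a finite set, all three conditions can be checked mechanically. Below is a minimal Python sketch (our own addition, not part of the original notes) which brute-forces the axioms for divisibility on a small set:

    # Brute-force check of the partial order axioms for divisibility
    # on a small finite set (a | b iff b % a == 0).
    A = {2, 3, 4, 5, 6, 8, 9, 12, 24}
    def divides(a, b):
        return b % a == 0

    reflexive = all(divides(a, a) for a in A)
    antisymmetric = all(a == b for a in A for b in A
                        if divides(a, b) and divides(b, a))
    transitive = all(divides(a, c) for a in A for b in A for c in A
                     if divides(a, b) and divides(b, c))
    print(reflexive, antisymmetric, transitive)  # True True True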

For any partial order \preceq, we can consider its strict version \prec\ =\ \preceq\setminus= (which is formally not a partial order, because it is not reflexive), i.e. the order defined by a\prec b\leftrightarrow a\preceq b\land a\neq b.

Maximal, minimal, greatest and least elements

Orders on finite sets can easily be presented graphically in the form of a Hasse diagram, in which greater elements are placed above smaller ones, and neighbouring elements in the order are connected with lines. E.g. the following diagram shows the order of divisibility on \{2,3,4,5,6,8,9,12,24\}.

Let \preceq be a partial order on a set A. We shall say that m\in A is:

  • a maximal element, if \forall_{a\in A} m\preceq a \rightarrow a=m,
  • a minimal element, if \forall_{a\in A} a\preceq m \rightarrow a=m,
  • the greatest element, if \forall_{a\in A} a\preceq m,
  • the least element, if \forall_{a\in A} m\preceq a.

Notice that a set can have many (or no) maximal (or minimal) elements, but there can be at most one greatest (or least) element. In particular, if there is more than one maximal (respectively, minimal) element, they are incomparable, so the greatest (respectively, least) element cannot exist (the reverse implication does not hold).

In the above example, we have three minimal elements: 2, 3 and 5, and three maximal elements: 5, 9 and 24. Therefore the least and the greatest elements do not exist. On the other hand, in the order \langle \mathbb{N},\leq\rangle there is one minimal element, 0, and it is also the least element. There are no maximal elements, and the greatest element does not exist.

Bounds, infimum and supremum

Given an order \preceq on a set A, we now consider a subset B\subseteq A. We shall say that a\in A is:

  • a lower bound of B, if \forall_{b\in B} a\preceq b,
  • an upper bound of B, if \forall_{b\in B} b\preceq a,
  • the infimum of B (denoted by \inf B), if it is the greatest element in the set of all lower bounds of B,
  • the supremum of B (denoted by \sup B), if it is the least element in the set of all upper bounds of B.

Notice that the infimum or the supremum of a set may not necessarily be an element of the considered subset B\subseteq A. But if B has a greatest element (respectively, a least element), then it is its supremum (respectively, infimum).

E.g. consider the order \langle\{2,3,4,5,6,8,9,12,24\},|\rangle, and let B=\{4,8,12\}. The set of its lower bounds \{2,4\} has the greatest element, so \inf B=4, and it is also the least element of B. The set of all upper bounds is \{24\}, so \sup B=24. This time it is not an element of B. On the other hand, let C=\{2,3,6\}. This set has the greatest element, so it is its supremum, \sup C=6. Meanwhile, the set of its lower bounds is empty, so the infimum of C does not exist.
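In this finite example the bounds, infimum and supremum can also be computed directly. A short illustrative script (our own, with divisibility as the ambient order):

    # Lower/upper bounds, infimum and supremum of B inside
    # the divisibility order on A.
    A = {2, 3, 4, 5, 6, 8, 9, 12, 24}
    def divides(a, b):
        return b % a == 0

    def bounds(B):
        lower = {a for a in A if all(divides(a, b) for b in B)}
        upper = {a for a in A if all(divides(b, a) for b in B)}
        # infimum = greatest lower bound, supremum = least upper bound
        inf = next((m for m in lower if all(divides(l, m) for l in lower)), None)
        sup = next((m for m in upper if all(divides(m, u) for u in upper)), None)
        return lower, upper, inf, sup

    print(bounds({4, 8, 12}))  # ({2, 4}, {24}, 4, 24)
    print(bounds({2, 3, 6}))   # (set(), {6, 12, 24}, None, 6)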

Linear, dense, well founded and well orders

A partial order \langle A,\preceq\rangle is dense, if for all a,b\in A such that a\prec b, there exists c\in A such that a\prec c\prec b. E.g. the order \langle\mathbb{Q},\leq\rangle is dense, because if a,b\in\mathbb{Q} and a<b, then a<(a+b)/2<b and (a+b)/2\in\mathbb{Q}. On the other hand, \langle\mathbb{N},\leq\rangle is not dense, because no such c exists for a=0 and b=1.

A partial order \langle A,\preceq\rangle is linear, if any two elements are comparable, i.e. if \forall_{a,b\in A} a\preceq b\lor b\preceq a. The order \langle\mathbb{N},\leq\rangle is an example of such an order. On the other hand, \langle\mathcal{P}(\mathbb{N}),\subseteq\rangle is not linear, because neither \{0\}\subseteq \{1\}, nor \{1\}\subseteq\{0\}.

A partial order \langle A,\preceq\rangle is well founded, if every non-empty subset B\subseteq A has a minimal element (or, equivalently, if there is no infinite strictly decreasing sequence of elements of A). Therefore, for example, \langle\mathbb{N},\leq\rangle is a well founded order. The same is true for \langle\mathbb{N},|\rangle. But neither \langle\mathbb{Z},\leq\rangle nor \langle[0,1],\leq\rangle is well founded.

A partial order is a well order if it is well founded and linear. The order \langle\mathbb{N},\leq\rangle is a well order, but \langle\mathbb{N},|\rangle is not, because it is not linear.

Lexicographical order

Given two partial orders \langle A,\leq_A\rangle and \langle B,\leq_B\rangle, it is easy to define an order on A\times B. Namely:

    \[\langle a,b\rangle\leq_{lex}\langle a',b'\rangle \leftrightarrow (a\leq_A a' \land a\neq a')\lor (a=a'\land b\leq_B b').\]

Similarly, one can define an order on A^n or even on A^{\mathbb{N}}:

    \[\langle x_n\rangle\leq_{\text{lex}}\langle y_n\rangle \leftrightarrow \langle x_n\rangle=\langle y_n\rangle \lor \exists_k \left(\forall_{m< k}\ x_m=y_m \land x_k\leq_A y_k\land x_k\neq y_k\right).\]
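For finite tuples of equal length, the lexicographic comparison can be written down directly; a hedged sketch (the function names are ours):

    # Lexicographic order on tuples of equal length over a base order leq.
    def leq_lex(xs, ys, leq=lambda a, b: a <= b):
        for x, y in zip(xs, ys):
            if x != y:       # the first position where they differ decides
                return leq(x, y)
        return True          # the sequences are equal

    print(leq_lex((1, 5), (2, 0)))  # True: 1 < 2 decides
    print(leq_lex((1, 5), (1, 7)))  # True: equal prefix, then 5 < 7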

Order isomorphism

Given two partially ordered sets \langle A,\leq_A\rangle and \langle B,\leq_B\rangle, a function f\colon A\to B will be called an order isomorphism, if it is a bijection and if for any a,b\in A, a\leq_A b iff f(a)\leq_B f(b).

Orders such that there exists an order isomorphism between them will be called isomorphic. This means that those orders are identical from our point of view. For example, \langle \mathbb{N},\leq\rangle is isomorphic to \langle \mathbb{Z}\setminus\mathbb{N},\geq\rangle. The isomorphism is f(n)=-n-1. Indeed, it is a bijection, and n\leq m iff -n-1\geq -m-1.

Invariants

To prove that two partially ordered sets are isomorphic, one needs to exhibit an order isomorphism between them. But how to prove that two orders are not isomorphic? We will use invariants to do this. An invariant is a property of a partially ordered set which stays invariant under order isomorphisms. An invariant can be any property defined using only the notions of order and cardinality, e.g. "there exist exactly 3 minimal elements". If there are exactly 3 minimal elements in a partially ordered set \langle A,\leq_A\rangle, and there are 5 minimal elements in \langle B,\leq_B\rangle, then those two orders cannot be isomorphic.

E.g. the partially ordered sets \langle\mathbb{N}\setminus\{0\},|\rangle and \langle \mathcal{P}(\mathbb{N}),\subseteq \rangle are not isomorphic. An invariant which distinguishes them is e.g. the following sentence: there exists an element which has exactly 3 elements less than or equal to it. In the first ordered set such an element exists: e.g. 4 has exactly 3 divisors: 1, 2, 4. In the second there is no such element, because the number of subsets of a finite set is always a power of 2.
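This invariant can be tested concretely in the first order; a tiny check (our own illustration):

    # Elements <= 4 in the divisibility order are exactly its divisors.
    below = [d for d in range(1, 5) if 4 % d == 0]
    print(below, len(below))  # [1, 2, 4] 3
    # In P(N) the number of elements below a finite set X is 2**len(X),
    # which is never equal to 3.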

Zorn’s Lemma

There are a few important statements which are equivalent to the axiom of choice. One of them is Zermelo's theorem, which states that on every set one can define a well order. Another such statement is Zorn's Lemma. It states that if \langle A,\preceq\rangle is a partially ordered set with A\neq\varnothing, such that every chain (L\subseteq A is a chain, if any two elements of L are comparable with respect to \preceq) has an upper bound, then there is a maximal element in A.

Zorn’s Lemma is an important tool in mathematics which can be used to prove the existence of many important objects. E.g. one can prove using Zorn’s Lemma that every vector space has a basis (a set of linearly independent vectors which is maximal with respect to inclusion). Indeed, let V be a vector space, and let A be the family of all linearly independent sets in V ordered by inclusion. A is non-empty, because \varnothing\in A. Now let L be a chain in A. We will prove that \bigcup L\in A, i.e. that the union of a chain of linearly independent sets is linearly independent. Let v_1,\ldots, v_n\in \bigcup L. Then there exist U_1,\ldots, U_n\in L such that v_1\in U_1,\ldots, v_n\in U_n. Since L is a chain, among those n sets we can find one which contains all the others as subsets. Let it be U_k. But then v_1,\ldots, v_n\in U_k are elements of a linearly independent set, so they are linearly independent. Since these were arbitrary vectors from \bigcup L, it is linearly independent as well. Thus, \bigcup L is an upper bound of L in A. Hence, there exists a maximal element in A, i.e. a maximal set of linearly independent vectors, in other words a basis.

9. Indefinite integral

Part 1: Problems, solutions.
Part 2: Problems.

Antiderivative

Given a function f, we try to find a function F such that F'(x)=f(x), called an antiderivative of f. Such a function may not exist, but it certainly exists if f is continuous. If it exists, there exist infinitely many antiderivatives. Indeed, if F(x) is an antiderivative of f, then F(x)+C, where C is an arbitrary constant, is also an antiderivative, because (F(x)+C)'=F'(x)=f(x).

The set of all antiderivatives of f will be called its indefinite integral and denoted by \int f(x)\,dx. We know how to calculate derivatives, so we can also easily guess integrals of some simple functions. E.g.: \int 5x^2\,dx=\frac{5x^3}{3}+C, because \left(\frac{5x^3}{3}\right)'=5x^2.

It is worth noticing that:

    \[\int x^a\,dx=\frac{x^{a+1}}{a+1}+C,\]

if a\neq -1, and:

    \[\int \frac{dx}{x}=\ln |x|+C,\]

it is also clear that:

    \[\int e^x\,dx = e^x+C,\]

    \[\int \sin x\,dx = -\cos x+C,\]

    \[\int \cos x\,dx = \sin x+C,\]

    \[\int \frac{dx}{\sqrt{1-x^2}} = \arcsin x+C,\]

    \[\int \frac{dx}{1+x^2} = \text{arctg} x+C.\]

Therefore, e.g.:

    \[\int \frac{x^2+\sqrt{x}\cos x}{\sqrt{x}}\,dx=\int \left(x^{\frac{3}{2}}+\cos x\right)\, dx=\frac{2 x^{\frac{5}{2}}}{5}+\sin x+C.\]

If we additionally assume that we are looking for a function such that F(0)=1, then we know that C=1.
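Such guesses are easy to verify by differentiating, e.g. with sympy (a tool of our choice, not used in the notes):

    import sympy as sp

    x = sp.symbols('x', positive=True)
    integrand = sp.expand((x**2 + sp.sqrt(x)*sp.cos(x)) / sp.sqrt(x))  # x**(3/2) + cos(x)
    F = sp.integrate(integrand, x)
    print(F)  # 2*x**(5/2)/5 + sin(x)  (up to a constant)
    print(sp.simplify(sp.diff(F, x) - integrand))  # 0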

Integration by parts

But sometimes it is hard to guess a function such that the function we would like to integrate is its derivative. There are two methods which may make it easier, but even those methods require some guessing.

The first one is called the integration by parts. Recall that (fg)'=f'g+g'f, therefore fg=\int (fg)'\,dx=\int(f'g+g'f)\,dx=\int f'g\,dx +\int g'f\, dx. And so:

    \[\int f'(x)g(x)\, dx=f(x)g(x)-\int g'(x)f(x)\,dx.\]

— and this is the theorem of integration by parts.

How to use it? It may happen that we are not able to guess the left-hand side integral, but the right-hand side integral is easy. Usually the hard part is to guess what the functions f'(x) and g(x) are.

E.g., let us calculate \int\ln x\,dx. We need to write \ln x as f'(x)g(x). Let therefore f(x)=x (so f'(x)=1) and g(x)=\ln x. Then \ln x=1\cdot\ln x=f'(x)g(x). Now we use the theorem and get:

    \[\int \ln x\,dx=f(x)g(x)-\int g'(x)f(x)\,dx=\]

    \[=x\ln x-\int\frac{1}{x}\cdot x\, dx=x\ln x-\int 1\, dx=\]

    \[=x\ln x-x+C.\]
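The result is easy to confirm symbolically, e.g. with sympy (our own check):

    import sympy as sp

    x = sp.symbols('x', positive=True)
    print(sp.integrate(sp.log(x), x))   # x*log(x) - x  (up to a constant)
    print(sp.diff(x*sp.log(x) - x, x))  # log(x)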

Integration by substitution

The second method is called integration by substitution. This time we make use of the formula for differentiating a composition of functions. Recall that (F(t(x)))'=t'(x)F'(t(x))=t'(x)f(t(x)), where F is an antiderivative of f, i.e. F(t)=\int f(t)\, dt. Therefore, \int f(t(x))t'(x)\, dx=F(t(x)), which is \int f(t)\,dt with t=t(x) substituted. So finally:

    \[\int f(t(x))t'(x)\, dx=\int f(t)\, dt.\]

It looks quite complicated but it is easy to use. E.g. let us calculate \int 3x^2\sin x^3\, dx — it is easy to see what substitution we should use. Simply let f(t)=\sin t and t(x)=x^3; then t'(x)=3x^2 and 3x^2\sin x^3= t'(x)f(t(x)). It may even be convenient to use the traditional notation of the derivative: \frac{dt}{dx}=3x^2. Therefore (remember to substitute back x at the end):

    \[\int 3x^2\sin x^3\, dx=\int \sin t\, dt=-\cos t+C=-\cos x^3+C.\]

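Again, the substitution is easy to double-check symbolically (an illustration of ours):

    import sympy as sp

    x = sp.symbols('x')
    print(sp.integrate(3*x**2 * sp.sin(x**3), x))  # -cos(x**3)  (up to a constant)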

Integrating rational functions

Rational functions can be integrated by the following method. First we have to put a given rational function into the form of a sum of simple fractions, i.e. functions of the form:

    \[\frac{A}{(x-a)^n}\]

and

    \[\frac{Bx+C}{(x^2+px+q)^n},\]

where the polynomial in the denominator has no roots (\Delta<0), and then we will only need to know how to integrate simple fractions.

How to calculate the simple fractions for a given rational function? Those fractions are determined by the factorization of the polynomial in the denominator. E.g. if:

    \[f(x)=\frac{5x^2-11x}{x^4-2 x^3+3 x^2-4 x+2},\]

first we should notice that x^4-2 x^3+3 x^2-4 x+2=(x-1)^2(x^2+2), and then we know that:

    \[\frac{5x^2-11x}{x^4-2 x^3+3 x^2-4 x+2}=\frac{A}{x-1}+\frac{B}{(x-1)^2}+\frac{Cx+D}{x^2+2},\]

and we can calculate A, B, C, D by bringing the right-hand side to a common denominator and comparing coefficients — in this case A=1, B=-2, C=-1, D=6.
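The decomposition can be cross-checked with sympy's apart (our own verification, not part of the notes):

    import sympy as sp

    x = sp.symbols('x')
    f = (5*x**2 - 11*x) / (x**4 - 2*x**3 + 3*x**2 - 4*x + 2)
    print(sp.apart(f))
    # 1/(x - 1) - 2/(x - 1)**2 + (6 - x)/(x**2 + 2), i.e. A=1, B=-2, C=-1, D=6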

So now we need to integrate the simple fractions. Fractions of the form

    \[\frac{A}{(x-a)^n}\]

are easy, because we know that

    \[\int \frac{A}{(x-a)^n}\, dx=\begin{cases}A\ln|x-a|+C & n=1\\ \frac{-A}{(n-1)(x-a)^{n-1}}+C & n>1\end{cases}.\]

The second type of simple fractions:

    \[\frac{Bx+C}{(x^2+px+q)^n},\]

are more complicated.
Notice that:

    \[\int\frac{Bx+C}{(x^2+px+q)^n}\,dx=\frac{B}{2}\int \frac{2x+p}{(x^2+px+q)^n}\,dx+\left(C-\frac{Bp}{2}\right)\int\frac{dx}{(x^2+px+q)^n}.\]

and again the first integral is easy, because:

    \[\int \frac{2x+p}{(x^2+px+q)^n}\,dx=\begin{cases}\ln(x^2+px+q)+C & n=1\\ \frac{-1}{(n-1)(x^2+px+q)^{n-1}}+C & n>1\end{cases}.\]

As for the second one, we will need the following tricky substitution: t=\frac{x+\frac{p}{2}}{\sqrt{-\Delta/4}}, where \Delta=p^2-4q<0. Then \frac{dt}{dx}=\frac{1}{\sqrt{-\Delta/4}} and x^2+px+q=-\frac{\Delta}{4}(t^2+1), and so:

    \[\int\frac{dx}{(x^2+px+q)^n}=\int \frac{\sqrt{-\Delta/4}}{((t^2+1)\cdot (-\Delta/4))^n}\,dt= \left(\frac{-\Delta}{4}\right)^{-n+\frac{1}{2}}\int\frac{dt}{(1+t^2)^n}.\]

Finally \int\frac{dt}{(1+t^2)^n} can be easily calculated for n=1, because:

    \[\int\frac{dt}{1+t^2}=\text{arctg} t+C,\]

and for larger n, we have the following formula which can be deduced from the integration by parts:

    \[\int\frac{dt}{(1+t^2)^n}=\frac{1}{2n-2}\frac{t}{(1+t^2)^{n-1}}+\frac{2n-3}{2n-2}\int\frac{dt}{(1+t^2)^{n-1}}.\]
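The recursion bottoms out at n=1; a short sketch implementing it and verifying one step by differentiation (the names and the use of sympy are our assumptions):

    import sympy as sp

    t = sp.symbols('t')

    def I(n):
        # symbolic antiderivative of 1/(1+t^2)^n via the reduction formula
        if n == 1:
            return sp.atan(t)
        return (t / ((2*n - 2) * (1 + t**2)**(n - 1))
                + sp.Rational(2*n - 3, 2*n - 2) * I(n - 1))

    print(sp.simplify(sp.diff(I(2), t) - 1/(1 + t**2)**2))  # 0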

Examples of integration of some rational functions can be found in the second part of exercises.

Substitutions leading to a rational function

Many functions can be changed to a rational function by some simple substitutions. The following substitutions can be used:

  • if we deal with a fraction with x and expressions of form \sqrt[n]{\frac{ax+b}{cx+d}}, we can substitute t= \sqrt[n]{\frac{ax+b}{cx+d}},
  • if we deal with a fraction with e^x to any powers, we can use substitution t=e^x,
  • if we deal with a fraction with \sin x and \cos x, it makes sense to substitute t=\text{tg}\frac{x}{2}. Then \frac{dt}{dx}=\frac{1+\text{tg}^2 \frac{x}{2}}{2}=\frac{1+t^2}{2} and \sin x=\frac{2t}{1+t^2}, \cos x=\frac{1-t^2}{1+t^2} (by trigonometric transformations).

E.g. to calculate:

    \[\int\frac{dx}{e^{2x}+1}\]

we substitute t=e^x, so that \frac{dt}{dx}=e^x=t and dx=\frac{dt}{t}, and get:

    \[\int\frac{dx}{e^{2x}+1}=\int\frac{dt}{t(t^2+1)}=\int\frac{dt}{t}-\int\frac{t\,dt}{t^2+1}=\]

    \[=\ln|t|-\frac{1}{2}\int\frac{2t\,dt}{t^2+1}=\ln|t|-\frac{\ln (t^2+1)}{2}+C=\ln e^x-\frac{\ln (e^{2x}+1)}{2}+C=x-\frac{\ln (e^{2x}+1)}{2}+C.\]
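The whole computation can be confirmed in one line (our own check):

    import sympy as sp

    x = sp.symbols('x')
    print(sp.simplify(sp.integrate(1/(sp.exp(2*x) + 1), x)))
    # x - log(exp(2*x) + 1)/2  (up to a constant and equivalent forms)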

8. Taylor’s formula and approximations

Using Lagrange theorem to approximate values

Recall that the Lagrange theorem states that for every function differentiable on an interval [x_0, x_0+h], there exists c in this interval such that

    \[\frac{f(x_0+h)-f(x_0)}{h}=f'(c),\]

thus

    \[f(x_0+h)=f(x_0)+hf'(c),\]

but for small h we can approximate f'(c)\simeq f'(x_0) and get the value f(x_0+h) approximated by the values of f and f' at x_0, so:

    \[f(x_0+h)\simeq f(x_0)+hf'(x_0).\]

Taylor’s Theorem

Taylor’s Theorem is a very powerful generalization of the Lagrange theorem. It makes it possible to approximate a function with polynomials.

Let k\in\mathbb{N} and let f\colon\mathbb{R}\to\mathbb{R} have all derivatives up to the (k+1)-st. Let x_0\in\mathbb{R}. Then:

    \[f(x)=f(x_0)+\frac{f'(x_0)}{1!}(x-x_0)+\frac{f''(x_0)}{2!}(x-x_0)^2+\frac{f'''(x_0)}{3!}(x-x_0)^3+\]

    \[+\ldots+\frac{f^{(k)}(x_0)}{k!}(x-x_0)^k+\frac{f^{(k+1)}(\theta)}{(k+1)!}(x-x_0)^{k+1},\]

where \theta is some number between x and x_0.

The sum f(x_0)+\frac{f'(x_0)}{1!}(x-x_0)+\frac{f''(x_0)}{2!}(x-x_0)^2+\frac{f'''(x_0)}{3!}(x-x_0)^3+ \ldots+\frac{f^{(k)}(x_0)}{k!}(x-x_0)^k will be called the k-th order Taylor polynomial of f at the point x_0, and R_k(x)=\frac{f^{(k+1)}(\theta)}{(k+1)!}(x-x_0)^{k+1} is the k-th remainder term.

Calculating approximations

Taylor’s Theorem enables calculating approximate values of a function at a given point, e.g. \cos 0.01. The second order Taylor polynomial of cosine at 0 is 1+0-\frac{1}{2!}x^2, which for x=0.01 equals 1-\frac{0.0001}{2}=0.99995, and it approximates \cos 0.01 up to the remainder term, so the possible error is not greater than \left|\frac{\sin\theta}{3!}(0.01)^3\right|\leq\frac{0.000001}{6}.
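A quick numerical sanity check of this bound (our own illustration):

    import math

    approx = 1 - 0.01**2 / 2             # second order Taylor polynomial at 0
    print(approx)                        # 0.99995
    print(abs(math.cos(0.01) - approx))  # ~4.2e-10, well below 0.000001/6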

Taylor series

Therefore, if on a given interval R_n(x)\to 0, then we can write f as a sum of a series. E.g. for e^x the remainder R_n tends to 0 on the whole real line, so the Taylor series of e^x at the point 1 (it is easy to compute because (e^x)^{(k)}=e^x) is:

    \[e^x=e+\frac{e}{1!}(x-1)+\frac{e}{2!}(x-1)^2+\ldots=\sum_{n=0}^{\infty}\frac{e(x-1)^n}{n!}.\]

The analogous series for x_0=0 is called a Maclaurin series.

9. Equivalence relations

Definition

A relation \sim on a set A will be called an equivalence relation, if it has the following three properties:

  • reflexive: for all a\in A, a\sim a,
  • symmetric: for all a,b\in A, if a\sim b, then b\sim a,
  • transitive: for all a,b,c\in A, if a\sim b and b\sim c, then a\sim c.

E.g., the relation \mod on the natural numbers such that n\mod m if and only if 2021|m-n is an equivalence relation. Indeed, for all n\in\mathbb{N}, we get 2021|n-n=0, so it is reflexive. It is also symmetric, because if 2021|n-m, then 2021|m-n. Finally, it is transitive, since if m\mod n and n\mod k, i.e. 2021|m-n and 2021|n-k, then 2021|(m-n)+(n-k)=m-k, so m\mod k.

Meanwhile the relation \leq on natural numbers is not an equivalence relation, because it is not symmetric, e.g. 0\leq 1, but not 1\leq 0.

Equivalence relations often appear in the real world. E.g. the following relation is an equivalence relation. Two cars are in the relation if and only if they are of the same colour. Another example is the relation of siblings.

Fundamental theorem of equivalence relations

Given an equivalence relation \sim on a set A, the set of all elements which are in relation with a given a\in A is called the equivalence class of a and denoted by [a]_\sim. So [a]_\sim=\{b\in A\colon a\sim b\}. The family of all equivalence classes is denoted by A/\sim and called the quotient set.

It is easy to notice that if a\sim b, then [a]_\sim=[b]_\sim (the key role in this fact is played by transitivity). On the other hand, if a and b are not in relation \sim, then [a]_\sim\cap[b]_\sim=\varnothing. This observation implies the fundamental theorem of equivalence relations, which states that an equivalence relation on a set is actually the same as a partition of this set.

A family \mathcal{A}\subseteq \mathcal{P}(A) of non-empty sets will be called a partition of A, if for any X,Y\in\mathcal{A}, X\cap Y=\varnothing whenever X\neq Y, and \bigcup\mathcal{A}=A.

The fundamental theorem states that every equivalence relation on a set generates a partition of the set, precisely the partition into its equivalence classes. On the other hand, every partition of a set generates an equivalence relation on this set. It is the relation such that two elements are in it, if they are in the same element of the partition.

E.g. the relation on the set of cars in which two cars are in the relation if and only if they are of the same colour, generates a partition of the set of cars into classes of equivalence related to each colour of cars.

E.g. the relation on \mathbb{R} such that x\sim y if and only if x-y\in\mathbb{Z} generates the partition of \mathbb{R} into the equivalence classes of this relation, e.g. [1/2]_\sim=\{z+1/2\colon z \in\mathbb{Z}\}, one class for each number from the interval [0,1).

The partition of \{0,1,2,3\} into \{\{0\},\{1,2,3\}\} generates the following equivalence relation: \{\left<0,0\right>,\left<1,1\right>,\left<1,2\right>,\left<1,3\right>,\left<2,1\right>,\left<2,2\right>,\left<2,3\right>,\left<3,1\right>,\left<3,2\right>,\left<3,3\right>\}.
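Generating the relation from a partition is a one-liner; a small sketch (our own, with hypothetical names):

    # The equivalence relation generated by a partition: two elements
    # are related iff they lie in the same block.
    partition = [{0}, {1, 2, 3}]
    relation = {(a, b) for block in partition for a in block for b in block}
    print(sorted(relation))
    # [(0, 0), (1, 1), (1, 2), (1, 3), (2, 1), (2, 2), (2, 3),
    #  (3, 1), (3, 2), (3, 3)]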

Cardinality of equivalence classes and of quotient set

Often we have to calculate the cardinality of each equivalence class of a given relation and the cardinality of its quotient set. E.g., in the relation on \mathbb{R} such that x\sim y if and only if x-y\in\mathbb{Z}, no two distinct numbers in [0,1) are in relation, so the function f\colon [0,1)\to \mathbb{R}/\sim given by f(x)=[x]_{\sim} is one-to-one, and therefore |\mathbb{R}/\sim|\geq \mathfrak{c}. On the other hand, any partition of the reals has cardinality \leq \mathfrak{c}, so |\mathbb{R}/\sim|= \mathfrak{c}. Finally, if x\in \mathbb{R}, then [x]_{\sim}=\{y\in\mathbb{R}\colon x-y\in\mathbb{Z}\}=\{z+x\colon z\in\mathbb{Z}\}, and therefore |[x]_\sim|=|\mathbb{Z}|=\aleph_0 for any x\in\mathbb{R}.

7. Analysis of a real function

Part 1.: Problems, solutions.
Part 2.: Problems, solutions.

Analysis of a real function

We will find and study:

  • the domain and zeroes,
  • continuity, limits at the points at which the function is not continuous and at the ends of intervals,
  • asymptotes,
  • differentiability and derivatives,
  • intervals of monotonicity and extrema,
  • second derivative, convexity, inflection points,
  • the table of the function,
  • parity, periodicity,
  • sketch of the graph,
  • range.

An example

We will study g(x)=\frac{x^3}{1-x^2}.

The domain and zeroes

The denominator 1-x^2=0 iff x\in\{-1,1\}. Therefore, D_g=\mathbb{R}\setminus\{-1,1\}. Moreover, g(x)=0 iff x=0.

Continuity, limits at the points at which the function is not continuous and at the ends of intervals

The function is continuous on (-\infty,-1),(-1,1) and (1,\infty).

Furthermore:

    \[\lim_{x\to-\infty}g(x)=\infty\]

    \[\lim_{x\to-1^-}g(x)=\infty\]

    \[\lim_{x\to-1^+}g(x)=-\infty\]

    \[\lim_{x\to 1^-}g(x)=\infty\]

    \[\lim_{x\to 1^+}g(x)=-\infty\]

    \[\lim_{x\to\infty}g(x)=-\infty\]

Asymptotes

Therefore we have vertical asymptotes x=-1 and x=1. There are no horizontal asymptotes.

We check oblique asymptotes:

    \[\lim_{x\to \infty} \frac{g(x)}{x}=\lim_{x\to\infty} \frac{x^2}{1-x^2}=\lim_{x\to\infty} \frac{1}{1/x^2-1}=-1.\]

    \[\lim_{x\to \infty} (g(x)+x)=\lim_{x\to\infty}\frac{x^3+x-x^3}{1-x^2}=\lim_{x\to\infty} \frac{1}{1/x-x}=0.\]

Therefore, y=-x is the right oblique asymptote.

    \[\lim_{x\to -\infty} \frac{g(x)}{x}=\lim_{x\to-\infty} \frac{x^2}{1-x^2}=\lim_{x\to-\infty} \frac{1}{1/x^2-1}=-1.\]

    \[\lim_{x\to -\infty} (g(x)+x)=\lim_{x\to-\infty}\frac{x^3+x-x^3}{1-x^2}=\lim_{x\to-\infty} \frac{1}{1/x-x}=0.\]

Therefore, y=-x is also the left oblique asymptote.

Differentiability and derivatives

The function is differentiable on its whole domain and:

    \[g'(x)=\dfrac{3x^2(1-x^2)+2x^4}{(1-x^2)^2}=-\dfrac{x^2(x^2-3)}{(1-x^2)^2}.\]

Intervals of monotonicity and extrema

x^2(x^2-3)=0 if x=0 or x=\pm \sqrt{3}. Therefore:

  • on (-\infty,-\sqrt{3}) we have g'(x)<0, so g decreases,
  • on (-\sqrt{3},-1) we have g'(x)>0, so g increases,
  • on (-1,0) we have g'(x)>0, so g increases,
  • on (0,1) we have g'(x)>0, so g increases,
  • on (1,\sqrt{3}) we have g'(x)>0, so g increases,
  • on (\sqrt{3},\infty) we have g'(x)<0, so g decreases.

Therefore, at x=-\sqrt{3} the function has a local minimum, and at x=\sqrt{3} a local maximum.

Second derivative, convexity, inflection points

    \[g''(x)=\left(-\dfrac{x^2(x^2-3)}{(1-x^2)^2}\right)'=\dfrac{2x(x^2+3)}{(1-x^2)^3},\]

  • on (-\infty,-1) we have g''(x)>0, so g is convex,
  • on (-1,0) we have g''(x)<0, so g is concave,
  • on (0,1) we have g''(x)>0, so g is convex,
  • on (1,\infty) we have g''(x)<0, so g is concave.

Therefore 0 is an inflection point.
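Both derivatives and the critical points can be verified symbolically (a check of ours, not part of the original analysis):

    import sympy as sp

    x = sp.symbols('x')
    g = x**3 / (1 - x**2)
    g1 = sp.simplify(sp.diff(g, x))
    g2 = sp.simplify(sp.diff(g, x, 2))
    print(g1)  # equivalent to -x**2*(x**2 - 3)/(1 - x**2)**2
    print(g2)  # equivalent to 2*x*(x**2 + 3)/(1 - x**2)**3
    print(sp.solve(sp.diff(g, x), x))  # critical points: 0, -sqrt(3), sqrt(3)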

The table of the function

                            g'    g''   g
    (-\infty,-\sqrt{3})     <0    >0    decreasing, convex
    -\sqrt{3}               =0    >0    local minimum
    (-\sqrt{3},-1)          >0    >0    increasing, convex
    -1                                  vertical asymptote
    (-1,0)                  >0    <0    increasing, concave
    0                       =0    =0    inflection point
    (0,1)                   >0    >0    increasing, convex
    1                                   vertical asymptote
    (1,\sqrt{3})            >0    <0    increasing, concave
    \sqrt{3}                =0    <0    local maximum
    (\sqrt{3},\infty)       <0    <0    decreasing, concave

Parity, periodicity

The function is odd, since g(-x)=\frac{(-x)^3}{1-(-x)^2}=-\frac{x^3}{1-x^2}=-g(x). Therefore it is not even, because it is not the constant zero function. Obviously, it is not a periodic function.

Sketch of the graph

Range

Obviously, R_g=\mathbb{R}.

6. Derivatives

Part 1.: problems, solutions.
Part 2.: problems, solutions.
Part 3.: problems, solutions.
Part 4.: problems, solutions.
Part 5.: problems, solutions.

Definition

We say that a function f is differentiable at a point x_0, if the limit:

    \[\lim_{x\to x_0}\frac{f(x)-f(x_0)}{x-x_0}\]

exists and is finite. This limit is then called the derivative of f at x_0 and denoted by f'(x_0).

E.g. the function f(x)=3x^2 is differentiable at x=2, because:

    \[\lim_{x\to 2}\frac{f(x)-f(2)}{x-2}=\lim_{x\to 2}\frac{3x^2-12}{x-2}=\]

    \[=\lim_{x\to 2}\frac{3(x-2)(x+2)}{x-2}=\lim_{x\to 2}3(x+2)=12,\]

so: f'(2)=12.
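The same limit can be computed symbolically (our own verification):

    import sympy as sp

    x = sp.symbols('x')
    print(sp.limit((3*x**2 - 12) / (x - 2), x, 2))  # 12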

The function g(x)=\sqrt{|x|} is not differentiable at x=0, because:

    \[\lim_{x\to 0^+}\frac{\sqrt{|x|}-0}{x-0}=\lim_{x\to 0^+}\frac{1}{\sqrt{x}}=\infty,\]

so this limit is infinite (and from the left it is -\infty).

The function h(x)=|x+1| is not differentiable at x=-1, because:

    \[\lim_{x\to -1^+}\frac{h(x)-h(-1)}{x-(-1)}=\lim_{x\to -1^+}\frac{x+1-0}{x-(-1)}=1,\]

but

    \[\lim_{x\to -1^-}\frac{h(x)-h(-1)}{x-(-1)}=\lim_{x\to -1^-}\frac{-x-1-0}{x-(-1)}=-1,\]

so the limit we need does not exist.

Derivatives of some simple functions

It is easy to verify the following facts. They are useful for calculating derivatives of more complicated functions. Let a\in\mathbb{R}:

    \[(a)'=0,\]

    \[(x^a)'=ax^{a-1},\]

    \[(a^x)'=a^x \ln a,\]

    \[(\sin x)'=\cos x,\]

    \[(\cos x)'=-\sin x,\]

    \[(\log_a x)'=\frac{1}{x\ln a}.\]

Arithmetic of derivatives

The theorem on the arithmetic of limits of functions immediately implies that:

    \[(f(x)+g(x))'=f'(x)+g'(x)\]

    \[(f(x)-g(x))'=f'(x)-g'(x)\]

if respective derivatives exist.

It is also easy to notice that in the case of multiplication it is not so easy — multiplying the expressions in the definition does not give the expression for the derivative of the product. Nevertheless, it is easy to check that:

    \[(f(x)\cdot g(x))'=f(x)g'(x)+f'(x)g(x)\]

and

    \[\left(\frac{f(x)}{g(x)}\right)'=\frac{f'(x)g(x)-f(x)g'(x)}{(g(x))^2}\]

if respective derivatives exist.

E.g. let f(x)=x^2\sin x+2^x-5. Then: f'(x)=(x^2\sin x)'+(2^x)'-(5)'=2x\sin x +x^2\cos x+2^x \ln 2.

Composition of functions

We have the following theorem. Given functions f,g and h(x)=f(g(x)), we have h'(x)=g'(x)f'(g(x)), if the derivatives of g at x and of f at g(x) exist.

Therefore e.g. (\sin x^2)'=2x\cos x^2.

Local extrema and intervals of monotonicity

A local minimum (respectively, a local maximum) of a function is an argument x_0 such that there exists an interval (x_0-\delta,x_0+\delta) in which f(x_0) is the least (respectively, the greatest) value of the function.

We have the following theorem: if a function f has a local extremum at x_0 and has a derivative there, then f'(x_0)=0. Note: the reverse implication does not hold, so the points at which the derivative equals zero are merely candidates for extrema.

E.g. f(x)=x^2 has a minimum at x=0 and indeed f'(x)=2x, so f'(0)=0. On the other hand, for g(x)=x^3 we get g'(x)=3x^2, so again g'(0)=0, but at x=0 the function does not have a local extremum.

We say that a function is strictly increasing (respectively non-decreasing, strictly decreasing, non-increasing) on an interval (a,b) if for any x,x', such that a<x<x'<b, we have f(x)<f(x') (respectively: f(x)\leq f(x'), f(x)>f(x'), f(x)\geq f(x')).

The following theorem holds: if f'(x)>0 (respectively: f'(x)\geq 0, f'(x)<0, f'(x)\leq 0) for any x\in (a,b), then f is strictly increasing (respectively non-decreasing, strictly decreasing, non-increasing) on (a,b).

E.g. for f(x)=x^2 we have f'(x)=2x, which is <0 for x<0 and >0 for x>0. So f is strictly decreasing on (-\infty, 0) and strictly increasing on (0,\infty).

Therefore a function f continuous at x_0 has a local maximum (respectively, minimum) at x_0, if there exists an interval (x_0-\delta,x_0+\delta) such that f is differentiable at every point of it, and f'(x)\geq 0 (respectively f'(x)\leq 0) for x\in (x_0-\delta,x_0) and f'(x)\leq 0 (respectively f'(x)\geq 0) for x\in (x_0,x_0+\delta).

E.g. x=0 is the local minimum of f(x)=x^2, given what we calculated above about its derivative.

Derivative of the inverse function

If f is strictly monotone and continuous, then (f^{-1})'(y)=\frac{1}{f'(f^{-1}(y))}, provided the derivative of f at f^{-1}(y) exists and is non-zero.

E.g. let f(x)=x^3. Then f^{-1}(y)=\sqrt[3]{y}. Therefore:

    \[(\sqrt[3]{y})'=\frac{1}{f'(\sqrt[3]{y})}=\frac{1}{3(\sqrt[3]{y})^2}=\frac{1}{3} y^{-\frac{2}{3}}.\]

Geometric interpretation — tangent line

It is easy to see that the derivative of a function at x_0 is the limit of the slopes (tangents of the angles of inclination) of the secant lines intersecting the graph of the function at x_0 and x. Therefore, it is the slope of the tangent line to the graph at x_0. Thus, if f is differentiable at x_0, then y=f'(x_0)(x-x_0)+f(x_0) is the equation of the tangent line to f at the point x=x_0.

E.g.: f(x)=x^2. The tangent line to this parabola at x=3 is y=(2\cdot 3)(x-3)+3^2=6x-9.

Rolle and Lagrange theorems

Rolle's theorem states that if f is continuous on [a,b], differentiable on (a,b), and f(a)=f(b), then there exists c\in (a,b) such that f'(c)=0.

From this, Lagrange's theorem follows: if f is continuous on [a,b] and differentiable on (a,b), then there exists c\in (a,b) such that f'(c)=\frac{f(a)-f(b)}{a-b}.

L’Hôpital’s rule

L’Hôpital’s rule makes it possible to resolve some difficult cases in the arithmetic of limits of functions using derivatives. If \lim_{x\to x_0}f(x)=\lim_{x\to x_0} g(x)=0 or \lim_{x\to x_0}f(x)=\lim_{x\to x_0} g(x)=\pm\infty, and also \frac{f(x)}{g(x)} and \frac{f'(x)}{g'(x)} are defined on (x_0-\delta, x_0+\delta)\setminus\{x_0\} for some \delta, then \lim_{x\to x_0}\frac{f(x)}{g(x)}=\lim_{x\to x_0}\frac{f'(x)}{g'(x)}, provided the second limit exists.

E.g. we calculate \lim_{x\to 0^+}\frac{\frac{1}{x}}{\ln x}. We have \frac{1}{x}\to\infty and \ln x\to -\infty as x\to 0^+. Also \left(\frac{1}{x}\right)'=-\frac{1}{x^2} and (\ln x)'=\frac{1}{x}. Therefore, \lim_{x\to 0^+}\frac{\frac{1}{x}}{\ln x}=\lim_{x\to 0^+}\frac{-\frac{1}{x^2}}{\frac{1}{x}}=\lim_{x\to 0^+}\left(-\frac{1}{x}\right)=-\infty.
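The same one-sided limit, checked symbolically (an illustration of ours):

    import sympy as sp

    x = sp.symbols('x')
    print(sp.limit((1/x) / sp.log(x), x, 0, '+'))  # -oo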

Higher-order derivatives

Obviously, if the derivative of a function is differentiable, we can calculate its derivative, the derivative of the derivative f''(x), which describes how the first derivative changes. Further, we can calculate the third, fourth, etc. derivatives. Generally, the n-th derivative will be denoted by f^{(n)}, and f^{(n+1)}(x)=\left(f^{(n)}\right)'(x).

E.g., for f(x)=x^3, we get:

    \[f'(x)=3x^2\]

    \[f''(x)=(3x^2)'=6x\]

    \[f'''(x)=(6x)'=6\]

    \[f^{(4)}(x)=0\]

and for any n\geq 4,

    \[f^{(n)}(x)=0.\]

Condition for the existence of a local extremum

The following theorem holds. If for some n>0:

  • f has derivatives at x up to order 2n at least,
  • f'(x)=f''(x)=\ldots=f^{(2n-1)}(x)=0,
  • f^{(2n)}(x)>0 (respectively: f^{(2n)}(x)<0),

then f has a minimum (respectively, maximum) at x.

E.g. f(x)=x^8-x^4 has a maximum at x=0 (with n=2), because:

    \[f'(x)=8x^7-4x^3\Rightarrow f'(0)=0\]

    \[f''(x)=56x^6-12x^2\Rightarrow f''(0)=0\]

    \[f'''(x)=336x^5-24x\Rightarrow f'''(0)=0\]

    \[f^{(4)}(x)=1680x^4-24\Rightarrow f^{(4)}(0)=-24\]
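The derivatives at 0 are mechanical to verify (our own check):

    import sympy as sp

    x = sp.symbols('x')
    f = x**8 - x**4
    print([sp.diff(f, x, k).subs(x, 0) for k in range(1, 5)])  # [0, 0, 0, -24]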

Convex and concave functions, inflection points

We shall say that f is convex on an interval (a,b), if its graph between any two points x,y\in (a,b) lies below the secant line determined by those points. In other words, if for any x,y\in(a,b) with x\neq y and any t\in (0,1):

    \[f(tx+(1-t)y)<tf(x)+(1-t)f(y).\]

If the reverse inequality holds, the function f is concave. If f is convex on (a,b) and concave on (b,c) (or conversely) and continuous at b, we will say that b is an inflection point.

The following theorems hold:

  • if f' exists on a given interval and is increasing (respectively, decreasing), then f is convex (respectively, concave) on this interval,
  • if f'' exists on a given interval and is always positive (respectively, negative), then f is convex (respectively, concave) on this interval.

E.g.: f(x)=x^3. We get f'(x)=3x^2 and f''(x)=6x, which is positive on (0,\infty), so f is convex on this interval, and negative on (-\infty,0), so f is concave on it. The point 0 is an inflection point.

5. Convergence and continuity of a function

Part 1.: Problems, solutions.
Part 2.: Problems, solutions.
Part 3.: Problems, solutions.
Part 4.: Problems, solutions.
Part 5.: Problems, solutions.

Limit of a function at a point

A notion very similar to the limit of a sequence is the limit of a function at a point. It describes the behaviour of the values of the function when the arguments get nearer and nearer to a given number. There are two equivalent definitions of this notion: due to Heine and due to Cauchy.

We will say that a function f has limit g at a point x_0 (denoted by \lim_{x\to x_0} f(x)=g), if for any sequence x_n which converges to x_0 and such that x_n\neq x_0 for all n, the limit of the sequence f(x_n) exists and equals g.

Simple example: the function f(x)=x^2 has limit 4 at 2, because from the arithmetic of limits of sequences we know that if x_n converges to 2, then f(x_n)=x_n^2 converges to 2^2=4.

On the other hand, the function

    \[g(x)=\begin{cases}-1\colon x<0,\\ 1\colon x\geq 0\end{cases}\]

has no limit at 0, because for the sequence x_n=\frac{1}{n+1} we get g(x_n)\to 1, while for y_n=\frac{-1}{n+1} we get g(y_n)\to -1.

Cauchy’s definition

The equivalent definition is the following. We will say that a function f has limit g at point x_0, if:

    \[\forall_{\varepsilon>0}\exists_{\delta>0}\forall_{x\in (x_0-\delta,x_0+\delta)\setminus\{x_0\}} |g-f(x)|<\varepsilon\]

which means that for an arbitrarily small positive number \varepsilon there exists a small interval around x_0, such that the values of the function in this small interval differ from the limit by no more than \varepsilon.

Let us check that \lim_{x\to 0}x^2=0. Let \varepsilon>0. It suffices to set \delta=\sqrt{\varepsilon}. Then, if x\in(-\delta,\delta), then indeed x^2<\delta^2=\varepsilon.

Infinite limits and limits at infinity

Using Heine’s definition we see that we can also easily consider limits of a function as x\to\infty or x\to-\infty. We simply take all sequences x_n which diverge respectively to \infty or -\infty and look at the limits of the sequences f(x_n). So for example, \lim_{x\to-\infty} \frac{x^3-x}{2x^3+1}=\lim_{x\to-\infty} \frac{1-\frac{1}{x^2}}{2+\frac{1}{x^3}}=\frac{1-0}{2+0}=\frac{1}{2}.

It can also happen that the limit of a function at a given point is \infty or -\infty, if all sequences x_n\to x_0 (with x_n\neq x_0) are such that the limit of f(x_n) is respectively \infty or -\infty. E.g. \lim_{x\to 0}\frac{1}{|x|}=\infty.

Arithmetic of limits

Heine’s version of the definition immediately implies that the arithmetic of limits of functions works in the same way as the arithmetic of limits of sequences. E.g. \lim_{x\to -2}\frac{x^3+3x^2+6x+8}{x^2-4}=\lim_{x\to -2}\frac{(x+2)(x^2+x+4)}{(x+2)(x-2)}=\lim_{x\to -2}\frac{x^2+x+4}{x-2}=\frac{4-2+4}{-2-2}=-\frac{3}{2}.

One-sided limits

The left limit (from below) of a function at x_0 (denoted by \lim_{x\to x_0^-} f(x)) equals g, if for any sequence x_{n} which converges to x_0 and such that x_n<x_0 for all n, we get f(x_n)\to g. Similarly, the right limit (from above) (denoted by \lim_{x\to x_0^+} f(x)) equals g, if for any sequence x_{n} which converges to x_0 and such that x_n>x_0 for all n, we get f(x_n)\to g.

E.g. \lim_{x\to 0^-} \frac{1}{x}=-\infty and \lim_{x\to 0^+} \frac{1}{x}=\infty.

A function has a limit at x_0 if and only if both one-sided limits at x_0 exist and are equal.

Substitution theorem

We have the following theorem: \lim_{x\to x_0}f(g(x))=g, if \lim_{y\to y_0}f(y)=g and \lim_{x\to x_0}g(x)=y_0, provided that on some neighbourhood of x_0 we have g(x)\neq y_0 for x\neq x_0.

It sounds a bit complicated but is very convenient. E.g. let us calculate the limit of the function \frac{3^x-3}{9^x-3^x-6} at x=1. Let y(x)=3^x. If x\to 1, then y\to 3, and y(x)=3 only if x=1. Therefore:

    \[\lim_{x\to 1}\frac{3^x-3}{9^x-3^x-6}=\lim_{y\to 3}\frac{y-3}{y^2-y-6}=\lim_{y\to 3}\frac{y-3}{(y-3)(y+2)}=\lim_{y\to 3}\frac{1}{y+2}=\frac{1}{5}.\]
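A symbolic cross-check of this limit (our own addition):

    import sympy as sp

    x = sp.symbols('x')
    print(sp.limit((3**x - 3) / (9**x - 3**x - 6), x, 1))  # 1/5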

Asymptotes

Asymptotes are lines to which the graph of a function converges. Asymptotes can be vertical, horizontal or oblique.


If \lim_{x\to x_0^+} f(x)=\pm\infty, then the line x=x_0 is a right vertical asymptote. Analogously, if \lim_{x\to x_0^-} f(x)=\pm\infty, this line is a left vertical asymptote. E.g. \lim_{x\to 0^+}\frac{1}{x}=\infty and \lim_{x\to 0^-}\frac{1}{x}=-\infty, so the line x=0 is a vertical asymptote of this function.

If \lim_{x\to\infty} f(x)=g or \lim_{x\to-\infty} f(x)=g, then the line y=g is respectively a right or a left horizontal asymptote of this function. Since \lim_{x\to\infty} (2^{-x}+2015)=2015, y=2015 is a horizontal asymptote of 2^{-x}+2015.

A line y=ax+b is an oblique asymptote (respectively left or right), if \lim_{x\to-\infty}(f(x)-ax-b)=0 or \lim_{x\to+\infty}(f(x)-ax-b)=0. If such a line is an asymptote (assume it is a right asymptote), then a=\lim_{x\to\infty}\frac{f(x)}{x} and b=\lim_{x\to\infty}(f(x)-ax).

E.g.: let f(x)=\frac{2x^2-3}{3x-1}, then \lim_{x\to \infty}\frac{f(x)}{x}=\lim_{x\to \infty}\frac{2x^2-3}{3x^2-x}=\frac{2}{3} and \lim_{x\to\infty}\left(f(x)-\frac{2x}{3}\right)=\lim_{x\to\infty}\frac{\frac{2}{3}x-3}{3x-1}=\frac{2}{9}. Therefore, y=\frac{2}{3}x+\frac{2}{9} is an oblique asymptote of this function.
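The coefficients a and b can be computed the same way in sympy (a sketch of ours):

    import sympy as sp

    x = sp.symbols('x')
    f = (2*x**2 - 3) / (3*x - 1)
    a = sp.limit(f / x, x, sp.oo)    # 2/3
    b = sp.limit(f - a*x, x, sp.oo)  # 2/9
    print(a, b)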

Continuity

A function is continuous at a point x_0, if the limit \lim_{x\to x_0}f(x) exists and equals f(x_0). Obviously all simple arithmetic functions are continuous at every point of their domains. E.g. if f(x)=x^2, then \lim_{x\to 2} f(x)= 4= f(2).

A function which is continuous at every point of its domain will be simply called continuous.

The function g(x)=|x| is continuous. Obviously it is continuous at all non-zero points. It is also continuous at x=0, because \lim_{x\to 0^+} |x|=\lim_{x\to 0^+} x = 0 and \lim_{x\to 0^-} |x|=\lim_{x\to 0^-} (-x) = 0, and therefore \lim_{x\to 0} |x| = 0=|0|.

Let h be a function defined in the following way:

    \[h(x)=\begin{cases} x^2\colon x\neq 2,\\ 2015\colon x=2\end{cases}.\]

This function is continuous at all points except 2; it is not continuous at 2, because \lim_{x\to 2} h(x)=4\neq 2015= h(2).

Let function r be defined as follows:

    \[r(x)=\begin{cases} -1\colon x<0,\\ 1\colon x\geq 0\end{cases}.\]

This function is continuous for x\neq 0, but is not continuous at 0, because \lim_{x\to 0^+} r(x)=1, but \lim_{x\to 0^-} r(x)=-1, and therefore it has no limit at the point x=0.

Function w defined in the following way:

    \[w(x)=\begin{cases} 1\colon x\in\mathbb{Q},\\ 0\colon x\notin\mathbb{Q}\end{cases}.\]

is an example of a function which has no limit at any point. Therefore it is also not continuous at any point.

Darboux property

A function continuous on an interval [a,b] has the following (intuitively obvious) property: if f(a)<y<f(b), then there exists c such that a<c<b and f(c)=y. Therefore, for example, since the function u(x)=-x^3+5x-1 is continuous and u(0)=-1<0<3=u(1), u has at least one root in the interval (0,1).

Uniform continuity

If we are able to choose \delta for each \varepsilon in such a way that on any interval of length \delta the values of the function differ by no more than \varepsilon, uniformly, regardless of the place x, then we say that the function is uniformly continuous. More formally, a function f is uniformly continuous, if:

    \[\forall_{\varepsilon>0}\exists_{\delta}\forall_{x_1,x_2} |x_1-x_2|\leq\delta\rightarrow |f(x_1)-f(x_2)|\leq\varepsilon.\]

The function f(x)=x is a very simple example. It is uniformly continuous, because for any \varepsilon we can set \delta=\varepsilon. Then for all x_1,x_2 such that |x_1-x_2|\leq\delta, obviously |f(x_1)-f(x_2)|=|x_1-x_2|\leq\varepsilon.

On the other hand, the function g(x)=\sin\frac{1}{x} on the interval (0,\infty) is continuous but is not uniformly continuous, because if \varepsilon=1, regardless of how small a \delta we choose, we can find 0<x_1<x_2<\delta such that \sin\frac{1}{x_1}=-1 and \sin\frac{1}{x_2}=1 (e.g. x_1=\frac{1}{2k\pi+3\pi/2} and x_2=\frac{1}{2k\pi+\pi/2} for large enough k), and therefore |g(x_1)-g(x_2)|=2>1.