6. Estimations, series and errors

Using Lagrange theorem to approximate values

Recall that the Lagrange Theorem states that for every function differentiable on an interval [x_0, x_0+h] there exists c in this interval such that

    \[\frac{f(x_0+h)-f(x_0)}{h}=f'(c),\]

thus

    \[f(x_0+h)=f(x_0)+hf'(c),\]

but for small h we can approximate f'(c)\simeq f'(x_0), so the value f(x_0+h) is approximated using only the values of f and f' at x_0:

    \[f(x_0+h)\simeq f(x_0)+hf'(x_0).\]
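As a quick numerical sanity check, here is a small Python sketch of this approximation; the test function \sqrt{x}, the point x_0=4 and the helper name linear_approx are illustrative choices, not part of the text above.

    import math

    def linear_approx(f, df, x0, h):
        # f(x0 + h) ~ f(x0) + h * f'(x0)
        return f(x0) + h * df(x0)

    # approximate sqrt(4.1) with f(x) = sqrt(x), f'(x) = 1/(2*sqrt(x)), x0 = 4
    approx = linear_approx(math.sqrt, lambda x: 1 / (2 * math.sqrt(x)), 4.0, 0.1)
    exact = math.sqrt(4.1)
    print(approx, exact, abs(approx - exact))  # 2.025 vs 2.02484..., error ~1.5e-5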

Errors and estimating them

If a physical value f is calculated from measurements, f=f(x_1,\ldots, x_n), where x_1,\ldots, x_n were measured to be x_{1,0},\ldots, x_{n,0} with possible deviations \Delta x_1,\ldots, \Delta x_n, then we can estimate that the error of f is not greater than:

    \[\Delta_{\max}f(x_{1,0},\ldots, x_{n,0})=\left|\frac{\partial f}{\partial x_1}(x_{1,0},\ldots, x_{n,0})\cdot \Delta x_1\right|+\ldots +\left|\frac{\partial f}{\partial x_n}(x_{1,0},\ldots, x_{n,0})\cdot \Delta x_n\right|,\]

and the maximal relative error equals \delta_{\max}=\left|\Delta_{\max}f(x_{1,0},\ldots, x_{n,0})/f(x_{1,0},\ldots, x_{n,0})\right|.

E.g., if the measurement of the sides of a cuboid was done with an observational error of 0.1 cm and results x=6, y=3, z=1 [cm], and then the volume V(x,y,z)=xyz was calculated, the error can be estimated in the following way. We get \partial V/\partial x=yz, \partial V/\partial y=xz, \partial V/\partial z =xy, which at (6,3,1) gives 3, 6 and 18, respectively.
So the maximal error is:

    \[|\Delta V_{\max}|=|3\cdot 0.1|+|6\cdot 0.1|+|18\cdot 0.1|=0.3+0.6+1.8=2.7,\]

so the maximal error is 2.7 cm^3, and the maximal relative error is \delta_{\max}=2.7/36=0.075 (since V=36 cm^3), which is 7.5\%.
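The same computation is easy to script. Below is a short Python sketch reproducing the cuboid example; the helper name max_error is our own illustrative choice.

    def max_error(partials, deltas):
        # sum of |df/dx_i * dx_i| over all measured variables
        return sum(abs(p * d) for p, d in zip(partials, deltas))

    x, y, z = 6.0, 3.0, 1.0
    dx = dy = dz = 0.1
    partials = (y * z, x * z, x * y)   # dV/dx, dV/dy, dV/dz at (6, 3, 1)
    abs_err = max_error(partials, (dx, dy, dz))
    rel_err = abs_err / (x * y * z)
    print(abs_err, rel_err)            # ~2.7 cm^3 and ~0.075, i.e. 7.5%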

Series

Given a sequence of real numbers a_n, we can consider the series \sum_{n=1}^{\infty} a_n, understood simply as the sequence of partial sums:

    \[S_n=a_1+\ldots +a_n\]

The sum of a given series is the limit of the sequence S_n, if it exists. If this limit is finite, we say that the series converges.

E.g. the series \sum_{n=1}^{\infty}\frac{1}{2^n} is convergent and sums up to 1. Indeed, S_n=\frac{1}{2}+\frac{1}{4}+\ldots+\frac{1}{2^n}=1-\frac{1}{2^n}, which converges to 1.
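A few partial sums make the convergence visible; a minimal Python sketch:

    s = 0.0
    for n in range(1, 21):
        s += 1 / 2**n
        if n % 5 == 0:
            print(n, s)  # approaches 1: 0.96875, 0.99902..., 0.99996..., 0.99999...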

Necessary condition

It is easy to notice that if the series \sum_{n=1}^{\infty} a_n is convergent, then the sequence a_n converges to zero. In particular, if a_n is not convergent or converges to a non-zero limit, then \sum_{n=1}^{\infty} a_n cannot be convergent.

E.g. the series \sum_{n=1}^{\infty} (-1)^n is not convergent, because the sequence (-1)^n is not convergent. Also \sum_{n=1}^{\infty}\left(1+\frac{1}{n}\right)^n is not convergent, because \left(1+\frac{1}{n}\right)^n converges to e\neq 0.

Notice that the converse implication is not true: the series \sum_{n=1}^{\infty} a_n may fail to converge even if a_n converges to zero. E.g. take the sequence a_n: 1,\frac{1}{2},\frac{1}{2},\frac{1}{4}, \frac{1}{4}, \frac{1}{4}, \frac{1}{4}, \ldots: it converges to zero, but each block of 2^k copies of \frac{1}{2^k} adds 1 to the partial sums, so they grow without bound.
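One can watch the partial sums of this counterexample grow; a short Python sketch with the block structure of the sequence hard-coded:

    s, n = 0.0, 0
    for k in range(10):          # blocks: one 1, two 1/2's, four 1/4's, ...
        for _ in range(2**k):
            s += 1 / 2**k
            n += 1
    print(n, s)                  # after 1023 terms the partial sum is already 10.0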

Arithmetic and comparison criterion

Since series can be seen as sequences, many theorems on the arithmetic of limits make sense also for series. For example, the sum of two convergent series (added term by term) converges to the sum of their sums. Notice that multiplication is trickier: multiplying the terms of two series is not the same as multiplying their partial sums.

But we have a comparison criterion, which is implied by the three sequences theorem. If |a_n|\leq b_n for all n greater than some number n_0 and \sum_{n=1}^{\infty}b_n is convergent, then \sum_{n=1}^{\infty}a_n also converges.

E.g. the series \sum_{n=1}^\infty \frac{1}{3^n} is obviously convergent, because 0\leq \frac{1}{3^n}\leq \frac{1}{2^n} and \sum_{n=1}^{\infty}\frac{1}{2^n} converges.
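Numerically, the partial sums of \sum 1/3^n indeed stay below those of \sum 1/2^n; a minimal Python sketch:

    s2 = s3 = 0.0
    for n in range(1, 31):
        s2 += 1 / 2**n
        s3 += 1 / 3**n
    print(s3, s2)  # ~0.5 and ~1.0: the dominated series converges as well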

D’Alembert criterion

The d’Alembert criterion relates the convergence of \sum_{n=1}^\infty a_n to the limit of \frac{|a_{n+1}|}{|a_n|}: if this limit is <1, the series is convergent; if it is >1, the series is divergent.

E.g. \sum_{n=1}^{\infty}\frac{2^n}{n!} is convergent, because \frac{a_{n+1}}{a_n}=\frac{2}{n+1}\to 0<1.
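A direct numerical check (Python sketch; the criterion only asserts convergence, while the value e^2-1 of the sum comes from the exponential series, not from the criterion):

    import math

    s = 0.0
    for n in range(1, 21):
        a = 2**n / math.factorial(n)
        s += a
        ratio = 2 / (n + 1)          # a_{n+1} / a_n, tends to 0 < 1
    print(ratio, s, math.e**2 - 1)   # small ratio; both sums about 6.389056...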

Notice that the criterion does not give any result if the limit equals 1.

Cauchy criterion

The Cauchy criterion relates the convergence of \sum_{n=1}^\infty a_n to the limit of \sqrt[n]{|a_n|}: if this limit is <1, the series is convergent; if it is >1, the series is divergent.

E.g. \sum_{n=1}^{\infty}\frac{n}{2^n} is convergent, because \sqrt[n]{a_n}=\frac{\sqrt[n]{n}}{2}\to \frac{1}{2}<1.
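Again a quick Python sketch: the n-th roots approach 1/2 (slowly), and the partial sums settle near 2, the exact sum, though the criterion itself only gives convergence.

    s = 0.0
    for n in range(1, 41):
        a = n / 2**n
        s += a
    root = a ** (1 / 40)   # 40th root of a_40, about 0.55, drifting toward 1/2
    print(root, s)         # partial sum ~2.0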

Notice that the criterion does not give any result if the limit equals 1.

Leibniz theorem

Leibniz theorem states that if a_n is non-increasing and converges to zero, then \sum_{n=1}^\infty (-1)^{n+1} a_n is convergent.

In particular, \sum_{n=1}^\infty \frac{(-1)^{n+1}}{n} is convergent, since \frac{1}{n} is non-increasing and converges to zero.
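Its partial sums approach \ln 2 (a known fact, not implied by the theorem itself); a Python sketch, which also shows how slow the convergence is:

    import math

    s = 0.0
    for n in range(1, 100001):
        s += (-1) ** (n + 1) / n
    print(s, math.log(2))  # 0.69314..., 0.693147...: close only after 100000 terms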

Absolute convergence

We shall say that \sum_{n=1}^\infty a_n is absolutely convergent if \sum_{n=1}^\infty |a_n| is convergent. Notice that if a series is absolutely convergent, then it is convergent.

E.g. \sum_{n=1}^\infty \frac{(-1)^{n+1}}{n} is convergent, but is not absolutely convergent. On the other hand, \sum_{n=1}^\infty \frac{(-1)^{n+1}}{2^n} is absolutely convergent (and also obviously convergent).

Taylor’s Theorem

Taylor’s Theorem is a very powerful generalization of the Lagrange Theorem, which we studied in the previous term. It allows us to approximate a function by polynomials.

Let k\in\mathbb{N} and let f\colon\mathbb{R}\to\mathbb{R} have all derivatives up to the (k+1)-th. Let x_0\in\mathbb{R}. Then:

    \[f(x)=f(x_0)+\frac{f'(x_0)}{1!}(x-x_0)+\frac{f''(x_0)}{2!}(x-x_0)^2+\frac{f'''(x_0)}{3!}(x-x_0)^3+\]

    \[+\ldots+\frac{f^{(k)}(x_0)}{k!}(x-x_0)^k+\frac{f^{(k+1)}(\theta)}{(k+1)!}(x-x_0)^{k+1},\]

where \theta is some number between x and x_0.

The sum f(x_0)+\frac{f'(x_0)}{1!}(x-x_0)+\frac{f''(x_0)}{2!}(x-x_0)^2+\frac{f'''(x_0)}{3!}(x-x_0)^3+ \ldots+\frac{f^{(k)}(x_0)}{k!}(x-x_0)^k is called the k-th order Taylor polynomial of f at the point x_0, and R_k(x)=\frac{f^{(k+1)}(\theta)}{(k+1)!}(x-x_0)^{k+1} is the k-th remainder term.

Calculating approximations

Taylor’s Theorem enables calculating approximate values of a function at a given point. E.g., \cos 0.01. The second order Taylor polynomial of \cos at 0 is 1+0+\frac{-1}{2!}x^2, which for x=0.01 gives 1-\frac{0.0001}{2}=0.99995, and this equals \cos 0.01 up to the remainder term, so the possible error is not greater than \left|\frac{-\sin\theta}{3!}(0.01)^3\right|\leq\frac{0.000001}{6}.
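Comparing with the true value confirms the bound; a minimal Python sketch:

    import math

    x = 0.01
    taylor2 = 1 - x**2 / 2   # second order Taylor polynomial of cos at 0
    print(taylor2, math.cos(x), abs(taylor2 - math.cos(x)))
    # 0.99995, 0.9999500004..., error ~4.2e-10, well under the bound 1e-6/6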

Taylor series

Therefore, if on a given interval R_n(x)\to 0, then we can write f as the sum of a series. E.g. for e^x the remainder R_n converges to zero on the whole real line, so the Taylor series of e^x at the point 1 (it is easy to compute because (e^x)^{(k)}=e^x) is:

    \[e^x=e+\frac{e}{1!}(x-1)+\frac{e}{2!}(x-1)^2+\ldots=\sum_{n=0}^{\infty}\frac{e(x-1)^n}{n!}.\]

Such a series but for x_0=0 is called a Maclaurin series.
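To see the series at work, here is a small Python sketch summing its first terms at a sample point x=1.5; the helper name exp_taylor_at_1 is our own illustrative choice.

    import math

    def exp_taylor_at_1(x, k):
        # partial sum of e * (x-1)^n / n! for n = 0..k
        return sum(math.e * (x - 1) ** n / math.factorial(n) for n in range(k + 1))

    for k in (2, 5, 10):
        print(k, exp_taylor_at_1(1.5, k))
    print(math.exp(1.5))  # partial sums approach e^1.5 = 4.4816...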