Polish version
Computational Mathematics II
winter semester 2011-12
lecture: Monday 10:15-11:45 am, room 4050; classes/lab: Monday 12:15-1:45 pm
- blackboard classes in room 4050 or lab in room 2042 (MIMUW building,
Banacha 2; entrance from Pasteura Street)
Program of the labs
Some test problems. I recommend that you try to solve them (it is not obligatory, but the problems are not difficult, and if you
cannot solve them, you should study more...)
Syllabus
- Iterative methods for solving
- linear systems of equations
- nonlinear systems of equations
- Eigenproblems - methods for finding (approximations of) eigenvalues and eigenvectors of a matrix
- Multidimensional integration
There may be a few computer labs (instead of standard "blackboard" classes)
The course is elementary - it requires only the basics of linear algebra and mathematical
analysis I.
There are lecture notes for this course in Polish.
Evaluation will be based on an oral exam.
Lecture notes
(In Polish)
Piotr Krzyzanowski, Leszek Plaskota, Matematyka Obliczeniowa II, 2010.
Published on-line:
WWW page
(there is a link to pdf file with the lecture notes).
References
Text books
-
James W. Demmel, Applied Numerical Linear Algebra. Society for Industrial and Applied Mathematics (SIAM), Philadelphia 1997.
-
Peter Deuflhard, Andreas Hohmann. Numerical analysis in modern
scientific computing, vol. 43 in Texts in Applied Mathematics. Springer-Verlag, New York,
2nd edition, 2003. An
introduction.
-
J.M. Jankowscy, M. Dryja. Przegląd metod i algorytmów numerycznych, tom
I i II. Biblioteka
inżynierii oprogramowania. Wydawnictwo Naukowo-Techniczne, Warszawa,
1995.
-
C. T. Kelley. Iterative methods for linear and nonlinear equations, vol. 16 of Frontiers in
Applied Mathematics. Society for Industrial and Applied Mathematics (SIAM), Philadelphia, PA,
1995.
-
A. Kiełbasiński, H. Schwetlick. Numeryczna algebra liniowa. Wydawnictwa
Naukowo-Techniczne,
1992.
-
D. Kincaid, W. Cheney. Numerical Analysis, 2nd edition, Brooks/Cole, 1996. A general textbook on numerical methods.
-
J. Stoer, R. Bulirsch.
Introduction to numerical analysis.
Translated from the German by R. Bartels, W. Gautschi and C. Witzgall.
Third edition. Texts in Applied Mathematics, 12. Springer-Verlag, New
York, 2002.
-
Lloyd N. Trefethen, David Bau, III,
Numerical linear algebra.
Society for Industrial and Applied Mathematics (SIAM), Philadelphia, PA, 1997.
Monographs or advanced text books
- John E. Dennis Jr., Robert B. Schnabel. Numerical methods for unconstrained optimization and
nonlinear equations. Prentice-Hall Series in Computational Mathematics. Prentice-Hall Inc., Englewood Cliffs, N.J., 1983.
-
Peter Deuflhard. Newton Methods for Nonlinear Problems. Affine Invariance and Adaptive Algorithms. Springer International, 2002.
-
Eugene G. Dyakonov. Optimization in solving elliptic problems. CRC Press, Boca Raton, FL,
1996. Translated from the 1989 Russian original, Translation edited and with a preface by Steve
McCormick.
-
Gene H. Golub, Charles F. Van Loan. Matrix computations. Johns Hopkins Studies in the Mathematical Sciences. Johns Hopkins University Press, Baltimore, MD, 3rd ed., 1996.
-
J. M. Ortega, W. C. Rheinboldt. Iterative solution of nonlinear equations in several variables.
Academic Press, New York, 1970.
-
Yousef Saad. Iterative methods for sparse linear systems. Society for Industrial and Applied Mathematics, Philadelphia, PA, 2nd ed., 2003.
On-line
-
Yousef Saad, Numerical Methods for Large Eigenvalue Problems. Society for Industrial and Applied Mathematics, Philadelphia, PA, 2nd ed., 2011.
On-line
- A. A. Samarski, J. S. Nikołajew. Metody rozwiązywania równań siatkowych. PWN 1988.
-
Barry F. Smith, Petter E. Bjorstad, William D. Gropp. Domain decomposition. Cambridge University Press, Cambridge, 1996. Parallel multilevel methods for elliptic partial differential equations.
-
Andrea Toselli, Olof Widlund. Domain decomposition methods - algorithms and theory, volume 34
of Springer Series in Computational Mathematics. Springer-Verlag, Berlin, 2005.
-
J. F. Traub. Iterative Methods for the Solution of Equations. Prentice-Hall, Englewood Cliffs, N.J., 1964.
LAB
Some classes will be held in the computer lab.
Link to Octave (one can download the Linux or Windows version of Octave)
octave-forge - Octave extensions
Octave manual in HTML
Program of the labs and Octave scripts with solutions of some problems
- Lab 1 (Oct 10th, 2011) - introduction to Octave; direct methods for solving systems of linear equations - LU factorization
mo2basic.m - a simple Octave script
with basic operations such as matrix multiplication and basic structures such as loops, etc.
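The lab uses Octave; purely as an illustration of the same idea, here is a minimal Python/NumPy sketch of solving a linear system via LU factorization (the matrix and right-hand side are made up for the example):

```python
import numpy as np
from scipy.linalg import lu_factor, lu_solve

# Small system A x = b with a known solution (values are illustrative).
A = np.array([[4.0, 1.0], [1.0, 3.0]])
x_exact = np.array([1.0, 2.0])
b = A @ x_exact

# LU factorization with partial pivoting, then forward/back substitution.
lu, piv = lu_factor(A)
x = lu_solve((lu, piv), b)

print(np.allclose(x, x_exact))  # True
```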
- Lab 2 (Oct 24th, 2011) - direct methods for solving linear systems (LU and L'L factorizations), sparse matrices, Jacobi method
-
Check how the functions lu() and chol() work. Using chol(), check whether the
tridiagonal matrix with -1 on the sub- and super-diagonals and zero on the
main diagonal
is positive definite.
Compare the times of solving a system with T_N, i.e., the tridiagonal matrix
with 2 on the diagonal and -1 on the sub- and super-diagonals, using \ and
lu() or chol().
Create this matrix in sparse format using the sparse() Octave function,
WITHOUT creating any full matrix. Compare the times of solving the system with
T_N in sparse and in full format.
Find the maximal N for which the Octave solver works (for both sparse and full T_N).
Examples and exercises in Section 1.3.1
of the lecture notes.
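A hedged Python/SciPy sketch of the sparse-vs-full comparison above (the size and names are illustrative; the lab itself uses Octave's sparse()):

```python
import numpy as np
import scipy.sparse as sp
import scipy.sparse.linalg as spla

N = 500
# T_N: 2 on the diagonal, -1 on the sub- and super-diagonals,
# assembled directly in sparse format (no full matrix is formed).
T = sp.diags([-1.0, 2.0, -1.0], offsets=[-1, 0, 1], shape=(N, N), format="csc")

b = np.ones(N)
x_sparse = spla.spsolve(T, b)             # sparse direct solve
x_full = np.linalg.solve(T.toarray(), b)  # full-format comparison

print(np.allclose(x_sparse, x_full))  # True
```

Timing both solves (e.g. with time.perf_counter) for growing N reproduces the comparison asked for in the exercise.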
-
Test the Octave script for example 1.8 in the lecture notes, i.e.,
we want to test the convergence of the Jacobi method for A = B + pI with p a parameter.
We choose p so as to obtain a diagonally dominant matrix.
-
Solve exercise 1.9, i.e., check how the value of p influences the
convergence rate of the Jacobi method.
Test the time of solving the system with respect to N; test whether for
N=100 it is reasonable to use the Jacobi method (compared with the
direct method).
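For reference, a small Python/NumPy sketch of the Jacobi iteration being tested (a toy diagonally dominant matrix stands in for B + pI; all names are illustrative):

```python
import numpy as np

def jacobi(A, b, x0, tol=1e-8, maxit=10_000):
    """Jacobi iteration: x_{k+1} = D^{-1} (b - (A - D) x_k)."""
    d = np.diag(A)
    R = A - np.diag(d)
    x = x0.copy()
    for k in range(maxit):
        x_new = (b - R @ x) / d
        if np.linalg.norm(x_new - x, np.inf) < tol:
            return x_new, k + 1
        x = x_new
    return x, maxit

# Diagonally dominant test matrix (plays the role of B + pI with large p).
A = np.array([[10.0, 1.0], [2.0, 10.0]])
b = np.array([12.0, 22.0])  # exact solution [1, 2]
x, iters = jacobi(A, b, np.zeros(2))
print(np.allclose(x, np.array([1.0, 2.0])))  # True
```

Increasing p strengthens the diagonal dominance, so iters shrinks, which is exactly the effect exercise 1.9 asks you to observe.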
-
Exercise 1.10: add or modify lines of code in the script of
example 1.8 so that the rate of convergence in the max norm
(the rate of the norm reduction) is tested, and compare with the theoretical
results. Test whether symmetry
influences the convergence rate.
-
Test the Octave script for example 1.9 in the lecture notes: we want to test the Jacobi method applied to the system with
T_N + p*I (T_N tridiagonal with 2 on the main diagonal and -1 on the remaining diagonals) for p >= 0;
we will test the time of solving the system by the Jacobi method and compare it with the time
of the direct solver.
-
Exercise 1.11: explain why for large p the convergence is faster; increase N and repeat
the tests: which method wins for large N? Does it depend on p? Decrease the tolerance to the level of floating-point error, e.g., take TOL=1e-16 -
which method is better now?
-
Lab 3 (Nov 14th, 2011) - Jacobi method (cont.), Gauss-Seidel, SOR,
and Richardson methods. Projection methods: the steepest descent
method.
Test examples 1.10 and 1.12; solve exercises 1.13 and 1.15 in Section 1.3.1
of the lecture notes:
Octave script for example 1.10
(tests of the Jacobi and Gauss-Seidel methods for B + pI, B a random matrix, p a parameter);
Octave script for example 1.12
(tests of the Jacobi, Gauss-Seidel, and SOR methods for T_N + pI, T_N tridiagonal [-1,2,-1], p a parameter)
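As a sketch of the methods compared here, the following illustrative Python/NumPy code implements one SOR sweep (omega = 1 reduces it to Gauss-Seidel); the matrix T_N + pI is built for a tiny N:

```python
import numpy as np

def sor(A, b, x0, omega=1.3, tol=1e-10, maxit=10_000):
    """SOR iteration; omega = 1 gives the Gauss-Seidel method."""
    n = len(b)
    x = x0.copy()
    for k in range(maxit):
        x_old = x.copy()
        for i in range(n):
            # Use already-updated entries x[:i] and old entries x_old[i+1:].
            s = A[i, :i] @ x[:i] + A[i, i + 1:] @ x_old[i + 1:]
            x[i] = (1 - omega) * x_old[i] + omega * (b[i] - s) / A[i, i]
        if np.linalg.norm(x - x_old, np.inf) < tol:
            return x, k + 1
    return x, maxit

# T_N + p*I for N = 5, p = 1 (tiny sizes, for illustration only).
N, p = 5, 1.0
A = 2.0 * np.eye(N) - np.eye(N, k=1) - np.eye(N, k=-1) + p * np.eye(N)
x_exact = np.ones(N)
b = A @ x_exact
x, iters = sor(A, b, np.zeros(N))
print(np.allclose(x, x_exact))  # True
```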
-
Implement the Richardson method, e.g., by modifying the script of example 1.10.
Test the Richardson method with parameters [10, 5, 1, 1/5, 1/10, 1/100] for B = A^T A,
with A a random matrix,
as in example 1.8.
- Test the Richardson method in the 2nd norm for T_N, N=1000, with a known
solution, for different values of the parameter, and then
with the optimal parameter (why do we know it?) for T_N with
N=10, 100, 1000, 10000, etc. Test the convergence of Richardson for T_N with the
optimal parameter,
but in the energy norm. Do the results confirm the theory? Compare
the times with the Octave direct solver (backslash).
Test the Richardson method for T_N + p*I for varying values of the parameters
and random initial starting vectors.
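A Python/NumPy sketch of the Richardson iteration with the optimal parameter; we know the parameter because the eigenvalues of T_N are known in closed form (the sketch is illustrative, not the lecture-notes script):

```python
import numpy as np

def richardson(A, b, x0, alpha, tol=1e-10, maxit=100_000):
    """Richardson iteration: x_{k+1} = x_k + alpha * (b - A x_k)."""
    x = x0.copy()
    for k in range(maxit):
        r = b - A @ x
        if np.linalg.norm(r) < tol:
            return x, k
        x = x + alpha * r
    return x, maxit

# T_N has eigenvalues lambda_k = 4 sin^2(k*pi / (2(N+1))), so the
# optimal parameter is alpha = 2 / (lambda_min + lambda_max).
N = 50
A = 2.0 * np.eye(N) - np.eye(N, k=1) - np.eye(N, k=-1)
lam = 4.0 * np.sin(np.arange(1, N + 1) * np.pi / (2 * (N + 1))) ** 2
alpha = 2.0 / (lam.min() + lam.max())

x_exact = np.ones(N)
b = A @ x_exact
x, iters = richardson(A, b, np.zeros(N), alpha)
print(np.allclose(x, x_exact, atol=1e-6))  # True
```

Since cond(T_N) grows like N^2, iters grows rapidly with N, which is what the comparison against the direct solver should show.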
-
Test example 1.13 in Section 1.5.1:
Octave script for example 1.13 (tests of the Jacobi, Gauss-Seidel, and steepest descent methods for B'B + pI, B a random matrix, p a parameter)
-
Test the steepest descent method x_{k+1} = x_k + p_k r_k with
p_k = (r_k^T r_k)/(r_k^T A r_k) for the same matrices as in the tests of the
Richardson method,
i.e., T_N for N=10, 100, 1000, etc., and A'A with a random A.
Compare the results. (The steepest descent method is implemented in example 1.13.)
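An illustrative Python/NumPy sketch of the steepest descent iteration above (the test matrix of the form B'B + pI is assembled from a fixed random seed):

```python
import numpy as np

def steepest_descent(A, b, x0, tol=1e-10, maxit=100_000):
    """Steepest descent: x_{k+1} = x_k + p_k r_k with
    p_k = (r_k^T r_k) / (r_k^T A r_k), for symmetric positive definite A."""
    x = x0.copy()
    r = b - A @ x
    for k in range(maxit):
        if np.linalg.norm(r) < tol:
            return x, k
        Ar = A @ r
        p = (r @ r) / (r @ Ar)
        x = x + p * r
        r = r - p * Ar  # residual update: r_{k+1} = r_k - p_k A r_k
    return x, maxit

# B'B + p*I with a random B (symmetric positive definite), p = 5.
rng = np.random.default_rng(0)
B = rng.standard_normal((20, 20))
A = B.T @ B + 5.0 * np.eye(20)
x_exact = np.ones(20)
b = A @ x_exact
x, iters = steepest_descent(A, b, np.zeros(20))
print(np.allclose(x, x_exact))  # True
```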
-
Implement and test the minimal residual method x_{k+1} = x_k + p_k r_k with
p_k = (r_k^T A^T r_k)/(r_k^T A^T A r_k) for the same matrices as in the tests
of the Richardson method, a random matrix B, and
T_N + (1/N)*T_f, where T_f is tridiagonal with zeros on the main
diagonal, ones on the super-diagonal, and minus ones on the sub-diagonal.
Test the method also for A = [0,1;1,0] with x_0 = 0 and
b = [1,0]^T, or in general for the N-cycle permutation matrix in R^N:
A v_k = v_{k+1}, where v_k is the kth unit vector, with v_{N+1} = v_1.
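A Python/NumPy sketch of the minimal residual iteration above; to keep the toy example well conditioned it uses a shifted matrix T_N + (1/N)T_f + I (the shift is an assumption of the sketch, not part of the exercise):

```python
import numpy as np

def minres_iter(A, b, x0, tol=1e-10, maxit=100_000):
    """Minimal residual iteration: x_{k+1} = x_k + p_k r_k with
    p_k = (r_k^T A^T r_k) / (r_k^T A^T A r_k); each step minimizes ||r||_2."""
    x = x0.copy()
    r = b - A @ x
    for k in range(maxit):
        if np.linalg.norm(r) < tol:
            return x, k
        Ar = A @ r
        p = (Ar @ r) / (Ar @ Ar)
        x = x + p * r
        r = r - p * Ar
    return x, maxit

# Nonsymmetric matrix with positive definite symmetric part:
# T_N + (1/N) T_f + I (the identity shift keeps this sketch well conditioned).
N = 20
T = 2.0 * np.eye(N) - np.eye(N, k=1) - np.eye(N, k=-1)
Tf = np.eye(N, k=1) - np.eye(N, k=-1)
A = T + Tf / N + np.eye(N)
x_exact = np.ones(N)
b = A @ x_exact
x, iters = minres_iter(A, b, np.zeros(N))
print(np.allclose(x, x_exact))  # True
```

For the pure permutation matrix from the exercise, r_k^T A^T r_k can vanish, so p_k = 0 and the iteration stagnates, which is the point of that test case.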
- Lab 4 (Nov 28th, 2011) - tests of the CG method.
- Test the Octave function pcg(), i.e., read the help and run it for the matrix [2,1;1,2] with a random right-hand side.
- Run the script from example 2.1 in
Section 2.1.1 of the lecture notes:
script with the code of example 2.1, a continuation of example 1.13. We compare 4 methods: Jacobi,
SOR, steepest descent, and CG for the matrix
A = B'B + pI,
where p > 0 and B is sparse and random. We increase p, which also improves the condition of A.
The SOR parameter is set to \omega = 1.3.
- Exercise 2.7.
Check in example 2.1 whether the convergence speed of CG and of the steepest descent method depends on cond(A).
To check cond(A), use the cond() Octave function.
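For reference, an illustrative Python/NumPy sketch of plain CG on a matrix of the form B'B + pI, printing cond(A) alongside (NumPy's cond() plays the role of Octave's):

```python
import numpy as np

def cg(A, b, x0, tol=1e-10, maxit=5000):
    """Conjugate gradient method for symmetric positive definite A."""
    x = x0.copy()
    r = b - A @ x
    d = r.copy()
    rr = r @ r
    for k in range(maxit):
        if np.sqrt(rr) < tol:
            return x, k
        Ad = A @ d
        alpha = rr / (d @ Ad)
        x = x + alpha * d
        r = r - alpha * Ad
        rr_new = r @ r
        d = r + (rr_new / rr) * d  # new A-conjugate search direction
        rr = rr_new
    return x, maxit

# A = B'B + p*I with a fixed random B, in the spirit of example 2.1.
rng = np.random.default_rng(1)
B = rng.standard_normal((40, 40))
A = B.T @ B + 1.0 * np.eye(40)
print("cond(A) =", np.linalg.cond(A))

x_exact = np.ones(40)
b = A @ x_exact
x, iters = cg(A, b, np.zeros(40))
print(np.allclose(x, x_exact))  # True
```

Rerunning with a larger shift p lowers cond(A) and visibly reduces iters, which is the dependence exercise 2.7 asks about.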
-
Exercise 2.8. Modify the code of example 2.1 to check whether, if A is nonsymmetric or indefinite, the convergence rate of CG changes
(and if so, how). Take A = B + pI for p > 0 (no symmetry) chosen in such a way that
A_{sym} > 0, and A = B'B + pI for p < 0 such that A has both positive and negative eigenvalues.
- Run the script from example 2.2:
script with the code of example 2.2.
We want to compare 4 methods: Jacobi,
SOR, steepest descent, and CG for the matrix T_N (the 1D Laplacian matrix). As the SOR parameter we take the optimal value, cf. example 1.11.
- Modify the code of example 2.2 in order to test CG for T_N + pI for p = 0, 5, 10, 20, 50 and N = 25 and 50.
- Modify the code of example 2.2 in order to test CG for P_N, the matrix
of the 2D Laplacian discretized on a uniform mesh:
script with the code which creates this matrix
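The 2D Laplacian matrix P_N can be assembled from T_N with Kronecker products, following the standard identity P_N = T_N ⊗ I + I ⊗ T_N; a Python/SciPy sketch (the helper name is made up):

```python
import scipy.sparse as sp

def laplacian_2d(N):
    """P_N: 2D Laplacian on a uniform N x N interior grid, built from
    the 1D matrix T_N as P_N = kron(T_N, I) + kron(I, T_N)."""
    T = sp.diags([-1.0, 2.0, -1.0], offsets=[-1, 0, 1], shape=(N, N))
    I = sp.identity(N)
    return (sp.kron(T, I) + sp.kron(I, T)).tocsc()

P = laplacian_2d(4)
print(P.shape)           # (16, 16)
print(P.diagonal()[0])   # 4.0
```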
- Lab 5 (Dec 12th, 2011) - tests of multigrid
- Lab 6 (January 2012) - tests of the GMRES method.
- Download to your computer the Octave m-file with the function
gmres().
Test this Octave function gmres(), i.e., read the help and run it for the matrix [2,1;3,2] with a random right-hand side.
- GMRES for SPD matrices. Change the script from example 2.1 by adding tests for gmres()
(Section 2.1.1 of the lecture notes:
script with the code of example 2.1, a continuation of example 1.13). Thus we compare 5 methods: Jacobi,
SOR, steepest descent, CG, and GMRES for the matrix
A = B'B + pI,
where p > 0 and B is sparse and random. We increase p, which also improves the condition of A.
The SOR parameter is set to \omega = 1.3.
-
Modify the code of example 2.1 to check the convergence of GMRES for random nonsymmetric matrices.
Take B random 25x25 and then test GMRES for the systems Ax = b with:
- A = B + pI for p > 0 (no symmetry, but positive definite for larger p)
- A = B'B - pI for p > 0 (so A is symmetric but not necessarily positive definite)
- A orthogonal, e.g., find the QR factors of a random B and take A = Q
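An illustrative Python/SciPy sketch of such GMRES runs (SciPy's gmres() stands in for the Octave m-file; the 25x25 size and the shift p = 20 are arbitrary choices for the sketch):

```python
import numpy as np
from scipy.sparse.linalg import gmres

rng = np.random.default_rng(0)

# Small nonsymmetric matrix from the lab: [2, 1; 3, 2].
A = np.array([[2.0, 1.0], [3.0, 2.0]])
b = rng.standard_normal(2)
x, info = gmres(A, b)  # info == 0 means the iteration converged
print(info == 0, np.linalg.norm(A @ x - b) <= 1e-4 * np.linalg.norm(b))

# A = B + p*I for a random 25x25 B; p = 20 makes the symmetric part
# of A safely positive definite, so GMRES converges quickly.
B = rng.standard_normal((25, 25))
A2 = B + 20.0 * np.eye(25)
b2 = rng.standard_normal(25)
x2, info2 = gmres(A2, b2)
print(info2 == 0, np.linalg.norm(A2 @ x2 - b2) <= 1e-4 * np.linalg.norm(b2))
```

Shrinking p in the second case degrades the positive definiteness of the symmetric part, and the iteration counts grow, which is the effect the exercise probes.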
- Modify the script of example 2.2:
script with the code of example 2.2.
We want to compare 5 methods: Jacobi,
SOR, steepest descent, CG, and GMRES for the matrix T_N + pI (T_N the 1D Laplacian matrix). As the SOR parameter we take the optimal value, cf. example 1.11.
Test for p = 0, 5, 10, 20, 50 and N = 25 and 50.
- Modify the code of example 2.2 in order to test GMRES for T_N + (1/N)T_f + pI for p = 0, 5, 10, 20, 50 and N = 25 and 50,
where T_f is tridiagonal with zeros on the main
diagonal, ones on the super-diagonal, and minus ones on the sub-diagonal.
- Modify the code of example 2.2 in order to test GMRES for P_N + pI, where P_N is the matrix
of the 2D Laplacian discretized on a uniform mesh:
script with the code which creates this matrix
- Multigrid for T_N - test the smoother: take a random s as the solution, compute b = T_N s, and use 3-4 iterations of Richardson with parameter a = 0.25 and a random x0,
then plot
the errors x0 - s and x - s (x obtained after 2-3 Richardson iterations).
Now plot the eigenvalues of I - a*T_N for a = 1, 0.5, 0.25, 0.125 and N = 100. Using eigs(), find the eigenvalues and eigenvectors of T_N, and plot the eigenvectors for
the largest and the smallest eigenvalues, i.e., Q(:,1) and Q(:,N).
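The smoothing effect can be checked directly from the known eigenvalues of T_N; a short Python/NumPy sketch (illustrative only):

```python
import numpy as np

# Richardson smoother for T_N: the iteration matrix is M = I - a*T_N,
# with eigenvalues 1 - a * 4 sin^2(k*pi / (2(N+1))), k = 1..N.
N, a = 100, 0.25
k = np.arange(1, N + 1)
lam_T = 4.0 * np.sin(k * np.pi / (2 * (N + 1))) ** 2
lam_M = 1.0 - a * lam_T

# High-frequency error components (large k) are damped strongly, while
# low-frequency ones (small k) barely at all -- the smoothing effect
# that multigrid exploits.
print(abs(lam_M[0]))   # close to 1: slow on smooth modes
print(abs(lam_M[-1]))  # close to 0: fast on oscillatory modes
```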
My home page
Last update: December 5th, 2011