NEWS:
- The Final Exam will be on Tuesday, May 16, 1:30-3:30pm in MTH0409.
You can bring a cheat sheet: letter size, two-sided, must be handwritten.
The final exam covers the topics of the two exams (see below), and Numerical Integration.
- I will post the solution of Assignment 7 on Tue, May 9 at 10pm. No homework will be accepted after that time.
- Assignment 7 was handed out on May 3. It is due on Monday, May 8.
Solution: Problems 1,2, Problem 3
Problem 2: It should say: We want to approximate the integral I = ...
Problem 3: The exact value for the integral is I=0.460730475133672 (computed with vpaintegral)
Problem 3(a):
In order to find the number of function evaluations, write your function as an m-file f.m as follows: e.g., for f(x)=x·sin(x) we would use
function y = f(x)
global NEV % keep track of number of function evaluations NEV
NEV = NEV + length(x);
y = x.*sin(x); % IMPORTANT: use .* ./ .^ instead of * / ^
and then use, e.g.,
global NEV % declare NEV as global variable
NEV = 0; Q = integral(@f,a,b,'RelTol',1e-3,'AbsTol',1e-3)
NEV % number of function evaluations used for "integral"
Problem 3(b),(c): Look at the examples for Gauss-Legendre quadrature and Gauss-Jacobi quadrature below. Choose alpha, beta appropriately for 3(c).
- Exam 2 was on Monday, May 1.
Solution
Practice problems
It will cover
- interpolation with polynomials: divided difference algorithm, error formula
(piecewise polynomials and splines will NOT be on the exam)
- linear least squares problem: (i) normal equations, (ii) orthogonalization
You do NOT have to perform Gram-Schmidt to find the decomposition A=PS or A=QR. But if you are given A=PS or A=QR, you have to know how to use it to solve the least squares problem.
- one nonlinear equation: bisection method, secant method, Newton method
- nonlinear system: Newton method
Contraction mapping theorem
- you need to know how to use the Matlab commands qr, \, fzero, fsolve.
Nonlinear least squares and numerical integration will NOT be on the exam.
Solution of practice problems
- I posted the solution of Assignment 6 on April 27. No late homework submissions will be accepted on or after April 27.
- Assignment 6 was DUE ON APRIL 25, 9pm at my office MATH4409 (write the submission day/time on the homework)
Solution: Problem 1, Problem 2, Problem 3, Problem 4, Problem 5

Read Numerical Integration (updated!), Adaptive numerical integration, and how to use the Matlab command integral (see below)
- Nonlinear least squares problem: Example for Gauss-Newton method and lsqnonlin
- Assignment 6 was handed out on April 14; it is due on April 24.
- Assignment 5 was handed out on March 31, due on April 10.
Solution: Problem 1, Problem 2, Problem 3, Problem 4
 How to import CO_{2} data from NOAA into Matlab
- Read the notes on the linear least squares problem and the example for the least squares problem
- Read the material about piecewise linear interpolation, piecewise cubic Hermite interpolation, and cubic spline interpolation.
Look at the example for piecewise cubic interpolation methods
- Read the material about interpolation with multiple nodes
- Example from class: Polynomial interpolation with equidistant and Chebyshev nodes
- Exam 1 was on Wed, March 15. Solution
The exam covers the material of Assignments 1 through 4:
- Taylor approximation, error propagation, roundoff error, condition number, numerically stable/unstable algorithms
- Gaussian elimination without/with pivoting, Cholesky decomposition, norms, condition of a matrix, residuals
- Matlab commands lu, \, chol
The following topics will NOT be on the exam:
invnormest, sparse matrix type in Matlab, interpolation.
practice problems
solution
- Try out polynomial interpolation!
- Assignment 4 was posted on Feb. 27; it is due on March 6.
Solution: Problems 1,2, Problems 3, Problems 4, Problems 5
- Assignment 3, due Feb. 24 (posted Feb. 16)
Solution: problem 1, problem 2, problem 3
- Assignment 2 was handed out on Feb. 3; it was due on Feb. 13.
Solution
- Assignment 1 was handed out on Jan. 27; it was due on Feb. 6.
Solution of Assignment 1
- Please read the information below about Matlab.
Course information
Information about exams, homeworks, grades
Additional Course Material (Required Reading)
- Introduction: Errors, Taylor Theorem
- Error propagation, forward error analysis, numerical stability
- Examples for unavoidable error, actual error (m-file)
- How to compute 1-cos(x) (m-file)
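The cancellation issue behind this topic can be sketched as follows (my own illustration, not the linked m-file):

```matlab
% For small x, computing 1-cos(x) directly loses all digits to cancellation,
% since cos(x) rounds to exactly 1 in double precision. The identity
% 1-cos(x) = 2*sin(x/2)^2 evaluates the same quantity without cancellation.
x = 1e-8;
bad  = 1 - cos(x)       % returns 0: every significant digit lost
good = 2*sin(x/2)^2     % approximately 5e-17, correct to full precision
```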
- IEEE 754 machine numbers and machine arithmetic
- Solving Linear Systems
- How to solve a linear system in Matlab:
 [L,U,p] = lu(A,'vector'); % Gaussian elimination choosing the pivot candidate with largest absolute value
y = L\b(p) % forward substitution with permuted vector b
x = U\y % back substitution
- Matlab shortcut:
x = A\b
This performs the three steps from above, but you can't save L, U, p. Don't use this if you solve several linear systems with the same matrix.
- Example for Gaussian Elimination with Pivoting
- Errors for Linear Systems
- Estimating the condition number if L, U are known: Download invnormest.m.
For the 1-norm condition number use
c1 = norm(A,1)*invnormest(L,U)
For the ∞-norm condition number use
ci = norm(A,Inf)*invnormest(U',L')
For the Cholesky decomposition use invnormest(C',C).
The m-file invnormest_simple.m explains the basic algorithm for estimating ‖A^{-1}‖
- How to solve sparse linear systems in Matlab:
A sparse matrix is a matrix where most elements are zero. In this case it is much more efficient to use the special sparse data structures in Matlab. All operations like *, \, lu have special efficient algorithms for matrices in sparse format. In particular, the backslash command x=A\b uses clever row and column permutations to minimize "fill-in".
- Initializing a sparse matrix: A sparse matrix can be described by giving a list of the nonzero elements as vectors iv (contains i-values), jv (contains j-values), av (contains matrix entries). Then a sparse matrix is initialized by
A = sparse(iv,jv,av);
Example: The full matrix given by A = [0 7 0; 0 0 8; 9 0 0] can be defined as a sparse matrix by
A = sparse([1 2 3],[2 3 1],[7 8 9])
- Solving a linear system with a sparse matrix:
[L,U,p,q] = lu(A,'vector'); % Gaussian elimination with row and column permutations
% (chosen to minimize "fill-in"): L*U gives the permuted matrix A(p,q)
y = L\b(p) % forward substitution with permuted vector b
x = U\y; x(q) = x % back substitution, then undo permutation given by q
- or just use
x = A\b
which does the above steps automatically (but without the option to save the result of the LU decomposition).
- Cholesky decomposition in Matlab (only works if A is symmetric positive definite):
- C = chol(A), then C'*C gives A
- for a sparse matrix: find the row/column permutation vector p to minimize "fill-in":
[C,k,p] = chol(A,'vector'), then C'*C gives the permuted matrix A(p,p)
(ignore the return value k, which is zero for a positive definite matrix)
- Gaussian elimination WITHOUT/WITH pivoting, Cholesky decomposition
- Interpolation (updated March 13)
- Try out polynomial interpolation!
- Divided difference algorithm for distinct nodes x_{1},...,x_{n} and given function values y_{1},...,y_{n}:
divdiff computes the divided differences d_{1}=f[x_{1},...,x_{n}], d_{2}=f[x_{2},...,x_{n}], ..., d_{n}=f[x_{n}]
Matlab code: divdiff.m
evnewt evaluates the interpolating polynomial at the point t
Matlab code: evnewt.m
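A minimal sketch of what these two steps compute (my own illustration; the posted divdiff.m and evnewt.m are the authoritative versions):

```matlab
% Divided differences in place: after the loops, d(j) = f[x_j,...,x_n],
% matching the d_1,...,d_n described above.
x = [0 1 2]; y = [1 2 5];            % sample data from f(x) = x^2 + 1
n = length(x); d = y;                % d(j) starts as f[x_j]
for k = 1:n-1
  for j = 1:n-k
    d(j) = (d(j+1)-d(j))/(x(j+k)-x(j));
  end
end
% Evaluate the Newton form at t by a Horner-like recurrence:
t = 1.5; p = d(1);
for j = 2:n
  p = p*(t-x(j)) + d(j);
end
p                                    % p = 3.25 = f(1.5)
```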
- Example: Polynomial interpolation with equidistant and Chebyshev nodes (m-file)
- Equidistant nodes on the interval [a,b] are given by
h = (b-a)/(n-1); x_{j} = a + (j-1)h for j = 1,...,n
Matlab: x = linspace(a,b,n)
Chebyshev nodes on the interval [a,b] are given by
x_{j} = (a+b)/2 + cos(π(j-1/2)/n)·(b-a)/2 for j = 1,...,n.
The node polynomial has the bound
|(x-x_{1})···(x-x_{n})| ≤ 2((b-a)/4)^{n} for x in [a,b]
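A quick sketch generating both node sets from the formulas above (my own illustration):

```matlab
a = -1; b = 1; n = 5;
xe = linspace(a,b,n);                       % equidistant nodes
j = 1:n;
xc = (a+b)/2 + cos(pi*(j-1/2)/n)*(b-a)/2;   % Chebyshev nodes
% the node polynomial bound |(x-x_1)...(x-x_n)| <= 2*((b-a)/4)^n:
bound = 2*((b-a)/4)^n                       % here 0.0625
abs(prod(0.9 - xc))                         % stays below the bound
```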
- Piecewise polynomial interpolation: This covers piecewise linear interpolation, piecewise cubic Hermite interpolation, complete cubic spline, and not-a-knot cubic spline.
You can ignore the formulas for the piecewise cubic Hermite interpolation and for the cubic spline interpolation. You just need to know how to compute this in Matlab, see right below.
- Piecewise cubic interpolation in Matlab: The column vector x contains the x-coordinates of the nodes, the column vector y contains the function values at the nodes, and the vector yp contains the derivatives at the nodes. The vector xi contains the points where we want to evaluate the interpolating function; the vector yi contains the corresponding values of the interpolating function.
- Cubic Hermite interpolation: Download hermite.m. Then use yi = hermite(x,y,yp,xi)
- Complete cubic spline: points are given by column vectors x, y. Slopes at the left and right endpoints are sl and sr.
yi = spline(x,[sl;y;sr],xi)
- Not-a-knot cubic spline:
yi = spline(x,y,xi)
Example for piecewise cubic interpolation
- historical origin of "splines": "used principally by ship architects ... splines are thin, tapering pieces of lancewood or red pine" in "Treatise on Mathematical Drawing Instruments" from 1868
- "Curve fitting" for data with measurement errors: Linear least squares method
- Nonlinear equations
- Nonlinear system: Example for Newton method and fsolve (m-file)
- Solving a nonlinear equation with Matlab:
We want to find a zero of the function f in the interval [a,b], where f(a) and f(b) have different signs:
xs = fzero(@f,[a,b])
Here the function f is given in the m-file f.m. If the function f is defined by an @-expression, use xs = fzero(f,[a,b]).
Example with an @-function:
f = @(x) sin(x); xs = fzero(f,[3,4])
- Solving a nonlinear system in Matlab using fsolve (without using the Jacobian):
Write an m-file f.m containing the function f:
function y = f(x)
y = ...
Then use x = fsolve(@f,x0) where x0 is the initial guess.
Example: See the bottom of the first page of Nonlinear equations
You can specify additional options using optimset, e.g. the tolerance for the function value and the tolerance for x:
opt = optimset('TolFun',1e-5,'TolX',1e-5);
x = fsolve(@f,x0,opt);
There are many other options (e.g., display each iteration, or use the Jacobian); type doc fsolve for more information.
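As a self-contained illustration (my own example system, not the one from the linked notes), here is fsolve applied to the intersection of the unit circle with a parabola:

```matlab
% Solve x1^2 + x2^2 = 1 together with x2 = x1^2.
f = @(x) [x(1)^2 + x(2)^2 - 1; x(2) - x(1)^2];
x0 = [1; 1];                         % initial guess
opt = optimset('TolFun',1e-10,'TolX',1e-10);
x = fsolve(f,x0,opt)                 % x is approximately [0.7862; 0.6180]
f(x)                                 % residual, close to [0; 0]
```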
- Fixed Point Iteration and Contraction Mapping Theorem (proof of Newton-Kantorovich theorem)
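A minimal fixed point iteration sketch (my own example): g(x) = cos(x) is a contraction near its fixed point, so the iterates x_{k+1} = g(x_k) converge:

```matlab
g = @(x) cos(x);       % |g'(x)| = |sin(x)| < 1 near the fixed point
x = 1;                 % starting guess
for k = 1:50
  x = g(x);            % linear convergence with rate ~ |g'(x*)| ~ 0.67
end
x                      % fixed point x* = cos(x*), approximately 0.7391
```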
- Nonlinear least squares problem:
We have a function y=f(x) where x is a vector of length n and y is a vector of length N, with N≥n. We want to find x such that ‖f(x)‖_{2} is minimal.
Simple example with n=2, N=3:
Let f(x) = [x_{1}-.4, x_{2}-.8, x_{1}^{2}+x_{2}^{2}-1]^{T}, use initial guess [0,0]^{T}
- Gauss-Newton method: gaussnewton.m
- How to use lsqnonlin (without Jacobian):
f = @(x) [x(1)-.4; x(2)-.8; x(1)^2+x(2)^2-1];
[x,resnorm2] = lsqnonlin(f,[0;0])
% resnorm2 is ‖f(x)‖^{2}
There are many other options (e.g., display each iteration, or use the Jacobian); type doc lsqnonlin for more information.
- Nonlinear least squares problem: Example for Gauss-Newton method and lsqnonlin
- Numerical Integration (updated 05/03)
- Adaptive numerical integration
- Numerical integration with Matlab: Q = integral(f,a,b)
Example: Find the integral of exp(-x^2) from 0 to 1:
f = @(x) exp(-x.^2); Q = integral(f,0,1)
Note:
- Write your function f(x) so that it works for a vector x and returns a vector f(x) of function values f(x_{1}),...,f(x_{n}): use .* ./ .^ instead of * / ^ in the definition of f.
Example: for I = ∫_{0}^{1} e^{-x^2} dx use
f = @(x) exp(-x.^2); % works for vectors x
Q = integral(f,0,1)
- If your function f(x) only works for scalars x you MUST use the option 'ArrayValued',true:
f = @(x) exp(-x^2); % works only for scalars
Q = integral(f,0,1,'ArrayValued',true)
- You can specify tolerances for the relative and absolute error:
Q = integral(f,a,b,'RelTol',1e-13,'AbsTol',1e-13)
- Gauss quadrature: Read the Notes about Gauss quadrature
- Gauss-Legendre quadrature:
∫_{-1}^{1} f(x)dx is approximated by w(1)*f(x(1))+...+w(n)*f(x(n))
∫_{a}^{b} f(x)dx is approximated by r*(w(1)*f(t(1))+...+w(n)*f(t(n))) where
r=(b-a)/2; t=(a+b)/2+r*x
Method from class. But GaussLegendre.m is more accurate and efficient.
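To illustrate the transformation to [a,b], here is a sketch with the n=2 nodes and weights hardcoded (the posted GaussLegendre.m computes them for general n):

```matlab
x = [-1/sqrt(3); 1/sqrt(3)];   % 2-point Gauss-Legendre nodes on [-1,1]
w = [1; 1];                    % corresponding weights
a = 1; b = 3;
r = (b-a)/2; t = (a+b)/2 + r*x;
f = @(x) x.^3;
Q = r*sum(w.*f(t))             % = 20: the 2-point rule integrates cubics exactly
```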
- Gauss-Jacobi quadrature with α,β>-1:
∫_{-1}^{1}(x+1)^{α}(1-x)^{β}f(x)dx is approximated by w(1)*f(x(1))+...+w(n)*f(x(n))
∫_{a}^{b}(x-a)^{α}(b-x)^{β}f(x)dx is approximated by r^(1+alpha+beta)*(w(1)*f(t(1))+...+w(n)*f(t(n))) where
r=(b-a)/2; t=(a+b)/2+r*x
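A tiny check of the scaling factor (my own example: for α=1, β=0 the 1-point Gauss-Jacobi rule has node x=1/3 and weight w=2, which one can derive by hand from orthogonality with respect to the weight x+1):

```matlab
x = 1/3; w = 2;                      % 1-point Gauss-Jacobi rule, alpha=1, beta=0
alpha = 1; beta = 0;
a = 0; b = 2;                        % approximate integral of (x-a)*f(x) over [a,b]
r = (b-a)/2; t = (a+b)/2 + r*x;
f = @(x) x;
Q = r^(1+alpha+beta)*sum(w.*f(t))    % = 8/3, exact since f is linear
```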
- Gauss-Laguerre quadrature with α>-1, b>0:
∫_{0}^{∞}x^{α}e^{-x}f(x)dx is approximated by w(1)*f(x(1))+...+w(n)*f(x(n))
∫_{0}^{∞}x^{α}e^{-bx}f(x)dx is approximated by b^(-1-alpha)*(w(1)*f(t(1))+...+w(n)*f(t(n))) where t=x/b
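Similarly, a sketch of the e^{-bx} scaling (my own example with α=0 and the hand-computed 2-point Gauss-Laguerre rule):

```matlab
x = [2-sqrt(2); 2+sqrt(2)];          % 2-point Gauss-Laguerre nodes (alpha=0)
w = [(2+sqrt(2))/4; (2-sqrt(2))/4];  % corresponding weights
alpha = 0; b = 2;
t = x/b;
f = @(x) x.^2;
Q = b^(-1-alpha)*sum(w.*f(t))        % = 1/4 = integral of x^2*e^(-2x) over [0,inf)
```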
Examples for Gauss-Legendre and Gauss-Jacobi quadrature
Matlab programming
We will use Matlab to see how various algorithms work.
- Matlab can be downloaded for free from TERPware.
- Gentle introduction to Matlab (also explains how to use the Matlab interface and how to publish)

We will NOT use SYMBOLIC Matlab commands (like sym, syms, ezplot) in this class. (We will, however, use vpa in the first assignment.)
- Matlab Primer gives a concise summary of the most important Matlab commands (but it was written for an older version of Matlab)
How to hand in Matlab results for homeworks:
- You have to write an m-file for each problem. (Typing commands at the >> prompt and printing this out is not allowed.)
- You have to hand in all m-files together with the generated output and graphics. The best way is to use the publish command in Matlab (if you call additional m-files you have to print them out separately).
- Only print out values which are asked for in the problem. (Make sure you have a semicolon at the end of lines.)
Label all numerical output and graphics:
How to label output and graphics in Matlab
- Hand in additional pages (these can be handwritten) which show the "paper and pencil work", and answer all the questions asked in the problem.
How to use the publish command in Matlab: