Discuss the following basic ideas of the chapter:
- Write the algorithms of all methods.
- List the similarities and differences between the methods.
Q-1
(a) Algorithm of the fixed-point method
Start
Obtain a function f in the appropriate form (f(x)=0⇔x=g(x)) and assume that a root α exists. Obtain an initial approximation x0, a maximum number of iterations, and an error tolerance ϵ.
For i=1,2,... up to the maximum number of iterations prescribed:
Step 1: Obtain the successive approximations by the fixed-point method with the following formula:
xn+1 = g(xn), n = 0, 1, 2, …   (1)
Step 2: Check the error tolerance:
|xn+1 − xn| < ϵ   (2)
If the above inequality is true, then stop: xn+1 is a good approximation of the root α. If the inequality is false, then continue to compute successive approximations until the maximum number of iterations is reached. If the maximum number of iterations is reached and the error tolerance ϵ is not obtained, then print out a failure message.
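The steps above can be sketched as follows (a minimal sketch; the example equation x = cos x, i.e. g(x) = cos x, the starting point, and the tolerance are assumptions for illustration, not from the notes):

```python
import math

def fixed_point(g, x0, eps=1e-8, max_iter=100):
    """Iterate xn+1 = g(xn) until |xn+1 - xn| < eps (formulas (1)-(2))."""
    x = x0
    for _ in range(max_iter):
        x_new = g(x)              # formula (1): xn+1 = g(xn)
        if abs(x_new - x) < eps:  # formula (2): error tolerance met
            return x_new
        x = x_new
    raise RuntimeError("maximum number of iterations reached")  # failure message

# Example: solve x = cos(x), i.e. f(x) = x - cos(x) = 0 with g(x) = cos(x)
root = fixed_point(math.cos, 1.0)
```

Because convergence is linear (rate |g′(α)| ≈ 0.67 here), this takes a few dozen iterations to reach the tolerance.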
(b) Algorithm of Newton's method
Obtain a function f and its derivative f′, and assume that a root α exists. Obtain an initial approximation x0 to this root. Also obtain a maximum number of iterations allowed and an error tolerance ϵ.
For i=1,2,... up to the maximum number of iterations prescribed:
Step 1: Obtain the successive approximations of the root α with the following formula:
xn+1 = xn − f(xn)/f′(xn), n = 0, 1, 2, …   (1)
Step 2: Check the error tolerance:
|xn+1 − xn| < ϵ   (2)
If the above inequality does not hold, then continue to obtain successive approximations for α. If it is true, then further verify the accuracy of xn by checking whether or not:
f(xn − ϵ) · f(xn + ϵ) < 0   (3)
If the signs of f(xn + ϵ) and f(xn − ϵ) are opposites of each other, then the accuracy of xn is verified and the algorithm stops: xn is a good approximation to α. If the signs of f(xn + ϵ) and f(xn − ϵ) are the same, then α is not contained in the small interval [xn − ϵ, xn + ϵ]; print out an error message.
If the maximum number of iterations is reached, then print out a failure message.
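These steps, including the sign-change accuracy check, can be sketched in Python (a minimal sketch; the test function f(x) = x² − 2, its derivative, and the starting point are assumptions for illustration):

```python
def newton(f, fprime, x0, eps=1e-8, max_iter=50):
    """Newton iteration with the sign-change accuracy check (formulas (1)-(3))."""
    x = x0
    for _ in range(max_iter):
        x_new = x - f(x) / fprime(x)                 # formula (1)
        if abs(x_new - x) < eps:                     # formula (2)
            if f(x_new - eps) * f(x_new + eps) < 0:  # formula (3): opposite signs
                return x_new
            raise RuntimeError("root not contained in [xn - eps, xn + eps]")
        x = x_new
    raise RuntimeError("maximum number of iterations reached")

# Example: f(x) = x^2 - 2 has the root sqrt(2)
root = newton(lambda x: x * x - 2, lambda x: 2 * x, 1.5)
```

From x0 = 1.5 this meets the tolerance in only four iterations, reflecting the quadratic convergence discussed below.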
Q-2
- The bisection method
· The bisection method does not (in general) produce an exact solution of an equation f(x) = 0. However, we can give an estimate of the absolute error in the approximation.
· Very stable algorithm; a good technique for finding a starting point for Newton's method.
· Costs only one function evaluation per iteration, so iterations are cheap.
· Linear convergence, so slow (about 3.3 iterations per correct digit).
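These points can be illustrated with a short sketch (the test function f(x) = x² − 2 on [1, 2] is an assumption for illustration; note the single new evaluation of f per iteration):

```python
def bisection(f, a, b, eps=1e-8, max_iter=100):
    """Halve [a, b] until the half-width is below eps; assumes f(a)*f(b) < 0."""
    fa = f(a)
    for _ in range(max_iter):
        m = (a + b) / 2.0
        fm = f(m)                      # the one function evaluation per step
        if fm == 0 or (b - a) / 2.0 < eps:
            return m                   # error bound: |m - root| <= (b - a)/2
        if fa * fm < 0:
            b = m                      # root lies in [a, m]
        else:
            a, fa = m, fm              # root lies in [m, b]
    return (a + b) / 2.0

root = bisection(lambda x: x * x - 2, 1.0, 2.0)
```

The returned midpoint carries the guaranteed error bound (b − a)/2 < eps, which is the "estimate of the absolute error" mentioned above.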
- The secant method
· The secant method is a variant of Newton's method, where f′(xn) is replaced by its finite-difference approximation based on the function values at xn and at the previous iterate xn−1.
· Assuming convergence, observe that near the root
f′(xn) ≈ (f(xn) − f(xn−1)) / (xn − xn−1)
· Substituting this approximation into the formula for Newton's method yields the secant method:
xn+1 = xn − f(xn)(xn − xn−1) / (f(xn) − f(xn−1)), n = 1, 2, 3, …
· Hard to find starting points (unknown basin of attraction).
· Needs only function values (no derivatives), and each iteration requires just one new evaluation since f(xn−1) is reused, so iterations are cheap.
· Superlinear convergence, of order α ≈ 1.62, which is quite fast.
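A minimal secant sketch (the test function f(x) = x² − 2 and the starting points x0 = 1, x1 = 2 are assumptions for illustration):

```python
def secant(f, x0, x1, eps=1e-8, max_iter=50):
    """Secant iteration: xn+1 = xn - f(xn)(xn - xn-1)/(f(xn) - f(xn-1))."""
    f0, f1 = f(x0), f(x1)
    for _ in range(max_iter):
        if f1 == f0:
            raise ZeroDivisionError("zero slope: f(xn) == f(xn-1)")
        x2 = x1 - f1 * (x1 - x0) / (f1 - f0)
        if abs(x2 - x1) < eps:
            return x2
        x0, f0 = x1, f1
        x1, f1 = x2, f(x2)   # only one new function evaluation per iteration
    raise RuntimeError("maximum number of iterations reached")

root = secant(lambda x: x * x - 2, 1.0, 2.0)
```

Note that f(xn−1) is carried over between iterations, which is why each step costs only one new function evaluation despite the formula using two points.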
- Newton's method
· Newton's method is an extremely powerful technique, but it has a major weakness: the need to know the value of the derivative of f at each approximation.
· Frequently, f′(x) is far more difficult and needs more arithmetic operations to calculate than f(x).
· Hard to find starting points (unknown basin of attraction).
· Finding and evaluating the derivative requires more machine work at each iteration.
· Quadratic convergence is very fast, roughly doubling the number of correct digits at each iteration.
Ans: The basic similarities of all methods:
Ø All of the methods determine a root of f(x) = 0.
Ø The error typically decreases slowly at first and then rapidly after a few iterations (clearest for Newton's and the secant method).
Ø Each method converges when given a suitable bracketing interval or a sufficiently close starting point; only bisection is guaranteed to converge once a sign change is bracketed.
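The convergence claims can be checked with a small experiment (a sketch; the test problem f(x) = x² − 2 and the tolerance 1e−10 are assumptions for illustration): bisection needs roughly 3.3 iterations per correct digit, while Newton's method roughly doubles the correct digits each step.

```python
def bisection_steps(f, a, b, eps):
    """Count bisection iterations until the half-width drops below eps."""
    n = 0
    while (b - a) / 2.0 >= eps:
        m = (a + b) / 2.0
        if f(a) * f(m) < 0:
            b = m
        else:
            a = m
        n += 1
    return n

def newton_steps(f, fprime, x, eps):
    """Count Newton iterations until successive iterates agree within eps."""
    n = 0
    while True:
        x_new = x - f(x) / fprime(x)
        n += 1
        if abs(x_new - x) < eps:
            return n
        x = x_new

f = lambda x: x * x - 2
nb = bisection_steps(f, 1.0, 2.0, 1e-10)           # linear: ~33 iterations
nn = newton_steps(f, lambda x: 2 * x, 1.5, 1e-10)  # quadratic: a handful
```

On this problem bisection takes 33 iterations (halving a unit interval below 1e−10) while Newton takes only 4, matching the rates listed under Q-2.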