Numerical Analysis Sample Project on Newton’s Method

Brenton LeMesurier

Draft of February 24, 2021

The Python code for the algorithms is imported from the module newton_project.py

Newton’s method for solving \(f(x) = 0\)

Newton’s method is based on the iterative formula \(x_{n+1} = x_n - f(x_n)/f'(x_n).\)
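As a quick illustration of this formula, consider the hypothetical warm-up problem \(f(x) = x^2 - 2\) (not one of the test cases below): a few iterations already give \(\sqrt{2}\) to machine precision.

```python
x = 1.0  # initial approximation x_0
for n in range(5):
    # One Newton step: x_{n+1} = x_n - f(x_n)/f'(x_n) with f(x) = x^2 - 2
    x = x - (x*x - 2.0) / (2.0*x)
    print(n + 1, x)

# x converges toward sqrt(2) = 1.41421356...
```

Each step roughly doubles the number of correct digits, which is the quadratic convergence seen in the examples below.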

A good algorithm needs conditions to stop the iteration; it must deal with failures like non-convergence and division by zero; and, for efficiency, it should not evaluate any function twice for the same argument \(x_n\). The algorithm here stops when any of three conditions is met:

  • The estimated error \(|x_n - x_{n-1}|\) is below a specified error tolerance,

  • The backward error \(|f(x_n)|\) is below a specified error tolerance, or

  • A specified maximum number of iterations have been performed.

To help assess the cost of this method, the total number of function evaluations is also output.

Algorithm

Input:

  • Function \(f(x)\) and its derivative \(Df(x)\)

  • Initial approximation \(x_0\)

  • Tolerances \(x_{tol}\) for the error in the \(x\) value and \(f_{tol}\) for the backward error (how close \(f(x)\) is to zero)

  • A limit on the number of iterations to attempt, iterationsMax.

Output:

  • \(x\), the final approximation

  • errorEstimate, the estimated absolute error in this approximation

  • backwardError, the backward error \(|f(x)|\)

  • functionEvaluations, the number of evaluations of the functions \(f(x)\) and \(Df(x)\).

\(x \leftarrow x_0\)
\(\text{fx} \leftarrow f(x)\)
\(\text{functionEvaluations} \leftarrow 1\)
for iteration from 1 to iterationsMax:
\(\quad\) \(\text{Dfx} \leftarrow Df(x)\)
\(\quad\) \(\text{functionEvaluations} \leftarrow \text{functionEvaluations} + 1\)
\(\quad\) if \(\text{Dfx} = 0\) Comment: cannot continue; division by zero.
\(\quad\quad\) End (i.e. leave this algorithm), with a warning that the derivative is zero, so the iteration cannot continue.
\(\quad\) end if
\(\quad\) \(\delta x \leftarrow -fx/Dfx\)
\(\quad\) \(x \leftarrow x + \delta x\)
\(\quad\) \(\text{fx} \leftarrow f(x)\)
\(\quad\) \(\text{functionEvaluations} \leftarrow \text{functionEvaluations} + 1\)
\(\quad\) \(\text{errorEstimate} \leftarrow |\delta x|\)
\(\quad\) \(\text{backwardError} \leftarrow |fx|\)
\(\quad\) if \(\text{errorEstimate} \leq x_{tol}\) or \(\text{backwardError} \leq f_{tol}\)
\(\quad\quad\) End successfully, with output of x, errorEstimate, backwardError, functionEvaluations
\(\quad\) end if
end for
Warning: if we reach here, the loop above finished without returning, so the iterations did not converge to sufficient accuracy.
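The algorithm above can be sketched in Python as follows. This is only an illustration of the pseudocode, here named newtonSketch to distinguish it from the newton function imported from newton_project.py below, whose details (such as the default iteration limit) may differ.

```python
import math

def newtonSketch(f, Df, x_0, errortol, fvaltol, iterationsMax=20, verbose=False):
    """Illustrative sketch of the Newton's method algorithm above.

    Returns (x, errorEstimate, backwardError, functionEvaluations).
    """
    x = x_0
    fx = f(x)
    functionEvaluations = 1
    errorEstimate = math.inf  # no error estimate is available yet
    if verbose:
        print('step number n, x_n, error estimate, backward error:')
    for iteration in range(1, iterationsMax + 1):
        Dfx = Df(x)
        functionEvaluations += 1
        if Dfx == 0:  # cannot continue; division by zero
            print('Warning: the derivative is zero, so the iteration cannot continue.')
            return x, errorEstimate, abs(fx), functionEvaluations
        dx = -fx / Dfx
        x += dx
        fx = f(x)
        functionEvaluations += 1
        errorEstimate = abs(dx)
        backwardError = abs(fx)
        if verbose:
            print(iteration, x, errorEstimate, backwardError)
        if errorEstimate <= errortol or backwardError <= fvaltol:
            return x, errorEstimate, backwardError, functionEvaluations
    print('Warning: the iterations did not converge to sufficient accuracy.')
    return x, errorEstimate, abs(fx), functionEvaluations
```

Note that \(f\) and \(Df\) are each evaluated exactly once per iteration, plus the one evaluation of \(f\) at \(x_0\), matching the evaluation counts reported below.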

# The main algorithms
from newton_project import newton
from newton_project import quasiNewton

# For the test cases
from numpy import sin, cos, tan, exp

Example 1: \(x = \cos(x)\)

def f1(x): return x - cos(x)
def Df1(x): return 1.0 + sin(x)
x_0 = 0.0
errortol = 1e-8
fvaltol = 1e-15

print('Using initial approximation x_0 =', x_0,
      'error tolerance', errortol,
      'backward error tolerance', fvaltol)

x, errorestimate, backwarderror, functionEvaluations = newton(f1, Df1, x_0, errortol, fvaltol, verbose=True)

if errorestimate <= errortol or backwarderror <= fvaltol:  # Success
    print('Success!')
else:
    print('The error tolerance was not met.  The best we can say is that')
print('The solution is approximately', x)
print('with estimated error', errorestimate, 'and backward error', backwarderror)
print('The calculation required', functionEvaluations, 'function evaluations.')
Using initial approximation x_0 = 0.0 error tolerance 1e-08 backward error tolerance 1e-15
step number n, x_n, error estimate, backward error:
1 1.0 1.0 0.45969769413186023
2 0.7503638678402439 0.24963613215975608 0.018923073822117442
3 0.7391128909113617 0.011250976928882236 4.645589899088254e-05
4 0.7390851333852839 2.7757526077753238e-05 2.847204694234051e-10
5 0.7390851332151606 1.7012334067709158e-10 1.1102230246251565e-16
Success!
The solution is approximately 0.7390851332151606
with estimated error 1.7012334067709158e-10 and backward error 1.1102230246251565e-16
The calculation required 11 function evaluations.
x_0 = 100.0

print('Using initial approximation x_0 =', x_0,
      'error tolerance', errortol,
      'backward error tolerance', fvaltol)

x, errorestimate, backwarderror, functionEvaluations = newton(f1, Df1, x_0, errortol, fvaltol, verbose=True)

if errorestimate <= errortol or backwarderror <= fvaltol:  # Success
    print('Success!')
else:
    print('The error tolerance was not met.  The best we can say is that')
print('The solution is approximately', x)
print('with estimated error', errorestimate, 'and backward error', backwarderror)
print('The calculation required', functionEvaluations, 'function evaluations')
Using initial approximation x_0 = 100.0 error tolerance 1e-08 backward error tolerance 1e-15
step number n, x_n, error estimate, backward error:
1 -100.83221384870296 200.83221384870296 101.78718050765721
2 43.898456593084916 144.73067044178788 42.90196915432232
3 -2.9245849938401633 46.82304158692508 1.9480388973376201
4 -0.44203136417168265 2.4825536296684807 1.3459159231167994
5 1.9100493194227757 2.3520806835944583 2.2428320738976044
6 0.7557374249453708 1.154311894477405 0.0279714286913334
7 0.7391452994681547 0.01659212547721615 0.00010069630247322436
8 0.7390851340144209 6.016545373382826e-05 1.337651545085805e-09
9 0.7390851332151607 7.992602355197513e-10 0.0
Success!
The solution is approximately 0.7390851332151607
with estimated error 7.992602355197513e-10 and backward error 0.0
The calculation required 19 function evaluations

Example 2: Find the smallest positive root of \(x = \tan(x)\) (more difficult!)

def f2(x): return tan(x) - x
def Df2(x): return 1/cos(x)**2 - 1
errortol = 1e-8
fvaltol = 1e-15
#x_0 = 4.28  # fails
x_0 = 4.29  # succeeds
#x_0 = 4.71  # succeeds
#x_0 = 4.72  # converges to the wrong root

print('Using initial approximation x_0 =', x_0,
      'error tolerance', errortol,
      'backward error tolerance', fvaltol)

x, errorestimate, backwarderror, functionEvaluations = newton(f2, Df2, x_0, errortol, fvaltol)

if errorestimate <= errortol or backwarderror <= fvaltol:  # Success
    print('Success!')
else:
    print('The error tolerance was not met.  The best we can say is that')
print('The solution is approximately', x)
print('with estimated error', errorestimate, 'and backward error', backwarderror)
print('The calculation required', functionEvaluations, 'function evaluations.')
Using initial approximation x_0 = 4.29 error tolerance 1e-08 backward error tolerance 1e-15
Success!
The solution is approximately 4.493409457909064
with estimated error 7.070815230647819e-12 and backward error 0.0
The calculation required 25 function evaluations.

The Quasi-Newton Method

This reduces the cost of each iteration by using the original derivative value \(Df(x_0)\) at every iteration, instead of updating the derivative each time. If this takes less than about twice as many iterations as the true Newton’s method, it can be faster. However, its main benefit is for systems of equations, as considered below.
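To illustrate why, here is a minimal sketch for a hypothetical two-dimensional system (intersecting the unit circle with the line \(x = y\); this example is not part of newton_project.py). The Jacobian matrix \(DF(x_0)\) is processed once, and every subsequent step reuses the result:

```python
import numpy as np

def F(v):
    """Hypothetical test system: the unit circle intersected with the line x = y."""
    x, y = v
    return np.array([x**2 + y**2 - 1.0, x - y])

def DF(v):
    """Jacobian matrix of F."""
    x, y = v
    return np.array([[2.0*x, 2.0*y], [1.0, -1.0]])

v = np.array([1.0, 0.5])           # initial approximation
DF0inverse = np.linalg.inv(DF(v))  # the Jacobian is handled once, then reused
for _ in range(50):
    dv = -DF0inverse @ F(v)        # quasi-Newton step with the fixed Jacobian
    v = v + dv
    if np.linalg.norm(dv) <= 1e-10:
        break

# v converges toward (sqrt(1/2), sqrt(1/2)) = (0.7071..., 0.7071...)
```

In practice one would store an LU factorization of \(DF(x_0)\) rather than its inverse, but the saving is the same: the \(O(n^3)\) elimination work is done only once, and each step then costs only \(O(n^2)\).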

Algorithm

Input:

  • Function \(f(x)\) and its derivative \(Df(x)\)

  • Initial approximation \(x_0\)

  • Tolerances \(x_{tol}\) for the error in the \(x\) value and \(f_{tol}\) for the backward error (how close \(f(x)\) is to zero)

  • A limit on the number of iterations to attempt, stepsmax.

Output:

  • \(x\), the final approximation

  • errorestimate, the estimated absolute error in this approximation

  • backwarderror, the backward error \(|f(x)|\)

  • functionEvaluations, the number of evaluations of the functions \(f(x)\) and \(Df(x)\).

\(x \leftarrow x_0\)
\(\text{fx} \leftarrow f(x)\)
\(\text{Dfx}_0 \leftarrow Df(x_0)\)
\(\text{functionEvaluations} \leftarrow 2\)
for step from 1 to stepsmax:
\(\quad\) \(\delta x \leftarrow -fx/Dfx_0\)
\(\quad\) \(x \leftarrow x + \delta x\)
\(\quad\) \(fx \leftarrow f(x)\)
\(\quad\) \(\text{functionEvaluations} \leftarrow \text{functionEvaluations} + 1\)
\(\quad\) \(\text{errorestimate} \leftarrow |\delta x|\)
\(\quad\) \(\text{backwarderror} \leftarrow |fx|\)
\(\quad\) if \(\text{errorestimate} \leq x_{tol}\) or \(\text{backwarderror} \leq f_{tol}\)
\(\quad\quad\) End successfully, with output of x, errorestimate, backwarderror, functionEvaluations
\(\quad\) end if
end for
Warning: if we reach here, the loop above finished without returning, so the iterations did not converge to sufficient accuracy.
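As with Newton’s method, this algorithm can be sketched in Python. The version here is named quasiNewtonSketch to distinguish it from the quasiNewton function imported from newton_project.py, whose details (such as the default value of stepsmax) may differ.

```python
import math

def quasiNewtonSketch(f, Df, x_0, errortol, fvaltol, stepsmax=100, verbose=False):
    """Illustrative sketch of the quasi-Newton algorithm above.

    Returns (x, errorestimate, backwarderror, functionEvaluations).
    """
    x = x_0
    fx = f(x)
    Dfx_0 = Df(x_0)  # the derivative is evaluated once and reused at every step
    functionEvaluations = 2
    errorestimate = math.inf  # no error estimate is available yet
    backwarderror = abs(fx)
    for step in range(1, stepsmax + 1):
        dx = -fx / Dfx_0
        x += dx
        fx = f(x)
        functionEvaluations += 1
        errorestimate = abs(dx)
        backwarderror = abs(fx)
        if verbose:
            print(step, x, errorestimate, backwarderror)
        if errorestimate <= errortol or backwarderror <= fvaltol:
            return x, errorestimate, backwarderror, functionEvaluations
    print('Warning: the iterations did not converge to sufficient accuracy.')
    return x, errorestimate, backwarderror, functionEvaluations
```

Only \(f\) is evaluated inside the loop, so the count starts at 2 (for \(f(x_0)\) and \(Df(x_0)\)) and grows by one per step.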

Example 3: Repeat the above basic test case \(x = \cos(x)\) with the Quasi-Newton Method — much slower!

x_0 = 0.0
errortol = 1e-8
fvaltol = 1e-15
print('Using initial approximation x_0 =', x_0,
      'error tolerance', errortol,
      'backward error tolerance', fvaltol)

x, errorestimate, backwarderror, functionEvaluations = quasiNewton(f1, Df1, x_0, errortol, fvaltol)

if errorestimate <= errortol or backwarderror <= fvaltol:  # Success
    print('Success!')
else:
    print('The error tolerance was not met.  The best we can say is that')
print('The solution is approximately', x)
print('with estimated error', errorestimate, 'and backward error', backwarderror)
print('The calculation required', functionEvaluations, 'function evaluations.')
Using initial approximation x_0 = 0.0 error tolerance 1e-08 backward error tolerance 1e-15
Success!
The solution is approximately 0.7390851366465718
with estimated error 8.525458006225506e-09 and backward error 0.0
The calculation required 49 function evaluations.

To Do: test on the harder case, \(x = \tan(x)\)

Observations and Conclusions

These are incomplete (this is only a draft!)

The first things to note are:

  • the sensitivity to initial conditions,

  • the slowness of the quasi-Newton method, and

  • the problems that can arise when there are multiple solutions.


This work is licensed under Creative Commons Attribution-ShareAlike 4.0 International