Some Final Project Possibilities

April 2, 2021

Brief descriptions for now; more details are available on any that interest you.

Minimizing Functions of One and Several Variables

Finding the minimum of a function of one variable is covered briefly in this course, but the extension to functions of several variables introduces some interesting geometrical ideas and new methods. Finding the points where all partial derivatives are zero is usually not the best approach; indeed, the opposite conversion is sometimes done, recasting a system of simultaneous linear equations as a minimization problem in order to use more robust algorithms.
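As a small illustration of that second idea, here is a minimal sketch (in NumPy, with illustrative names) that recasts a linear system \(Ax = b\) as minimizing \(\|Ax - b\|^2\) and solves it by plain gradient descent; practical codes would use a more robust minimizer such as conjugate gradients.

```python
# A minimal sketch: recast the linear system A x = b as minimizing
# f(x) = ||A x - b||^2 and apply plain gradient descent.
import numpy as np

def solve_by_minimization(A, b, steps=500, step_size=None):
    """Minimize f(x) = ||A x - b||^2; the gradient is 2 A^T (A x - b)."""
    A = np.asarray(A, dtype=float)
    b = np.asarray(b, dtype=float)
    if step_size is None:
        # A safe step size: 1 over the Lipschitz constant 2 * sigma_max(A)^2.
        step_size = 1.0 / (2 * np.linalg.norm(A, 2) ** 2)
    x = np.zeros(A.shape[1])
    for _ in range(steps):
        x -= step_size * 2 * A.T @ (A @ x - b)
    return x

A = np.array([[4.0, 1.0], [1.0, 3.0]])
b = np.array([1.0, 2.0])
print(solve_by_minimization(A, b))  # should approach np.linalg.solve(A, b)
```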

Root-finding by Repeated Inverse Quadratic Approximation with Bracketing

This method starts with the bracketing strategy of the Bisection Method, combined with inverse quadratic interpolation to find a new approximation at each step in place of the linear interpolation used in Regula Falsi. It reached its final, polished form only in Richard Brent’s 1973 publication, after contributions from various other authors, and we will explore some of the steps in this development.

The basic strategy is to start each step with three approximations of the root and fit a quadratic to the inverse function: the three x values are treated as given by a quadratic function of their y values. The new triple is produced by replacing one of the previous three points with the root of this quadratic approximation, while making sure that there is still a sign change among the new triple of points.
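To make this concrete, here is a minimal sketch of a single inverse quadratic interpolation step (illustrative only, with none of the safeguards of Brent’s full method): fit \(x\) as a quadratic in \(y\) through the three points by Lagrange interpolation and evaluate it at \(y = 0\).

```python
# One inverse quadratic interpolation step: given three points with
# distinct f values, treat x as a quadratic function of y (Lagrange form)
# and evaluate that quadratic at y = 0.
def inverse_quadratic_step(f, a, b, c):
    fa, fb, fc = f(a), f(b), f(c)
    return (a * fb * fc / ((fa - fb) * (fa - fc))
            + b * fa * fc / ((fb - fa) * (fb - fc))
            + c * fa * fb / ((fc - fa) * (fc - fb)))

# Example: one step toward the root of x^3 - 2 (true root is about 1.2599).
f = lambda x: x**3 - 2
print(inverse_quadratic_step(f, 1.0, 1.5, 2.0))  # already about 1.23
```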

A More Reliable Secant Method

The Secant Method and Newton’s Method for finding a zero of a function can fail to converge, and later iterations can even become less accurate, when a small denominator causes the approximation to jump far from the root (the universal peril of a computation that almost fails due to division by zero). The slow but steady Bisection Method avoids this by progressively bracketing the zero in a shrinking interval; the Secant Method can be modified to do such bracketing as well.

The object of this project is to develop, explain, program, and test such a zero-finding method. One possible approach is to follow the Secant Method, except that when it would keep two points on the same side of the root, a bisection step is used instead.
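Here is a rough sketch of that idea (not Brent’s full algorithm, and all names are illustrative): keep a bracket with a sign change, attempt a secant step from the two most recent iterates, and fall back to bisection whenever the secant step would leave the bracket.

```python
# Secant method safeguarded by bracketing: a bisection step is used
# whenever the secant step would land outside the current bracket.
def safeguarded_secant(f, lo, hi, tol=1e-12, max_iter=100):
    f_lo, f_hi = f(lo), f(hi)
    assert f_lo * f_hi < 0, "need a sign change on [lo, hi]"
    x0, f0, x1, f1 = lo, f_lo, hi, f_hi   # the two most recent iterates
    for _ in range(max_iter):
        # Try a secant step; guard against a zero denominator.
        x = x1 - f1 * (x1 - x0) / (f1 - f0) if f1 != f0 else (lo + hi) / 2
        if not (lo < x < hi):             # secant left the bracket:
            x = (lo + hi) / 2             # fall back to bisection
        fx = f(x)
        if abs(fx) < tol or hi - lo < tol:
            return x
        # Shrink the bracket, keeping the sign change inside it.
        if f_lo * fx < 0:
            hi, f_hi = x, fx
        else:
            lo, f_lo = x, fx
        x0, f0, x1, f1 = x1, f1, x, fx
    return x

print(safeguarded_secant(lambda x: x**3 - 2, 1.0, 2.0))  # about 1.259921
```

Counting the calls to f inside this loop against those of the pure Secant and Bisection Methods is one way to start the cost comparison described below.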

Cost and reliability should be compared to both the Secant and Bisection Methods, for example by counting evaluations of f(x) and measuring elapsed time.

Computing Eigenvalues and Eigenvectors

This is one of the most important computational topics that does not fit into the core syllabus of Math 245. Eigenvalues are given by a nonlinear equation (the characteristic polynomial), so it should be no surprise that their computation involves iterative methods. A project will explore the basic methods of Power Iteration and the Inverse Power Method, and perhaps advances beyond those.
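As a taste of the starting point, here is a minimal sketch of Power Iteration, which assumes the matrix has a single dominant eigenvalue: repeatedly multiply by the matrix and normalize, then estimate the eigenvalue with a Rayleigh quotient.

```python
# Power Iteration: the iterates converge to the dominant eigenvector
# when one eigenvalue is strictly largest in magnitude.
import numpy as np

def power_iteration(A, iterations=200):
    x = np.random.default_rng(0).standard_normal(A.shape[0])
    for _ in range(iterations):
        x = A @ x
        x /= np.linalg.norm(x)            # keep the iterate at unit length
    return x @ A @ x, x                   # Rayleigh quotient, eigenvector

A = np.array([[2.0, 1.0], [1.0, 3.0]])
eigenvalue, eigenvector = power_iteration(A)
print(eigenvalue)   # compare with max(abs(np.linalg.eigvals(A)))
```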

Iterative Methods for Solving Simultaneous Linear Equations, \(Ax = b\)

When most coefficients in a system of equations are zero, the standard method based on row reduction involves many redundant steps (multiply by zero, add zero). Iterative methods can be far faster, computing successively better approximations of the solution with steps that cost far less than exact solution by row reduction. This is one of the main areas of ongoing research in numerical methods for linear algebra.
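For a first sense of how cheap each step can be, here is a minimal sketch of the classical Jacobi iteration, one of the simplest such methods; it converges, for example, whenever the matrix is strictly diagonally dominant, and each sweep costs only one matrix-vector product.

```python
# Jacobi iteration for A x = b: split A into its diagonal D and the
# remainder R, then iterate x_new = D^{-1} (b - R x).
import numpy as np

def jacobi(A, b, sweeps=50):
    D = np.diag(A)                # the diagonal entries of A
    R = A - np.diagflat(D)        # the off-diagonal part
    x = np.zeros_like(b, dtype=float)
    for _ in range(sweeps):
        x = (b - R @ x) / D
    return x

A = np.array([[4.0, 1.0], [2.0, 5.0]])   # strictly diagonally dominant
b = np.array([1.0, 2.0])
print(jacobi(A, b))                       # compare with np.linalg.solve(A, b)
```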

The “Scientific Python” package SciPy (which is included in Spyder and Anaconda) has some built-in features for implementing these various algorithms.

Solving Simultaneous Nonlinear Equations

Newton’s method extends naturally to solving simultaneous nonlinear equations, with division by the derivative replaced by solving a system of linear equations given by a matrix of partial derivative values (the Jacobian matrix).

A project will explore the basic multi-dimensional Newton’s method, and some variants designed to improve speed by reducing the time spent solving systems of simultaneous linear equations at each iteration.
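Here is a minimal sketch of the basic method, with a made-up example system and a hand-coded Jacobian: each step solves the linear system \(J(x)\,\Delta x = -F(x)\) and then updates \(x\).

```python
# Multi-dimensional Newton's method: "division by the derivative" becomes
# solving a linear system with the Jacobian matrix J at each step.
import numpy as np

def newton_system(F, J, x, iterations=20, tol=1e-12):
    for _ in range(iterations):
        dx = np.linalg.solve(J(x), -F(x))
        x = x + dx
        if np.linalg.norm(dx) < tol:
            break
    return x

# Example: intersect the circle x^2 + y^2 = 4 with the parabola y = x^2.
F = lambda v: np.array([v[0]**2 + v[1]**2 - 4, v[1] - v[0]**2])
J = lambda v: np.array([[2 * v[0], 2 * v[1]], [-2 * v[0], 1.0]])
print(newton_system(F, J, np.array([1.0, 1.0])))  # about (1.2496, 1.5616)
```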

Fitting Smooth Piecewise Cubic Functions to Data

Fitting a collection of cubics to different parts of a domain can produce an approximation that is good in the sense of having small errors, but which has “sharp corners” where the cubics meet, and so is visually unsatisfactory and gives very poor approximations of derivatives at those points.

A project will explore at least one of the several widely used methods for computing a collection of cubics that approximate a function well (or fit data plausibly), are reasonably smooth at the joints between different cubics, and also give accurate approximations of derivatives.
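One widely used family of such methods is cubic splines; here is a minimal sketch using SciPy’s CubicSpline (with the “natural” boundary condition), whose pieces have matching first and second derivatives at the joints.

```python
# Natural cubic spline through a few samples of sin(x): both the values
# and the first derivatives of the spline track the underlying function.
import numpy as np
from scipy.interpolate import CubicSpline

x = np.linspace(0, 2 * np.pi, 8)
spline = CubicSpline(x, np.sin(x), bc_type='natural')

t = 1.0
print(spline(t), np.sin(t))      # the values are close
print(spline(t, 1), np.cos(t))   # and so are the first derivatives
```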

Least-Squares Fitting to Data and Functions

We have covered some methods for least squares fitting, such as finding the best straight-line fit to a collection of data. A project will explore a selection of other applications, such as fitting more general curves to data, including trigonometric functions when the data come from a periodic process.

There is some interesting linear algebra here, involving finding the “best” approximate solution to a system of simultaneous equations with more equations than unknowns (the case of a rectangular matrix with more rows than columns).

Again, Python packages like NumPy and SciPy have commands for some of these cases.
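For example, here is a minimal sketch using NumPy’s np.linalg.lstsq; the model (a line plus one sine term) and the synthetic data are illustrations only. The design matrix is rectangular, with one row per data point and one column per unknown coefficient.

```python
# Least-squares fit of y ~ c0 + c1*x + c2*sin(x) to noisy synthetic data.
import numpy as np

rng = np.random.default_rng(0)
x = np.linspace(0, 10, 50)
y = 1.0 + 0.5 * x + 2.0 * np.sin(x) + 0.1 * rng.standard_normal(x.size)

# One column per basis function, one row per data point.
A = np.column_stack([np.ones_like(x), x, np.sin(x)])
coefficients, residuals, rank, sv = np.linalg.lstsq(A, y, rcond=None)
print(coefficients)   # should be close to [1.0, 0.5, 2.0]
```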

Boundary Value Problems for Differential Equations

Two main strategies exist for computing solutions to boundary value problems such as

\[-u''(x) + p(x)\,u'(x) + q(x)\,u(x) = f(x), \quad u(0) = \alpha, \quad u(L) = \beta.\]

One approach is simultaneous equation solving: describe the solution approximately by the values of the function u(x) at a collection of points, and approximate those values by the solution of a system of simultaneous equations. If the differential equation is linear, so are the equations, and then we know what to do.
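Here is a minimal sketch of this approach for the linear problem above: replace u''(x) and u'(x) at grid points by difference quotients, producing a tridiagonal system of simultaneous linear equations for the interior values of u. The test at the end uses a made-up problem with a known exact solution.

```python
# Finite-difference solution of -u'' + p u' + q u = f, u(0)=alpha, u(L)=beta.
import numpy as np

def fd_bvp(p, q, f, L, alpha, beta, n=100):
    h = L / n
    x = np.linspace(0, L, n + 1)
    xi = x[1:-1]                            # the interior grid points
    m = n - 1
    lower = -1 / h**2 - p(xi) / (2 * h)     # coefficient of u_{i-1}
    diag = 2 / h**2 + q(xi)                 # coefficient of u_i
    upper = -1 / h**2 + p(xi) / (2 * h)     # coefficient of u_{i+1}
    A = np.zeros((m, m))
    A[np.arange(m), np.arange(m)] = diag
    A[np.arange(1, m), np.arange(m - 1)] = lower[1:]
    A[np.arange(m - 1), np.arange(1, m)] = upper[:-1]
    rhs = f(xi).astype(float)
    rhs[0] -= lower[0] * alpha              # fold the boundary values
    rhs[-1] -= upper[-1] * beta             # into the right-hand side
    u = np.empty(n + 1)
    u[0], u[-1], u[1:-1] = alpha, beta, np.linalg.solve(A, rhs)
    return x, u

# Test: -u'' = pi^2 sin(pi x), u(0) = u(1) = 0, exact solution sin(pi x).
zero = lambda x: 0 * x
x, u = fd_bvp(zero, zero, lambda x: np.pi**2 * np.sin(np.pi * x),
              L=1.0, alpha=0.0, beta=0.0)
print(np.max(np.abs(u - np.sin(np.pi * x))))  # a small discretization error
```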

The other is shooting: use methods for solving initial value problems with data given only at x = 0, and then, by trial and error or by equation solving, find the solution that also satisfies the condition at x = L.
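And here is a minimal sketch of shooting on that same test problem, using SciPy’s solve_ivp for the initial value problems and brentq for the equation solving: guess the missing slope s = u'(0) and find the s for which u(L) = beta.

```python
# Shooting method: integrate the IVP for a guessed slope s = u'(0), then
# root-find on the mismatch u(L) - beta.
import numpy as np
from scipy.integrate import solve_ivp
from scipy.optimize import brentq

L, alpha, beta = 1.0, 0.0, 0.0

def u_at_L(s):
    # u'' = -pi^2 sin(pi x), written as a first-order system y = (u, u').
    rhs = lambda x, y: [y[1], -np.pi**2 * np.sin(np.pi * x)]
    sol = solve_ivp(rhs, (0, L), [alpha, s], rtol=1e-10, atol=1e-10)
    return sol.y[0, -1]                    # the value u(L) for this guess

s = brentq(lambda s: u_at_L(s) - beta, -10.0, 10.0)
print(s, np.pi)   # the exact solution sin(pi x) has slope u'(0) = pi
```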

The project will implement and test one or both of these strategies, keeping to a single nonlinear equation or to simultaneous linear equations unless one is very ambitious; Python packages can then easily take care of all the equation-solving computations.


This work is licensed under Creative Commons Attribution-ShareAlike 4.0 International