Here we consider a set of methods that find the solution of a single-variable nonlinear equation $f(x) = 0$ by searching iteratively through a neighborhood of the domain in which a root $r$ is known to be located.
The bisection search
This method requires two initial guesses $a_0 < b_0$ satisfying $f(a_0) f(b_0) < 0$. As $f(a_0)$ and $f(b_0)$ are on opposite sides of the x-axis, the solution $r$ at which $f(r) = 0$ must reside somewhere in between these two guesses, i.e., $a_0 < r < b_0$.
Given two such end points $a_n$ and $b_n$, we find another point $x$ in between and use it to replace the end point at which the function value has the same sign as that of $f(x)$. The new search interval is $[a_{n+1}, b_{n+1}] = [x, b_n]$ if $f(x) f(a_n) > 0$, or $[a_n, x]$ if $f(x) f(b_n) > 0$, either of which is smaller than the previous interval $[a_n, b_n]$. This process is then carried out iteratively until the solution is eventually approached, with guaranteed convergence.
While any point between the two end points can be chosen for the next iteration, we want to avoid the worst possible case in which the solution always happens to be in the larger of the two sections $[a_n, x]$ and $[x, b_n]$. We can therefore choose the middle point between the two end points, i.e., $x_n = (a_n + b_n)/2$, so that $b_{n+1} - a_{n+1} = (b_n - a_n)/2$,¹ i.e., the new search interval is always halved in each step of the iteration.

¹ Here, without loss of generality, we have assumed $f(x_n)$ has the same sign as $f(a_n)$, and $a_n$ is replaced by $x_n$.
For example, if we assume the root is located at $r$ between the two initial values $a_0$ and $b_0$, then the error of the $n$-th midpoint $x_n$ is bounded by half of the current interval:

$$ e_n = x_n - r, \qquad |e_n| \le \frac{b_n - a_n}{2} = \frac{b_0 - a_0}{2^{n+1}}. \tag{12} $$
Note that the error does not necessarily always reduce monotonically. However, as in each iteration the search interval is always halved, the worst possible error is also halved:
$$ |e_{n+1}|_{\max} = \frac{1}{2}\,|e_n|_{\max} = \frac{b_0 - a_0}{2^{n+2}}. \tag{13} $$
Compared to the other methods considered later, the bisection method converges rather slowly, but one of its advantages is that no derivative of the given function is needed; the function does not even need to be differentiable.
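To make the procedure concrete, here is a minimal Python sketch of the bisection iteration described above (the function name `bisection`, the tolerance, and the sample equation $x^2 - 2 = 0$ are our own illustrative choices, not part of the original text):

```python
def bisection(f, a, b, tol=1e-12, max_iter=100):
    """Find a root of f in [a, b], assuming f(a) and f(b) have opposite signs."""
    fa, fb = f(a), f(b)
    if fa * fb > 0:
        raise ValueError("f(a) and f(b) must have opposite signs")
    for _ in range(max_iter):
        x = (a + b) / 2          # midpoint: halves the search interval
        fx = f(x)
        if fx == 0 or (b - a) / 2 < tol:
            return x
        if fa * fx < 0:          # root lies in [a, x]: x replaces b
            b, fb = x, fx
        else:                    # root lies in [x, b]: x replaces a
            a, fa = x, fx
    return (a + b) / 2

# Example: solve x^2 - 2 = 0 on [1, 2]; the root is sqrt(2) ≈ 1.4142135...
print(bisection(lambda x: x * x - 2, 1.0, 2.0))
```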
The secant method
As in the bisection method, here again we assume two initial values $x_0$ and $x_1$ are available, but they do not have to satisfy $f(x_0) f(x_1) < 0$. Here the iteration is based on the zero crossing of the secant line passing through the two points $(x_{n-1}, f(x_{n-1}))$ and $(x_n, f(x_n))$, instead of their middle point. The equation for the secant is:

$$ \frac{y - f(x_n)}{x - x_n} = \frac{f(x_n) - f(x_{n-1})}{x_n - x_{n-1}}, \tag{14} $$

i.e.,

$$ y = f(x_n) + \frac{f(x_n) - f(x_{n-1})}{x_n - x_{n-1}}\,(x - x_n). \tag{15} $$
At the zero crossing $y = 0$, and the equation above can be solved for $x = x_{n+1}$, the position of the zero crossing:
$$ 0 = f(x_n) + \frac{f(x_n) - f(x_{n-1})}{x_n - x_{n-1}}\,(x_{n+1} - x_n), \tag{16} $$

$$ x_{n+1} - x_n = -\,f(x_n)\,\frac{x_n - x_{n-1}}{f(x_n) - f(x_{n-1})}, \tag{17} $$

$$ x_{n+1} = x_n - f(x_n)\,\frac{x_n - x_{n-1}}{f(x_n) - f(x_{n-1})}. \tag{18} $$
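A minimal Python sketch of the secant iteration (18) may look as follows (the helper name `secant` and the stopping criterion are our own assumptions):

```python
def secant(f, x0, x1, tol=1e-12, max_iter=100):
    """Iterate x_{n+1} = x_n - f(x_n) * (x_n - x_{n-1}) / (f(x_n) - f(x_{n-1}))."""
    f0, f1 = f(x0), f(x1)
    for _ in range(max_iter):
        if f1 == f0:
            raise ZeroDivisionError("f(x_n) == f(x_{n-1}): secant step undefined")
        x2 = x1 - f1 * (x1 - x0) / (f1 - f0)   # Eq. (18)
        if abs(x2 - x1) < tol:
            return x2
        x0, f0 = x1, f1                        # shift the two-point window
        x1, f1 = x2, f(x2)
    return x1

# Example: x^2 - 2 = 0 with initial guesses 1 and 2
print(secant(lambda x: x * x - 2, 1.0, 2.0))
```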
In case $f(x_n) = f(x_{n-1})$, the approximated derivative is zero, and $x_{n+1}$ cannot be found, even though a root may still exist between $x_{n-1}$ and $x_n$. The way to resolve this problem is to combine the bisection search with the secant method, so that whenever $f(x_n) = f(x_{n-1})$ the algorithm switches to a bisection step. This is Dekker's method:
$$ x_{n+1} = \begin{cases} x_n - f(x_n)\,\dfrac{x_n - x_{n-1}}{f(x_n) - f(x_{n-1})} & \text{if } f(x_n) \ne f(x_{n-1}), \\[2ex] \dfrac{x_n + x_{n-1}}{2} & \text{otherwise.} \end{cases} \tag{19} $$
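The hybrid logic of (19) can be sketched as a single update step (a simplified illustration of the idea only; the full Dekker algorithm additionally keeps the root bracketed between a sign change):

```python
def dekker_step(f, x0, x1):
    """One step of Eq. (19): a secant step if defined, otherwise a midpoint step."""
    f0, f1 = f(x0), f(x1)
    if f1 != f0:
        return x1 - f1 * (x1 - x0) / (f1 - f0)   # secant step, Eq. (18)
    return (x0 + x1) / 2                          # bisection fallback
```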
We now consider the order of convergence of the secant method. Let $r$ be the root, at which $f(r) = 0$. The error of $x_n$ is:
$$ e_n = x_n - r. \tag{20} $$

Substituting the secant iteration (18), the error of $x_{n+1}$ is

$$ e_{n+1} = x_{n+1} - r = e_n - f(x_n)\,\frac{x_n - x_{n-1}}{f(x_n) - f(x_{n-1})} \tag{21} $$

$$ = \frac{e_{n-1}\,f(x_n) - e_n\,f(x_{n-1})}{f(x_n) - f(x_{n-1})}. \tag{22} $$

As $f(r) = 0$, the Taylor expansion of $f$ about the root gives

$$ f(x_n) = f(r + e_n) = f'(r)\,e_n + \tfrac{1}{2} f''(r)\,e_n^2 + O(e_n^3), \tag{23} $$

$$ f(x_{n-1}) = f(r + e_{n-1}) = f'(r)\,e_{n-1} + \tfrac{1}{2} f''(r)\,e_{n-1}^2 + O(e_{n-1}^3). \tag{24} $$

Substituting these into (22), the first-order terms in the numerator cancel, leaving

$$ e_{n-1}\,f(x_n) - e_n\,f(x_{n-1}) \approx \tfrac{1}{2} f''(r)\,e_n e_{n-1}\,(e_n - e_{n-1}), \tag{25} $$

while the denominator is dominated by its first-order term:

$$ f(x_n) - f(x_{n-1}) \approx f'(r)\,(e_n - e_{n-1}). \tag{26} $$

Therefore

$$ e_{n+1} \approx \frac{f''(r)}{2 f'(r)}\,e_n e_{n-1} = M\,e_n e_{n-1}, \qquad M = \frac{f''(r)}{2 f'(r)}. \tag{27} $$

To find the order of convergence $p$, we assume

$$ |e_{n+1}| = C\,|e_n|^p, \tag{28} $$

so that

$$ |e_{n-1}| = \left( \frac{|e_n|}{C} \right)^{1/p}. \tag{29} $$

Substituting (28) and (29) into (27):

$$ C\,|e_n|^p = |M|\,|e_n| \left( \frac{|e_n|}{C} \right)^{1/p} = |M|\,C^{-1/p}\,|e_n|^{1 + 1/p}. \tag{30} $$

Matching the exponents of $|e_n|$ on the two sides,

$$ p = 1 + \frac{1}{p}, \qquad \text{i.e.,} \qquad p^2 - p - 1 = 0, \tag{31} $$

whose positive solution is the golden ratio:

$$ p = \frac{1 + \sqrt{5}}{2} \approx 1.618. \tag{32} $$

The secant method therefore converges superlinearly: faster than linear, though not quite quadratic.
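The golden-ratio order (32) can be checked numerically: taking logarithms of $|e_{n+1}| = C\,|e_n|^p$ gives $\log|e_{n+1}| / \log|e_n| \to p$ as $|e_n| \to 0$. A small self-contained sketch, using $f(x) = x^2 - 2$ with the known root $\sqrt{2}$ (our own example):

```python
import math

f = lambda x: x * x - 2
r = math.sqrt(2)                      # exact root, so errors can be measured

x0, x1 = 1.0, 2.0
errors = [abs(x0 - r), abs(x1 - r)]
for _ in range(6):                    # a few secant steps, Eq. (18)
    x0, x1 = x1, x1 - f(x1) * (x1 - x0) / (f(x1) - f(x0))
    errors.append(abs(x1 - r))

# log|e_{n+1}| / log|e_n| should tend to (1 + sqrt(5)) / 2 ≈ 1.618
for e_prev, e_next in zip(errors[1:], errors[2:]):
    if 0 < e_next and 0 < e_prev < 1:
        print(math.log(e_next) / math.log(e_prev))
```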
The inverse quadratic interpolation
Similar to the secant method, which approximates the given function by a straight line through the two consecutive points $(x_{n-1}, f(x_{n-1}))$ and $(x_n, f(x_n))$, the inverse quadratic interpolation method approximates the function by a quadratic curve through the three consecutive points $(x_{n-2}, f(x_{n-2}))$, $(x_{n-1}, f(x_{n-1}))$, and $(x_n, f(x_n))$. As the function may be better approximated by a quadratic curve than by a straight line, the iteration is likely to converge more quickly.
In general, any function $y = f(x)$ can be approximated by a quadratic function based on three points $(x_{n-2}, y_{n-2})$, $(x_{n-1}, y_{n-1})$, and $(x_n, y_n)$, where $y_i = f(x_i)$, by the Lagrange interpolation:

$$ y(x) = \frac{(x - x_{n-1})(x - x_n)}{(x_{n-2} - x_{n-1})(x_{n-2} - x_n)}\,y_{n-2} + \frac{(x - x_{n-2})(x - x_n)}{(x_{n-1} - x_{n-2})(x_{n-1} - x_n)}\,y_{n-1} + \frac{(x - x_{n-2})(x - x_{n-1})}{(x_n - x_{n-2})(x_n - x_{n-1})}\,y_n. \tag{33} $$

However, finding the zero crossing of this quadratic requires solving a quadratic equation in $x$, which may not even have a real root. Instead, we interpolate *inversely*, treating $x$ as a quadratic function of $y$ through the three points $(y_{n-2}, x_{n-2})$, $(y_{n-1}, x_{n-1})$, and $(y_n, x_n)$:

$$ x(y) = \frac{(y - y_{n-1})(y - y_n)}{(y_{n-2} - y_{n-1})(y_{n-2} - y_n)}\,x_{n-2} + \frac{(y - y_{n-2})(y - y_n)}{(y_{n-1} - y_{n-2})(y_{n-1} - y_n)}\,x_{n-1} + \frac{(y - y_{n-2})(y - y_{n-1})}{(y_n - y_{n-2})(y_n - y_{n-1})}\,x_n. \tag{34} $$

Setting $y = 0$ then gives the next iterate directly:

$$ x_{n+1} = x(0) = \frac{y_{n-1}\,y_n}{(y_{n-2} - y_{n-1})(y_{n-2} - y_n)}\,x_{n-2} + \frac{y_{n-2}\,y_n}{(y_{n-1} - y_{n-2})(y_{n-1} - y_n)}\,x_{n-1} + \frac{y_{n-2}\,y_{n-1}}{(y_n - y_{n-2})(y_n - y_{n-1})}\,x_n. \tag{35} $$
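Equation (35) translates directly into code; a one-step Python sketch (the helper name `iqi_step` is our own, and the three function values must be pairwise distinct):

```python
def iqi_step(f, x0, x1, x2):
    """One inverse-quadratic-interpolation step, Eq. (35), from three points."""
    y0, y1, y2 = f(x0), f(x1), f(x2)
    return (x0 * y1 * y2 / ((y0 - y1) * (y0 - y2))
          + x1 * y0 * y2 / ((y1 - y0) * (y1 - y2))
          + x2 * y0 * y1 / ((y2 - y0) * (y2 - y1)))

# Example: three points around the root of x^2 - 2; one step already
# gives ≈ 1.41, close to sqrt(2) ≈ 1.41421
print(iqi_step(lambda x: x * x - 2, 1.0, 1.5, 2.0))
```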
Brent's method
Brent's method combines the bisection method, the secant method, and the method of inverse quadratic interpolation. In each iteration, a set of conditions is checked so that only the most suitable method under the current situation is chosen for the next step.
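In practice, a well-tested implementation is normally used rather than writing this bookkeeping by hand; for example, SciPy exposes Brent's method as `scipy.optimize.brentq`, which, like bisection, requires a sign-changing bracket:

```python
from scipy.optimize import brentq

# Brent's method on x^2 - 2 = 0, bracketed by [1, 2]
root = brentq(lambda x: x * x - 2, 1.0, 2.0)
print(root)   # ≈ 1.4142135623730951
```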