Iteration is a common approach used in a wide variety of numerical
methods. It is the hope that, from some initial guess $x_0$ of the
solution of a given problem, an iteration in the general form of
$x_{n+1}=g(x_n)$ will eventually converge to the true solution
$x^*$ at the limit $n\to\infty$, i.e.,

$$\lim_{n\to\infty} x_n = x^* = g(x^*) \tag{69}$$
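A minimal sketch of how such an iteration might be carried out in practice is given below; the map $g$, the starting guess, the tolerance, and the helper name `fixed_point_iteration` are illustrative assumptions rather than anything prescribed by the text.

```python
import math

def fixed_point_iteration(g, x0, tol=1e-12, max_iter=100):
    """Iterate x_{n+1} = g(x_n) until successive iterates stop changing."""
    x = x0
    for n in range(1, max_iter + 1):
        x_next = g(x)
        # The successive difference |x_{n+1} - x_n| is used as a stand-in for the error.
        if abs(x_next - x) < tol:
            return x_next, n
        x = x_next
    return x, max_iter

# Example: g(x) = cos(x) has a fixed point near 0.739 (the Dottie number).
root, steps = fixed_point_iteration(math.cos, 1.0)
print(root, steps)
```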
This point $x^*$ is also called a fixed point. The concern
is whether this iteration will converge and, if so, how quickly
or slowly, measured by the rate of convergence in terms of
the error $e_n = x_n - x^*$:

$$\lim_{n\to\infty} \frac{|e_{n+1}|}{|e_n|^p} = \mu \tag{70}$$

where $p$ is the order of convergence and $\mu$ is the rate of
convergence. We assume that when $n$ is large enough, the error is
much smaller than 1, i.e., $|e_n| \ll 1$.
This expression may be better understood when it is interpreted as
$|e_{n+1}| \approx \mu\,|e_n|^p$ when $n\to\infty$. Obviously, the larger
$p$ and the smaller $\mu$, the smaller $|e_{n+1}|$ and the more quickly
the sequence converges. Specifically, the convergence is
- sublinear if $p=1$ and $\mu=1$, i.e.,
  $|e_{n+1}| \approx |e_n|$;
- linear if $p=1$ and $0<\mu<1$, i.e.,
  $|e_{n+1}| \approx \mu\,|e_n|$;
- superlinear if $p=1$ and $\mu=0$, or $p>1$ (with any $\mu$).
In particular, the convergence is quadratic if $p=2$,
$|e_{n+1}| \approx \mu\,|e_n|^2$,
or cubic if $p=3$,
$|e_{n+1}| \approx \mu\,|e_n|^3$, etc.
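For a rough sense of the difference, take an assumed rate $\mu = 0.5$ and initial error $|e_0| = 0.1$ (both values are illustrative). Linear convergence gives $|e_1| \approx 0.05$, $|e_2| \approx 0.025$, $|e_3| \approx 0.0125$, gaining roughly one binary digit per step, while quadratic convergence gives $|e_1| \approx 0.5\cdot 0.1^2 = 5\times 10^{-3}$, $|e_2| \approx 1.25\times 10^{-5}$, $|e_3| \approx 7.8\times 10^{-11}$, roughly doubling the number of correct digits at every step.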
The iteration $x_{n+1}=g(x_n)$ can be written in terms of the errors
$e_{n+1}$ and $e_n$. Consider the Taylor expansion around the true
solution, now denoted by $x^*$:

$$x_{n+1} = g(x_n) = g(x^*+e_n) = g(x^*) + g'(x^*)\,e_n
  + \frac{g''(x^*)}{2!}\,e_n^2 + \frac{g'''(x^*)}{3!}\,e_n^3 + \cdots \tag{71}$$
Subtracting $x^* = g(x^*)$ from both sides, we get

$$e_{n+1} = x_{n+1} - x^* = g'(x^*)\,e_n
  + \frac{g''(x^*)}{2!}\,e_n^2 + \frac{g'''(x^*)}{3!}\,e_n^3 + \cdots \tag{72}$$

At the limit $n\to\infty$, $e_n\to 0$ and all non-linear
terms approach zero, and we have

$$\lim_{n\to\infty} \frac{|e_{n+1}|}{|e_n|} = |g'(x^*)| \tag{73}$$
If $0 < |g'(x^*)| < 1$, then the convergence is linear. However, if $g'(x^*)=0$,
then

$$e_{n+1} \approx \frac{g''(x^*)}{2!}\,e_n^2
  \quad\text{and}\quad
  \lim_{n\to\infty} \frac{|e_{n+1}|}{|e_n|^2} = \frac{|g''(x^*)|}{2} \tag{74}$$

and the convergence is quadratic. Moreover, if $g''(x^*)=0$ as well, then

$$e_{n+1} \approx \frac{g'''(x^*)}{3!}\,e_n^3
  \quad\text{and}\quad
  \lim_{n\to\infty} \frac{|e_{n+1}|}{|e_n|^3} = \frac{|g'''(x^*)|}{6} \tag{75}$$

and the convergence is cubic. We see that the more low-order derivatives of the
iteration function vanish at the fixed point, the more quickly the
iteration converges.
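A quick numerical check of this observation, under assumed details: take Newton's iteration for $\sqrt{2}$ written as the fixed-point map $g(x) = \tfrac{1}{2}(x + 2/x)$, for which $g'(\sqrt{2}) = 0$. The ratios $|e_{n+1}|/|e_n|^2$ should then settle near $|g''(\sqrt{2})|/2 = 1/(2\sqrt{2}) \approx 0.354$; this is a sketch, not part of the original derivation.

```python
import math

def g(x):
    # Newton's map for f(x) = x^2 - 2; its fixed point is sqrt(2) and g'(sqrt(2)) = 0.
    return 0.5 * (x + 2.0 / x)

x_star = math.sqrt(2.0)
x = 1.0                      # assumed starting guess
errors = []
for _ in range(4):
    x = g(x)
    errors.append(abs(x - x_star))

# Ratios |e_{n+1}| / |e_n|^2 should approach |g''(x*)|/2 = 1/(2*sqrt(2)) ~ 0.354.
for e_prev, e_next in zip(errors, errors[1:]):
    print(e_next / e_prev**2)
```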
Examples: All iterations below converge to zero, but their order
and rate of convergence are different:
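As concrete illustrations, one may take the sequences $e_n = 1/n$, $e_n = 2^{-n}$, and $e_n = 2^{-2^n}$ (assumed choices for demonstration, not prescribed by the text); the sketch below tabulates the ratios $|e_{n+1}|/|e_n|^p$ for each.

```python
# Three sequences converging to zero with different orders of convergence
# (assumed illustrative choices): 1/n (sublinear), 2^-n (linear), 2^-2^n (quadratic).
sequences = {
    "1/n     (sublinear, p=1, mu=1)":  lambda n: 1.0 / n,
    "2^-n    (linear,    p=1, mu=.5)": lambda n: 2.0 ** (-n),
    "2^-2^n  (quadratic, p=2, mu=1)":  lambda n: 2.0 ** (-(2 ** n)),
}

for name, e in sequences.items():
    p = 2 if "quadratic" in name else 1   # order appropriate to each sequence
    ratios = [e(n + 1) / e(n) ** p for n in range(1, 6)]
    print(name, [round(r, 4) for r in ratios])
```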
From these examples we see that there is a unique exponent $p$, the
order of convergence, so that

$$\lim_{n\to\infty} \frac{|e_{n+1}|}{|e_n|^q}
  = \begin{cases} 0 & q < p \\ \mu & q = p \\ \infty & q > p \end{cases} \tag{79}$$
In practice, the true solution $x^*$ is unknown and so is the error
$e_n = x_n - x^*$. However, the convergence can still be estimated if the
convergence is superlinear, i.e., it satisfies

$$\lim_{n\to\infty} \frac{|e_{n+1}|}{|e_n|} = 0 \tag{80}$$
Consider:

$$\frac{|x_{n+1}-x_n|}{|e_n|} = \frac{|e_{n+1}-e_n|}{|e_n|}
  = \left|\frac{e_{n+1}}{e_n} - 1\right| \longrightarrow 1 \tag{81}$$

i.e., when $n\to\infty$,

$$|x_{n+1}-x_n| \approx |e_n| \tag{82}$$

The order and rate of convergence can then be estimated by substituting these
successive differences for the unknown errors in Eq. (70):

$$\frac{|e_{n+1}|}{|e_n|^p} \approx \frac{|x_{n+2}-x_{n+1}|}{|x_{n+1}-x_n|^p} \approx \mu \tag{83}$$
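A sketch of this estimate in code, under assumed details (the same Newton-style map $g(x) = \tfrac{1}{2}(x + 2/x)$ used in the earlier sketch, with successive differences standing in for the unknown errors); the estimated order should approach 2 and the estimated rate should approach $1/(2\sqrt{2}) \approx 0.354$.

```python
import math

def g(x):
    # Assumed illustrative map: Newton's iteration for sqrt(2), which converges quadratically.
    return 0.5 * (x + 2.0 / x)

# Generate iterates and their successive differences d_n = |x_{n+1} - x_n| ~ |e_n|.
xs = [1.0]
for _ in range(5):
    xs.append(g(xs[-1]))
d = [abs(b - a) for a, b in zip(xs, xs[1:])]

# Estimate order p and rate mu from three consecutive differences, using
# d_{n+1} ~ mu * d_n^p and d_n ~ mu * d_{n-1}^p (cf. Eq. (83)).
for n in range(1, len(d) - 1):
    p_est = math.log(d[n + 1] / d[n]) / math.log(d[n] / d[n - 1])
    mu_est = d[n + 1] / d[n] ** p_est
    print(f"n={n}: p ~ {p_est:.3f}, mu ~ {mu_est:.3f}")
```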