Richardson Extrapolation

The accuracy of a wide range of numerical methods (such as those discussed in previous sections) for approximating the value of a certain quantity $A$ depends on the step size $h$. In general, reducing $h$ reduces the truncation error $O(h^k)$ of the result, and theoretically the exact value of $A$ could be approached in the limit $h\rightarrow 0$. Richardson extrapolation is a general method that improves such an approximation by combining results obtained at progressively smaller step sizes, thereby increasing the order of the truncation error.

We denote by $A_0(h)$ the approximation of $A$ achieved by a specific method based on step size $h$, and assume the order of the truncation error to be $O(h^{k_1})$:

$\displaystyle A=A_0(h)+C_1 h^{k_1}+C_2 h^{k_2}+C_3 h^{k_3}+\cdots=A_0(h)+O(h^{k_1})$ (105)

where $k_1<k_2<k_3<\cdots$. Replacing step size $h$ by $h/t$ for some $t>1$, we get

$\displaystyle A=A_0(h/t)+C_1 (h/t)^{k_1}+C_2 (h/t)^{k_2}+\cdots=A_0(h/t)+O\left((h/t)^{k_1}\right)$ (106)

Subtracting the first equation from $t^{k_1}$ times the second above:

$\displaystyle t^{k_1}A-A=t^{k_1}\left[A_0(h/t)+C_1(h/t)^{k_1}+C_2(h/t)^{k_2}+\cdots\right]
-\left[A_0(h)+C_1h^{k_1}+C_2h^{k_2}+\cdots\right]$ (107)

we get

$\displaystyle A=\frac{t^{k_1} A_0(h/t)-A_0(h)}{t^{k_1}-1}+O(h^{k_2})
=A_1(h/t)+O(h^{k_2})$ (108)

We have thereby defined a new method with a step size $h/t$:

$\displaystyle A_1(h/t)=\frac{t^{k_1} A_0(h/t)-A_0(h)}{t^{k_1}-1}$ (109)

with an error term of order $O(h^{k_2})$, higher than the order $O(h^{k_1})$ of the old method $A_0(h)$. Richardson extrapolation can be applied recursively to $A_1$ to obtain yet another method $A_2$ with a still higher-order error term of $O(h^{k_3})$.
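As a sketch of a single extrapolation step, the example below applies Eq. (109) with $t=2$ to a central-difference derivative estimate, whose leading truncation error is $O(h^2)$ (so $k_1=2$); the test function, point, and step size are illustrative assumptions, not part of the derivation above.

```python
import math

def central_diff(f, x, h):
    # Central-difference estimate of f'(x); truncation error O(h^2)
    return (f(x + h) - f(x - h)) / (2 * h)

def richardson_step(A0, h, t=2, k1=2):
    # One extrapolation step, Eq. (109): combine A0(h) and A0(h/t)
    # so that the leading C1*h^{k1} error term cancels
    return (t**k1 * A0(h / t) - A0(h)) / (t**k1 - 1)

# illustrative choices (assumptions): f = sin, x = 1, h = 0.1
f, x, h = math.sin, 1.0, 0.1
exact = math.cos(x)

a0 = central_diff(f, x, h)                                # error O(h^2)
a1 = richardson_step(lambda s: central_diff(f, x, s), h)  # error O(h^4)
print(abs(a0 - exact), abs(a1 - exact))
```

The extrapolated estimate `a1` is markedly closer to the exact derivative than `a0`, even though both use function values at step sizes no smaller than $h/2$.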

The recursion of the Richardson extrapolation is illustrated below. Consider a method $A_0(h)$ that approximates the value of $A$ based on a step size $h$:

$\displaystyle A=A_0(h)+C_1h^2+C_2h^4+C_3h^6+C_4h^8+\cdots=A_0(h)+O(h^2)$ (110)

Now we use Richardson extrapolation with $t=2$ to improve the accuracy of the approximation. As the error expansion above contains only even powers of $h$, each iteration of the process increases the order of the error term by 2; to obtain the method at the $n$th level, the leading error order of the previous method is $k_1=2n$, so the factor in Eq. (109) becomes $t^{k_1}=2^{2n}=4^n$. We now denote $A_n(h/2^m)$ by $A_{m,n}$, where $m$ indexes the step size $h_m=h_{m-1}/2=\cdots=h_0/2^m$ with $h_0=h$, and $n$ indexes the method $A_n$ obtained at the $n$th level of the recursion $(m\ge n)$. The general formula for the recursion is:

$\displaystyle A_{m,n}=\frac{4^n A_{m,n-1}-A_{m-1,n-1}}{4^n-1}$ (123)
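As a concrete first instance of this recursion (level $n=1$, i.e., Eq. (109) with $t=2$ and $k_1=2$):

$\displaystyle A_{m,1}=\frac{4\,A_{m,0}-A_{m-1,0}}{3}=A+O(h_m^4)$

which cancels the $C_1h^2$ term of Eq. (110), leaving a leading error of order $h_m^4$.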

This recursive process can be carried out in the following tabular form:

$\displaystyle \begin{tabular}{c\vert c\vert\vert c\vert c\vert c\vert c\vert c}
$m$ & $h_m$ & $A_{m,0}$ & $A_{m,1}$ & $A_{m,2}$ & $A_{m,3}$ & $A_{m,4}$ \\ \hline\hline
$0$ & $h$ & $A_{0,0}$ & & & & \\ \hline
$1$ & $h/2$ & $A_{1,0}$ & $A_{1,1}$ & & & \\ \hline
$2$ & $h/4$ & $A_{2,0}$ & $A_{2,1}$ & $A_{2,2}$ & & \\ \hline
$3$ & $h/8$ & $A_{3,0}$ & $A_{3,1}$ & $A_{3,2}$ & $A_{3,3}$ & \\ \hline
$4$ & $h/16$ & $A_{4,0}$ & $A_{4,1}$ & $A_{4,2}$ & $A_{4,3}$ & $A_{4,4}$ \\ \hline
\end{tabular}$ (124)

Here $m$ is the index of the vertical dimension representing the step sizes $h_m=h/2^m$, and $n$ is the index of the horizontal dimension representing the method obtained at the $n$th level of the recursion. The general element $A_{m,n}$ can be obtained by the recursion in Eq. (123) from its left neighbor $A_{m,n-1}$, generated by the previous method $A_{n-1}$ with the same step size $h_m$, and its top-left neighbor $A_{m-1,n-1}$, generated by the same previous method $A_{n-1}$ but with step size $h_{m-1}=2h_m$. The error term of $A_{n,n}$ on the diagonal of the table is two orders higher than that of $A_{n-1,n-1}$, and $2n$ orders higher than that of the initial method $A_{0,0}$.
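The table above can be filled row by row with the recursion of Eq. (123). The sketch below does this for a hypothetical initial method $A_0$, here taken (as an illustrative assumption) to be the central-difference estimate of $\frac{d}{dx}e^x$ at $x=0$, whose error expansion contains only even powers of $h$ as in Eq. (110); the exact value is 1.

```python
import math

def richardson_table(A0, h, levels):
    # Triangular table of Eq. (124): T[m][n] holds A_{m,n}, where
    # row m uses step size h_m = h / 2**m and each entry follows
    # the recursion of Eq. (123):
    #   A_{m,n} = (4**n * A_{m,n-1} - A_{m-1,n-1}) / (4**n - 1)
    T = []
    for m in range(levels + 1):
        row = [A0(h / 2**m)]                       # A_{m,0}
        for n in range(1, m + 1):                  # left and top-left neighbors
            row.append((4**n * row[n - 1] - T[m - 1][n - 1]) / (4**n - 1))
        T.append(row)
    return T

# illustrative A_0 (an assumption): central-difference derivative of
# exp at x = 0, exact value 1
A0 = lambda s: (math.exp(s) - math.exp(-s)) / (2 * s)
T = richardson_table(A0, 0.5, 4)
print(T[0][0], T[4][4])   # initial estimate A_{0,0} vs. diagonal A_{4,4}
```

The diagonal entry $A_{4,4}$, with error of order $h^{10}$, is accurate to near machine precision even though the initial estimate $A_{0,0}$ with $h=0.5$ is off by about $4\times10^{-2}$.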