The slope of the secant through the two points $(t,\,y(t))$ and
$(t+h,\,y(t+h))$ can be approximated by $y'(t)$, by $y'(t+h)$, or,
more accurately, by the average of the two,
$(y'(t)+y'(t+h))/2$. Correspondingly, we
have the following three methods:
- Forward Euler's method:
This method uses the derivative $y'(t)$ at the beginning of
the interval $[t,\,t+h]$ to approximate the increment
$\Delta y=y(t+h)-y(t)$:

$$\hat{y}(t+h)=y(t)+h\,y'(t)=y(t)+h\,f(t,y(t)) \tag{189}$$
Comparing this method with the Taylor series expansion of $y(t+h)$:

$$y(t+h)=y(t)+y'(t)\;h+\frac{y''(t)}{2}h^2+\frac{y'''(t)}{3!}h^3+\cdots \tag{190}$$

we see that this method has a second order truncation error $O(h^2)$.
If the step size $h$ is not small enough, this truncation error
will accumulate as the iterative process progresses, and the
sequence $y_n$ generated may deviate from the true solution $y(t)$
within a few steps. Consequently, the iteration may become
divergent, i.e., the method may not be stable. Therefore Euler's
method is useful only if the step size $h$ is sufficiently small,
so that the error does not accumulate too quickly.
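To make the update rule of Eq. (189) concrete, here is a minimal Python sketch; the function name `forward_euler` and the test problem $y'=-y$ are illustrative choices, not from the text:

```python
import math

def forward_euler(f, t0, y0, h, n):
    """Advance y' = f(t, y) from y(t0) = y0 for n steps of size h
    using the forward Euler update y_{n+1} = y_n + h*f(t_n, y_n)."""
    t, y = t0, y0
    for _ in range(n):
        y = y + h * f(t, y)   # Eq. (189)
        t = t + h
    return y

# Test problem: y' = -y, y(0) = 1, whose exact solution is e^{-t}
approx = forward_euler(lambda t, y: -y, 0.0, 1.0, 0.01, 100)  # approximate y(1)
exact = math.exp(-1.0)
```

With the small step size $h=0.01$ the accumulated error at $t=1$ is on the order of $10^{-3}$; a larger step makes the error, and eventually the instability discussed above, grow.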
- Backward Euler's method:
This method uses the derivative $y'(t+h)$ at the end of the
interval $[t,\,t+h]$ to approximate the increment $\Delta y$:

$$\hat{y}(t+h)=y(t)+h\,y'(t+h)=y(t)+h\,f(t+h,\,y(t+h)) \tag{191}$$
Replacing $y'(t+h)$ in the expression by its Taylor expansion:

$$y'(t+h)=y'(t)+h\,y''(t)+\frac{h^2}{2}y'''(t)+O(h^3) \tag{192}$$

we get

$$\hat{y}(t+h)\approx y(t)+h\,y'(t+h)=y(t)+h\,y'(t)+h^2\,y''(t)+O(h^3) \tag{193}$$
Comparing this with the Taylor expansion in Eq. (190), we see that
the backward Euler method also has a second order error $O(h^2)$.
As the desired function value $y(t+h)$ appears on both sides of
Eq. (191), this method is implicit rather than explicit:
$y(t+h)$ needs to be found by solving the following equation for
$x=y(t+h)$, treated as the unknown:
$$F(x)=y(t)+h\,f(t+h,\,x)-x=0 \tag{194}$$

In general, the Newton-Raphson method can be used to solve this
equation by the following iteration from some initial guess $x_0$:

$$x_{n+1}=x_n-\frac{F(x_n)}{F'(x_n)},\;\;\;\;\;\;\textrm{where}\;\;\;
F'(x)=\frac{d}{dx}F(x)=h\,\frac{df(t+h,\,x)}{dx}-1 \tag{195}$$
Obviously this method is more computationally expensive; however,
as an implicit method it is always stable, unlike the forward
method, as shown below.
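A sketch of one backward-Euler step, solving Eq. (194) by the Newton-Raphson iteration of Eq. (195). The helper names and the choice of the forward-Euler value as the initial guess $x_0$ are illustrative assumptions:

```python
def backward_euler_step(f, dfdx, t, y, h, tol=1e-12, max_iter=50):
    """Solve F(x) = y + h*f(t+h, x) - x = 0 (Eq. 194) for x = y(t+h)
    by Newton-Raphson; dfdx(t, x) is the partial derivative df/dx."""
    x = y + h * f(t, y)                  # initial guess: forward Euler value
    for _ in range(max_iter):
        F = y + h * f(t + h, x) - x      # Eq. (194)
        Fp = h * dfdx(t + h, x) - 1.0    # F'(x), Eq. (195)
        x_new = x - F / Fp               # Newton-Raphson update
        if abs(x_new - x) < tol:
            break
        x = x_new
    return x_new

# For y' = -lam*y the implicit update has the closed form y/(1 + lam*h),
# so Newton's method should recover exactly that value.
lam, h = 1.0, 0.1
y1 = backward_euler_step(lambda t, y: -lam * y,
                         lambda t, y: -lam, 0.0, 1.0, h)
```

For this linear test problem Newton's method converges in one step, since $F(x)$ is itself linear in $x$.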
- Trapezoidal method (Heun's method):
This method uses the average of the derivatives at the beginning
and end points of the interval $[t,\,t+h]$ as an estimate of the
slope of the secant:

$$y'(t+ch)\approx\frac{y'(t)+y'(t+h)}{2} \tag{196}$$
so that the next function value can be approximated as

$$y(t+h)=y(t)+h\,y'(t+ch)\approx y(t)+h\,\frac{y'(t)+y'(t+h)}{2}
=y(t)+\frac{h}{2}\,[f(t,\,y(t))+f(t+h,\,y(t+h))] \tag{197}$$

which can also be obtained by applying the trapezoidal rule of the
Newton-Cotes quadrature to approximate the integral in the iteration:

$$y(t+h)=y(t)+\int_t^{t+h}f(\tau,\,y(\tau))\,d\tau \tag{198}$$

This can be expressed iteratively:

$$y_{n+1}=y_n+\frac{h}{2}\,[f(t_n,\,y_n)+f(t_n+h,\,y_n+hf_n)]
=y_n+h\left(\frac{1}{2}k_1+\frac{1}{2}k_2\right) \tag{199}$$
where

$$\left\{\begin{array}{l}k_1=f(t_n,\,y_n)=f_n \\
k_2=f(t_n+h,\,y_n+hk_1)\end{array}\right. \tag{200}$$
To find the order of the truncation error, we replace $y'(t+h)$ in
the equation by its Taylor expansion in Eq. (192) and rewrite the
equation above as

$$\hat{y}(t+h)=y(t)+\frac{h}{2}\,[y'(t)+y'(t+h)]
=y(t)+h\,y'(t)+\frac{h^2}{2}y''(t)+O(h^3) \tag{201}$$

Comparing this with the Taylor expansion in Eq. (190), we see that
the trapezoidal method has a third order error $O(h^3)$, one order
higher than either the forward or backward method.
Like the backward method, this trapezoidal method is also implicit.
To find $y(t+h)$, we need to solve the following equation for
$x=y(t+h)$:

$$F(x)=y(t)+\frac{h}{2}f(t,\,y(t))+\frac{h}{2}f(t+h,\,x)-x=0 \tag{202}$$
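The explicit predictor-corrector form of Eqs. (199)-(200), where the unknown endpoint value is first predicted by a forward-Euler step, can be sketched as follows; the test problem is an illustrative choice:

```python
import math

def heun_step(f, t, y, h):
    """One step of Heun's (explicit trapezoidal) method, Eqs. (199)-(200)."""
    k1 = f(t, y)                # slope at the left endpoint
    k2 = f(t + h, y + h * k1)   # slope at the predicted right endpoint
    return y + h * (0.5 * k1 + 0.5 * k2)

# y' = -y, y(0) = 1: ten steps of h = 0.1 approximate y(1) = e^{-1}
y, h = 1.0, 0.1
for n in range(10):
    y = heun_step(lambda t, u: -u, n * h, y, h)
```

The predictor replaces the unknown $y_{n+1}$ inside $f$, which avoids the implicit solve of Eq. (202) at the cost of a slightly different (but still third order accurate per step) update.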
In summary, here is how the three methods find the increment
$\Delta y=y(t+h)-y(t)$ of the function:

$$\Delta y=h\,y'(t+ch)\approx h\,\left\{\begin{array}{ll}
y'(t) & O(h^2) \\ y'(t+h) & O(h^2) \\
(y'(t)+y'(t+h))/2 & O(h^3)
\end{array}\right. \tag{203}$$

which can be implemented iteratively from the known initial
condition $y_0=y(t_0)$:

$$y_{n+1}=y_n+h\,\left\{\begin{array}{l}
y'_n \\ y'_{n+1} \\ (y'_n+y'_{n+1})/2
\end{array}\right.
=y_n+h\,\left\{\begin{array}{l}
f(t_n,y_n) \\ f(t_{n+1},y_{n+1}) \\
(f(t_n,y_n)+f(t_{n+1},y_{n+1}))/2
\end{array}\right. \tag{204}$$
Example: Consider a simple first order constant coefficient DE:

$$y'(t)=f(t,\,y(t))=-\lambda y(t) \tag{205}$$

with $\lambda>0$ and initial condition $y(0)=y_0$. The closed form
solution of this equation is known to be $y(t)=y_0\,e^{-\lambda t}$,
which decays exponentially to zero as $t\rightarrow\infty$.
This DE can be solved numerically by each of the three methods.
- The forward Euler's method:

$$y_{n+1}=y_n+h\,f(t_n,y_n)=y_n-\lambda h\,y_n=(1-\lambda h)\,y_n \tag{206}$$

The iteration will converge to the solution only if
$\vert 1-\lambda h\vert<1$, i.e., $0<\lambda h<2$, or
$h<2/\lambda$. If the step size is greater than $2/\lambda$, the
iteration will diverge.
- The backward Euler's method:

$$y_{n+1}=y_n+h\,f(t_{n+1},y_{n+1})=y_n-\lambda h\,y_{n+1} \tag{207}$$

Solving the equation for $y_{n+1}$ we get

$$y_{n+1}=\frac{y_n}{1+\lambda h} \tag{208}$$
- The trapezoidal method:

$$y_{n+1}=y_n+h\,\frac{f(t_n,y_n)+f(t_{n+1},y_{n+1})}{2}
=y_n-\lambda h\,\frac{y_n+y_{n+1}}{2} \tag{209}$$

Solving the equation for $y_{n+1}$ we get

$$y_{n+1}=\frac{2-\lambda h}{2+\lambda h}\,y_n \tag{210}$$
The results of all three methods are plotted together with the
ground truth solution $y(t)=y_0\,e^{-\lambda t}$, with each plot
corresponding to a different step size $h$.
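Under the illustrative choice $\lambda=1$, $y_0=1$, $h=0.5$ (not specified in the text), the per-step factors derived in Eqs. (206), (208) and (210) can be compared directly against the true solution:

```python
import math

# y' = -lam*y, y(0) = 1; per-step factors from Eqs. (206), (208), (210)
lam, h, steps = 1.0, 0.5, 10
exact = math.exp(-lam * h * steps)                    # true y at t = 5

fwd  = (1.0 - lam * h) ** steps                       # forward Euler
bwd  = (1.0 / (1.0 + lam * h)) ** steps               # backward Euler
trap = ((2.0 - lam * h) / (2.0 + lam * h)) ** steps   # trapezoidal
```

Running this shows the forward iterate below the true value, the backward iterate above it, and the trapezoidal iterate closest, consistent with the observations that follow.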
We make the following observations:
- The solution of the forward method is always below the true
solution, while that of the backward method is always above. This
is because the negative slope of the solution, whose magnitude is
continuously reduced, is overestimated by $y'(t)$ used in the
forward method, but underestimated by $y'(t+h)$ used by the
backward method. As the average of the two, the result of the
trapezoidal method coincides with the true solution more closely.
- The backward method always produces a stable approximation of
the function $y(t)$, while the performance of the forward method
is very sensitive to the step size $h$. In particular, when
$h>2/\lambda$, the iteration becomes divergent.
In general there are two different types of approaches to estimate
some value in the future, such as the next function value $y_{n+1}$:
- Explicit methods are based on the present and past information,
such as $y_n$ and $y_{n-1}$. Such a method may be divergent and
therefore unstable (the iteration grows out of bound);
- Implicit methods are based on some unknown information in the
future, such as $y_{n+1}$, as well as the present and past
information such as $y_n$ and $y_{n-1}$. Such a method is in
general stable.
We further consider the midpoint method, which uses the midpoint
$t+h/2$ between the two end points $t$ and $t+h$. Consider the
Taylor series expansion of $y(t+h)$:

$$y(t+h)=y(t)+y'(t)\,h+\frac{h^2}{2!}y''(t)+\frac{h^3}{3!}y'''(t)+O(h^4) \tag{211}$$
which can be rewritten as the following:

$$\frac{y(t+h)-y(t)}{h}=y'(t)+\frac{h}{2}y''(t)+\frac{h^2}{6}y'''(t)+O(h^3) \tag{212}$$

On the other hand, the Taylor series for $y'(t+h/2)$ is:

$$y'\left(t+\frac{h}{2}\right)=y'(t)+\frac{h}{2}y''(t)+\frac{h^2}{8}y'''(t)+O(h^3) \tag{213}$$

The difference between these two equations is:

$$\frac{y(t+h)-y(t)}{h}-y'\left(t+\frac{h}{2}\right)=O(h^2) \tag{214}$$
Solving for $y(t+h)$ we get

$$y(t+h)=y(t)+h\,y'\left(t+\frac{h}{2}\right)+h\,O(h^2)
=y(t)+h\,f\left(t+\frac{h}{2},\,y\left(t+\frac{h}{2}\right)\right)+O(h^3) \tag{215}$$

The third order error term $O(h^3)$ is one order higher than that
of either Euler's forward or backward method. This equation can be
expressed iteratively:

$$y_{n+1}=y_n+h\,f\left(t_n+\frac{h}{2},\,y\left(t_n+\frac{h}{2}\right)\right) \tag{216}$$
The second argument $y(t_n+h/2)$ of the function on the right-hand
side can be approximated by the first two terms of its Taylor
expansion:

$$y\left(t_n+\frac{h}{2}\right)\approx y(t_n)+\frac{h}{2}y'(t_n)
=y_n+\frac{h}{2}f(t_n,\,y_n)=y_n+\frac{h}{2}f_n \tag{217}$$

Substituting this back into the iteration above we get:

$$y_{n+1}=y_n+h\,f\left(t_n+\frac{h}{2},\,y_n+\frac{h}{2}f_n\right)
=y_n+h\,f\left(t_n+\frac{h}{2},\,y_n+\frac{h}{2}k_1\right)
=y_n+h\,k_2 \tag{218}$$

where

$$\left\{\begin{array}{l}
k_1=f(t_n,\,y_n)=f_n \\
k_2=f\left(t_n+\frac{h}{2},\,y_n+\frac{h}{2}k_1\right)
\end{array}\right. \tag{219}$$
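The two-stage update of Eqs. (218)-(219) can be sketched as follows; the test problem $y'=-y$ is an illustrative choice:

```python
import math

def midpoint_step(f, t, y, h):
    """One step of the explicit midpoint method, Eqs. (218)-(219)."""
    k1 = f(t, y)                          # slope at the left endpoint
    k2 = f(t + h / 2, y + h / 2 * k1)     # slope at the estimated midpoint
    return y + h * k2                     # Eq. (218)

# y' = -y, y(0) = 1: ten steps of h = 0.1 approximate y(1) = e^{-1}
y, h = 1.0, 0.1
for n in range(10):
    y = midpoint_step(lambda t, u: -u, n * h, y, h)
```

Like Heun's method, this is explicit: the midpoint value is estimated by a half forward-Euler step rather than solved for implicitly.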