The Jacobi iteration is an iterative method for solving a linear equation
system $\mathbf{A}\mathbf{x}=\mathbf{b}$, provided $\mathbf{A}$ satisfies a certain
condition. Specifically, we first decompose $\mathbf{A}$ in the following way:

$$\mathbf{A} = \mathbf{D} + \mathbf{R} \tag{206}$$
where $\mathbf{D}$ is a diagonal matrix containing all diagonal elements of $\mathbf{A}$ with
all off-diagonal elements being zero, and $\mathbf{R}$ contains all off-diagonal
elements of $\mathbf{A}$ with all diagonal elements being zero. Now the given system
can be expressed as $(\mathbf{D}+\mathbf{R})\mathbf{x}=\mathbf{b}$, i.e.,

$$\mathbf{D}\mathbf{x} = \mathbf{b} - \mathbf{R}\mathbf{x} \tag{207}$$
Premultiplying both sides by $\mathbf{D}^{-1}$, we get

$$\mathbf{x} = \mathbf{D}^{-1}(\mathbf{b} - \mathbf{R}\mathbf{x}) = \mathbf{D}^{-1}\mathbf{b} - \mathbf{D}^{-1}\mathbf{R}\mathbf{x} = \mathbf{c} + \mathbf{T}\mathbf{x} \tag{208}$$

where we have defined

$$\mathbf{c} = \mathbf{D}^{-1}\mathbf{b}, \qquad \mathbf{T} = -\mathbf{D}^{-1}\mathbf{R} \tag{209}$$
This equation can be turned into an iteration:

$$\mathbf{x}^{(n+1)} = \mathbf{c} + \mathbf{T}\mathbf{x}^{(n)} = \mathbf{D}^{-1}\left(\mathbf{b} - \mathbf{R}\mathbf{x}^{(n)}\right) \tag{210}$$

In component form we have

$$a_{ii}\,x_i^{(n+1)} = b_i - \sum_{j\ne i} a_{ij}\,x_j^{(n)}, \qquad i=1,\dots,N \tag{211}$$

or

$$x_i^{(n+1)} = \frac{1}{a_{ii}}\left(b_i - \sum_{j\ne i} a_{ij}\,x_j^{(n)}\right), \qquad i=1,\dots,N \tag{212}$$
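To make the component-wise update of Eq. (212) concrete, here is a minimal
Python/NumPy sketch; the function name `jacobi`, the default tolerance, and the
stopping rule on the change between sweeps are illustrative choices, not part
of the original notes.

```python
import numpy as np

def jacobi(A, b, x0=None, tol=1e-10, max_iter=500):
    """Solve A x = b by the Jacobi iteration of Eq. (212)."""
    A = np.asarray(A, dtype=float)
    b = np.asarray(b, dtype=float)
    N = len(b)
    x = np.zeros(N) if x0 is None else np.asarray(x0, dtype=float)
    for n in range(max_iter):
        x_new = np.empty(N)
        for i in range(N):
            # sum of a_ij * x_j over j != i, using only the OLD iterate
            s = A[i, :] @ x - A[i, i] * x[i]
            x_new[i] = (b[i] - s) / A[i, i]
        if np.linalg.norm(x_new - x) < tol:   # stop when the sweep changes little
            return x_new, n + 1
        x = x_new
    return x, max_iter

# A small strictly diagonally dominant test system (made up for illustration)
A = [[4.0, 1.0, 1.0], [1.0, 5.0, 2.0], [1.0, 2.0, 6.0]]
b = [6.0, 8.0, 9.0]
x, iters = jacobi(A, b)
print(x, iters)
```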
We now show that this iteration will converge to the solution of the equation
system if $\mathbf{A}$ satisfies a certain condition. We first write $\mathbf{x}^{(n)}$ as the
sum of the true solution $\mathbf{x}^*$ and an error term $\mathbf{e}^{(n)}$:

$$\mathbf{x}^{(n)} = \mathbf{x}^* + \mathbf{e}^{(n)} \tag{213}$$

then we have

$$\mathbf{x}^{(n+1)} = \mathbf{x}^* + \mathbf{e}^{(n+1)} = \mathbf{c} + \mathbf{T}\left(\mathbf{x}^* + \mathbf{e}^{(n)}\right)
= \left(\mathbf{c} + \mathbf{T}\mathbf{x}^*\right) + \mathbf{T}\mathbf{e}^{(n)} = \mathbf{x}^* + \mathbf{T}\mathbf{e}^{(n)} \tag{214}$$
where we have used the identity $\mathbf{x}^* = \mathbf{c} + \mathbf{T}\mathbf{x}^*$ given above, and
$\mathbf{e}^{(n+1)} = \mathbf{T}\mathbf{e}^{(n)}$ is the error after the (n+1)th iteration, which can be
further written as

$$\mathbf{e}^{(n+1)} = \mathbf{T}\mathbf{e}^{(n)} = \mathbf{T}^2\mathbf{e}^{(n-1)} = \cdots = \mathbf{T}^{n+1}\mathbf{e}^{(0)} \tag{215}$$
Obviously, if $\lim_{n\to\infty}\mathbf{T}^n = \mathbf{0}$, the error converges to zero:

$$\lim_{n\to\infty}\mathbf{e}^{(n)} = \lim_{n\to\infty}\mathbf{T}^n\mathbf{e}^{(0)} = \mathbf{0} \tag{216}$$
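To see the error recursion in action, the short sketch below (using a made-up
3x3 system, not one from the notes) builds $\mathbf{T}=-\mathbf{D}^{-1}\mathbf{R}$ and checks
numerically that the Jacobi error indeed satisfies $\mathbf{e}^{(n)} = \mathbf{T}^n\mathbf{e}^{(0)}$.

```python
import numpy as np

# Made-up diagonally dominant test system (for illustration only)
A = np.array([[4.0, 1.0, 1.0],
              [1.0, 5.0, 2.0],
              [1.0, 2.0, 6.0]])
b = np.array([6.0, 8.0, 9.0])

D = np.diag(np.diag(A))         # diagonal part of A
R = A - D                       # off-diagonal part of A
T = -np.linalg.solve(D, R)      # T = -D^{-1} R
c = np.linalg.solve(D, b)       # c = D^{-1} b

x_true = np.linalg.solve(A, b)  # reference solution
x = np.zeros(3)                 # initial guess, so e^(0) = x^(0) - x_true
e0 = x - x_true

for n in range(1, 6):
    x = c + T @ x               # one Jacobi step, Eq. (210)
    direct = np.linalg.norm(x - x_true)
    predicted = np.linalg.norm(np.linalg.matrix_power(T, n) @ e0)
    print(n, direct, predicted)  # the two error norms agree and shrink toward zero
```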
Now we further consider the condition under which the error does indeed
converge to zero. Based on the eigen-equation of $\mathbf{T}$:

$$\mathbf{T}\boldsymbol{\phi}_i = \lambda_i\boldsymbol{\phi}_i, \qquad i=1,\dots,N \tag{217}$$
we further have

$$\mathbf{T}^n\boldsymbol{\phi}_i = \lambda_i^n\boldsymbol{\phi}_i \tag{218}$$
As in general the eigenvectors are independent, they form a basis (not
necessarily orthogonal) of the N-D space, by which any vector, such as
$\mathbf{e}^{(0)}$, can be written as a linear combination of these vectors:

$$\mathbf{e}^{(0)} = \sum_{i=1}^N \alpha_i\,\boldsymbol{\phi}_i \tag{219}$$

Now we have

$$\mathbf{e}^{(n)} = \mathbf{T}^n\mathbf{e}^{(0)} = \sum_{i=1}^N \alpha_i\,\mathbf{T}^n\boldsymbol{\phi}_i
= \sum_{i=1}^N \alpha_i\,\lambda_i^n\boldsymbol{\phi}_i \tag{220}$$
If the spectral radius $\rho(\mathbf{T}) = \max_i|\lambda_i|$ (the maximum absolute value of
the eigenvalues of $\mathbf{T}$) is smaller than 1, then

$$\lim_{n\to\infty}\lambda_i^n = 0, \qquad i=1,\dots,N \tag{221}$$

and the error converges to zero:

$$\lim_{n\to\infty}\mathbf{e}^{(n)} = \lim_{n\to\infty}\sum_{i=1}^N \alpha_i\,\lambda_i^n\boldsymbol{\phi}_i = \mathbf{0} \tag{222}$$
A sufficient (but not necessary) condition for $\rho(\mathbf{T}) < 1$ is that $\mathbf{A}$ is
strictly diagonally dominant, i.e.,

$$|a_{ii}| > \sum_{j\ne i}|a_{ij}|, \qquad i=1,\dots,N \tag{223}$$

The iteration may converge even if this condition is not satisfied.
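In practice one can test condition (223), or compute $\rho(\mathbf{T})$ directly for a
small system, with a few lines of NumPy; the helper names below are
illustrative, not from the notes.

```python
import numpy as np

def is_strictly_diagonally_dominant(A):
    """Check |a_ii| > sum_{j != i} |a_ij| for every row i, as in Eq. (223)."""
    A = np.abs(np.asarray(A, dtype=float))
    off_diag_sums = A.sum(axis=1) - np.diag(A)
    return bool(np.all(np.diag(A) > off_diag_sums))

def jacobi_spectral_radius(A):
    """Spectral radius of the Jacobi iteration matrix T = -D^{-1} R."""
    A = np.asarray(A, dtype=float)
    D = np.diag(np.diag(A))
    T = -np.linalg.solve(D, A - D)
    return max(abs(np.linalg.eigvals(T)))

A = [[4.0, 1.0, 1.0], [1.0, 5.0, 2.0], [1.0, 2.0, 6.0]]
print(is_strictly_diagonally_dominant(A))   # True
print(jacobi_spectral_radius(A))            # well below 1, so Jacobi converges here
```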
The coefficient matrix $\mathbf{A}$ can be alternatively decomposed into
$\mathbf{A} = \mathbf{D} + \mathbf{L} + \mathbf{U}$, where
- $\mathbf{D}$ is a diagonal matrix containing all diagonal elements of $\mathbf{A}$, same as before;
- $\mathbf{L}$ is a lower triangular matrix containing all components of $\mathbf{A}$ below the diagonal;
- $\mathbf{U}$ is an upper triangular matrix containing all components of $\mathbf{A}$ above the diagonal.

$$\mathbf{A} =
\underbrace{\begin{bmatrix} a_{11} & & & \\ & a_{22} & & \\ & & \ddots & \\ & & & a_{NN} \end{bmatrix}}_{\mathbf{D}}
+ \underbrace{\begin{bmatrix} 0 & & & \\ a_{21} & 0 & & \\ \vdots & \ddots & \ddots & \\ a_{N1} & \cdots & a_{N,N-1} & 0 \end{bmatrix}}_{\mathbf{L}}
+ \underbrace{\begin{bmatrix} 0 & a_{12} & \cdots & a_{1N} \\ & 0 & \ddots & \vdots \\ & & \ddots & a_{N-1,N} \\ & & & 0 \end{bmatrix}}_{\mathbf{U}} \tag{224}$$
Now the equation system can be written as $(\mathbf{D}+\mathbf{L}+\mathbf{U})\mathbf{x} = \mathbf{b}$, i.e.,

$$\mathbf{D}\mathbf{x} = \mathbf{b} - (\mathbf{L}+\mathbf{U})\mathbf{x} \tag{225}$$

Premultiplying $\mathbf{D}^{-1}$ on both sides, we get

$$\mathbf{x} = \mathbf{D}^{-1}\left[\mathbf{b} - (\mathbf{L}+\mathbf{U})\mathbf{x}\right] \tag{226}$$
where $\mathbf{L}+\mathbf{U} = \mathbf{R}$ contains all the off-diagonal elements of $\mathbf{A}$, same as
defined before. This equation is then converted into an iteration

$$\mathbf{x}^{(n+1)} = \mathbf{D}^{-1}\left[\mathbf{b} - (\mathbf{L}+\mathbf{U})\mathbf{x}^{(n)}\right] \tag{227}$$

or in component form:

$$x_i^{(n+1)} = \frac{1}{a_{ii}}\left(b_i - \sum_{j<i} a_{ij}\,x_j^{(n)} - \sum_{j>i} a_{ij}\,x_j^{(n)}\right)
= \frac{1}{a_{ii}}\left(b_i - \sum_{j\ne i} a_{ij}\,x_j^{(n)}\right) \tag{228}$$
Note that the first expression, in which the summation is split into the terms
below and above the diagonal, leads to the Gauss-Seidel iteration, while the
second is simply the Jacobi iteration. However, there is an essential
difference between the two methods. In the Jacobi iteration, all unknowns
$x_i^{(n+1)}$ ($i=1,\dots,N$) are updated simultaneously (in parallel) from the
previous iterate $x_j^{(n)}$. But in the Gauss-Seidel iteration, the terms in the
summation are treated differently for $j<i$ and $j>i$. Specifically, when
computing $x_i^{(n+1)}$ for $i>1$, all $x_j^{(n+1)}$ ($j<i$) in the summation have
already been updated; we can therefore use the most updated $x_j^{(n+1)}$
(instead of $x_j^{(n)}$) in the iteration:

$$x_i^{(n+1)} = \frac{1}{a_{ii}}\left(b_i - \sum_{j<i} a_{ij}\,x_j^{(n+1)} - \sum_{j>i} a_{ij}\,x_j^{(n)}\right), \qquad i=1,\dots,N \tag{229}$$
It can be shown that if the matrix $\mathbf{A}$ is strictly diagonally dominant, then
the Gauss-Seidel method converges.
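A minimal in-place implementation of the update in Eq. (229) might look like
the following sketch; the function name `gauss_seidel` and the stopping rule on
the change between sweeps are my own choices.

```python
import numpy as np

def gauss_seidel(A, b, x0=None, tol=1e-10, max_iter=500):
    """Solve A x = b with the Gauss-Seidel update of Eq. (229)."""
    A = np.asarray(A, dtype=float)
    b = np.asarray(b, dtype=float)
    N = len(b)
    x = np.zeros(N) if x0 is None else np.array(x0, dtype=float)
    for n in range(max_iter):
        x_old = x.copy()
        for i in range(N):
            s_new = A[i, :i] @ x[:i]       # components j < i: already updated this sweep
            s_old = A[i, i+1:] @ x[i+1:]   # components j > i: still from the previous sweep
            x[i] = (b[i] - s_new - s_old) / A[i, i]
        if np.linalg.norm(x - x_old) < tol:
            return x, n + 1
    return x, max_iter
```

Because the vector `x` is overwritten in place, the most recently updated
components are used automatically, which is exactly the difference from the
Jacobi sweep sketched earlier.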
The successive over-relaxation (SOR) method can be used to speed up the
convergence of the iteration. Multiplying both sides of the equation
$\mathbf{D}\mathbf{x} = \mathbf{b} - (\mathbf{L}+\mathbf{U})\mathbf{x}$ by a parameter $\omega$, we get

$$\omega\,\mathbf{D}\mathbf{x} = \omega\left[\mathbf{b} - (\mathbf{L}+\mathbf{U})\mathbf{x}\right] \tag{230}$$

which can be written as

$$\mathbf{D}\mathbf{x} = (1-\omega)\,\mathbf{D}\mathbf{x} + \omega\left[\mathbf{b} - (\mathbf{L}+\mathbf{U})\mathbf{x}\right] \tag{231}$$

Premultiplying $\mathbf{D}^{-1}$, we get

$$\mathbf{x} = (1-\omega)\,\mathbf{x} + \omega\,\mathbf{D}^{-1}\left[\mathbf{b} - (\mathbf{L}+\mathbf{U})\mathbf{x}\right] \tag{232}$$

which can be converted into an iteration:

$$x_i^{(n+1)} = (1-\omega)\,x_i^{(n)} + \frac{\omega}{a_{ii}}\left(b_i - \sum_{j<i} a_{ij}\,x_j^{(n+1)} - \sum_{j>i} a_{ij}\,x_j^{(n)}\right) \tag{233}$$
The right-hand side can be considered as a weighted average of two terms: the
estimate $x_i^{(n)}$ from the previous iteration in the first term, and the updated
(Gauss-Seidel) estimate in the second term. By adjusting the parameter $\omega$, we
can control the rate of convergence. When $\omega = 1$, this becomes the same as the
Gauss-Seidel iteration. An $\omega > 1$ makes the second term more dominant and
speeds up a slowly converging iteration, while an $\omega < 1$ makes the first term
more dominant, thereby helping establish convergence of an otherwise diverging
(e.g., overshooting) iteration.
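The SOR update of Eq. (233) changes only one line of the Gauss-Seidel sweep.
The sketch below follows the same illustrative conventions as the earlier
functions (made-up test system, stopping on the change between sweeps):

```python
import numpy as np

def sor(A, b, omega=1.0, x0=None, tol=1e-10, max_iter=500):
    """Solve A x = b with the SOR update of Eq. (233); omega = 1 gives Gauss-Seidel."""
    A = np.asarray(A, dtype=float)
    b = np.asarray(b, dtype=float)
    N = len(b)
    x = np.zeros(N) if x0 is None else np.array(x0, dtype=float)
    for n in range(max_iter):
        x_old = x.copy()
        for i in range(N):
            gs = (b[i] - A[i, :i] @ x[:i] - A[i, i+1:] @ x[i+1:]) / A[i, i]
            x[i] = (1.0 - omega) * x[i] + omega * gs   # weighted average of old and new estimates
        if np.linalg.norm(x - x_old) < tol:
            return x, n + 1
    return x, max_iter

# Compare iteration counts for a few relaxation factors on a made-up system
A = [[4.0, 1.0, 1.0], [1.0, 5.0, 2.0], [1.0, 2.0, 6.0]]
b = [6.0, 8.0, 9.0]
for omega in (0.8, 1.0, 1.2):
    x, iters = sor(A, b, omega)
    print(omega, iters)
```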
Homework 4:
- Write a function to implement the QR decomposition by Householder
transformation.
- Use the QR function developed above to solve the following system:
(234)
- Write a function to implement the Jacobi iteration method, and then solve
the same system above, using $\mathbf{x}^{(0)}$ as the initial guess. Print the error
after each iteration. Terminate the iteration when the error falls below a
small tolerance.
- Write a function to implement the Gauss-Seidel iteration method, and then
solve the same system above, using $\mathbf{x}^{(0)}$ as the initial guess. Print the
error after each iteration. Terminate the iteration when the error falls below
a small tolerance. Compare the number of iterations required by the Jacobi and
Gauss-Seidel methods.
- Implement SOR in the Gauss-Seidel method and experiment with different values
of $\omega$ to see its effects, and identify a value that minimizes the number of
iterations.