The Power Methods for Eigenvalue Problems

In some applications, only the eigenvalue $\lambda_{max}$ or $\lambda_{min}$ of maximum or minimum absolute value (if real) or modulus (if complex) is of interest (e.g., projecting data points in a high-dimensional space onto a single dimension in the principal component direction). In such cases, we may be able to use the power iteration methods to find $\lambda_{max}$ and $\lambda_{min}$ together with their corresponding eigenvectors, based on the assumption that ${\bf A}$ is a full-rank square matrix. In the following, we assume the eigenvalues are arranged in descending order of their absolute values or moduli:

$\displaystyle \vert\lambda_1\vert=\vert\lambda_{max}\vert>\vert\lambda_2\vert\ge\cdots\ge\vert\lambda_{N-1}\vert
>\vert\lambda_N\vert=\vert\lambda_{min}\vert$ (39)
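The two basic iterations can be sketched as follows (an illustrative Python/NumPy version, not the Matlab code given later in this section; the function names and the diagonal test matrix are chosen here only for demonstration):

```python
import numpy as np

np.random.seed(0)  # fixed seed so the run is reproducible

def power_iteration(A, tol=1e-10, max_iter=1000):
    """Estimate the eigenvalue of A with the largest absolute value."""
    u = np.random.rand(A.shape[0])
    u /= np.linalg.norm(u)
    lam = 0.0
    for _ in range(max_iter):
        w = A @ u                      # apply A
        u = w / np.linalg.norm(w)      # renormalize
        lam_new = u @ A @ u            # Rayleigh-quotient estimate
        if abs(lam_new - lam) < tol:
            break
        lam = lam_new
    return lam_new, u

def inverse_iteration(A, tol=1e-10, max_iter=1000):
    """Estimate the eigenvalue of A with the smallest absolute value,
    i.e., power iteration applied to A^(-1)."""
    u = np.random.rand(A.shape[0])
    u /= np.linalg.norm(u)
    lam = 0.0
    for _ in range(max_iter):
        w = np.linalg.solve(A, u)      # apply A^(-1) without forming it
        u = w / np.linalg.norm(w)
        lam_new = u @ A @ u
        if abs(lam_new - lam) < tol:
            break
        lam = lam_new
    return lam_new, u

A = np.diag([5.0, -2.0, 1.0, 0.5])     # eigenvalues 5, -2, 1, 0.5
lam_max, _ = power_iteration(A)
lam_min, _ = inverse_iteration(A)
print(round(lam_max, 6), round(lam_min, 6))  # 5.0 0.5
```

Note that inverse iteration never forms ${\bf A}^{-1}$ explicitly; it solves the linear system ${\bf A}{\bf w}={\bf u}$ at each step instead.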

We make a few observations about the power method.

For example, if the eigenvalues of a $4\times 4$ real matrix are $\{-9,\;1,\;3\pm j4\}$, then the power and inverse iteration methods above will find $\lambda_{max}=-9$ and $\lambda_{min}=1$, respectively. But if the eigenvalues are $\{-2,\;1,\;3\pm j4\}$, then the power method for $\lambda_{max}$ will not converge, because the modulus $\vert 3\pm j4\vert=5$ of the complex-conjugate pair is the maximum among all the eigenvalues. Similarly, if the eigenvalues are $\{-9,\;6,\;3\pm j4\}$, then the inverse iteration method for $\lambda_{min}$ will not converge, because $\vert 3\pm j4\vert=5$ is the minimum among all the eigenvalues.
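This failure mode is easy to reproduce. The sketch below (illustrative Python/NumPy, with a matrix built here to have eigenvalues $\{-2,\;1,\;3\pm j4\}$) runs power iteration on such a matrix: the iterate keeps rotating inside the two-dimensional invariant subspace of the complex pair and never settles on a real eigenvector.

```python
import numpy as np

np.random.seed(0)

# Real 4x4 matrix with eigenvalues {3+4j, 3-4j, -2, 1}: the 2x2 block
# [[3, 4], [-4, 3]] contributes the complex-conjugate pair 3 +/- j4,
# whose modulus |3 +/- j4| = 5 dominates all the other eigenvalues.
A = np.zeros((4, 4))
A[:2, :2] = [[3.0, 4.0], [-4.0, 3.0]]
A[2, 2] = -2.0
A[3, 3] = 1.0

u = np.random.rand(4)
u /= np.linalg.norm(u)
for _ in range(200):
    w = A @ u
    u_new = w / np.linalg.norm(w)
    diff = np.linalg.norm(u_new - u)   # change of the iterate per step
    u = u_new

# After 200 iterations the iterate is still changing substantially from
# one step to the next: it rotates within the invariant plane of the
# complex pair instead of converging.
print(diff > 0.5)   # True
```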

Example:

The eigenvalues of the following matrix

$\displaystyle {\bf A}=\left[\begin{array}{rrrrr}
3.7420 & -0.4750 & 2.5242 & \cdots & -1.1276\\
\vdots & \vdots & \vdots & \ddots & \vdots\\
1.6295 & -0.6236 & 2.9369 & -2.0581 & -0.3322
\end{array}\right]
$

are $3,\;3,\;3,\;2,\;1$, and the corresponding eigenvectors are:

$\displaystyle {\bf V}=[{\bf v}_1,\cdots,{\bf v}_5]
=\left[\begin{array}{rrrrr}
\vdots & \vdots & \vdots & \vdots & \vdots\\
\cdots & \cdots & \cdots & \cdots & -0.5262\\
-0.5063 & 0.6923 & 0.5607 & -0.5165 & 0.3986
\end{array}\right]
$

The maximum eigenvalue found by the power method is $\lambda_1=3$, and the corresponding eigenvector ${\bf u}_1$ is a linear combination of the three eigenvectors corresponding to this repeated eigenvalue:

$\displaystyle {\bf u}_1
=\left[\begin{array}{r}-0.4898\\ 0.1224\\ -0.5665\\ \vdots\\ -0.5858\end{array}\right]
=0.5923{\bf v}_1-0.4796{\bf v}_2-0.0275{\bf v}_3
$
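Since the entries of ${\bf A}$ and ${\bf V}$ above are only partially shown, the same behavior can be reproduced with a small analogous example (illustrative Python/NumPy; the $3\times 3$ matrix below is constructed here to have a repeated dominant eigenvalue $3$). Power iteration converges to some vector in the eigenspace of the repeated eigenvalue; which linear combination it finds depends on the initial vector.

```python
import numpy as np

np.random.seed(1)

# Construct a matrix with eigenvalues {3, 3, 1}: the dominant
# eigenvalue 3 is repeated, with a two-dimensional eigenspace.
D = np.diag([3.0, 3.0, 1.0])
V = np.random.rand(3, 3)            # random (almost surely invertible) basis
A = V @ D @ np.linalg.inv(V)

u = np.random.rand(3)
u /= np.linalg.norm(u)
for _ in range(100):
    w = A @ u
    u = w / np.linalg.norm(w)

lam = u @ A @ u                     # Rayleigh-quotient estimate
residual = np.linalg.norm(A @ u - 3 * u)
print(round(lam, 6), residual < 1e-8)   # 3.0 True
```

The small residual confirms that the converged ${\bf u}$ satisfies ${\bf A}{\bf u}=3{\bf u}$, i.e., it lies in the eigenspace of the repeated eigenvalue even though it is not any single column of ${\bf V}$.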

The Matlab code for the power method is listed below:

% The power method to find the largest and smallest eigenvalues of a given
% matrix A with their eigenvectors, and, given an estimate mu of any of the
% eigenvalues, the eigenvalue closest to mu and its eigenvector
% (shifted inverse iteration).

clear all
rng(2)                      % seed the random number generator (after clear all)
n=4;
a=3; b=4;

% construct a test matrix with eigenvalues a+jb, a-jb, 8, -2
A=eye(n);
A(1,1)=a; A(2,2)=a;
A(1,2)=b; A(2,1)=-b;
A(3,3)=8; A(4,4)=-2;
A
[v d]=eig(A)

V=rand(n);                  % random similarity transform preserves eigenvalues
A=V*A*inv(V);
A
[v d]=eig(A)
diag(d)'
abs(diag(d))
pause

u=rand(n,1);   u=u/norm(u); % random initial unit vector
u1=u;                       % iterate for the power method
u2=u;                       % iterate for inverse iteration
u3=u;                       % iterate for shifted inverse iteration
mu=2.9+j*4.1                % estimate of the eigenvalue 3+j4
done=0;
k=0;
s1=1; s2=1; s3=1;
while ~done && k<99
    k=k+1;
    v1=u1;  v2=u2;  v3=u3;                 % previous eigenvector estimates
    r1=s1;  r2=s2;  r3=s3;                 % previous eigenvalue estimates
    w=A*u1;  u1=w/norm(w);                 % power method: apply A
    ev1=norm(v1-u1);
    s1=u1'*A*u1;  es1=abs(s1-r1);          % Rayleigh-quotient estimate
    w=linsolve(A,u2);  u2=w/norm(w);       % inverse iteration: solve A*w=u2
    ev2=norm(v2-u2);
    s2=u2'*A*u2;  es2=abs(s2-r2);
    w=linsolve(A-mu*eye(n),u3);  u3=w/norm(w);  % shifted inverse iteration
    ev3=norm(v3-u3);
    s3=u3'*A*u3;  es3=abs(s3-r3);
    fprintf('%d  (%e %e %e),  (%e %e %e)\n',k,ev1,ev2,ev3,es1,es2,es3)
    if es1+es2+es3 < 1e-6                  % stop once all estimates settle
        done=1;
    end
end
k
u1';  u1'*A*u1   % largest eigenvalue and its eigenvector
u2';  u2'*A*u2   % smallest eigenvalue and its eigenvector
u3';  u3'*A*u3   % eigenvalue closest to mu and its eigenvector
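The third iterate u3 in the code above implements shifted inverse iteration: power iteration applied to $({\bf A}-\mu{\bf I})^{-1}$, which converges to the eigenvalue closest to the shift $\mu$. A standalone sketch (illustrative Python/NumPy, using the same eigenvalue layout and shift $\mu=2.9+j4.1$ as the Matlab code):

```python
import numpy as np

np.random.seed(0)

# Real matrix with eigenvalues {3+4j, 3-4j, -2, 1}: the 2x2 block
# [[3, 4], [-4, 3]] contributes the complex-conjugate pair.
A = np.zeros((4, 4))
A[:2, :2] = [[3.0, 4.0], [-4.0, 3.0]]
A[2, 2] = -2.0
A[3, 3] = 1.0

mu = 2.9 + 4.1j                     # estimate of the eigenvalue 3+j4
u = np.random.rand(4) + 1j * np.random.rand(4)
u /= np.linalg.norm(u)
shifted = A - mu * np.eye(4)
for _ in range(100):
    w = np.linalg.solve(shifted, u)  # apply (A - mu*I)^(-1) via a solve
    u = w / np.linalg.norm(w)

lam = u.conj() @ A @ u              # converges to the eigenvalue nearest mu
print(np.round(lam, 6))             # (3+4j)
```

Because the shift is complex, the iterate becomes a complex eigenvector, and the method recovers the complex eigenvalue $3+j4$ that plain power iteration cannot find.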