Each of a set of $M$ images of $R$ rows and $C$ columns, i.e., of $K = RC$ pixels, can be represented as a $K$-D vector $\mathbf{x}$ by concatenating its $C$ columns (or its $R$ rows), and the $M$ images can be represented by a $K \times M$ array with each column for one of the $M$ images:

$$\mathbf{X} = [\mathbf{x}_1, \dots, \mathbf{x}_M]$$
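For instance, this construction can be carried out in NumPy as in the following minimal sketch (the array names and sizes are hypothetical; `order="F"` gives the column-wise concatenation described above):

```python
import numpy as np

# M hypothetical images, each R x C, stacked as the columns of a K x M array
R, C, M = 32, 32, 100
images = np.random.default_rng(0).standard_normal((M, R, C))
X = np.stack([img.ravel(order="F") for img in images], axis=1)  # column-concatenated
print(X.shape)  # (1024, 100), i.e., (K, M) with K = R * C
```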
A KLT can be applied to either the column or row vectors of the data array $\mathbf{X}$, depending on whether the column or row vectors are treated as the realizations (samples) of a random vector.

If the $M$ columns are treated as $M$ samples of a $K$-D random vector (assumed to have zero mean), the corresponding $K \times K$ covariance matrix is $\mathbf{X}\mathbf{X}^T/M$, with eigenequation

$$\mathbf{X}\mathbf{X}^T \boldsymbol{\phi}_k = \lambda_k \boldsymbol{\phi}_k .$$

Pre-multiplying $\mathbf{X}^T$ on both sides of the eigenequation above we get

$$(\mathbf{X}^T\mathbf{X})\,(\mathbf{X}^T \boldsymbol{\phi}_k) = \lambda_k\,(\mathbf{X}^T \boldsymbol{\phi}_k) .$$

If instead the $K$ rows are treated as $K$ samples of an $M$-D random vector, the corresponding $M \times M$ covariance matrix is $\mathbf{X}^T\mathbf{X}/K$, with eigenequation

$$\mathbf{X}^T\mathbf{X}\, \boldsymbol{\psi}_k = \lambda_k \boldsymbol{\psi}_k .$$

Pre-multiplying $\mathbf{X}$ on both sides we get

$$(\mathbf{X}\mathbf{X}^T)\,(\mathbf{X} \boldsymbol{\psi}_k) = \lambda_k\,(\mathbf{X} \boldsymbol{\psi}_k) .$$
From these two cases we see that the eigenvalue problems associated with the two different covariance matrices $\mathbf{X}\mathbf{X}^T$ and $\mathbf{X}^T\mathbf{X}$ are equivalent, in the sense that they have the same non-zero eigenvalues, and their eigenvectors are related by $\boldsymbol{\phi}_k = \mathbf{X}\boldsymbol{\psi}_k$ or $\boldsymbol{\psi}_k = \mathbf{X}^T\boldsymbol{\phi}_k$ (up to normalization). We can therefore solve the eigenvalue problem of either of the two covariance matrices, depending on which has the lower dimension.
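This equivalence is easy to verify numerically. The following sketch (hypothetical names; it drops the $1/M$ scaling, which only rescales the eigenvalues) checks that the small $M \times M$ problem recovers the non-zero eigenvalues of the large $K \times K$ problem, and that $\mathbf{X}\boldsymbol{\psi}_k$, once normalized, is an eigenvector of $\mathbf{X}\mathbf{X}^T$:

```python
import numpy as np

rng = np.random.default_rng(0)
K, M = 400, 10                       # K pixels per image, M images (K >> M)
X = rng.standard_normal((K, M))
X -= X.mean(axis=1, keepdims=True)   # subtract the mean image (zero-mean assumption)

lam_big, _ = np.linalg.eigh(X @ X.T)      # large K x K problem
lam_small, Psi = np.linalg.eigh(X.T @ X)  # small M x M problem

# Same non-zero eigenvalues: the top M of the K x K problem match the M x M ones
print(np.allclose(lam_big[-M:], lam_small))               # True

# Eigenvectors related by phi_k = X psi_k (after normalization)
phi = X @ Psi[:, -1]
phi /= np.linalg.norm(phi)
print(np.allclose((X @ X.T) @ phi, lam_small[-1] * phi))  # True
```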
Owing to the nature of the KLT, most of the energy/information contained in the images, representing the variations among all $M$ images, is concentrated in the first few eigen-images corresponding to the greatest eigenvalues, while the remaining eigen-images can be omitted without losing much energy/information. This is the foundation for various KLT-based image compression and feature extraction algorithms. Subsequent operations such as image recognition and classification can be carried out in a much lower dimensional space after the KLT.
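As an illustration, here is a minimal sketch of KLT-based compression along these lines (hypothetical names; it keeps only the `n_keep` leading eigen-images, computed via the small $M \times M$ problem as above). With real images, which are highly correlated, far fewer components are needed than this random test data would suggest:

```python
import numpy as np

def klt_compress(X, n_keep):
    """Reconstruct the M images (columns of X) from only the n_keep
    leading eigen-images, obtained via the small M x M eigenproblem."""
    mean = X.mean(axis=1, keepdims=True)
    Xc = X - mean
    _, Psi = np.linalg.eigh(Xc.T @ Xc)    # eigenvalues in ascending order
    Phi = Xc @ Psi[:, -n_keep:]           # map up: phi_k = X psi_k
    Phi /= np.linalg.norm(Phi, axis=0)    # orthonormal eigen-image columns
    coeffs = Phi.T @ Xc                   # n_keep KLT coefficients per image
    return mean + Phi @ coeffs            # reconstruction from few components

rng = np.random.default_rng(1)
X = rng.standard_normal((4096, 20))       # 20 images of 64 x 64 = 4096 pixels
X_hat = klt_compress(X, n_keep=5)
print(np.linalg.norm(X - X_hat) / np.linalg.norm(X))  # relative reconstruction error
```

Each image is then stored as `n_keep` coefficients instead of $K$ pixels.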
In image recognition, the goal is typically to classify or recognize objects of interest, such as hand-written alphanumeric characters or human faces, represented in an image of $K = RC$ pixels. As not all $K$ pixels are necessary for representing the image object, the KLT can be carried out to compact most of the information into a small number of components; a sketch of such a projection follows.
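The sketch below (hypothetical names, not a specific algorithm from the text) builds the leading eigen-images from a set of training images, reduces every image to a handful of projection coefficients, and recognizes a query by nearest neighbor in that low-dimensional feature space:

```python
import numpy as np

def fit_eigenimages(X_train, n_components):
    """Mean image and the n_components leading eigen-images,
    via the small M x M eigenproblem as above."""
    mean = X_train.mean(axis=1, keepdims=True)
    Xc = X_train - mean
    _, Psi = np.linalg.eigh(Xc.T @ Xc)     # ascending eigenvalues
    Phi = Xc @ Psi[:, -n_components:]      # leading eigen-images
    Phi /= np.linalg.norm(Phi, axis=0)
    return mean, Phi

rng = np.random.default_rng(2)
X_train = rng.standard_normal((1024, 30))  # 30 training images, 32 x 32 pixels
mean, Phi = fit_eigenimages(X_train, n_components=8)
train_feats = Phi.T @ (X_train - mean)     # each image reduced to 8 components

# Recognize a noisy copy of image 0 by nearest neighbor in the 8-D feature space
query = X_train[:, [0]] + 0.1 * rng.standard_normal((1024, 1))
q = Phi.T @ (query - mean)                 # 8-D feature vector of the query
nearest = np.argmin(np.linalg.norm(train_feats - q, axis=0))
print(nearest)                             # 0: the matching training image
```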
Example