The problem considered previously can be generalized to the modeling of a linear combination of $M$ functions $f_m(x)$ ($m=1,\cdots,M$) of $x$:

$\displaystyle y=\sum_{m=1}^M a_m f_m(x)$    (50)

We want to find the $M$ unknown parameters $a_1,\cdots,a_M$. Examples include:
$\displaystyle y=\sum_{m=1}^M a_m x^{m-1}$    (51)

with $f_m(x)=x^{m-1}$, and

$\displaystyle y=\sum_{m=1}^M a_m\sin(m\omega x)$    (52)

with $f_m(x)=\sin(m\omega x)$.
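To make the two examples concrete, here is a minimal Python sketch of the corresponding basis functions; the helper names and the fixed frequency $\omega$ are illustrative choices, not part of the text:

```python
import numpy as np

# Eq. (51): polynomial basis, f_m(x) = x^(m-1), m = 1, ..., M
def poly_basis(x, m):
    return x ** (m - 1)

# Eq. (52): sinusoidal basis, f_m(x) = sin(m*w*x) with a fixed,
# user-chosen frequency w (w itself is not a fitted parameter).
def sin_basis(x, m, w=2.0 * np.pi):
    return np.sin(m * w * x)
```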
Given a set of $N$ data points $(x_n,\,y_n)$ ($n=1,\cdots,N$), we get $N$ equations:

$\displaystyle y_n=\sum_{m=1}^M a_m f_m(x_n)+e_n,\qquad n=1,\cdots,N$    (53)
which can be written in vector form as

$\displaystyle {\bf y}={\bf B}{\bf a}+{\bf e}$    (54)

where

$\displaystyle {\bf y}=\left[\begin{array}{c}y_1\\ \vdots\\ y_N\end{array}\right]_{N\times 1},\qquad {\bf a}=\left[\begin{array}{c}a_1\\ \vdots\\ a_M\end{array}\right]_{M\times 1},\qquad {\bf e}=\left[\begin{array}{c}e_1\\ \vdots\\ e_N\end{array}\right]_{N\times 1}$    (55)

and

$\displaystyle {\bf B}=\left[\begin{array}{ccc}b_{11}&\cdots&b_{1M}\\ \vdots&\ddots&\vdots\\ b_{N1}&\cdots&b_{NM}\end{array}\right]=\left[\begin{array}{ccc}f_1(x_1)&\cdots&f_M(x_1)\\ \vdots&\ddots&\vdots\\ f_1(x_N)&\cdots&f_M(x_N)\end{array}\right]_{N\times M}$    (56)
where

$\displaystyle b_{nm}=f_m(x_n),\qquad n=1,\cdots,N;\quad m=1,\cdots,M$    (57)
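As a sketch of Eqs. (56)-(57), the matrix ${\bf B}$ can be assembled column by column in Python; the data and basis below are stand-ins chosen for illustration:

```python
import numpy as np

def design_matrix(x, basis_fns):
    """Assemble the N-by-M matrix B of Eq. (56): b_nm = f_m(x_n)."""
    x = np.asarray(x, dtype=float)
    # Column m holds f_m evaluated at all N sample points x_1, ..., x_N.
    return np.column_stack([f(x) for f in basis_fns])

# Example: M = 4 polynomial basis functions f_m(x) = x^(m-1).
basis = [lambda x, m=m: x ** (m - 1) for m in range(1, 5)]
x = np.linspace(0.0, 1.0, 20)   # N = 20 sample points
B = design_matrix(x, basis)     # shape (20, 4), i.e., N-by-M
```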
Typically $N>M$, i.e., the problem is over-constrained, with $N$ equations but only $M$ unknowns. The LS-error is

$\displaystyle \varepsilon({\bf a})=\frac{1}{2}\vert\vert{\bf e}\vert\vert^2 =\frac{1}{2}\vert\vert{\bf y}-{\bf B}{\bf a}\vert\vert^2 =\frac{1}{2}({\bf y}-{\bf B}{\bf a})^T({\bf y}-{\bf B}{\bf a})$    (58)
To find the optimal parameters ${\bf a}$ that minimize $\varepsilon({\bf a})$, we set the derivative of the LS-error with respect to ${\bf a}$ to zero and get

$\displaystyle \frac{d}{d{\bf a}}\,\varepsilon({\bf a}) =\frac{d}{d{\bf a}}\,\frac{1}{2}({\bf y}-{\bf B}{\bf a})^T({\bf y}-{\bf B}{\bf a}) ={\bf B}^T({\bf B}{\bf a}-{\bf y})={\bf0}$

From this we get the normal equation

$\displaystyle {\bf B}^T{\bf B}\,{\bf a}={\bf B}^T{\bf y}$    (59)
This linear equation system, whose coefficient matrix ${\bf B}^T{\bf B}$ is symmetric and positive-definite (assuming ${\bf B}$ has full column rank), can be solved by the conjugate gradient (CG) method in at most $M$ iterations.
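A small sketch of this route using scipy.sparse.linalg.cg on the $M\times M$ system of Eq. (59); the data are synthetic stand-ins, not from the text:

```python
import numpy as np
from scipy.sparse.linalg import cg

rng = np.random.default_rng(0)
N, M = 50, 4
B = rng.standard_normal((N, M))   # stand-in design matrix (full rank)
y = rng.standard_normal(N)        # stand-in observations

# Normal equation (59): (B^T B) a = B^T y; B^T B is symmetric
# positive-definite here, so CG applies directly.
a, info = cg(B.T @ B, B.T @ y)    # info == 0 signals convergence
assert info == 0
```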
Alternatively, we can solve the normal equation for ${\bf a}$ to get

$\displaystyle {\bf a}=({\bf B}^T{\bf B})^{-1}{\bf B}^T{\bf y}={\bf B}^-{\bf y}$    (60)

where ${\bf B}^-$ is the pseudo-inverse of the non-square matrix ${\bf B}$, defined as

$\displaystyle {\bf B}^-=({\bf B}^T{\bf B})^{-1}{\bf B}^T$    (61)
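In NumPy, Eqs. (60)-(61) can be written almost verbatim; np.linalg.lstsq is the numerically preferred way to get the same ${\bf a}$ without forming ${\bf B}^T{\bf B}$ explicitly. A sketch with stand-in data:

```python
import numpy as np

rng = np.random.default_rng(0)
N, M = 50, 4
B = rng.standard_normal((N, M))
y = rng.standard_normal(N)

# Eq. (60): a = (B^T B)^{-1} B^T y = B^- y
a_normal = np.linalg.solve(B.T @ B, B.T @ y)     # via the normal equation
a_pinv   = np.linalg.pinv(B) @ y                 # via the pseudo-inverse B^-
a_lstsq  = np.linalg.lstsq(B, y, rcond=None)[0]  # preferred in practice

assert np.allclose(a_normal, a_pinv) and np.allclose(a_pinv, a_lstsq)
```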
The pseudo-inverse of an $N\times M$ matrix ${\bf B}$ can also be found based on the singular value decomposition (SVD) of ${\bf B}$:

$\displaystyle {\bf B}={\bf U}{\bf\Lambda}^{1/2}{\bf V}^T$    (62)

where ${\bf U}$ and ${\bf V}$ are both orthogonal ($N\times N$ and $M\times M$, respectively), composed of the orthogonal eigenvectors of the symmetric matrices ${\bf B}{\bf B}^T$ and ${\bf B}^T{\bf B}$, respectively, and ${\bf\Lambda}^{1/2}$ is an $N\times M$ diagonal matrix composed of the square roots of the $M$ common nonzero eigenvalues $\lambda_1,\cdots,\lambda_M$ of both ${\bf B}{\bf B}^T$ and ${\bf B}^T{\bf B}$, i.e.,

$\displaystyle {\bf\Lambda}^{1/2}=\left[\begin{array}{c}\mbox{diag}(\sqrt{\lambda_1},\cdots,\sqrt{\lambda_M})\\ {\bf0}_{(N-M)\times M}\end{array}\right]_{N\times M}$    (63)
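The full-size factors of Eqs. (62)-(63) match what np.linalg.svd returns with full_matrices=True (the returned s holds the singular values $\sqrt{\lambda_m}$); a sketch with a stand-in matrix:

```python
import numpy as np

rng = np.random.default_rng(0)
N, M = 50, 4
B = rng.standard_normal((N, M))                  # stand-in N-by-M matrix

# Eq. (62): B = U Lambda^{1/2} V^T, U is N-by-N, V is M-by-M.
U, s, Vt = np.linalg.svd(B, full_matrices=True)  # s = (sqrt(lambda_m))

# Rebuild the N-by-M "diagonal" Lambda^{1/2} of Eq. (63) and verify (62).
L_half = np.zeros((N, M))
L_half[:M, :M] = np.diag(s)
assert np.allclose(B, U @ L_half @ Vt)
```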
The pseudo-inverse of ${\bf B}$ is

$\displaystyle {\bf B}^-={\bf V}{\bf\Lambda}^{-1/2}{\bf U}^T$    (64)

where ${\bf\Lambda}^{-1/2}=\left[\mbox{diag}(1/\sqrt{\lambda_1},\cdots,1/\sqrt{\lambda_M}),\;{\bf0}_{M\times(N-M)}\right]$ is $M\times N$,
which is a left-inverse of ${\bf B}$:

$\displaystyle {\bf B}^-{\bf B}={\bf V}{\bf\Lambda}^{-1/2}{\bf U}^T{\bf U}{\bf\Lambda}^{1/2}{\bf V}^T ={\bf V}({\bf\Lambda}^{-1/2}{\bf\Lambda}^{1/2}){\bf V}^T={\bf V}{\bf V}^T={\bf I}_{M\times M}$    (65)
But note that it is not a right-inverse:

$\displaystyle {\bf B}{\bf B}^-={\bf U}{\bf\Lambda}^{1/2}{\bf V}^T{\bf V}{\bf\Lambda}^{-1/2}{\bf U}^T ={\bf U}({\bf\Lambda}^{1/2}{\bf\Lambda}^{-1/2}){\bf U}^T\ne{\bf I}_{N\times N}$    (66)
as the rank of the $N\times N$ matrix ${\bf\Lambda}^{1/2}{\bf\Lambda}^{-1/2}$ is at most $M<N$. Along its diagonal there are only $M$ 1's but $N-M$ 0's, i.e.,

$\displaystyle {\bf\Lambda}^{1/2}{\bf\Lambda}^{-1/2}=\left[\begin{array}{cc}{\bf I}_{M\times M}&{\bf0}\\ {\bf0}&{\bf0}\end{array}\right]_{N\times N}\ne{\bf I}_{N\times N}$
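The left-inverse and right-inverse properties of Eqs. (64)-(66) are easy to confirm numerically; continuing the same stand-in example:

```python
import numpy as np

rng = np.random.default_rng(0)
N, M = 50, 4
B = rng.standard_normal((N, M))
U, s, Vt = np.linalg.svd(B, full_matrices=True)

# Eq. (64): B^- = V Lambda^{-1/2} U^T, with Lambda^{-1/2} of size M-by-N.
L_inv_half = np.zeros((M, N))
L_inv_half[:M, :M] = np.diag(1.0 / s)
B_pinv = Vt.T @ L_inv_half @ U.T
assert np.allclose(B_pinv, np.linalg.pinv(B))    # agrees with NumPy's pinv

# Eq. (65): B^- B = I (M-by-M), a true left-inverse.
assert np.allclose(B_pinv @ B, np.eye(M))

# Eq. (66): B B^- is N-by-N but only rank M, so it is not the identity;
# its eigenvalues are M ones and N - M zeros.
P = B @ B_pinv
assert not np.allclose(P, np.eye(N))
assert np.isclose(np.trace(P), M)
```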