Each outcome of a random experiment may need to be described by a set of $N$ random variables $x_1, x_2, \dots, x_N$, or in vector form:

$$\mathbf{x} = [x_1, x_2, \dots, x_N]^T$$

The joint distribution function of a random vector $\mathbf{x}$ is defined as

$$F_{\mathbf{x}}(x_1, \dots, x_N) = P(X_1 \le x_1, \dots, X_N \le x_N)$$

and the corresponding joint density function is

$$p_{\mathbf{x}}(x_1, \dots, x_N) = \frac{\partial^N F_{\mathbf{x}}(x_1, \dots, x_N)}{\partial x_1 \cdots \partial x_N}$$
For convenience, let us first consider two of the variables and rename them as $x$ and $y$. These two variables are independent iff

$$p(x, y) = p(x)\,p(y)$$

or, equivalently, in terms of the distribution functions,

$$F(x, y) = F(x)\,F(y)$$
Similarly, a set of $N$ variables $x_1, \dots, x_N$ are independent iff

$$p(x_1, \dots, x_N) = \prod_{i=1}^{N} p(x_i)$$
The expectation or mean of a random variable $x$ is defined as

$$\mu_x = E[x] = \int_{-\infty}^{\infty} x\, p(x)\, dx$$
The mean vector of a random vector $\mathbf{x}$ is defined as

$$\boldsymbol{\mu} = E[\mathbf{x}] = [E[x_1], \dots, E[x_N]]^T = [\mu_1, \dots, \mu_N]^T$$
The variance of a random variable $x$ is defined as

$$\sigma_x^2 = E[(x - \mu_x)^2] = E[x^2] - 2\mu_x E[x] + \mu_x^2 = E[x^2] - \mu_x^2$$
The covariance of $x$ and $y$ is defined as

$$\sigma_{xy} = E[(x - \mu_x)(y - \mu_y)] = E[xy] - \mu_x \mu_y$$
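As a quick numerical check of the identity $\sigma_{xy} = E[xy] - \mu_x\mu_y$, the following NumPy sketch (the sample size and the relation $y = 2x + \text{noise}$ are arbitrary choices, not part of the notes) computes the sample covariance both ways:

```python
import numpy as np

rng = np.random.default_rng(0)

# Draw correlated samples: y = 2x + noise (arbitrary, for illustration only).
x = rng.normal(size=100_000)
y = 2.0 * x + rng.normal(size=100_000)

# Sample covariance two ways: by definition, and via E[xy] - E[x]E[y].
cov_def = np.mean((x - x.mean()) * (y - y.mean()))
cov_alt = np.mean(x * y) - x.mean() * y.mean()

assert np.isclose(cov_def, cov_alt)  # the two forms agree exactly on samples
```

The two expressions are algebraically identical, so they agree to machine precision on the samples; both estimates are close to the true covariance of 2 here.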
The covariance matrix of a random vector $\mathbf{x}$ is defined as

$$\Sigma = E[(\mathbf{x} - \boldsymbol{\mu})(\mathbf{x} - \boldsymbol{\mu})^T] = E[\mathbf{x}\mathbf{x}^T] - \boldsymbol{\mu}\boldsymbol{\mu}^T$$

whose element $\sigma_{ij} = E[(x_i - \mu_i)(x_j - \mu_j)]$ is the covariance of $x_i$ and $x_j$, and whose diagonal element $\sigma_{ii} = \sigma_i^2$ is the variance of $x_i$. $\Sigma$ is symmetric as $\sigma_{ij} = \sigma_{ji}$. Moreover, it can be shown that $\Sigma$ is positive semi-definite, because for all $\mathbf{a}$ we have

$$\mathbf{a}^T \Sigma \mathbf{a} = E[\mathbf{a}^T(\mathbf{x} - \boldsymbol{\mu})(\mathbf{x} - \boldsymbol{\mu})^T \mathbf{a}] = E\big[\big(\mathbf{a}^T(\mathbf{x} - \boldsymbol{\mu})\big)^2\big] \ge 0$$

and it is positive definite, i.e., all its eigenvalues are strictly greater than zero, as long as no linear combination of the components of $\mathbf{x}$ is a constant.
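These two properties are easy to check numerically. In the sketch below (the mixing matrix `A` is an arbitrary choice used only to produce a non-trivial covariance), a sample covariance matrix is estimated with NumPy and its eigenvalues are inspected:

```python
import numpy as np

rng = np.random.default_rng(1)

# Samples of a 3-dimensional random vector with dependent components;
# the mixing matrix A is arbitrary and invertible.
z = rng.normal(size=(100_000, 3))
A = np.array([[1.0, 0.5, 0.0],
              [0.0, 1.0, 0.3],
              [0.2, 0.0, 1.0]])
x = z @ A.T

Sigma = np.cov(x, rowvar=False)        # sample covariance matrix
eigvals = np.linalg.eigvalsh(Sigma)    # eigenvalues of a symmetric matrix

assert np.allclose(Sigma, Sigma.T)     # symmetric
assert np.all(eigvals > 0)             # positive definite (A is invertible)
```

Because `A` is invertible, no linear combination of the components is constant, so all eigenvalues come out strictly positive.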
Two variables $x$ and $y$ are uncorrelated iff $\sigma_{xy} = 0$, i.e.,

$$E[xy] = E[x]E[y] = \mu_x \mu_y$$

If this is true for all pairs $x_i, x_j$ ($i \ne j$), then $\mathbf{x}$ is called uncorrelated or decorrelated and its covariance matrix $\Sigma$ becomes a diagonal matrix with only the non-zero variances $\sigma_i^2$ on its diagonal.
If $x_i$ and $x_j$ are independent, i.e., $p(x_i, x_j) = p(x_i)p(x_j)$, then it is easy to show that they are also uncorrelated. However, uncorrelated variables are not necessarily independent. (But jointly normal variables that are uncorrelated are also independent.)
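A standard counterexample makes this concrete: take $x$ symmetric about zero and $y = x^2$. Then $E[xy] = E[x^3] = 0 = E[x]E[y]$, so the two are uncorrelated, yet $y$ is a deterministic function of $x$. A NumPy sketch (the sample size is an arbitrary choice):

```python
import numpy as np

rng = np.random.default_rng(2)

# Counterexample: x symmetric about 0, y = x**2 (a deterministic function of x).
x = rng.normal(size=1_000_000)
y = x**2

# Uncorrelated: E[xy] = E[x**3] = 0 = E[x]E[y]  (up to sampling noise).
cov_xy = np.mean(x * y) - x.mean() * y.mean()
assert abs(cov_xy) < 0.05

# ...but not independent: knowing |x| > 1 changes the distribution of y.
assert y[np.abs(x) > 1].mean() > 2 * y.mean()
```

The conditional mean of $y$ given $|x| > 1$ is roughly $2.5$ while its unconditional mean is $1$, which could not happen if $x$ and $y$ were independent.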
The autocorrelation matrix of $\mathbf{x}$ is defined as

$$R = E[\mathbf{x}\mathbf{x}^T]$$
Two variables $x$ and $y$ are orthogonal iff $E[xy] = 0$. Zero-mean random variables which are uncorrelated are also orthogonal, since then $E[xy] = \mu_x \mu_y = 0$.
A unitary (orthogonal) transform of $\mathbf{x}$ is defined as

$$\mathbf{y} = A^T \mathbf{x}, \qquad \text{where } A^T A = A A^T = I, \text{ i.e., } A^{-1} = A^T$$
The mean vector $\boldsymbol{\mu}_y$ and the covariance matrix $\Sigma_y$ of $\mathbf{y}$ are related to the $\boldsymbol{\mu}_x$ and $\Sigma_x$ of $\mathbf{x}$ as shown below:

$$\boldsymbol{\mu}_y = E[\mathbf{y}] = E[A^T \mathbf{x}] = A^T E[\mathbf{x}] = A^T \boldsymbol{\mu}_x$$

$$\Sigma_y = E[(\mathbf{y} - \boldsymbol{\mu}_y)(\mathbf{y} - \boldsymbol{\mu}_y)^T] = A^T E[(\mathbf{x} - \boldsymbol{\mu}_x)(\mathbf{x} - \boldsymbol{\mu}_x)^T] A = A^T \Sigma_x A$$
A unitary transform does not change the trace of $\Sigma$:

$$\mathrm{tr}(\Sigma_y) = \mathrm{tr}(A^T \Sigma_x A) = \mathrm{tr}(\Sigma_x A A^T) = \mathrm{tr}(\Sigma_x)$$

where the second step uses the identity $\mathrm{tr}(AB) = \mathrm{tr}(BA)$. In other words, the total variance $\sum_{i=1}^{N} \sigma_i^2$ of the components is conserved under any unitary transform.
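This trace invariance can be verified numerically; in the sketch below, both the covariance matrix and the orthogonal matrix (obtained from a QR decomposition) are arbitrary constructions used only for illustration:

```python
import numpy as np

rng = np.random.default_rng(3)

# An arbitrary symmetric positive definite "covariance" matrix.
B = rng.normal(size=(4, 4))
Sigma_x = B @ B.T + 4 * np.eye(4)

# A random orthogonal (unitary) matrix from the QR decomposition.
A, _ = np.linalg.qr(rng.normal(size=(4, 4)))

Sigma_y = A.T @ Sigma_x @ A            # covariance after the transform

assert np.allclose(A.T @ A, np.eye(4))                   # A is orthogonal
assert np.isclose(np.trace(Sigma_y), np.trace(Sigma_x))  # trace preserved
```

The individual diagonal entries of `Sigma_y` generally differ from those of `Sigma_x`; only their sum, the total variance, is preserved.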
The density function of a normally distributed random vector $\mathbf{x}$ is:

$$p(\mathbf{x}) = \frac{1}{(2\pi)^{N/2} |\Sigma|^{1/2}} \exp\left[-\frac{1}{2} (\mathbf{x} - \boldsymbol{\mu})^T \Sigma^{-1} (\mathbf{x} - \boldsymbol{\mu})\right]$$
To find the shape of a normal distribution, consider the iso-value hypersurface in the $N$-dimensional space determined by the equation

$$p(\mathbf{x}) = c_0$$

where $c_0$ is a constant. Taking the logarithm of both sides, this is equivalent to

$$(\mathbf{x} - \boldsymbol{\mu})^T \Sigma^{-1} (\mathbf{x} - \boldsymbol{\mu}) = c_1$$

where $c_1$ is another constant. When $N = 2$, the above quadratic equation represents an ellipse (instead of any other quadratic curve) centered at $\boldsymbol{\mu}$, because $\Sigma^{-1}$, as well as $\Sigma$, is positive definite. Recall in general that the discriminant $b^2 - 4ac$ of a quadratic equation

$$a x^2 + b xy + c y^2 + dx + ey + f = 0$$

determines the type of the curve, which is an ellipse iff $b^2 - 4ac < 0$; this condition holds here precisely because $\Sigma^{-1}$ is positive definite. When $N > 2$, the equation represents a hyper-ellipsoid in the $N$-dimensional space. The center and spatial distribution of this ellipsoid are determined by $\boldsymbol{\mu}$ and $\Sigma$, respectively.
In particular, when $\mathbf{x}$ is decorrelated, i.e., $\sigma_{ij} = 0$ for all $i \ne j$, $\Sigma$ becomes a diagonal matrix

$$\Sigma = \mathrm{diag}(\sigma_1^2, \dots, \sigma_N^2)$$

and the joint density function becomes separable, the product of $N$ one-dimensional normal densities:

$$p(\mathbf{x}) = \prod_{i=1}^{N} \frac{1}{\sqrt{2\pi}\,\sigma_i} \exp\left[-\frac{(x_i - \mu_i)^2}{2\sigma_i^2}\right]$$
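For a diagonal covariance matrix, the fact that the joint normal density factors into one-dimensional normal densities can be checked numerically; all parameter values in this NumPy sketch are arbitrary:

```python
import numpy as np

# A decorrelated normal vector: diagonal covariance (values arbitrary).
mu = np.array([1.0, -1.0, 0.5])
sigma = np.array([0.8, 1.5, 2.0])        # standard deviations
Sigma = np.diag(sigma**2)
x = np.array([0.3, 0.0, -1.0])           # an arbitrary evaluation point

# Joint density from the general N-dimensional formula.
N = len(mu)
d = x - mu
p_joint = np.exp(-0.5 * d @ np.linalg.inv(Sigma) @ d) / \
          np.sqrt((2 * np.pi) ** N * np.linalg.det(Sigma))

# Product of the N one-dimensional normal densities.
p_product = np.prod(np.exp(-d**2 / (2 * sigma**2)) / (np.sqrt(2 * np.pi) * sigma))

assert np.isclose(p_joint, p_product)    # the two agree
```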
When the distribution of $\mathbf{x}$ is not known, $\boldsymbol{\mu}$ and $\Sigma$ cannot be found by their definitions. However, they can be estimated if a large number $K$ of outcomes $\mathbf{x}_k$ ($k = 1, \dots, K$) of the random experiment in question can be observed.

The mean vector can be estimated as

$$\hat{\boldsymbol{\mu}} = \frac{1}{K} \sum_{k=1}^{K} \mathbf{x}_k$$

The autocorrelation can be estimated as

$$\hat{R} = \frac{1}{K} \sum_{k=1}^{K} \mathbf{x}_k \mathbf{x}_k^T$$

And the covariance matrix can be estimated as

$$\hat{\Sigma} = \frac{1}{K} \sum_{k=1}^{K} (\mathbf{x}_k - \hat{\boldsymbol{\mu}})(\mathbf{x}_k - \hat{\boldsymbol{\mu}})^T = \hat{R} - \hat{\boldsymbol{\mu}} \hat{\boldsymbol{\mu}}^T$$
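These three estimators take only a few lines of NumPy. In this sketch the true mean and covariance are arbitrary values used only to generate synthetic outcomes, so the estimates can be compared against them:

```python
import numpy as np

rng = np.random.default_rng(4)

# Synthetic experiment: K outcomes of a 2-D normal vector with known
# parameters (chosen arbitrarily for this sketch).
mu_true = np.array([1.0, -2.0])
Sigma_true = np.array([[2.0, 0.6],
                       [0.6, 1.0]])
K = 200_000
x = rng.multivariate_normal(mu_true, Sigma_true, size=K)   # shape (K, N)

mu_hat = x.mean(axis=0)                      # (1/K) sum_k x_k
R_hat = (x.T @ x) / K                        # (1/K) sum_k x_k x_k^T
Sigma_hat = R_hat - np.outer(mu_hat, mu_hat) # R_hat - mu_hat mu_hat^T

assert np.allclose(mu_hat, mu_true, atol=0.05)
assert np.allclose(Sigma_hat, Sigma_true, atol=0.05)
```

With $K$ this large, the estimates land within a few hundredths of the true parameters; the `1/K` normalization gives the (slightly biased) maximum-likelihood covariance estimate rather than the unbiased `1/(K-1)` version.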