
Multivariate Random Signals

Before reading on, it is highly recommended that you review the basics of multivariate probability theory.

A real time signal $x(t)$ can be considered as a random process, and its $N$ samples $x_n\;\;(n=1, \cdots, N)$ form a random vector:

\begin{displaymath}{\bf x}=[ x_1, \cdots, x_{N} ]^T \end{displaymath}

The mean vector of ${\bf x}$ is defined as its expectation:

\begin{displaymath}
{\bf m}_x\stackrel{\triangle}{=}E({\bf x})=[E(x_1), \cdots, E(x_{N}) ]^T
=[\mu_1,\cdots, \mu_{N} ]^T
\end{displaymath}

where $\mu_i=E(x_i)$. The covariance matrix of ${\bf x}$ is

\begin{displaymath}
{\bf\Sigma}_x\stackrel{\triangle}{=}
E[ ({\bf x}-{\bf m}_x)({\bf x}-{\bf m}_x)^{*T} ]
=\left[ \begin{array}{ccc}
\sigma_1^2 & \cdots & \cdots \\
\vdots & \sigma_{ij}^2 & \vdots \\
\cdots & \cdots & \ddots \end{array} \right]
\end{displaymath}

where $\sigma_{ij}^2\stackrel{\triangle}{=}E(x_ix_j^*)-\mu_i\mu_j^*$ is the covariance of the two random variables $x_i$ and $x_j$. When $i=j$, $\sigma_{ij}^2$ becomes the variance of $x_i$:

\begin{displaymath}
\sigma_i^2\stackrel{\triangle}{=}E\vert x_i-\mu_i\vert^2=E\vert x_i\vert^2-\vert\mu_i\vert^2
\end{displaymath}
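As a quick numerical sketch of these definitions (Python with NumPy assumed; the dimension $N=4$, sample count $K$, random seed, and mean values below are arbitrary illustrations, not part of the text), the mean vector and covariance matrix can be estimated from $K$ realizations of ${\bf x}$. The same variables are reused in the short checks that follow.

\begin{verbatim}
import numpy as np

rng = np.random.default_rng(0)
N, K = 4, 100000                     # vector length, number of realizations

# K realizations of a complex random vector x, one per column of X
X = rng.standard_normal((N, K)) + 1j*rng.standard_normal((N, K))
X += np.array([[1.0], [2.0], [0.5], [-1.0]])   # give x a nonzero mean

m_x = X.mean(axis=1, keepdims=True)            # mean vector m_x = E(x)

# covariance Sigma_x = E[(x - m_x)(x - m_x)^{*T}]
Xc = X - m_x
Sigma_x = (Xc @ Xc.conj().T) / K

# diagonal entries are the variances sigma_i^2 = E|x_i|^2 - |mu_i|^2
print(np.allclose(np.diag(Sigma_x),
                  (np.abs(X)**2).mean(axis=1) - np.abs(m_x.ravel())**2))
\end{verbatim}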

The correlation matrix of ${\bf x}$ is defined as

\begin{displaymath}
{\bf R}_x\stackrel{\triangle}{=}E({\bf x}{\bf x}^{*T})
=\left[ \begin{array}{ccc}
r_{11} & \cdots & \cdots \\
\vdots & r_{ij} & \vdots \\
\cdots & \cdots & \ddots \end{array} \right]
\end{displaymath}

where $r_{ij}=E(x_ix_j^*)=\sigma_{ij}^2+\mu_i\mu_j^*$.
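Continuing the sketch above, the identity $r_{ij}=\sigma_{ij}^2+\mu_i\mu_j^*$, i.e., ${\bf R}_x={\bf\Sigma}_x+{\bf m}_x{\bf m}_x^{*T}$, can be checked numerically:

\begin{verbatim}
# correlation matrix R_x = E(x x^{*T})
R_x = (X @ X.conj().T) / K

# r_ij = sigma_ij^2 + mu_i mu_j^*, i.e. R_x = Sigma_x + m_x m_x^{*T}
print(np.allclose(R_x, Sigma_x + m_x @ m_x.conj().T))   # True
\end{verbatim}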

In general, if the data set ${\bf x}$ is complex, both the covariance and the correlation matrices are Hermitian, i.e.,

\begin{displaymath}
{\bf\Sigma}_x^{*T}={\bf\Sigma}_x,\;\;\;\;\;\;\; {\bf R}_x^{*T}={\bf R}_x
\end{displaymath}

In particular, if the data set is real, then both matrices are real and symmetric, i.e., ${\bf\Sigma}_x^T={\bf\Sigma}_x$ and ${\bf R}_x^T={\bf R}_x$.
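For the complex data in the sketch above, the Hermitian property can be verified directly:

\begin{verbatim}
# both matrices equal their own conjugate transpose
print(np.allclose(Sigma_x, Sigma_x.conj().T))   # True
print(np.allclose(R_x, R_x.conj().T))           # True
\end{verbatim}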

A signal vector ${\bf x}$ can always be converted into a zero-mean vector ${\bf x}'={\bf x}-{\bf m}_x$ with all of its dynamic energy (representing the information it carries) conserved. Without loss of generality, and for convenience, we can therefore sometimes assume ${\bf m}_x={\bf0}$, so that ${\bf\Sigma}_x={\bf R}_x$.
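In the same numerical sketch, centering the data makes the two matrices coincide:

\begin{verbatim}
# x' = x - m_x has zero mean, so its correlation and covariance
# matrices are identical (and equal to Sigma_x)
Xp = X - m_x
R_xp = (Xp @ Xp.conj().T) / K
print(np.allclose(R_xp, Sigma_x))               # True
\end{verbatim}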

After an orthogonal transform ${\bf A}$ of a given random vector ${\bf x}$, the resulting vector ${\bf y}={\bf A}^T{\bf x}$ is still random, with the following mean and covariance:

\begin{displaymath}{\bf m}_y = E({\bf y})=E({\bf A}^T {\bf x})={\bf A}^T E({\bf x})
={\bf A}^T {\bf m}_x \end{displaymath}


\begin{eqnarray*}
{\bf\Sigma}_y &=& E({\bf y}{\bf y}^{T})-{\bf m}_y {\bf m}_y^T
=E[({\bf A}^{T}{\bf x})({\bf A}^{T}{\bf x})^{T}]
-({\bf A}^T {\bf m}_x)({\bf A}^T {\bf m}_x)^T \\
&=& E[{\bf A}^{T}({\bf x}{\bf x}^{T}){\bf A}]
-{\bf A}^T {\bf m}_x {\bf m}_x^T {\bf A}
={\bf A}^{T}[E({\bf x}{\bf x}^{T})-{\bf m}_x {\bf m}_x^T]{\bf A}
={\bf A}^{T}{\bf\Sigma}_x{\bf A}
\end{eqnarray*}
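As a numerical check of this result, continuing the sketch above (a random real orthogonal ${\bf A}$ is obtained here from a QR decomposition; this choice is an illustration, not part of the text):

\begin{verbatim}
# random real orthogonal matrix A (Q factor of a random matrix)
A, _ = np.linalg.qr(rng.standard_normal((N, N)))

Y = A.T @ X                            # y = A^T x for every realization
m_y = Y.mean(axis=1, keepdims=True)
Yc = Y - m_y
Sigma_y = (Yc @ Yc.conj().T) / K

print(np.allclose(m_y, A.T @ m_x))              # m_y = A^T m_x
print(np.allclose(Sigma_y, A.T @ Sigma_x @ A))  # Sigma_y = A^T Sigma_x A
\end{verbatim}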


Ruye Wang 2016-04-06