Propagation of uncertainty

The reliability of the parameters ${\bf a}=[a_1,\cdots,a_M]^T$ obtained by any of the methods discussed above depends on the reliability of the data sets, both characterized by their variances.

We first assume ${\bf y}=[y_1,\cdots,y_M]^T$ is a vector of linear combinations of $N$ random variables ${\bf x}=[x_1,\cdots,x_N]^T$:

$\displaystyle {\bf y}_{M\times 1}={\bf A}_{M\times N}\,{\bf x}_{N\times 1}$ (67)

Given the mean vector and covariance matrix of ${\bf x}$

$\displaystyle {\bf m}_x=E{\bf x},\;\;\;\;\;\;\;
{\bf\Sigma}_x=E[({\bf x}-{\bf m}_x)({\bf x}-{\bf m}_x)^T]$ (68)

we can find the mean vector and covariance matrix of ${\bf y}$ to be

$\displaystyle {\bf m}_y=E{\bf y}=E{\bf A}{\bf x}={\bf A}E{\bf x}={\bf A}{\bf m}_x,$ (69)


$\displaystyle {\bf\Sigma}_y
=E[({\bf y}-{\bf m}_y)({\bf y}-{\bf m}_y)^T]
=E[{\bf A}({\bf x}-{\bf m}_x)({\bf x}-{\bf m}_x)^T{\bf A}^T]
={\bf A}\,E[({\bf x}-{\bf m}_x)({\bf x}-{\bf m}_x)^T]\,{\bf A}^T
={\bf A}{\bf\Sigma}_x{\bf A}^T$
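This identity ${\bf\Sigma}_y={\bf A}{\bf\Sigma}_x{\bf A}^T$ is easy to verify numerically. The sketch below (with an arbitrary example matrix ${\bf A}$ and covariance ${\bf\Sigma}_x$ chosen for illustration) compares the formula against the empirical covariance of a large Monte Carlo sample:

```python
import numpy as np

rng = np.random.default_rng(0)

# Linear transform y = A x with M = 2, N = 3 (A is an arbitrary example).
A = np.array([[1.0, 2.0, -1.0],
              [0.5, 0.0,  3.0]])

# Example mean vector and covariance matrix of x (symmetric positive definite).
m_x = np.array([1.0, -2.0, 0.5])
Sigma_x = np.array([[2.0, 0.3, 0.0],
                    [0.3, 1.0, 0.2],
                    [0.0, 0.2, 0.5]])

# Theoretical covariance of y: Sigma_y = A Sigma_x A^T
Sigma_y = A @ Sigma_x @ A.T

# Monte Carlo check: sample x, transform, compare the empirical covariance.
x = rng.multivariate_normal(m_x, Sigma_x, size=200_000)
y = x @ A.T                       # each row is one realization of y = A x
Sigma_y_emp = np.cov(y, rowvar=False)

print(np.round(Sigma_y, 3))
print(np.round(Sigma_y_emp, 3))
```

The empirical covariance agrees with ${\bf A}{\bf\Sigma}_x{\bf A}^T$ up to sampling noise, which shrinks as the sample size grows.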

In particular if $M=1$, we have

$\displaystyle y=\sum_{i=1}^N a_ix_i={\bf a}^T{\bf x}$ (70)

where ${\bf a}=[a_1,\cdots,a_N]^T$, and

$\displaystyle \sigma^2_y={\bf a}^T{\bf\Sigma}_x{\bf a}$ (71)

More specifically, if all variables $x_i$ are independent (and therefore uncorrelated), i.e., $\sigma_{ij}=0$ for all $i\ne j$ and ${\bf\Sigma}_x$ is diagonal, then

$\displaystyle \sigma^2_y={\bf a}^T{\bf\Sigma}_x{\bf a}=\sum_{i=1}^N a_i^2\sigma_{x_i}^2$ (72)
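As a small numerical sketch (with hypothetical weights $a_i$ and standard deviations $\sigma_{x_i}$), the diagonal special case of Eq. (72) agrees with the general quadratic form of Eq. (71):

```python
import numpy as np

# Hypothetical weights and standard deviations of independent x_i.
a = np.array([2.0, -1.0, 0.5])
sigma_x = np.array([0.3, 0.4, 1.0])

# Eq. (72): variance of y = a^T x when Sigma_x is diagonal.
var_y = np.sum(a**2 * sigma_x**2)

# Same result via the full quadratic form a^T Sigma_x a of Eq. (71).
Sigma_x = np.diag(sigma_x**2)
var_y_full = a @ Sigma_x @ a

print(var_y, var_y_full)
```

Here $\sigma^2_y = 4(0.09)+1(0.16)+0.25(1)=0.77$ by either route.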

We next assume ${\bf y}={\bf f}({\bf x})$ is a non-linear function of $N$ random variables $x_i$:

$\displaystyle \left\{ \begin{array}{c}
y_1=f_1({\bf x})=f_1(x_1,\cdots,x_N)\\
\cdots\cdots\cdots\\
y_M=f_M({\bf x})=f_M(x_1,\cdots,x_N)\end{array} \right.$ (73)

By first-order Taylor expansion about the mean ${\bf m}_x$, each $f_j({\bf x})$ ($j=1,\cdots,M$) can be approximated as a linear function of ${\bf x}$:

$\displaystyle y_j\approx f_j({\bf m}_x)+\sum_{i=1}^N \frac{\partial f_j}{\partial x_i}(x_i-m_{x_i}),
\;\;\;\;\;\;(j=1,\cdots,M)$ (74)

or in matrix form:

$\displaystyle {\bf y}\approx{\bf f}({\bf m}_x)+{\bf J}({\bf x}-{\bf m}_x)$ (75)

where ${\bf J}$ is the Jacobian matrix, evaluated at ${\bf m}_x$, with $\partial f_j/\partial x_i$ as its $(j,i)$th component. As the first term on the right-hand side is a constant, not a function of the random vector ${\bf x}$, it does not contribute to the covariance matrix of ${\bf y}$, and we have

$\displaystyle {\bf\Sigma}_y={\bf J}{\bf\Sigma}_x{\bf J}^T$ (76)

Again, if $M=1$ and $x_i$ are independent, then ${\bf\Sigma}_x$ is diagonal and we have

$\displaystyle \sigma^2_y=\sum_{i=1}^N \left(\frac{\partial f}{\partial x_i}\right)^2\sigma^2_{x_i}$ (77)
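A minimal sketch of Eq. (77), using the illustrative nonlinear function $f(x_1,x_2)=x_1x_2$ (not from the text) with independent $x_i$ and partial derivatives evaluated at the mean. A Monte Carlo sample checks the linearized variance:

```python
import numpy as np

rng = np.random.default_rng(1)

# Nonlinear scalar example: y = f(x1, x2) = x1 * x2, independent x_i.
m = np.array([3.0, 4.0])        # example means of x1, x2
sigma = np.array([0.05, 0.08])  # small standard deviations

# Eq. (77) with the partials evaluated at the mean:
# df/dx1 = x2 -> m[1],  df/dx2 = x1 -> m[0]
grad = np.array([m[1], m[0]])
var_y = np.sum(grad**2 * sigma**2)

# Monte Carlo check of the linearized propagation.
x = m + sigma * rng.standard_normal((500_000, 2))
y = x[:, 0] * x[:, 1]

print(var_y, y.var())
```

For small relative uncertainties the linearization is accurate; the exact variance of the product exceeds the linearized value only by the higher-order term $\sigma_{x_1}^2\sigma_{x_2}^2$.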