
Entropy, uncertainty of random experiment

A random experiment may have binary outcomes (e.g., rain or dry) or multiple outcomes. For example, a die has six possible outcomes with equal probability, and a pixel in a digital image takes one of the $2^8=256$ gray levels (0 to 255), not necessarily with equal probability. In general, these multiple outcomes can be considered as $N$ events $E_i$ with corresponding probabilities $P_i=P(E_i)$ ($i=1,\cdots,N$), which are mutually exclusive and exhaustive, i.e., $\sum_{i=1}^N P_i=1$.

The uncertainty about the outcome of such a random experiment is the sum of the uncertainty $H(E_i)=-\log P_i$ associated with each individual event $E_i$, weighted by its probability $P_i$:

\begin{displaymath}
H(E_1, \cdots, E_N)\stackrel{\triangle}{=}\sum_{i=1}^N P_i\; H(E_i)
=-\sum_{i=1}^N P_i \log P_i
\end{displaymath}

This is called the entropy, which measures the uncertainty of the random experiment. Once the result of the experiment is known, the uncertainty becomes zero; the entropy can therefore also be interpreted as the information gained from the experiment. The specific logarithmic base is not essential: if the base is 2, the unit of entropy is the bit; if the base is $e=2.71828\cdots$, the unit is the nat (or nit).
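As a quick numerical illustration of this definition, consider the following minimal Python sketch (the function name entropy and the example distributions are chosen here for illustration and are not part of the text):

\begin{verbatim}
import math

def entropy(probs, base=2.0):
    """Entropy H = -sum(P_i * log P_i) of a discrete distribution.

    Terms with P_i == 0 contribute nothing, since p*log(p) -> 0 as p -> 0.
    """
    return -sum(p * math.log(p, base) for p in probs if p > 0)

# A fair die: six equally likely outcomes, H = log2(6), about 2.585 bits
print(entropy([1/6] * 6))

# A uniform 256-level pixel: H = log2(256) = 8 bits
print(entropy([1/256] * 256))
\end{verbatim}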

For example, the weather can have two complementary and mutually exclusive outcomes: rain $E_1$ with probability $P_1$, or dry $E_2$ with probability $P_2=1-P_1$. The uncertainty of the weather is therefore the sum of the uncertainty of rainy weather and the uncertainty of dry weather, weighted by their probabilities:

\begin{eqnarray*}
H(E_1, E_2) &=& P_1\; H(E_1)+P_2\; H(E_2)=-P_1 \log P_1-P_2 \log P_2 \\
            &=& -P_1 \log P_1-(1-P_1) \log (1-P_1)
\end{eqnarray*}

In particular, consider the following cases:

- If $P_1=1,\;P_2=0$ (or $P_1=0,\;P_2=1$), the outcome is certain and $H(E_1,E_2)=0$; there is no uncertainty.
- If $P_1=P_2=0.5$, the two outcomes are equally likely and the entropy reaches its maximum, $H(E_1,E_2)=-0.5\log_2 0.5-0.5\log_2 0.5=1$ bit.
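These values can be checked numerically with a small Python sketch (the helper name binary_entropy is introduced here for illustration):

\begin{verbatim}
import math

def binary_entropy(p1):
    """H(p1) = -p1*log2(p1) - (1-p1)*log2(1-p1), with 0*log(0) taken as 0."""
    h = 0.0
    for p in (p1, 1.0 - p1):
        if p > 0:
            h -= p * math.log2(p)
    return h

for p1 in (0.0, 0.1, 0.5, 0.9, 1.0):
    print(f"P1 = {p1:.1f}  ->  H = {binary_entropy(p1):.4f} bits")
# H is 0 at P1 = 0 or 1 (no uncertainty) and peaks at 1 bit when P1 = 0.5
\end{verbatim}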


Ruye Wang 2021-03-28