The following probability density functions (pdf) are commonly
used in statistical hypothesis tests (a short simulation sketch after this list illustrates how they are related):
- Normal (Gaussian) distribution:
$$
N(x;\mu,\sigma^2)=\frac{1}{\sqrt{2\pi\sigma^2}}\,e^{-\frac{(x-\mu)^2}{2\sigma^2}}
\tag{1}
$$
If $X$ has a normal distribution denoted by $X\sim N(\mu,\sigma^2)$, then $E(X)=\mu$, $Var(X)=\sigma^2$, and
$$
Z=\frac{X-\mu}{\sigma}
\tag{2}
$$
has the standard normal distribution with $\mu=0$ and $\sigma^2=1$.
For example, if $X_1,\dots,X_n$ are independent and identically
distributed (i.i.d.) samples of $X\sim N(\mu,\sigma^2)$, then
$$
\bar{X}=\frac{1}{n}\sum_{i=1}^n X_i \sim N\!\left(\mu,\frac{\sigma^2}{n}\right)
\tag{3}
$$
- Chi-squared ($\chi^2$) distribution:
$$
\chi_k^2(x)=\frac{1}{2^{k/2}\,\Gamma(k/2)}\,x^{k/2-1}e^{-x/2},\qquad x>0
\tag{4}
$$
where $k$ is the degrees of freedom, and $\Gamma(\cdot)$ is the gamma function.
For example, if $Z_1,\dots,Z_k$ are i.i.d. samples of $Z\sim N(0,1)$, then
$$
\sum_{i=1}^k Z_i^2 \sim \chi_k^2
\tag{5}
$$
If all $X_i$ have the normal distribution with
the same variance $\sigma^2$, i.e., $X_i\sim N(\mu_i,\sigma^2)$, then
$$
\sum_{i=1}^k\left(\frac{X_i-\mu_i}{\sigma}\right)^2 \sim \chi_k^2
\tag{6}
$$
- Student's t-distribution (William Sealy Gosset):
$$
f(t)=\frac{\Gamma\!\left(\frac{\nu+1}{2}\right)}{\sqrt{\nu\pi}\,\Gamma\!\left(\frac{\nu}{2}\right)}
\left(1+\frac{t^2}{\nu}\right)^{-\frac{\nu+1}{2}}
\tag{7}
$$
where $\nu$ is the degrees of freedom. In particular, when $\nu=1$,
$$
f(t)=\frac{1}{\pi\,(1+t^2)}
\tag{8}
$$
If $Z\sim N(0,1)$ and $V\sim\chi_\nu^2$ are independent, then
$$
T=\frac{Z}{\sqrt{V/\nu}} \sim t_\nu
\tag{9}
$$
For example, if $X_1,\dots,X_n$ are i.i.d. samples of $X\sim N(\mu,\sigma^2)$, and
$\bar{X}=\frac{1}{n}\sum_{i=1}^n X_i$, $S^2=\frac{1}{n-1}\sum_{i=1}^n(X_i-\bar{X})^2$,
then
$$
\frac{(n-1)S^2}{\sigma^2}=\frac{1}{\sigma^2}\sum_{i=1}^n(X_i-\bar{X})^2 \sim \chi_{n-1}^2
\tag{10}
$$
has a chi-squared distribution with $n-1$ degrees of freedom and
$$
T=\frac{\bar{X}-\mu}{S/\sqrt{n}} \sim t_{n-1}
\tag{11}
$$
has a t-distribution with $n-1$ degrees of freedom.
The t-distribution approaches the normal distribution when
$\nu\to\infty$. In general, if $\nu>30$, the t-distribution
is close enough to the standard normal distribution.
- F-distribution (after Ronald Fisher):
$$
f(x;d_1,d_2)=\frac{1}{B\!\left(\frac{d_1}{2},\frac{d_2}{2}\right)}
\left(\frac{d_1}{d_2}\right)^{d_1/2}
x^{d_1/2-1}\left(1+\frac{d_1}{d_2}x\right)^{-\frac{d_1+d_2}{2}},\qquad x>0
\tag{12}
$$
where $d_1$ and $d_2$ are degrees of freedom, and $B(\cdot,\cdot)$ is the
Beta function.
If $U_1$ and $U_2$ are independent and have chi-squared
distributions:
$$
U_1\sim\chi_{d_1}^2,\qquad U_2\sim\chi_{d_2}^2
\tag{13}
$$
then their ratio has an F-distribution:
$$
F=\frac{U_1/d_1}{U_2/d_2}\sim F(d_1,d_2)
\tag{14}
$$
For example, if $X_1,\dots,X_n$ and $Y_1,\dots,Y_m$ are independent i.i.d. samples of
$$
X\sim N(\mu_X,\sigma^2),\qquad Y\sim N(\mu_Y,\sigma^2)
\tag{15}
$$
then
$$
\frac{(n-1)S_X^2}{\sigma^2}=\frac{1}{\sigma^2}\sum_{i=1}^n(X_i-\bar{X})^2\sim\chi_{n-1}^2,
\qquad
\frac{(m-1)S_Y^2}{\sigma^2}=\frac{1}{\sigma^2}\sum_{j=1}^m(Y_j-\bar{Y})^2\sim\chi_{m-1}^2
\tag{16}
$$
and we still have
$$
F=\frac{S_X^2}{S_Y^2}\sim F(n-1,m-1)
\tag{17}
$$
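To make these relations concrete, here is a minimal Monte Carlo sketch, assuming NumPy and SciPy are available; the parameter values, sample sizes, and random seed are arbitrary choices for illustration. It empirically checks the sample-mean distribution in (3), the chi-squared sum in (5), the t statistic in (11), and the variance ratio in (17).

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
mu, sigma, n, m, trials = 5.0, 2.0, 10, 8, 100_000

X = rng.normal(mu, sigma, size=(trials, n))   # X_1..X_n ~ N(mu, sigma^2), repeated `trials` times
Y = rng.normal(mu, sigma, size=(trials, m))   # Y_1..Y_m ~ N(mu, sigma^2)

xbar = X.mean(axis=1)                         # eq. (3): sample mean ~ N(mu, sigma^2/n)
print(xbar.mean(), xbar.var(), sigma**2 / n)  # empirical mean and variance vs. sigma^2/n

Z = rng.standard_normal(size=(trials, n))
chi2 = (Z**2).sum(axis=1)                     # eq. (5): sum of n squared N(0,1) values ~ chi^2_n
print(chi2.mean(), n)                         # the mean of chi^2_n is n

S2_x = X.var(axis=1, ddof=1)                  # unbiased sample variance of each row
T = (xbar - mu) / np.sqrt(S2_x / n)           # eq. (11): T ~ t_{n-1}
print(stats.kstest(T, stats.t(df=n - 1).cdf).pvalue)

S2_y = Y.var(axis=1, ddof=1)
F = S2_x / S2_y                               # eq. (17): variance ratio ~ F(n-1, m-1)
print(stats.kstest(F, stats.f(dfn=n - 1, dfd=m - 1).cdf).pvalue)
```

Since the simulated statistics are drawn from exactly the stated distributions, the Kolmogorov-Smirnov p-values will typically be large, indicating no detectable discrepancy.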
Assume $X$ is a random variable having
a normal probability density function (pdf) with mean $\mu$
and variance $\sigma^2$, both of which are unknown. We can estimate
the value of $\mu$ by the sample mean of a set of $n$ i.i.d.
samples $X_1,\dots,X_n$ drawn from this distribution:
$$
\bar{X}=\frac{1}{n}\sum_{i=1}^n X_i
\tag{18}
$$
This is also a normally distributed random variable with the same
mean $\mu$ as $X$ but a different variance $\sigma^2/n$. The standard
deviation $\sigma/\sqrt{n}$, called the standard error and
denoted by SE, can be considered as the variability or noise.
In particular, when $n=1$, SE is the same as the standard deviation $\sigma$
of the original distribution, but it decreases as $n$
increases, and it approaches zero as $n$ approaches infinity.
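As a quick numerical illustration of this shrinkage (a sketch assuming NumPy; the values of $\mu$, $\sigma$, the sample sizes, and the seed are made up), the empirical standard deviation of the sample mean closely tracks $\sigma/\sqrt{n}$:

```python
import numpy as np

rng = np.random.default_rng(1)
mu, sigma, trials = 0.0, 3.0, 200_000

for n in (1, 4, 16, 64):
    means = rng.normal(mu, sigma, size=(trials, n)).mean(axis=1)
    # empirical SD of the sample mean vs. the theoretical SE = sigma / sqrt(n)
    print(n, means.std(), sigma / np.sqrt(n))
```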
We can further define another random variable $Z$ by shifting $\bar{X}$
by its mean $\mu$ and scaling by its standard error $\sigma/\sqrt{n}$,
so that $Z$ has a standard normal distribution of zero mean and unit
variance:
$$
Z=\frac{\bar{X}-\mu}{\sigma/\sqrt{n}}\sim N(0,1)
\tag{19}
$$
Given $Z$, called the test statistic, we can carry out various
statistical tests called Z-tests.
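A minimal sketch of such a one-sample Z-test, assuming SciPy is available; the sample values, the hypothesized mean $\mu_0$, and the "known" $\sigma$ below are hypothetical:

```python
import numpy as np
from scipy import stats

x = np.array([5.1, 4.8, 5.6, 5.3, 4.9, 5.2, 5.4, 5.0])  # hypothetical sample
mu0, sigma = 5.0, 0.3                                    # H0 mean and (assumed known) sigma

z = (x.mean() - mu0) / (sigma / np.sqrt(len(x)))         # test statistic, eq. (19)
p_value = 2 * stats.norm.sf(abs(z))                      # two-sided p-value
print(z, p_value)
```

A small two-sided p-value would lead to rejecting the null hypothesis that the true mean equals $\mu_0$.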
If the variance $\sigma^2$ is unknown, it can be estimated by the
sample variance:
$$
S^2=\frac{1}{n-1}\sum_{i=1}^n(X_i-\bar{X})^2
\tag{20}
$$
This is an unbiased estimate of the variance, based on which we can further find the
estimated standard error (ESE) $S/\sqrt{n}$.
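For illustration, the sample variance in (20) and the resulting ESE can be computed directly (a sketch assuming NumPy; the data are the same hypothetical values used above):

```python
import numpy as np

x = np.array([5.1, 4.8, 5.6, 5.3, 4.9, 5.2, 5.4, 5.0])  # same hypothetical sample
n = len(x)

S2 = ((x - x.mean()) ** 2).sum() / (n - 1)  # sample variance, eq. (20)
assert np.isclose(S2, x.var(ddof=1))        # ddof=1 gives the same unbiased estimate
ese = np.sqrt(S2 / n)                       # estimated standard error S / sqrt(n)
print(S2, ese)
```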
Now the random variable $Z$ based on $\sigma$ can be replaced by
another random variable $T$ based on $S$:
$$
T=\frac{\bar{X}-\mu}{S/\sqrt{n}}
\tag{21}
$$
Different from $Z$ with a normal distribution, $T$, as a test
statistic, has a t-distribution with $n-1$ degrees of freedom.
Given $T$ as a test statistic, we can carry out various statistical
tests called Student's t-tests.
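A minimal sketch of a one-sample t-test for the case of unknown $\sigma$, assuming SciPy and reusing the same hypothetical data and $\mu_0$ as above; the hand-computed statistic (21) is compared against scipy.stats.ttest_1samp:

```python
import numpy as np
from scipy import stats

x = np.array([5.1, 4.8, 5.6, 5.3, 4.9, 5.2, 5.4, 5.0])  # same hypothetical sample
mu0, n = 5.0, len(x)

T = (x.mean() - mu0) / (x.std(ddof=1) / np.sqrt(n))  # test statistic, eq. (21)
p_value = 2 * stats.t.sf(abs(T), df=n - 1)           # two-sided p-value, n-1 degrees of freedom
print(T, p_value)
print(stats.ttest_1samp(x, popmean=mu0))             # should agree with the manual computation
```

The manually computed statistic and p-value should match those returned by ttest_1samp, since both implement the same formula.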