Previously we considered the constrained minimization problem:

$$\min_{\mathbf{x}} f(\mathbf{x}),\quad\text{subject to}\quad h_i(\mathbf{x})=0\;(i=1,\dots,m),\quad g_j(\mathbf{x})\le 0\;(j=1,\dots,n)\tag{198}$$
with its Lagrangian:

$$L(\mathbf{x},\boldsymbol{\lambda},\boldsymbol{\mu})=f(\mathbf{x})+\sum_{i=1}^m\lambda_i h_i(\mathbf{x})+\sum_{j=1}^n\mu_j g_j(\mathbf{x})\tag{199}$$

As this is a minimization problem, the multipliers for the inequality constraints satisfy $\mu_j\ge 0$, as indicated in Table 188.
This problem is called a primal problem, as there also exists an equivalent dual problem, which may be easier to solve. We first define the dual function as the infimum (greatest lower bound) of the Lagrangian over the variable $\mathbf{x}$ to be minimized:

$$d(\boldsymbol{\lambda},\boldsymbol{\mu})=\inf_{\mathbf{x}}L(\mathbf{x},\boldsymbol{\lambda},\boldsymbol{\mu})\tag{200}$$

The optimal $\mathbf{x}$ that minimizes this Lagrangian can be found by solving the equation $\nabla_{\mathbf{x}}L(\mathbf{x},\boldsymbol{\lambda},\boldsymbol{\mu})=\mathbf{0}$.
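As a concrete illustration (a toy problem assumed here, not taken from the text), consider minimizing $f(x)=x^2$ subject to $g(x)=1-x\le 0$. The Lagrangian is $L(x,\mu)=x^2+\mu(1-x)$; solving $\partial L/\partial x=2x-\mu=0$ gives $x=\mu/2$, so the dual function is $d(\mu)=\mu-\mu^2/4$. A minimal sketch comparing this closed form against a numerical infimum:

```python
# Toy example (not from the text): f(x) = x^2, g(x) = 1 - x <= 0.
# Lagrangian: L(x, mu) = x^2 + mu * (1 - x); its infimum over x is
# attained at x = mu/2, giving the dual function d(mu) = mu - mu^2/4.
from scipy.optimize import minimize_scalar

def lagrangian(x, mu):
    return x**2 + mu * (1.0 - x)

def dual(mu):
    # Numerical infimum of the Lagrangian over x for a fixed multiplier mu.
    return minimize_scalar(lambda x: lagrangian(x, mu)).fun

for mu in [0.0, 1.0, 2.0, 3.0]:
    closed_form = mu - mu**2 / 4.0
    assert abs(dual(mu) - closed_form) < 1e-6
```

Maximizing $d(\mu)=\mu-\mu^2/4$ over $\mu\ge 0$ gives $\mu^*=2$ and $d^*=1$, which matches the primal optimum $x^*=1$, $f(x^*)=1$.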
Moreover, to find the tightest lower bound, i.e., the supremum (least upper bound) of the dual function $d(\boldsymbol{\lambda},\boldsymbol{\mu})$ over its variables $\boldsymbol{\lambda}$ and $\boldsymbol{\mu}$, we solve the following dual problem of the original primal problem:

$$\max_{\boldsymbol{\lambda},\boldsymbol{\mu}}d(\boldsymbol{\lambda},\boldsymbol{\mu}),\quad\text{subject to}\quad\mu_j\ge 0\;(j=1,\dots,n)\tag{201}$$
Here $\lambda_i$ for the equality constraints of the primal problem can be either positive or negative, i.e., it is not constrained in the dual problem. However, the sign of $\mu_j$ for the inequality constraints of the primal minimization problem needs to be such that $d(\boldsymbol{\lambda},\boldsymbol{\mu})\le f(\mathbf{x})$ according to Eq. (189), i.e., $\mu_j\ge 0$ is subject to the inequality constraint of the dual problem. As a typical example,
the support vector machine (SVM) algorithm in machine learning is a minimization problem, based on a set of given training data $\{(\mathbf{x}_n,y_n),\;n=1,\dots,N\}$:

$$\min_{\mathbf{w},b}\frac{1}{2}\|\mathbf{w}\|^2,\quad\text{subject to}\quad y_n(\mathbf{w}^T\mathbf{x}_n+b)\ge 1\;(n=1,\dots,N)\tag{202}$$

the solution of which is actually obtained by solving its dual problem:

$$\max_{\boldsymbol{\alpha}}\;\sum_{n=1}^N\alpha_n-\frac{1}{2}\sum_{m=1}^N\sum_{n=1}^N\alpha_m\alpha_n y_m y_n\mathbf{x}_m^T\mathbf{x}_n,\quad\text{subject to}\quad\alpha_n\ge 0,\;\sum_{n=1}^N\alpha_n y_n=0\tag{203}$$
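A sketch of solving the SVM dual (203) numerically, using a tiny separable 2-D data set assumed for illustration (the data, the SLSQP solver choice, and the tolerances are assumptions, not from the text); the primal weights are then recovered as $\mathbf{w}=\sum_n\alpha_n y_n\mathbf{x}_n$:

```python
# Sketch (assumed toy data): solve the SVM dual with scipy's SLSQP solver,
# then recover the primal solution w and b from the dual variables alpha.
import numpy as np
from scipy.optimize import minimize

X = np.array([[1.0, 1.0], [2.0, 2.0], [-1.0, -1.0], [-2.0, -1.0]])
y = np.array([1.0, 1.0, -1.0, -1.0])
N = len(y)
G = (y[:, None] * X) @ (y[:, None] * X).T   # G[m, n] = y_m y_n x_m^T x_n

def neg_dual(a):
    # Negate the dual objective so the maximization becomes a minimization.
    return 0.5 * a @ G @ a - a.sum()

res = minimize(neg_dual, np.zeros(N), method="SLSQP",
               bounds=[(0, None)] * N,
               constraints={"type": "eq", "fun": lambda a: a @ y})
alpha = res.x
w = (alpha * y) @ X                  # w = sum_n alpha_n y_n x_n
i = np.argmax(alpha)                 # a support vector has alpha_n > 0
b = y[i] - X[i] @ w                  # bias from a support vector
assert np.all(y * (X @ w + b) >= 1 - 1e-3)   # margin constraints hold
```

Only the support vectors (here the two closest points, with $\alpha_n>0$) contribute to $\mathbf{w}$; the multipliers of the remaining points vanish by complementarity.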
We further note that the following holds for any $\mathbf{x}$ in the feasible region defined by $h_i(\mathbf{x})=0$ and $g_j(\mathbf{x})\le 0$, and any $\boldsymbol{\lambda}$ and $\boldsymbol{\mu}\ge\mathbf{0}$ in their feasible region:

$$d(\boldsymbol{\lambda},\boldsymbol{\mu})=\inf_{\mathbf{x}'}L(\mathbf{x}',\boldsymbol{\lambda},\boldsymbol{\mu})\le L(\mathbf{x},\boldsymbol{\lambda},\boldsymbol{\mu})=f(\mathbf{x})+\sum_{i=1}^m\lambda_i h_i(\mathbf{x})+\sum_{j=1}^n\mu_j g_j(\mathbf{x})\le f(\mathbf{x})\tag{204}$$
This inequality is satisfied by any $\mathbf{x}$, $\boldsymbol{\lambda}$ and $\boldsymbol{\mu}$, including the solutions $\mathbf{x}^*$, $\boldsymbol{\lambda}^*$ and $\boldsymbol{\mu}^*$. We denote the minimum of the objective function of the primal problem by $p^*=f(\mathbf{x}^*)$, and the maximum of the objective function of the dual problem by $d^*=d(\boldsymbol{\lambda}^*,\boldsymbol{\mu}^*)$; then we have:

$$d^*=d(\boldsymbol{\lambda}^*,\boldsymbol{\mu}^*)\le f(\mathbf{x}^*)=p^*\tag{205}$$
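The bound $d^*\le p^*$ can be checked numerically on a small example (a toy problem assumed here, not from the text): minimize $f(x)=(x-2)^2$ subject to $g(x)=x-1\le 0$. The primal optimum is $x^*=1$ with $p^*=1$; from $x=2-\mu/2$ the dual function is $d(\mu)=\mu-\mu^2/4$, maximized at $\mu^*=2$ with $d^*=1$:

```python
# Toy check (not from the text) of weak duality d* <= p* in Eq. (205):
# minimize f(x) = (x - 2)^2 subject to g(x) = x - 1 <= 0.
from scipy.optimize import minimize_scalar

# Primal optimum: minimize f over the feasible region x <= 1.
p_star = minimize_scalar(lambda x: (x - 2.0)**2,
                         bounds=(-10.0, 1.0), method="bounded").fun
# Dual optimum: maximize d(mu) = mu - mu^2/4 over mu >= 0.
d_star = -minimize_scalar(lambda mu: -(mu - mu**2 / 4.0),
                          bounds=(0.0, 10.0), method="bounded").fun
assert d_star <= p_star + 1e-4       # weak duality holds
```

Here the duality gap $p^*-d^*$ turns out to be zero, i.e., this convex problem exhibits the strong duality discussed next.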
Similarly, we can also show that a primal maximization problem with solution $p^*=f(\mathbf{x}^*)$ has a dual minimization problem with solution $d^*=d(\boldsymbol{\lambda}^*,\boldsymbol{\mu}^*)$ that satisfies:

$$d^*=d(\boldsymbol{\lambda}^*,\boldsymbol{\mu}^*)\ge f(\mathbf{x}^*)=p^*\tag{206}$$
We therefore conclude that the solution of the dual problem always provides the tightest (lower or upper) bound for the solution of the primal (minimization or maximization) problem. The difference $p^*-d^*$ for the minimization primal problem, or $d^*-p^*$ for the maximization primal problem, is called the duality gap. If the duality gap is non-zero, i.e., $d^*\ne p^*$, the dual relationship is a weak duality; otherwise it is a strong duality, for which the following holds:

$$d^*=d(\boldsymbol{\lambda}^*,\boldsymbol{\mu}^*)=L(\mathbf{x}^*,\boldsymbol{\lambda}^*,\boldsymbol{\mu}^*)=f(\mathbf{x}^*)=p^*\tag{207}$$

which further implies that all of the following equations must hold:

$$\nabla_{\mathbf{x}}L(\mathbf{x}^*,\boldsymbol{\lambda}^*,\boldsymbol{\mu}^*)=\mathbf{0},\quad h_i(\mathbf{x}^*)=0,\quad g_j(\mathbf{x}^*)\le 0,\quad \mu_j^*\ge 0,\quad \mu_j^*\,g_j(\mathbf{x}^*)=0\tag{208}$$
These are called the Karush-Kuhn-Tucker (KKT) conditions, the necessary conditions for the solution of a constrained optimization problem. In other words, for any $\mathbf{x}^*$ to be a solution, it must satisfy all of the following:
- Stationarity: $\mathbf{x}^*$ maximizes or minimizes the Lagrangian, i.e., $\nabla_{\mathbf{x}}L(\mathbf{x}^*,\boldsymbol{\lambda}^*,\boldsymbol{\mu}^*)=\mathbf{0}$;
- Primal feasibility: $\mathbf{x}^*$ must be inside the feasible region, i.e., $h_i(\mathbf{x}^*)=0$ and $g_j(\mathbf{x}^*)\le 0$;
- Dual feasibility: all multipliers $\mu_j^*$ for the inequality constraints must take the proper sign as specified by Table 188, i.e., $\mu_j^*\ge 0$ for a minimization problem;
- Complementarity: one of $\mu_j^*$ and $g_j(\mathbf{x}^*)$ must be zero, i.e., $\mu_j^*\,g_j(\mathbf{x}^*)=0$, for the following two possible cases:
  - the $j$th constraint is inactive: $g_j(\mathbf{x}^*)<0$ and $\mu_j^*=0$;
  - the $j$th constraint is active: $g_j(\mathbf{x}^*)=0$ and $\mu_j^*$ can be nonzero.
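The four conditions can be verified mechanically at a known solution. As a sketch (the example problem is assumed, not from the text), take the problem of minimizing $f(\mathbf{x})=x_1^2+x_2^2$ subject to $g(\mathbf{x})=1-x_1-x_2\le 0$, whose optimum is $\mathbf{x}^*=(1/2,1/2)$ with $\mu^*=1$:

```python
# Check the KKT conditions at the solution of an assumed toy problem:
# minimize f(x) = x1^2 + x2^2  subject to  g(x) = 1 - x1 - x2 <= 0.
# The optimum is x* = (1/2, 1/2) with multiplier mu* = 1.
import numpy as np

x_star = np.array([0.5, 0.5])
mu_star = 1.0

grad_f = 2.0 * x_star                  # gradient of the objective at x*
grad_g = np.array([-1.0, -1.0])        # gradient of the constraint g
g_val = 1.0 - x_star.sum()             # constraint value g(x*)

# Stationarity: grad f + mu* grad g = 0
assert np.allclose(grad_f + mu_star * grad_g, 0.0)
# Primal feasibility: g(x*) <= 0
assert g_val <= 1e-12
# Dual feasibility: mu* >= 0
assert mu_star >= 0.0
# Complementarity: mu* g(x*) = 0 (the constraint is active here)
assert abs(mu_star * g_val) < 1e-12
```

Here the constraint is active ($g(\mathbf{x}^*)=0$), so complementarity holds with a nonzero multiplier; had the unconstrained minimum been feasible, the multiplier would vanish instead.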