An optimization problem subject to inequality constraints can be generally formulated as:

$$\mathbf{x}^* = \arg\,\underset{\mathbf{x}}{\operatorname{ext}}\, f(\mathbf{x}) \qquad \text{subject to} \qquad g_k(\mathbf{x}) \ge 0 \;\;(\text{or } \le 0), \quad k = 1, \dots, K \tag{185}$$
Again, to visualize the problem we first consider an example with $N = 2$ and $K = 1$, as shown in the figure below for the minimization (left) and maximization (right) of $f(x, y)$ subject to $g(x, y) \ge 0$. The constrained solution $\mathbf{x}^*$ is on the boundary of the feasible region, satisfying $g(\mathbf{x}^*) = 0$, while the unconstrained extremum is outside the feasible region.
Consider the following two possible cases.
- First, if the unconstrained extremum $\bar{\mathbf{x}}$, at which $\nabla f(\bar{\mathbf{x}}) = \mathbf{0}$, is outside the feasible region, i.e., the inequality constraint is active, then the constrained solution $\mathbf{x}^*$ must be
  - on the boundary of the feasible region, i.e., $g(\mathbf{x}^*) = 0$;
  - different from the unconstrained solution, i.e., $\mathbf{x}^* \ne \bar{\mathbf{x}}$.
The problem becomes the same as the equality-constrained problem considered before, which requires the objective function $f(\mathbf{x})$ and the constraining function $g(\mathbf{x})$ to have the same tangent at $\mathbf{x}^*$, or parallel gradients:

$$\nabla f(\mathbf{x}^*) = \lambda\, \nabla g(\mathbf{x}^*), \qquad \text{or} \qquad \nabla f(\mathbf{x}^*) - \lambda\, \nabla g(\mathbf{x}^*) = \mathbf{0} \tag{186}$$
However, unlike before, we are now also concerned with whether the two gradients point in the same or opposite directions, corresponding to $\lambda > 0$ or $\lambda < 0$, respectively, as illustrated in the 1-D examples in the figure below:
Depending on whether $f(\mathbf{x})$ is to be maximized or minimized, and whether the constraint is $g(\mathbf{x}) \ge 0$ or $g(\mathbf{x}) \le 0$, there exist four possible cases in terms of the sign of $\lambda$:

$$\lambda > 0: \;\nabla f \text{ and } \nabla g \text{ in the same direction}; \qquad \lambda < 0: \;\nabla f \text{ and } \nabla g \text{ in opposite directions} \tag{187}$$

as summarized in the table below:

$$\begin{array}{c|cc}
 & g(\mathbf{x}) \ge 0 & g(\mathbf{x}) \le 0 \\ \hline
\text{maximize } f(\mathbf{x}) & \lambda < 0 \;(\text{case 1}) & \lambda > 0 \;(\text{case 2}) \\
\text{minimize } f(\mathbf{x}) & \lambda > 0 \;(\text{case 3}) & \lambda < 0 \;(\text{case 4})
\end{array} \tag{188}$$
We can also consider the product of $\lambda$ and $g(\mathbf{x})$ over the feasible region, for either maximization or minimization:

$$\lambda\, g(\mathbf{x}) \le 0 \;\;(\text{maximization}), \qquad \lambda\, g(\mathbf{x}) \ge 0 \;\;(\text{minimization}) \tag{189}$$
- Second, if the unconstrained extremum $\bar{\mathbf{x}}$ is inside the feasible region, i.e., the inequality constraint is inactive, then the problem is actually unconstrained and the results above are no longer valid. The solution $\mathbf{x}^*$ of this unconstrained problem is
  - the same as the unconstrained solution, i.e., $\mathbf{x}^* = \bar{\mathbf{x}}$;
  - not on the boundary of the feasible region, i.e., $g(\mathbf{x}^*) \ne 0$.

Now we need to assume $\lambda = 0$ and find the solution by solving the same equation $\nabla f(\mathbf{x}) - \lambda\, \nabla g(\mathbf{x}) = \nabla f(\mathbf{x}) = \mathbf{0}$.
Summarizing the two cases above, we see that $\lambda \ne 0$ but $g(\mathbf{x}^*) = 0$ in the first case, and $\lambda = 0$ but $g(\mathbf{x}^*) \ne 0$ in the second case, i.e., the following holds in either case:

$$\lambda\, g(\mathbf{x}^*) = 0 \tag{190}$$
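As a quick numerical check of condition (190), the sketch below works through both cases on a hypothetical instance; the particular objective $f(x, y) = x^2 + y^2$ and the linear constraints are assumptions chosen for illustration:

```python
import numpy as np

# Hypothetical instance: minimize f(x, y) = x^2 + y^2.
# Case 1 (constraint active): the feasible region g(x, y) = x + y - 1 >= 0
# excludes the unconstrained minimum (0, 0).  Stationarity
# grad f = lambda * grad g, together with g = 0, is the linear system
#   2x - lambda = 0
#   2y - lambda = 0
#   x + y       = 1
A = np.array([[2.0, 0.0, -1.0],
              [0.0, 2.0, -1.0],
              [1.0, 1.0,  0.0]])
b = np.array([0.0, 0.0, 1.0])
x, y, lam = np.linalg.solve(A, b)   # x = y = 0.5, lam = 1.0
g = x + y - 1.0
assert lam > 0                      # minimization with g >= 0: lambda > 0
assert abs(lam * g) < 1e-12         # lambda != 0 but g = 0, so (190) holds

# Case 2 (constraint inactive): with g(x, y) = x + y + 1 >= 0 the
# unconstrained minimum (0, 0) is feasible, so lambda = 0 while g != 0.
x2, y2, lam2 = 0.0, 0.0, 0.0
g2 = x2 + y2 + 1.0
assert lam2 * g2 == 0.0             # lambda = 0 but g != 0, so (190) holds
```

In both cases the product $\lambda g$ vanishes, but for opposite reasons: the active case forces $g = 0$, while the inactive case forces $\lambda = 0$.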
The discussion above can be generalized from 2-D to $N$-dimensional space, in which the optimal solution $\mathbf{x}^* = [x_1, \dots, x_N]^T$ is to be found to extremize the objective $f(\mathbf{x})$ subject to $K$ inequality constraints $g_k(\mathbf{x}) \ge 0$ (or $\le 0$), $k = 1, \dots, K$. To solve this inequality-constrained optimization problem, we first construct the Lagrangian:

$$L(\mathbf{x}, \boldsymbol{\lambda}) = f(\mathbf{x}) - \sum_{k=1}^K \lambda_k\, g_k(\mathbf{x}) \tag{191}$$
We note that in some of the literature a plus sign is used in front of the summation in the second term. This is equivalent to our discussion here so long as the sign of $\lambda_k$ indicated in Table 188 is negated.
We now set the gradient of the Lagrangian to zero:

$$\nabla_{\mathbf{x}, \boldsymbol{\lambda}}\, L(\mathbf{x}, \boldsymbol{\lambda}) = \mathbf{0} \tag{192}$$

and get two systems of $N$ and $K$ equations, respectively:

$$\frac{\partial L}{\partial x_n} = \frac{\partial f(\mathbf{x})}{\partial x_n} - \sum_{k=1}^K \lambda_k \frac{\partial g_k(\mathbf{x})}{\partial x_n} = 0, \qquad n = 1, \dots, N \tag{193}$$

and

$$\frac{\partial L}{\partial \lambda_k} = -g_k(\mathbf{x}) = 0, \qquad k = 1, \dots, K \tag{194}$$
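To make the combined system (193)-(194) concrete, here is a minimal sketch that solves its $N + K = 3$ equations numerically with `scipy.optimize.fsolve`; the objective and constraint are hypothetical choices, assuming the constraint is active:

```python
import numpy as np
from scipy.optimize import fsolve

# Hypothetical instance with N = 2 and K = 1:
#   minimize f(x, y) = (x - 2)^2 + (y - 1)^2
#   subject to g(x, y) = 1 - x^2 - y^2 >= 0   (unit disk).
# The unconstrained minimum (2, 1) lies outside the disk, so the
# constraint is active and we solve (193) and (194) together.
def kkt(z):
    x, y, lam = z
    return [2 * (x - 2) - lam * (-2 * x),   # dL/dx = 0      (193)
            2 * (y - 1) - lam * (-2 * y),   # dL/dy = 0      (193)
            1 - x**2 - y**2]                # g(x, y) = 0    (194)

x, y, lam = fsolve(kkt, x0=[1.0, 1.0, 1.0])
# Analytically: x = 2/sqrt(5), y = 1/sqrt(5), lam = sqrt(5) - 1 > 0,
# the sign required by Table 188 for minimization with g >= 0.
print(x, y, lam)
```

The solver returns the closest point of the disk to $(2, 1)$, with a positive multiplier as the table predicts.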
The result above for the inequality-constrained problem is the same as that for the equality-constrained problem considered before. However, there is an additional requirement regarding the sign of the scaling coefficients. For an equality-constrained problem, the direction of the gradient $\nabla g(\mathbf{x})$ is of no concern, i.e., the sign of $\lambda$ is unrestricted; but here, for an inequality-constrained problem, the sign of each $\lambda_k$ needs to be consistent with those shown in Table 188, otherwise the corresponding constraint may be inactive.
We now consider the general optimization of an N-D objective function $f(\mathbf{x})$ subject to multiple constraints of both equalities and inequalities:

$$\underset{\mathbf{x}}{\operatorname{ext}}\, f(\mathbf{x}) \qquad \text{subject to} \qquad g_i(\mathbf{x}) = 0 \;\;(i = 1, \dots, I), \qquad h_j(\mathbf{x}) \ge 0 \;\;(j = 1, \dots, J) \tag{195}$$

For notational convenience, we represent the equality and inequality constraints in vector form as:

$$\mathbf{g}(\mathbf{x}) = \mathbf{0}, \qquad \mathbf{h}(\mathbf{x}) \ge \mathbf{0} \tag{196}$$

where $\mathbf{g}(\mathbf{x}) = [g_1(\mathbf{x}), \dots, g_I(\mathbf{x})]^T$ and $\mathbf{h}(\mathbf{x}) = [h_1(\mathbf{x}), \dots, h_J(\mathbf{x})]^T$.
To solve this optimization problem, we first construct the Lagrangian

$$L(\mathbf{x}, \boldsymbol{\lambda}, \boldsymbol{\mu}) = f(\mathbf{x}) - \boldsymbol{\lambda}^T \mathbf{g}(\mathbf{x}) - \boldsymbol{\mu}^T \mathbf{h}(\mathbf{x}) = f(\mathbf{x}) - \sum_{i=1}^I \lambda_i\, g_i(\mathbf{x}) - \sum_{j=1}^J \mu_j\, h_j(\mathbf{x}) \tag{197}$$

where the Lagrange multipliers in $\boldsymbol{\lambda} = [\lambda_1, \dots, \lambda_I]^T$ and $\boldsymbol{\mu} = [\mu_1, \dots, \mu_J]^T$ are for the equality and inequality constraints, respectively, and then set its gradient with respect to $\mathbf{x}$ as well as $\boldsymbol{\lambda}$ and $\boldsymbol{\mu}$ to zero. The solution can then be obtained by solving the resulting equation system. While each $\lambda_i$ can be either positive or negative, the sign of each $\mu_j$ needs to be consistent with those specified in Table 188; otherwise the corresponding inequality constraint is inactive.
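As a practical aside, problems of the general form (195) can also be handed to an off-the-shelf solver. The sketch below uses `scipy.optimize.minimize` (which applies SLSQP when constraints are given) on a hypothetical instance with one equality and one inequality constraint; note that SciPy's `"ineq"` convention, `fun(x) >= 0`, matches $\mathbf{h}(\mathbf{x}) \ge \mathbf{0}$ here:

```python
import numpy as np
from scipy.optimize import minimize

# Hypothetical instance:
#   minimize f(x, y, z) = x^2 + y^2 + z^2
#   subject to g(x, y, z) = x + y + z - 1 = 0   (equality)
#          and h(x, y, z) = x - 1/2      >= 0   (inequality).
res = minimize(
    lambda v: v @ v,                 # f(x) = ||x||^2
    x0=np.zeros(3),
    constraints=[
        {"type": "eq",   "fun": lambda v: v[0] + v[1] + v[2] - 1},
        {"type": "ineq", "fun": lambda v: v[0] - 0.5},
    ],
)
# The inequality is active: x is pinned at 1/2, and the remaining
# mass is split evenly, giving (1/2, 1/4, 1/4).
print(res.x)
```

Checking by hand: stationarity gives $2x = \lambda + \mu$, $2y = 2z = \lambda$; with $x = 1/2$ and $y + z = 1/2$ this yields $\mu = 1/2 > 0$, consistent with Table 188 for minimization with $h \ge 0$.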
Example:
Find the extremum of $f(x, y) = x^2 + y^2$ subject to each of the three different constraints: $g(x, y) = x + y - 1 = 0$, $g(x, y) \ge 0$, and $g(x, y) \le 0$.

- The Lagrangian for the equality constraint $g(x, y) = 0$ is:

  $$L(x, y, \lambda) = x^2 + y^2 - \lambda (x + y - 1)$$

  Solving the following equations

  $$\frac{\partial L}{\partial x} = 2x - \lambda = 0, \qquad \frac{\partial L}{\partial y} = 2y - \lambda = 0, \qquad \frac{\partial L}{\partial \lambda} = -(x + y - 1) = 0$$

  we get $x = y = 1/2$ and $\lambda = 1$, i.e., the function is minimized at $f(1/2, 1/2) = 1/2$. We note that $f$ and $g$ have the same gradients at this point:

  $$\nabla f = [2x, 2y]^T = [1, 1]^T = \nabla g$$
- Consider these two problems: the minimization of $f(x, y)$ subject to $g(x, y) \ge 0$ (left), and the maximization of $f(x, y)$ subject to $g(x, y) \le 0$ (right). These problems are cases 2 and 3 in Table 188. They have the same Lagrangian as in the previous case and therefore the same results $x = y = 1/2$ and $\lambda = 1 > 0$. According to Table 188, $(1/2, 1/2)$ is the solution for the minimization problem subject to $g(x, y) \ge 0$ (left), and for the maximization problem subject to $g(x, y) \le 0$ (right).
- Consider these two problems: the maximization of $f(x, y)$ subject to $g(x, y) \ge 0$ (left), and the minimization of $f(x, y)$ subject to $g(x, y) \le 0$ (right). These problems are cases 1 and 4 in Table 188. They have the same Lagrangian as in the previous cases and therefore the same results $x = y = 1/2$ and $\lambda = 1 > 0$. According to Table 188, $(1/2, 1/2)$ is not the solution of either the maximization problem subject to $g(x, y) \ge 0$ (left) or the minimization problem subject to $g(x, y) \le 0$ (right), i.e., the constraint is inactive. We therefore need to assume $\lambda = 0$ and solve the following equations for an unconstrained problem:

  $$\frac{\partial f}{\partial x} = 2x = 0, \qquad \frac{\partial f}{\partial y} = 2y = 0$$

  Now we get $x = y = 0$ and $\lambda = 0$, with minimum $f(0, 0) = 0$. This is the solution for the minimization problem (right), at which $g(0, 0) = -1 < 0$, i.e., the constraint is inactive. However, this is not the solution for the maximization problem (left), as the function is not bounded from above.
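The conditions in the example can be rechecked numerically. The following short sketch verifies the stationarity, sign, and complementary-slackness conditions for both the active and inactive cases, using the same $f$ and $g$ as above:

```python
import numpy as np

f = lambda x, y: x**2 + y**2          # objective
g = lambda x, y: x + y - 1            # constraint function

# Active case: stationary point of the Lagrangian on the boundary.
#   2x = lambda, 2y = lambda, x + y = 1  ->  x = y = 1/2, lambda = 1.
x, y, lam = 0.5, 0.5, 1.0
assert np.allclose([2 * x, 2 * y], [lam, lam])  # grad f = lambda * grad g
assert g(x, y) == 0 and lam > 0       # lambda > 0: min with g >= 0 (case 3)

# Inactive case (minimization subject to g <= 0): the unconstrained
# minimum (0, 0) is feasible, so lambda = 0 and g(0, 0) = -1 != 0.
assert g(0, 0) == -1 and f(0, 0) == 0
assert 0.0 * g(0, 0) == 0             # complementary slackness (190)
```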