An optimization problem becomes more complicated if it is
constrained, i.e., the arguments $\mathbf{x}$ of the objective
function $f(\mathbf{x})$ are subject to (abbreviated as s.t. in
the following) certain equality and/or inequality constraints
on the values they can take. Such a generic constrained
optimization problem can be formulated as:
$$
\begin{aligned}
\min_{\mathbf{x}}\ & f(\mathbf{x}) \\
\text{s.t.}\ & g_i(\mathbf{x}) \ge 0, \quad i = 1, \dots, m, \\
& h_j(\mathbf{x}) = 0, \quad j = 1, \dots, n
\end{aligned}
\tag{151}
$$
The set composed of all values of $\mathbf{x}$ that satisfy the
constraints is called the \emph{feasible region} of the problem. The
goal is to find a point $\mathbf{x}^*$ in the feasible region at which
$f(\mathbf{x})$ is an extremum.
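The membership test implied by this definition can be sketched in a few lines of Python (the constraint functions below are hypothetical illustrations, and $g_i(\mathbf{x}) \ge 0$ is taken as the inequality convention):

```python
# A minimal sketch of testing whether a point lies in the feasible
# region: every inequality constraint g_i(x) >= 0 and every equality
# constraint h_j(x) = 0 (within a tolerance) must hold.

def is_feasible(x, ineq, eq, tol=1e-9):
    """Return True if x satisfies all g(x) >= 0 in ineq and h(x) = 0 in eq."""
    return (all(g(x) >= -tol for g in ineq) and
            all(abs(h(x)) <= tol for h in eq))

# Hypothetical example: the feasible region is the arc of the unit
# circle (equality x1^2 + x2^2 - 1 = 0) lying in the first quadrant.
ineq = [lambda x: x[0], lambda x: x[1]]        # x1 >= 0, x2 >= 0
eq   = [lambda x: x[0]**2 + x[1]**2 - 1.0]     # on the unit circle

print(is_feasible([0.6, 0.8], ineq, eq))   # True: on circle, first quadrant
print(is_feasible([0.6, -0.8], ineq, eq))  # False: violates x2 >= 0
```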
In the following we will first consider the most general case
where the objective function $f(\mathbf{x})$ is nonlinear and so
are the constraint functions $g_i(\mathbf{x})$ and $h_j(\mathbf{x})$.
The process of solving such a problem is called
\emph{nonlinear programming (NLP)}. Here \emph{programming} means
the mathematical procedure for solving the optimization problem.
We will consider the general method of Lagrange multipliers and a
set of conditions that all solutions of such problems must satisfy.
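As a minimal worked illustration of the Lagrange multiplier method (the function and constraint below are a hypothetical example, not taken from the text): minimize $f(x, y) = x^2 + y^2$ subject to the single equality constraint $h(x, y) = x + y - 1 = 0$.

```latex
% Form the Lagrangian by adding the constraint weighted by \lambda:
L(x, y, \lambda) = x^2 + y^2 + \lambda (x + y - 1)
% Set all partial derivatives to zero:
\frac{\partial L}{\partial x} = 2x + \lambda = 0, \qquad
\frac{\partial L}{\partial y} = 2y + \lambda = 0, \qquad
\frac{\partial L}{\partial \lambda} = x + y - 1 = 0
% The first two equations give x = y = -\lambda/2; substituting into
% the constraint yields x = y = 1/2 and \lambda = -1, so the
% constrained minimum is f(1/2, 1/2) = 1/2.
```

Note that the unconstrained minimum $f(0,0) = 0$ is infeasible; the multiplier $\lambda$ measures how the optimal value changes as the constraint is relaxed.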
We will then consider some specific algorithms for two special cases:
\emph{linear programming (LP)} in Section \ref{LinearProgramming}
for problems where the objective function and all constraint
functions are linear (the feasible region is a polytope), and then
\emph{quadratic programming (QP)} in Section \ref{quadraticProgram}
for problems where the objective function is quadratic while the
constraints are linear.
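Because the feasible region of an LP is a polytope and a linear objective attains its extremum at a vertex, a toy two-variable LP can be solved by brute-force vertex enumeration. A minimal sketch (the objective and constraints are made-up illustrative data; practical solvers such as the simplex method search the vertices far more efficiently):

```python
# Maximize 3*x + 2*y subject to A @ [x, y] <= b, where the rows of A
# also encode x >= 0 and y >= 0.  Candidate vertices are intersections
# of pairs of constraint boundaries; keep the feasible ones and pick
# the one with the largest objective value.
from itertools import combinations

A = [(1.0, 1.0), (1.0, 0.0), (-1.0, 0.0), (0.0, -1.0)]  # x+y<=4, x<=3, x>=0, y>=0
b = [4.0, 3.0, 0.0, 0.0]
c = (3.0, 2.0)                                          # objective coefficients

def solve_2x2(r1, b1, r2, b2):
    """Intersect the two boundary lines r1.v = b1 and r2.v = b2 (Cramer's rule)."""
    det = r1[0] * r2[1] - r1[1] * r2[0]
    if abs(det) < 1e-12:
        return None                        # parallel boundaries, no vertex
    return ((b1 * r2[1] - b2 * r1[1]) / det,
            (r1[0] * b2 - r2[0] * b1) / det)

best = None
for i, j in combinations(range(len(A)), 2):
    v = solve_2x2(A[i], b[i], A[j], b[j])
    if v is None:
        continue
    # Keep the candidate only if it satisfies every constraint.
    if all(A[k][0] * v[0] + A[k][1] * v[1] <= b[k] + 1e-9 for k in range(len(A))):
        val = c[0] * v[0] + c[1] * v[1]
        if best is None or val > best[0]:
            best = (val, v)

print(best)   # optimum value 11 at the vertex (3, 1)
```

A QP over the same polytope would differ only in that the objective is quadratic, so its optimum need not lie at a vertex.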