Constrained Optimization

An optimization problem is more complicated if it is constrained, i.e., the arguments ${\bf x}=[x_1,\cdots,x_N]^T$ of the objective function $f({\bf x})$ are subject to certain equality or inequality constraints that restrict the values they can take. Such a constrained optimization problem can be formulated as:

$\displaystyle \left\{\begin{array}{ll}
\mbox{optimize} & f({\bf x})=f(x_1,\cdots,x_N)\\
\mbox{subject to} & g_i({\bf x})=0,\;\;\;\;\;(i=1,\cdots,m)\\
& h_j({\bf x})\le 0,\;\;\;\;\;(j=1,\cdots,n)
\end{array}\right.$ (172)

The set of all values of ${\bf x}$ satisfying the $m+n$ constraints is called the feasible region of the problem. The goal is to find a point ${\bf x}^*$ in the feasible region at which $f({\bf x}^*)$ is an extremum.
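For example (an illustrative case not in the original text), consider minimizing $f(x_1,x_2)=x_1^2+x_2^2$ subject to the single equality constraint $g_1(x_1,x_2)=x_1+x_2-1=0$. The feasible region is the line $x_1+x_2=1$, and the minimum of $f$ over this line is attained at the point closest to the origin, ${\bf x}^*=[1/2,\;1/2]^T$, with $f({\bf x}^*)=1/2$.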

In the most general case, where the objective function $f({\bf x})$ and the constraint functions $g_i({\bf x})$ and $h_j({\bf x})$ are nonlinear, the process of solving this nonlinear optimization problem is called nonlinear programming (NLP). Specifically, if the objective function is quadratic while the constraints are linear (so the feasible region is a polytope), the process is called quadratic programming (QP). More specifically, if both the objective function and the constraints are linear, the process is called linear programming (LP). In the following, we will first consider NLP in general, and then discuss QP and LP more specifically.
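As a concrete illustration (not part of the original text), the following minimal sketch solves a small NLP instance of Eq. (172) numerically with SciPy's SLSQP solver; the particular $f$, $g_1$ and $h_1$ are made up for the example. Note that SciPy expects inequality constraints in the form $c({\bf x})\ge 0$, so $h_1({\bf x})\le 0$ is passed as $-h_1({\bf x})\ge 0$.

  # Minimal sketch (hypothetical example problem), using SciPy's SLSQP method:
  #   minimize   f(x) = (x1 - 1)^2 + (x2 - 2)^2
  #   subject to g1(x) = x1 + x2 - 2 = 0      (equality)
  #              h1(x) = x1^2 - x2  <= 0      (inequality)
  import numpy as np
  from scipy.optimize import minimize

  f = lambda x: (x[0] - 1.0)**2 + (x[1] - 2.0)**2
  constraints = [
      {'type': 'eq',   'fun': lambda x: x[0] + x[1] - 2.0},   # g1(x) = 0
      {'type': 'ineq', 'fun': lambda x: -(x[0]**2 - x[1])},   # -h1(x) >= 0
  ]
  x0 = np.zeros(2)                      # initial guess
  result = minimize(f, x0, method='SLSQP', constraints=constraints)
  print(result.x, result.fun)           # constrained minimizer x* and f(x*)

If the quadratic objective and linear constraints of a QP, or the linear objective and constraints of an LP, were used instead, specialized solvers would typically be preferred over a general NLP method.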


