Support Vector Machine

For a decision hyper-plane ${\bf x}^T {\bf w}+b=0$ to separate the two classes $P=\{({\bf x}_i,\,y_i=1)\}$ and $N=\{({\bf x}_i,\,y_i=-1)\}$, it has to satisfy

$\displaystyle y_i ({\bf x}_i^T{\bf w}+b) \ge 0
$
for both ${\bf x}_i \in P$ and ${\bf x}_i \in N$. Among all such planes satisfying this condition, we want to find the optimal one that separates the two classes with the maximal margin (the distance between the decision plane and the closest sample points).
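
As a quick sanity check of this condition, the following sketch (using hypothetical 2-D toy data and an arbitrary, not necessarily optimal, choice of ${\bf w}$ and $b$) verifies $y_i({\bf x}_i^T{\bf w}+b)\ge 0$ numerically:

    import numpy as np

    # Hypothetical toy data: two points of class P (label +1), two of class N (label -1)
    X = np.array([[2.0, 2.0], [3.0, 3.0],      # P
                  [-1.0, -1.0], [-2.0, 0.0]])  # N
    y = np.array([1, 1, -1, -1])

    # A candidate (not necessarily optimal) hyper-plane x^T w + b = 0
    w = np.array([1.0, 1.0])
    b = 0.0

    # The plane separates the two classes iff y_i (x_i^T w + b) >= 0 for every i
    print(np.all(y * (X @ w + b) >= 0))        # True for this choice of w and b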

[Figure: SVMexample.png]

The optimal plane should be in the middle of the two classes, so that the distance from the plane to the closest point on either side is the same. We define two additional planes $H_+$ and $H_-$ that are parallel to $H_0$ and go through the point closest to the plane on either side:

$\displaystyle {\bf x}^T {\bf w}+b=1,\;\;\;\;\mbox{and}\;\;\;\;{\bf x}^T {\bf w}+b=-1 $
All points ${\bf x}_i \in P$ on the positive side should satisfy
$\displaystyle {\bf x}_i^T {\bf w}+b \ge 1,\;\;\;\;y_i=1
$
and all points ${\bf x}_i \in N$ on the negative side should satisfy
$\displaystyle {\bf x}_i^T {\bf w}+b \le -1,\;\;\;\; y_i=-1
$
These can be combined into one inequality:
$\displaystyle y_i ({\bf x}_i^T{\bf w} +b) \ge 1,\;\;\;(i=1,\cdots,m)
$
The equality holds for those points on the planes $H_+$ or $H_-$. Such points are called support vectors, for which
$\displaystyle {\bf x}_i^T {\bf w}+b = y_i
$
i.e., the following holds for all support vectors (where the expansion ${\bf w}=\sum_{j=1}^m \alpha_j y_j {\bf x}_j$ in terms of the Lagrange multipliers $\alpha_j$, derived below, is used in the second equality):
$\displaystyle b=y_i-{\bf x}_i^T {\bf w}=y_i-\sum_{j=1}^m \alpha_j y_j ({\bf x}_i^T {\bf x}_j)
$

Moreover, the distances from the origin to the three planes $H_-$, $H_0$ and $H_+$ are, respectively, $\vert b+1\vert/\vert\vert{\bf w}\vert\vert$, $\vert b\vert/\vert\vert{\bf w}\vert\vert$, and $\vert b-1\vert/\vert\vert{\bf w}\vert\vert$, and the distance between the parallel planes $H_-$ and $H_+$ is $\vert(b+1)-(b-1)\vert/\vert\vert{\bf w}\vert\vert=2/\vert\vert{\bf w}\vert\vert$, which is to be maximized. Now the problem of finding the optimal decision plane in terms of ${\bf w}$ and $b$ can be formulated as:

    $\displaystyle \mbox{minimize}\;\;\;\frac{1}{2}\vert\vert{\bf w}\vert\vert^2$  
    $\displaystyle \mbox{subject to}\;\;\;y_i ({\bf x}_i^T {\bf w}+b) \ge 1,\;\;\;\;(i=1,\cdots,m)$  
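
To make the formulation concrete, here is a minimal sketch that solves this primal QP numerically with a general-purpose constrained solver (scipy's SLSQP), using the same hypothetical toy data as above; in practice a dedicated QP solver or an SVM library would be used instead:

    import numpy as np
    from scipy.optimize import minimize

    # Hypothetical toy data: rows of X are the x_i, labels y_i are +1 or -1
    X = np.array([[2.0, 2.0], [3.0, 3.0], [-1.0, -1.0], [-2.0, 0.0]])
    y = np.array([1.0, 1.0, -1.0, -1.0])
    m, n = X.shape

    # Pack the decision variables as z = [w_1, ..., w_n, b]
    def objective(z):
        w = z[:n]
        return 0.5 * w @ w                     # (1/2)||w||^2

    def constraints(z):
        w, b = z[:n], z[n]
        return y * (X @ w + b) - 1.0           # y_i (x_i^T w + b) - 1 >= 0

    res = minimize(objective, np.zeros(n + 1), method="SLSQP",
                   constraints=[{"type": "ineq", "fun": constraints}])
    w_opt, b_opt = res.x[:n], res.x[n]
    print(w_opt, b_opt)
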
Since the objective function is quadratic, this constrained optimization problem is called a quadratic program (QP) problem. (If the objective function is linear, the problem is a linear program (LP) problem.) This QP problem can be solved by the method of Lagrange multipliers, based on the Lagrangian
$\displaystyle L_p({\bf w},b,{\bf a})=\frac{1}{2}\vert\vert{\bf w}\vert\vert^2-\sum_{i=1}^m \alpha_i(y_i({\bf x}_i^T{\bf w}+b)-1)
$
which is to be minimized with respect to ${\bf w}$ and $b$, and maximized with respect to the Lagrange multipliers $\alpha_i\ge 0\;\;(i=1,\cdots,m)$. We let
$\displaystyle \frac{\partial}{\partial {\bf w}}L_p({\bf w},b)=0,\;\;\;
\frac{\partial}{\partial b}L_p({\bf w},b)=0
$
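Evaluating these two gradients explicitly (a small intermediate step) gives
$\displaystyle \frac{\partial L_p}{\partial {\bf w}}={\bf w}-\sum_{i=1}^m \alpha_i y_i {\bf x}_i={\bf0},\;\;\;\;
\frac{\partial L_p}{\partial b}=-\sum_{i=1}^m \alpha_i y_i=0
$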
These lead, respectively, to
$\displaystyle {\bf w}=\sum_{j=1}^m \alpha_j y_j {\bf x}_j,\;\;\;\mbox{and}\;\;\;\;
\sum_{i=1}^m \alpha_i y_i=0
$
Substituting these two equations back into the expression of $L_p({\bf w},b,{\bf a})$, we get the dual problem of the above primal problem, expressed in terms of the $\alpha_i$ alone:
$\displaystyle \mbox{maximize}$   $\displaystyle L_d({\bf a})
=\sum_{i=1}^m\alpha_i -\frac{1}{2}
\sum_{i=1}^m \sum_{j=1}^m \alpha_i \alpha_j y_i y_j ({\bf x}_i^T{\bf x}_j)$  
$\displaystyle \mbox{subject to:}$   $\displaystyle \alpha_i\ge 0,\;\;\;\;\sum_{i=1}^m \alpha_i y_i=0$  
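The substitution is a short computation: expanding the Lagrangian,
$\displaystyle L_p=\frac{1}{2}{\bf w}^T{\bf w}-\sum_{i=1}^m \alpha_i y_i {\bf x}_i^T{\bf w}
-b\sum_{i=1}^m \alpha_i y_i+\sum_{i=1}^m \alpha_i
$
and then using ${\bf w}=\sum_{j=1}^m \alpha_j y_j {\bf x}_j$ together with $\sum_{i=1}^m \alpha_i y_i=0$ reduces it to the expression $L_d({\bf a})$ above.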
Here the objective function is only a function of the dot product ${\bf x}_i^T{\bf x}_j$. Once this quadratic programming problem is solved for $\alpha_1,\cdots,\alpha_m$, we can find ${\bf w}$.
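
As a sketch of this step (again with the hypothetical toy data, and the general-purpose SLSQP solver standing in for a dedicated QP routine):

    import numpy as np
    from scipy.optimize import minimize

    X = np.array([[2.0, 2.0], [3.0, 3.0], [-1.0, -1.0], [-2.0, 0.0]])
    y = np.array([1.0, 1.0, -1.0, -1.0])
    m = len(y)

    # Matrix of y_i y_j (x_i^T x_j) -- the only way the data enters the dual
    K = (y[:, None] * y[None, :]) * (X @ X.T)

    def neg_dual(alpha):                       # minimize -L_d(alpha)
        return -(alpha.sum() - 0.5 * alpha @ K @ alpha)

    res = minimize(neg_dual, np.zeros(m), method="SLSQP",
                   bounds=[(0.0, None)] * m,                              # alpha_i >= 0
                   constraints=[{"type": "eq", "fun": lambda a: a @ y}])  # sum_i alpha_i y_i = 0
    alpha = res.x

    # Recover w from the stationarity condition w = sum_j alpha_j y_j x_j
    w = (alpha * y) @ X
    print(alpha, w)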

The dual problem is related to the primal problem by:

$\displaystyle L_d({\bf a})=\inf_{({\bf w},b)} L_p({\bf w},b,{\bf a})
$
As $L_d({\bf a})$ is the greatest lower bound (infimum) of $L_p$ over all ${\bf w}$ and $b$, we want to maximize it.

Solving this dual problem (an easier problem than the primal one), we get $\alpha_i$, from which ${\bf w}$ of the optimal plane can be found.

Those points ${\bf x}_i$ on either of the two planes $H_+$ and $H_-$ (for which the equality $y_i({\bf w}^T {\bf x}_i+b)=1$ holds) are called the support vectors, and they correspond to positive Lagrange multipliers $\alpha_i>0$. The solution depends only on the support vectors; all other samples, which lie farther away from the planes $H_+$ and $H_-$, correspond to $\alpha_i=0$ and do not affect the result.

For a support vector ${\bf x}_i$ (on the $H_-$ or $H_+$ plane), the constraining condition is

$\displaystyle y_i ({\bf x}_i^T {\bf w}+b) = 1\;\;\;\;(i \in sv)
$
where $sv$ is the set of indices of all support vectors ${\bf x}_i$. Substituting
$\displaystyle {\bf w}=\sum_{j=1}^m \alpha_j y_j {\bf x}_j=\sum_{j\in sv} \alpha_j y_j {\bf x}_j
$
into the equation above we get
$\displaystyle y_i(\sum_{j\in sv} \alpha_j y_j {\bf x}^T_i {\bf x}_j+b) = 1
$
Note that the summation only contains terms corresponding to those support vectors ${\bf x}_j$ with $\alpha_j>0$. The parameter $b$ can be found by solving the equation above based on any support vector ${\bf x}_i$ with $\alpha_i>0$.
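
In code, assuming $\alpha$ and ${\bf w}$ have been obtained as in the dual sketch above, this step might look as follows (the $10^{-6}$ threshold is an arbitrary numerical tolerance, and averaging over all support vectors instead of picking a single one is a common numerical choice):

    import numpy as np

    def bias(X, y, alpha, w, tol=1e-6):
        """Recover b from y_i (x_i^T w + b) = 1, i.e. b = y_i - x_i^T w,
        using the support vectors (alpha_i > tol)."""
        sv = alpha > tol
        return np.mean(y[sv] - X[sv] @ w)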

For the optimal weight vector ${\bf w}$ and optimal $b$, we have:

$\displaystyle \vert\vert{\bf w}\vert\vert^2$ $\textstyle =$ $\displaystyle {\bf w}^T {\bf w}
=\sum_{i\in sv} \alpha_i y_i {\bf x}^T_i \sum_{j\in sv} \alpha_j y_j {\bf x}_j
=\sum_{i\in sv} \alpha_i y_i \sum_{j\in sv} \alpha_j y_j {\bf x}^T_i{\bf x}_j
=\sum_{i\in sv} \alpha_i y_i ({\bf x}^T_i{\bf w})$  
  $\textstyle =$ $\displaystyle \sum_{i\in sv} \alpha_i y_i (y_i-b)
=\sum_{i\in sv} \alpha_i (1-y_i b)
=\sum_{i\in sv} \alpha_i - b\sum_{i\in sv} \alpha_i y_i$  
  $\textstyle =$ $\displaystyle \sum_{i\in sv} \alpha_i$  
The last equality is due to $\sum_{i=1}^m \alpha_i y_i=0$ shown above (all terms with $i\notin sv$ vanish because $\alpha_i=0$). Recall that the distance between the two margin planes $H_+$ and $H_-$ is $2/\vert\vert{\bf w}\vert\vert$, and the margin, the distance between $H_+$ (or $H_-$) and the optimal decision plane $H_0$, is
$\displaystyle 1/\vert\vert{\bf w}\vert\vert=(\sum_{i\in sv} \alpha_i)^{-1/2}
$
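
Numerically, these two expressions for the margin can be checked against each other (a small sketch, assuming $\alpha$ and ${\bf w}$ come from the dual sketch above):

    import numpy as np

    def margin(w, alpha, tol=1e-6):
        """Margin 1/||w||, and the equivalent (sum of support-vector alphas)^(-1/2)."""
        via_w = 1.0 / np.linalg.norm(w)
        via_alpha = 1.0 / np.sqrt(alpha[alpha > tol].sum())
        return via_w, via_alpha                # the two agree at the optimum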