
Linear separation of a feature space

This is a variation of the perceptron learning algorithm. Consider a hyperplane in an n-dimensional (n-D) feature space:

\begin{displaymath}f({\bf x})={\bf x}^T {\bf w}+b=\sum_{i=1}^n x_i w_i+b=0 \end{displaymath}

where the weight vector ${\bf w}$ is normal to the plane, and $\vert b\vert/\vert\vert{\bf w}\vert\vert$ is the distance from the origin to the plane. The plane partitions the n-D space into two regions. We further define a mapping function $y=sign(f({\bf x})) \in \{1,-1\}$, i.e.,
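As a quick numerical check of the geometry, the following sketch computes the distance from the origin to a hyperplane ${\bf x}^T{\bf w}+b=0$; the specific weight vector and offset are made-up 2-D values chosen for illustration:

```python
import numpy as np

# Hypothetical 2-D hyperplane: x^T w + b = 0
w = np.array([3.0, 4.0])   # normal vector, ||w|| = 5
b = -10.0                  # offset term

# Distance from the origin to the plane is |b| / ||w||
dist = abs(b) / np.linalg.norm(w)
print(dist)   # 10 / 5 = 2.0
```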

\begin{displaymath}f({\bf x})={\bf x}^T {\bf w}+b=\left\{ \begin{array}{ll}
>0, & y=sign(f({\bf x}))=1,\;{\bf x}\in P \\
<0, & y=sign(f({\bf x}))=-1,\;{\bf x}\in N \\
\end{array} \right. \end{displaymath}

Any point ${\bf x}\in P$ on the positive side of the plane is mapped to 1, while any point ${\bf x}\in N$ on the negative side is mapped to -1. A point ${\bf x}$ of unknown class is classified into P if $f({\bf x})>0$, or into N if $f({\bf x})<0$.
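The decision rule above can be sketched directly in code. This is a minimal illustration, not an SVM trainer; the weights, offset, and test points are hypothetical values chosen so that one point falls on each side of the plane:

```python
import numpy as np

def f(x, w, b):
    """Evaluate the linear decision function f(x) = x^T w + b."""
    return x @ w + b

def classify(x, w, b):
    """Map a point to +1 (class P) or -1 (class N) by the sign of f(x)."""
    return 1 if f(x, w, b) > 0 else -1

# Hypothetical hyperplane parameters
w = np.array([1.0, -2.0])
b = 0.5

x_p = np.array([3.0, 0.0])   # f = 3.5 > 0, so x_p lies in P
x_n = np.array([0.0, 2.0])   # f = -3.5 < 0, so x_n lies in N
print(classify(x_p, w, b))   # 1
print(classify(x_n, w, b))   # -1
```

Points with $f({\bf x})=0$ lie exactly on the plane and are not assigned to either class by this rule.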



Ruye Wang 2015-08-13