
Balancing the competition

If the input patterns form a continuum instead of a set of cleanly separable clusters (as in the network models for both MT and MST), no obvious boundaries exist to partition the feature space. In this case, the outcome of competitive learning can fall anywhere between two extremes: (a) the continuum of input patterns is divided relatively evenly, though arbitrarily, into a set of clusters, each represented by a particular output node, or (b) the entire continuum is represented by a single output node, while all other nodes become dead nodes that never turn on.

To achieve the preferred outcome (a), competitive learning can be modified to give every node some chance of winning. A bias term is included when computing the output of each node during the competition:

\begin{displaymath}y_j=\sum_i w_{ij}x_i+b_j \end{displaymath}

Specifically, the bias term can be set to ([75])

\begin{displaymath}b_j=\gamma (1/n-f_j) \end{displaymath}

where $\gamma$ is a constant factor, $n$ is the total number of nodes in the competition, and $f_j$ is the winning frequency of node $N_j$. Since $b_j$ is proportional to the difference between the equal winning probability $1/n$ and the actual winning frequency $f_j$, it tends to make winning harder for frequent winners and easier for frequent losers. A more balanced competition is thus achieved.
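
As an illustration, the following is a minimal sketch of this biased competition in Python. Only the biased output $y_j=\sum_i w_{ij}x_i+b_j$ with $b_j=\gamma(1/n-f_j)$ comes from the text above; the input distribution (a uniform square), the learning rate, the winner-take-all weight update, and the running-average estimate of the winning frequency $f_j$ are assumptions made for the example.

import numpy as np

rng = np.random.default_rng(0)

# Assumed setup: 2-D inputs drawn from a continuum (the unit square),
# with n output nodes competing for them.
n_nodes = 8
dim = 2
gamma = 0.3    # constant bias factor gamma (from the text)
eta = 0.05     # weight learning rate (assumed)
beta = 0.01    # smoothing rate for the frequency estimate (assumed)

W = rng.uniform(0.0, 1.0, size=(n_nodes, dim))   # weight vectors w_j
f = np.full(n_nodes, 1.0 / n_nodes)              # winning frequencies f_j, init 1/n

for step in range(5000):
    x = rng.uniform(0.0, 1.0, size=dim)          # input pattern from the continuum

    # Biased competition: y_j = sum_i w_ij * x_i + b_j,
    # with b_j = gamma * (1/n - f_j) as in the text.
    b = gamma * (1.0 / n_nodes - f)
    y = W @ x + b
    winner = int(np.argmax(y))

    # Standard competitive-learning update for the winner only (assumed form):
    # move the winner's weight vector toward the input.
    W[winner] += eta * (x - W[winner])

    # Update the running estimate of each node's winning frequency.
    won = np.zeros(n_nodes)
    won[winner] = 1.0
    f += beta * (won - f)

print("winning frequencies:", np.round(f, 3))    # near 1/n = 0.125 when balanced

With $\gamma=0$ the sketch reduces to plain competitive learning, where some nodes may never win; with $\gamma>0$ the printed winning frequencies settle near $1/n$, reflecting the balanced competition described above.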


Ruye Wang
2000-04-25