Statistical Properties of Incremental SVMs Storing Support Vectors
Takemasa YAMASAKI, Kazushi IKEDA
Support vector machines (SVMs) are known to reduce to a quadratic programming problem, which incurs a large computational cost. One way to reduce this cost is to apply the idea of incremental learning to SVMs, where only a subset of the given examples is stored at each iteration, so that the complexity is, on average, linear in the number of examples. When an incremental SVM stores only the set of support vectors, it performs worse than an SVM trained in batch mode, since examples that are not support vectors when presented but become support vectors later are discarded. This paper indirectly evaluates this performance degradation through the size of the admissible region under the ‘disc approximation’ and verifies the results by computer simulations.