Arbitrary Norm Support Vector Machines
Author: | Kaizhu Huang, Irwin King, Michael R. Lyu, Danian Zheng |
Year of publication: | 2009 |
Subject: | Support vector machine; Relevance vector machine; Least squares support vector machine; Sequential minimal optimization; Polynomial kernel; Norm (mathematics); Bayes Theorem; Machine learning; Online machine learning; Artificial intelligence; Neural Networks, Computer; Databases, Factual; Reference Values; Humans; Learning; Algorithms; Cognitive Neuroscience; Mathematics |
Source: | Neural Computation. 21:560-582 |
ISSN: | 1530-888X; 0899-7667 |
DOI: | 10.1162/neco.2008.12-07-667 |
Description: | Support vector machines (SVMs) are state-of-the-art classifiers. Typically, the L2-norm or the L1-norm is adopted as the regularization term in SVMs, while other norm-based SVMs, for example, the L0-norm SVM or even the L∞-norm SVM, are rarely seen in the literature. The major reason is that the L0-norm yields a discontinuous and nonconvex term, leading to a combinatorially NP-hard optimization problem. In this letter, motivated by Bayesian learning, we propose a novel framework that can implement arbitrary norm-based SVMs in polynomial time. One significant feature of this framework is that only a sequence of sequential minimal optimization (SMO) problems needs to be solved, making it practical in many real applications. The proposed framework is important in the sense that Bayesian priors can be efficiently plugged into most learning methods without knowing their explicit form. Hence, it builds a connection between Bayesian learning and kernel machines. We derive the theoretical framework, demonstrate how our approach works on the L0-norm SVM as a typical example, and perform a series of experiments to validate its advantages. Experimental results on nine benchmark data sets are very encouraging. The implemented L0-norm SVM is competitive with or even better than the standard L2-norm SVM in terms of accuracy, while reducing the number of support vectors by 9.46% on average. When compared with another sparse model, the relevance vector machine, our proposed algorithm also demonstrates better sparsity, with training more than seven times faster. |
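For orientation, the following is a minimal sketch of the generic q-norm soft-margin SVM objective that the abstract alludes to; it is an illustrative textbook formulation, not the authors' derivation, and the symbols (weights w, bias b, slack variables ξ, trade-off parameter C, feature map φ) are conventional notation rather than taken from the paper.

```latex
% Illustrative q-norm soft-margin SVM objective (assumed conventional form):
%   q = 2 recovers the standard L2-norm SVM,
%   q = 1 gives the L1-norm (sparse-in-weights) SVM,
%   q = 0 counts nonzero weights and makes the problem nonconvex/NP-hard,
%   which is the case the proposed framework handles via a sequence of SMO problems.
\begin{aligned}
\min_{w,\, b,\, \xi}\quad & \|w\|_q^{q} + C \sum_{i=1}^{n} \xi_i \\
\text{s.t.}\quad & y_i \left( w^{\top} \phi(x_i) + b \right) \ge 1 - \xi_i,
  \qquad \xi_i \ge 0, \qquad i = 1, \dots, n.
\end{aligned}
```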
Database: | OpenAIRE |
External link: |