Fast Support Vector Classification for Large-Scale Problems.

Author: Akram-Ali-Hammouri, Ziad; Fernandez-Delgado, Manuel; Cernadas, Eva; Barro, Senen
Source: IEEE Transactions on Pattern Analysis & Machine Intelligence; Oct 2022, Vol. 44, Issue 10, p6184-6195, 12p
Abstract: The support vector machine (SVM) is a very important machine learning algorithm with state-of-the-art performance on many classification problems. However, on large datasets it is very slow and requires much memory. To overcome this deficiency, we propose the fast support vector classifier (FSVC), which includes: 1) an efficient closed-form training free of any iterative numerical procedure; 2) a small collection of class prototypes that avoids storing an excessive number of support vectors in memory; and 3) a fast method that selects the spread of the radial basis function kernel directly from the data, without classifier execution or iterative hyper-parameter tuning. The memory requirements of FSVC are very low: it spends on average only $6 \cdot 10^{-7}$ seconds per pattern, input and class, and processes datasets of up to 31 million patterns, 30,000 inputs and 131 classes in less than 1.5 hours (less than 3 hours with only 2 GB of RAM). On average, FSVC is 10 times faster, requires 12 times less memory and achieves 4.7 percent higher performance than Liblinear, which fails on the 4 largest datasets for lack of memory, and it is 100 times faster than Libsvm while achieving only 6.7 percent lower performance. The time spent by FSVC depends only on the dataset size and thus can be accurately estimated for new datasets, whereas Libsvm and Liblinear are much slower on “difficult” datasets, even small ones. FSVC adjusts its requirements to the available memory, classifying large datasets on computers with limited memory. Code for the proposed algorithm in the Octave scientific programming language is provided. [ABSTRACT FROM AUTHOR]
Database: Complementary Index
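
The abstract names three ingredients (closed-form training, class prototypes, and a data-driven RBF spread) without giving their formulas, so the Octave sketch below only illustrates the general idea and is not the authors' FSVC: it uses one class-mean prototype per class and the well-known median-distance heuristic for the kernel spread, both of which are assumptions standing in for the paper's actual rules.

1;  % script-file marker so Octave treats the definitions below as local functions

function [P, labels, sigma] = proto_train (X, y)
  % One prototype per class (here: the class mean, an assumption).
  labels = unique (y);
  P = zeros (numel (labels), columns (X));
  for k = 1:numel (labels)
    P(k, :) = mean (X(y == labels(k), :), 1);
  endfor
  % Spread chosen directly from the data: median pairwise distance over a
  % small random subsample (the classical "median heuristic", a stand-in
  % for the paper's selection method).
  m = min (rows (X), 200);
  S = X(randperm (rows (X), m), :);
  D2 = max (sum (S.^2, 2) + sum (S.^2, 2)' - 2 * (S * S'), 0);
  sigma = sqrt (median (D2(triu (true (m), 1))));
endfunction

function yhat = proto_classify (X, P, labels, sigma)
  % Assign each pattern to the class of its most similar prototype under
  % the RBF kernel k(x, p) = exp (-||x - p||^2 / (2 * sigma^2)).
  yhat = zeros (rows (X), 1);
  for i = 1:rows (X)
    d2 = sum ((P - X(i, :)).^2, 2);          % squared distances to prototypes
    [~, k] = max (exp (-d2 / (2 * sigma^2)));
    yhat(i) = labels(k);
  endfor
endfunction

% Toy usage: two Gaussian classes in 2-D.
X = [randn(100, 2); randn(100, 2) + 3];
y = [ones(100, 1); 2 * ones(100, 1)];
[P, labels, sigma] = proto_train (X, y);
printf ("sigma = %.3f, training accuracy = %.2f\n", sigma,
        mean (proto_classify (X, P, labels, sigma) == y));

Storing only one prototype per class, rather than a set of support vectors, keeps memory proportional to the number of classes; this is the property the abstract credits for FSVC's low memory footprint, although the paper's prototype construction is presumably more elaborate than the class means used here.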