Aggregated learning: a vector-quantization approach to learning neural network classifiers
Author: | Hongyu Guo, Richong Zhang, Yongyi Mao, Masoumeh Soflaei, Ali Al-Bashabsheh |
Language: | English |
Year of publication: | 2020 |
Subject: |
FOS: Computer and information sciences
Computer Science::Machine Learning; Computer Science - Machine Learning (cs.LG); Statistics - Machine Learning (stat.ML); artificial neural network; neural networks; vector quantization; quantization (signal processing); information bottleneck method; classification (of information); equivalence classes; variational techniques; image recognition; image coding; character recognition; text processing; signal distortion; electric distortion; feature learning; artificial intelligence; General Medicine |
Source: | AAAI |
Description: | We consider the problem of learning a neural network classifier. Under the information bottleneck (IB) principle, we associate with this classification problem a representation learning problem, which we call "IB learning". We show that IB learning is, in fact, equivalent to a special class of the quantization problem. The classical results in rate-distortion theory then suggest that IB learning can benefit from a "vector quantization" approach, namely, simultaneously learning the representations of multiple input objects. Such an approach, assisted by variational techniques, results in a novel learning framework, "Aggregated Learning", for classification with neural network models. In this framework, several objects are jointly classified by a single neural network. The effectiveness of this framework is verified through extensive experiments on standard image recognition and text classification tasks. Proofs of the theoretical results are provided. |
Database: | OpenAIRE |
External link: |
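The core idea of the abstract, one network jointly classifying several objects at once, can be sketched as follows. This is a minimal illustrative sketch, not the authors' implementation: it assumes a toy linear model over the concatenated features of an aggregation of `n_agg` objects, producing one set of class logits per object, trained against the sum of per-object cross-entropies. All names (`aggregated_logits`, `joint_cross_entropy`, the dimensions) are hypothetical.

```python
import numpy as np

rng = np.random.default_rng(0)

n_agg = 3   # number of objects classified jointly (aggregation size)
d, k = 8, 4  # feature dimension per object, number of classes

# A single linear "network" maps the concatenated features of n_agg
# objects to n_agg sets of class logits in one forward pass.
W = rng.normal(scale=0.1, size=(n_agg * d, n_agg * k))

def aggregated_logits(xs):
    """xs: (n_agg, d) features -> (n_agg, k) logits from one joint model."""
    return (xs.reshape(-1) @ W).reshape(n_agg, k)

def joint_cross_entropy(logits, labels):
    """Sum of per-object cross-entropies over the jointly classified objects."""
    z = logits - logits.max(axis=1, keepdims=True)  # numerical stability
    log_probs = z - np.log(np.exp(z).sum(axis=1, keepdims=True))
    return -log_probs[np.arange(len(labels)), labels].sum()

xs = rng.normal(size=(n_agg, d))
labels = rng.integers(0, k, size=n_agg)
logits = aggregated_logits(xs)
loss = joint_cross_entropy(logits, labels)
print(logits.shape)
```

The point of the sketch is only the interface: the model consumes an aggregate of inputs and emits a prediction for every member of the aggregate, which is the "vector quantization" analogue the abstract describes (representing a block of sources jointly rather than one at a time).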