Injective Domain Knowledge in Neural Networks for Transprecision Computing

Author: Michele Lombardi, Federico Baldo, Michela Milano, Andrea Borghesi
Contributors: Giuseppe Nicosia, Varun Kumar Ojha, Emanuele La Malfa, Giorgio Jansen, Vincenzo Sciacca, Panos M. Pardalos, Giovanni Giuffrida, Renato Umeton, Andrea Borghesi, Federico Baldo, Michele Lombardi, Michela Milano
Language: English
Year of publication: 2020
Source: Machine Learning, Optimization, and Data Science: 6th International Conference, LOD 2020, Siena, Italy, July 19–23, 2020, Revised Selected Papers, Part I. Lecture Notes in Computer Science. ISBN: 9783030645823; ISSN: 0302-9743, 1611-3349
Description: Machine Learning (ML) models are very effective in many learning tasks, thanks to their capability to extract meaningful information from large data sets. Nevertheless, some learning problems cannot be easily solved with data alone, e.g., when data is scarce or the functions to be approximated are very complex. Fortunately, in many contexts domain knowledge is explicitly available and can be used to train better ML models. This paper studies the improvements that can be obtained by integrating prior knowledge into a context-specific, non-trivial learning task, namely precision tuning of transprecision computing applications. The domain information is injected into the ML models in different ways: I) additional features, II) an ad-hoc graph-based network topology, III) regularization schemes. The results clearly show that ML models exploiting problem-specific information outperform purely data-driven ones, with an average improvement of around 38%.
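To make the third injection route (regularization schemes) concrete, the sketch below shows, in PyTorch, how a domain-knowledge penalty can be added to an otherwise standard training loss. The network architecture, the assumed monotonicity rule (reducing the precision of the variables should not reduce the predicted error), the penalty weight, and the random toy data are illustrative assumptions only; they are not taken from the paper and do not reproduce its models or results.

```python
import torch
import torch.nn as nn

# Illustrative sketch only: a small regressor mapping per-variable precisions
# (bit-widths) to a predicted error, trained with an extra knowledge-based
# penalty term. The specific constraint is an assumption for illustration.

class Regressor(nn.Module):
    def __init__(self, n_vars):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(n_vars, 32), nn.ReLU(),
            nn.Linear(32, 32), nn.ReLU(),
            nn.Linear(32, 1),
        )

    def forward(self, x):
        return self.net(x)

def knowledge_penalty(model, x, eps=1.0):
    # Assumed domain rule: lowering every variable's precision (x - eps)
    # should not yield a smaller predicted error than the original
    # configuration x. Violations are penalized via a hinge term.
    y_orig = model(x)
    y_lower = model(torch.clamp(x - eps, min=0.0))
    return torch.relu(y_orig - y_lower).mean()

# Toy training loop on random data, purely to show how the penalty is
# combined with the ordinary data-fitting loss.
torch.manual_seed(0)
n_vars, lam = 10, 0.1
model = Regressor(n_vars)
opt = torch.optim.Adam(model.parameters(), lr=1e-3)
x = torch.randint(2, 53, (256, n_vars)).float()   # bit-widths per variable
y = torch.rand(256, 1)                            # placeholder error values

for epoch in range(5):
    opt.zero_grad()
    loss = nn.functional.mse_loss(model(x), y) + lam * knowledge_penalty(model, x)
    loss.backward()
    opt.step()
```

The weight lam trades data fit against consistency with the assumed domain rule; the same pattern accommodates other constraints by swapping in a different penalty function.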
Database: OpenAIRE