Efficient Embedded Machine Learning applications using Echo State Networks
Author: | Marco D. Santambrogio, Alessio Micheli, Claudio Gallicchio, Giuseppe Franco, Luca Cerina |
Language: | English |
Year of publication: | 2020 |
Subject: | Artificial neural network; Artificial intelligence; Machine learning; Recurrent neural network; Reservoir computing; Bayesian optimization; Inference; Computer hardware & architecture; Electrical, electronic and information engineering |
Source: | DATE |
Description: | The increasing role of Artificial Intelligence (AI) and Machine Learning (ML) in our lives has brought a paradigm shift in how and where computation is performed. Stringent latency requirements and congested bandwidth have moved AI inference from the Cloud towards end-devices. This change required a major simplification of Deep Neural Networks (DNN), with memory-efficient libraries or co-processors that perform fast inference with minimal power. Unfortunately, many applications such as natural language processing, time-series analysis, and audio interpretation are built on a different type of Artificial Neural Network (ANN), the so-called Recurrent Neural Networks (RNN), which, due to their intrinsic architecture, remain too complex and heavy to run efficiently on embedded devices. To solve this issue, the Reservoir Computing paradigm proposes sparse, untrained, non-linear networks, the Reservoir, that can embed temporal relations without some of the hindrances of Recurrent Neural Network training, and with lower memory usage. Echo State Networks (ESN) and Liquid State Machines are the most notable examples. In this scenario, we propose a performance comparison of an ESN, designed and trained using Bayesian Optimization techniques, against current RNN solutions. We aim to demonstrate that ESNs achieve comparable accuracy, require minimal training time, and are more efficient in terms of memory usage and computation. Preliminary results show that ESNs are competitive with RNNs on a simple benchmark, and both training and inference are faster, with maximum speed-ups of 2.35x and 6.60x, respectively. |
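The abstract's core idea, a fixed sparse reservoir that embeds temporal relations while only a linear readout is trained, can be sketched as follows. This is a minimal illustrative implementation assuming a standard ESN formulation; the network size, spectral radius, sparsity, ridge coefficient, and the toy sine-prediction task are all assumptions for demonstration, not the configuration used in the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

n_in, n_res = 1, 100
# Untrained sparse reservoir: ~10% connectivity, rescaled to a
# spectral radius below 1 so the echo state property plausibly holds.
W = rng.uniform(-1.0, 1.0, (n_res, n_res))
W[rng.random((n_res, n_res)) > 0.1] = 0.0
W *= 0.9 / np.max(np.abs(np.linalg.eigvals(W)))
W_in = rng.uniform(-1.0, 1.0, (n_res, n_in))

def run_reservoir(u):
    """Drive the reservoir with input sequence u (T, n_in); return states (T, n_res)."""
    x = np.zeros(n_res)
    states = []
    for u_t in u:
        x = np.tanh(W_in @ u_t + W @ x)  # leaky-free state update
        states.append(x.copy())
    return np.array(states)

# Toy time-series task: predict the next sample of a sine wave.
t = np.linspace(0, 20 * np.pi, 2000)
u = np.sin(t)[:, None]
X = run_reservoir(u[:-1])
y = u[1:, 0]

washout = 100                      # discard the initial transient
X, y = X[washout:], y[washout:]

# Only the readout is trained, in closed form via ridge regression:
# this is why ESN training is fast compared to backpropagation through time.
ridge = 1e-6
w_out = np.linalg.solve(X.T @ X + ridge * np.eye(n_res), X.T @ y)

pred = X @ w_out
mse = float(np.mean((pred - y) ** 2))
```

The contrast with a conventional RNN is that the recurrent weights `W` and input weights `W_in` are never updated; all learning reduces to one linear solve, which keeps both training time and memory footprint small on embedded targets.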
Database: | OpenAIRE |
External link: |