Reservoir Memory Machines as Neural Computers
Author: Terrence C. Stewart, Barbara Hammer, Alexander Schulz, Benjamin Paassen
Language: English
Year of publication: 2021
Subject: FOS: Computer and information sciences; Computer Science - Machine Learning; Computer Networks and Communications; Computer science; Computation; Machine Learning (stat.ML); Machine Learning (cs.LG); echo state networks; Regular language; Memory; Artificial Intelligence; Statistics - Machine Learning; Graph traversal; memory-augmented neural networks; Explicit memory; Neural and Evolutionary Computing (cs.NE); Differentiable function; finite state machines (FSMs); Language; Artificial neural network; Computers; Echo (computing); Differentiable neural computers (DNCs); Computer Science - Neural and Evolutionary Computing; reservoir computing; neural Turing machines; Computer Science Applications; Computer engineering; Benchmark (computing); Neural Networks, Computer; State (computer science); Echo state network; Software
DOI: 10.1109/tnnls.2021.3094139
Description: Differentiable neural computers extend artificial neural networks with an explicit memory without interference, thus enabling the model to perform classic computation tasks such as graph traversal. However, such models are difficult to train, requiring long training times and large datasets. In this work, we achieve some of the computational capabilities of differentiable neural computers with a model that can be trained very efficiently, namely an echo state network with an explicit memory without interference. This extension enables echo state networks to recognize all regular languages, including those that contractive echo state networks provably cannot recognize. Further, we demonstrate experimentally that our model performs comparably to its fully trained deep version on several typical benchmark tasks for differentiable neural computers. In press at the IEEE TNNLS special issue 'New Frontiers in Extremely Efficient Reservoir Computing'.
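To make the idea in the description concrete, below is a minimal sketch of an echo state network augmented with an explicit slot memory, in the spirit of a reservoir memory machine. Everything here is an illustrative assumption, not the authors' architecture: the tanh reservoir update, the random weight initialization, and the externally supplied write/read schedules (`write_at`, `read_at`) are choices made for brevity, whereas the actual model decides memory access with trained components.

```python
import numpy as np

# Minimal sketch: an echo state network with an explicit, interference-free
# slot memory. All names and design details are illustrative assumptions.

rng = np.random.default_rng(0)

n_in, n_res, n_mem = 3, 100, 10           # input dim, reservoir size, memory slots

# Random, fixed reservoir weights, rescaled to spectral radius < 1 so the
# recurrent map is contractive, as is standard for echo state networks.
W_in = rng.uniform(-1, 1, (n_res, n_in))
W_res = rng.uniform(-1, 1, (n_res, n_res))
W_res *= 0.9 / max(abs(np.linalg.eigvals(W_res)))

def run(inputs, write_at, read_at):
    """Run the reservoir over a sequence, copying the state into a memory
    slot at the steps in `write_at` (a set) and overriding the state with a
    stored copy at the steps in `read_at` (a dict mapping step -> slot).
    The schedules are given externally here; the paper's model learns
    when and where to access memory."""
    h = np.zeros(n_res)
    memory = np.zeros((n_mem, n_res))     # explicit slots, no interference
    states = []
    slot = 0
    for t, x in enumerate(inputs):
        h = np.tanh(W_in @ x + W_res @ h) # standard ESN update
        if t in write_at:                 # write state to the next free slot
            memory[slot % n_mem] = h
            slot += 1
        if t in read_at:                  # recall a stored state verbatim
            h = memory[read_at[t] % n_mem].copy()
        states.append(h)
    return np.array(states)

# Example: a state written at step 2 is recalled exactly at step 7.
X = rng.uniform(-1, 1, (10, n_in))
S = run(X, write_at={2}, read_at={7: 0})
print(np.allclose(S[7], S[2]))            # True: slot 0 holds the state from t=2
```

The property this sketch illustrates is interference-free recall: the state written at step 2 is reproduced exactly at step 7, which the fading memory of a contractive reservoir alone cannot guarantee over long delays.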
Database: OpenAIRE
External link: