Minibatch Processing for Speed-up and Scalability of Spiking Neural Network Simulation
Author: | Robert Kozma, Cooper Sigrist, Daniel J. Saunders, Hava T. Siegelmann, Kenneth Chaney |
---|---|
Year of publication: | 2020 |
Subject: | Spiking neural network; Speedup; Scalability; Throughput; Bottleneck; Neuron; Synapse; Computer science; Computer engineering |
Source: | IJCNN |
DOI: | 10.1109/ijcnn48605.2020.9207452 |
Description: | Spiking neural networks (SNNs) are a promising candidate for biologically inspired and energy-efficient computation. However, their simulation is prohibitively time-consuming, and it creates a bottleneck for developing competitive training methods with potential deployment on neuromorphic hardware platforms, even on simple tasks. To address this issue, we provide an implementation of mini-batch processing applied to clock-based SNN simulation, leading to drastically increased data throughput. To our knowledge, this is the first general-purpose implementation of mini-batch processing in a spiking neural network simulator, and it works with arbitrary neuron and synapse models. We demonstrate nearly constant-time scaling with batch size in a simulation setup (up to GPU memory limits), and showcase the effectiveness of large batch sizes in two SNN application domains, resulting in ≈880X and ≈24X reductions in wall-clock time, respectively. Different parameter reduction techniques are shown to produce different learning outcomes in a simulation of networks trained with spike-timing-dependent plasticity. Machine learning practitioners and biological modelers alike may benefit from the drastically reduced simulation time and increased iteration speed this method enables (an illustrative sketch of the batched update follows this record). |
Database: | OpenAIRE |
External link: |
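
The description above summarizes the approach but includes no code. The following is a minimal, hypothetical PyTorch sketch of the idea, not the authors' implementation: all names, parameters, and constants (`lif_step`, the LIF voltage values, the 5% Bernoulli input rate) are illustrative assumptions. It shows how giving the state tensors a leading batch dimension turns the per-time-step update of a clock-based leaky integrate-and-fire (LIF) layer into a few batched tensor operations, which is why wall-clock time per step can stay nearly constant in batch size up to GPU memory limits.

```python
import torch

def lif_step(v, spikes_in, w, v_rest=-65.0, v_thresh=-52.0, v_reset=-65.0, decay=0.95):
    """One clock-based simulation step for a layer of LIF neurons,
    vectorized over a leading batch dimension (illustrative sketch).

    v:         (batch, n_post)  membrane potentials
    spikes_in: (batch, n_pre)   binary input spikes
    w:         (n_pre, n_post)  synaptic weights (shared across the batch)
    """
    # Leak toward rest, then integrate synaptic input for the whole batch in one matmul.
    v = v_rest + decay * (v - v_rest) + spikes_in.float() @ w
    # Threshold crossings produce output spikes; reset the neurons that fired.
    spikes_out = v >= v_thresh
    v = torch.where(spikes_out, torch.full_like(v, v_reset), v)
    return v, spikes_out

# Example usage: simulate a batch of 128 samples for 100 time steps.
batch, n_pre, n_post, steps = 128, 100, 10, 100
w = 0.1 * torch.rand(n_pre, n_post)
v = torch.full((batch, n_post), -65.0)
for _ in range(steps):
    spikes_in = torch.rand(batch, n_pre) < 0.05   # Bernoulli input spike trains (assumed 5% rate)
    v, spikes_out = lif_step(v, spikes_in, w)
```

With this layout, moving the tensors to a GPU (e.g., `.to('cuda')`) executes the matrix multiply and elementwise updates across all samples in parallel at each step, instead of looping over samples one at a time.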