Massively Parallel Neuronal Network Model Construction

Author: Ippen, Tammo; Eppler, Jochen Martin; Diesmann, Markus; Plesser, Hans Ekkehard
Language: English
Year of publication: 2015
Source: Nordic Neuroscience Conference, Trondheim, Norway, 2015-06-10 to 2015-06-12
Description: Models of biological neuronal networks can be investigated with the NEST simulator (Gewaltig and Diesmann, 2007). As a hybrid OpenMP and MPI parallel application, NEST is already capable of simulating networks of spiking point neurons at the scale of 1% of the human brain (Kunkel et al., 2014). Besides the efficient parallel simulation of such networks, their efficient construction is becoming increasingly relevant. Current neuronal network sizes span multiple orders of magnitude, and future investigations of the brain will require larger and more complex networks. While Kunkel et al. (2014) presented highly optimized data structures that allow the representation and simulation of neuronal networks on the scale of rodent and cat brains, the time required to create these networks in the simulator becomes impractical. Hence, efficient parallel construction algorithms that exploit the capabilities of current and future compute hardware are necessary to perform these large-scale simulations. We present here our ongoing work on efficient and scalable algorithms for constructing brain-scale neuronal networks.

The number of cores per compute node is constantly increasing. When using MPI-based parallelization only, each rank has to store MPI-related data structures, which entails an overhead compared to a shared-memory (OpenMP) parallelization. However, previous implementations of parallelized neuronal network construction did not scale well when using OpenMP. We find that this is caused by contention in the massively parallel memory allocation during the wiring phase. Using memory allocators specialized for thread-parallel memory allocation (Evans, 2006; Ghemawat, 2007; Kukanov, 2007) makes thread-parallel wiring scalable again.

Constructing neuronal networks on large compute clusters and supercomputers also shows suboptimal wiring performance. We find that most of the wiring time is spent iterating idly over non-local target neurons. By refactoring the algorithms to iterate over local target neurons only, we achieve good wiring performance in these scenarios as well.

With these optimizations in place, we obtain scalable construction of neuronal networks from single compute nodes up to supercomputers. For concrete network models, we observed twenty-fold faster network construction. These performance enhancements will allow computational neuroscientists to perform significantly more comprehensive in silico experiments within the tight limits of available supercomputer resources. Studies on the relation between network structure and dynamics will benefit especially, since these typically require the randomized instantiation of large numbers of networks. Experiments scanning network parameter space will benefit equally. Finally, by exploiting energy-hungry supercomputer resources more efficiently, our work also helps to reduce the overall energy consumption and thus the carbon footprint of computational neuroscience.
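The OpenMP scaling issue above can be made concrete with a minimal C++ sketch (an editorial illustration, not NEST code; the Connection type and all constants are hypothetical). Each thread allocates many small connection objects in parallel; with the default system allocator such a loop tends to contend on the heap at high thread counts, while thread-optimized allocators such as jemalloc (Evans, 2006), tcmalloc (Ghemawat, 2007), or the TBB allocator (Kukanov, 2007) keep most allocations in thread-local caches.

    #include <cstddef>
    #include <vector>
    #include <omp.h>

    struct Connection {              // hypothetical minimal connection record
      std::size_t source, target;
      double weight;
    };

    int main() {
      const std::size_t n_connections = 10000000;   // 1e7 toy connections
      const int n_threads = omp_get_max_threads();
      // One result container per thread avoids write sharing between threads;
      // the remaining bottleneck is the allocator behind each 'new'.
      std::vector<std::vector<Connection*> > per_thread(n_threads);

    #pragma omp parallel
      {
        const int tid = omp_get_thread_num();
        std::vector<Connection*>& local = per_thread[tid];
        local.reserve(n_connections / n_threads + 1);
        for (std::size_t i = tid; i < n_connections; i += n_threads)
          // Each allocation hits the heap: with the default allocator these
          // calls contend at high thread counts; with jemalloc/tcmalloc they
          // are mostly served from thread-local caches.
          local.push_back(new Connection{i, (i * 31) % n_connections, 1.0});
      }

      for (auto& v : per_thread)                     // tidy up
        for (Connection* c : v) delete c;
      return 0;
    }

On Linux, building with g++ -O2 -fopenmp and timing the run with and without LD_PRELOAD pointing at libjemalloc.so or libtcmalloc.so gives a rough impression of the allocator effect the abstract refers to.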
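The cluster-scale optimization can be sketched in the same hedged spirit, assuming neurons are distributed round-robin over MPI ranks (rank r owns neuron id g iff g % num_ranks == r); both function names are hypothetical and only count the connections they would create:

    #include <cstddef>
    #include <cstdio>

    // Naive wiring loop: every rank visits every candidate target and
    // skips the non-local ones, so the loop length stays at n_targets.
    std::size_t wire_all_then_skip(std::size_t n_targets, int rank, int num_ranks) {
      std::size_t made = 0;
      for (std::size_t tgt = 0; tgt < n_targets; ++tgt) {
        if (tgt % num_ranks != static_cast<std::size_t>(rank))
          continue;               // the vast majority of iterations idle here
        ++made;                   // ... create connection to local neuron tgt ...
      }
      return made;
    }

    // Refactored loop: steps directly through local targets only, so the
    // per-rank loop length shrinks to roughly n_targets / num_ranks.
    std::size_t wire_local_only(std::size_t n_targets, int rank, int num_ranks) {
      std::size_t made = 0;
      for (std::size_t tgt = rank; tgt < n_targets; tgt += num_ranks)
        ++made;                   // ... create connection to local neuron tgt ...
      return made;
    }

    int main() {
      const std::size_t n = 1000000;          // toy network size
      const int rank = 3, num_ranks = 1024;   // stand-ins for MPI rank/size
      std::printf("%zu == %zu\n",
                  wire_all_then_skip(n, rank, num_ranks),
                  wire_local_only(n, rank, num_ranks));
      return 0;
    }

Both loops create the same set of local connections, but the refactored loop's length shrinks proportionally with the number of ranks instead of staying fixed at the full network size, which matches the idle-iteration problem described in the abstract.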
Database: OpenAIRE