Constructing Neuronal Network Models in Massively Parallel Environments

Recent advances in the development of data structures to represent spiking neuron network models enable us to exploit the complete memory of petascale computers for a single brain-scale network simulation. In this work, we investigate how well we can exploit the computing power of such supercomputers for the creation of neuronal networks. Using an established benchmark, we divide the runtime of simulation code into the phase of network construction and the phase during which the dynamical state is advanced in time. We find that on multi-core compute nodes network creation scales well with process-parallel code but exhibits a prohibitively large memory consumption. Thread-parallel network creation, in contrast, exhibits speedup only up to a small number of threads but has little overhead in terms of memory. We further observe that the algorithms creating instances of model neurons and their connections scale well for networks of ten thousand neurons, but do not show the same speedup for networks of millions of neurons. Our work uncovers that the lack of scaling of thread-parallel network creation is due to inadequate memory allocation strategies and demonstrates that thread-optimized memory allocators recover excellent scaling. An analysis of the loop order used for network construction reveals that more complex tests on the locality of operations significantly improve scaling and reduce runtime by allowing construction algorithms to step through large networks more efficiently than in existing code. The combination of these techniques increases performance by an order of magnitude and harnesses the increasingly parallel compute power of the compute nodes in high-performance clusters and supercomputers.
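
The abstract highlights two ingredients for fast thread-parallel network construction: keeping memory allocation thread-local (so a thread-optimized allocator such as tcmalloc or jemalloc can scale), and testing the locality of an operation early so each thread can step cheaply through the full network. The sketch below illustrates this pattern; it is not the authors' NEST code. The round-robin assignment of target neurons to threads, the Connection struct, and the network sizes are illustrative assumptions only.

```cpp
// Hypothetical sketch (not the NEST implementation): thread-parallel network
// construction in which each thread creates only the connections whose target
// neuron it owns; the locality test is a cheap modulo check performed before
// any allocation or heavy work. Build with: g++ -O2 -fopenmp sketch.cpp
#include <omp.h>
#include <cstdio>
#include <random>
#include <vector>

struct Connection {
  int source;
  int target;
  double weight;
};

int main() {
  const int num_neurons = 100000;  // illustrative network size
  const int indegree = 100;        // illustrative connections per target
  const int num_threads = omp_get_max_threads();

  // One connection store per thread: threads never write to shared containers,
  // so all allocations stay thread-local and benefit from a thread-optimized
  // allocator (e.g., linking against tcmalloc or jemalloc at build time).
  std::vector<std::vector<Connection>> connections(num_threads);

#pragma omp parallel
  {
    const int tid = omp_get_thread_num();
    std::mt19937 rng(1234 + tid);  // independently seeded per-thread RNG
    std::uniform_int_distribution<int> pick_source(0, num_neurons - 1);
    std::vector<Connection>& local = connections[tid];
    local.reserve(static_cast<size_t>(indegree) * num_neurons / num_threads);

    // Outer loop over targets; the locality test rejects non-local targets
    // immediately, so every thread steps through the full network cheaply.
    for (int target = 0; target < num_neurons; ++target) {
      if (target % num_threads != tid)
        continue;  // target owned by another thread: nothing to do here
      for (int k = 0; k < indegree; ++k)
        local.push_back({pick_source(rng), target, 0.1});
    }
  }

  size_t total = 0;
  for (const auto& c : connections) total += c.size();
  std::printf("created %zu connections on %d threads\n", total, num_threads);
  return 0;
}
```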

Bibliographic Details
Main Authors: Tammo Ippen, Jochen M. Eppler, Hans E. Plesser, Markus Diesmann
Format: Article
Language: English
Published: Frontiers Media S.A., 2017-05-01
Series: Frontiers in Neuroinformatics
ISSN: 1662-5196
DOI: 10.3389/fninf.2017.00030
Subjects: multi-threading, multi-core processor, memory allocation, supercomputer, large-scale simulation, parallel computing
Online Access: http://journal.frontiersin.org/article/10.3389/fninf.2017.00030/full
Author Affiliations:
Tammo Ippen: Institute of Neuroscience and Medicine (INM-6) and Institute for Advanced Simulation (IAS-6) and JARA BRAIN Institute I, Jülich Research Centre, Jülich, Germany; Faculty of Science and Technology, Norwegian University of Life Sciences, Ås, Norway
Jochen M. Eppler: Simulation Laboratory Neuroscience—Bernstein Facility Simulation and Database Technology, Institute for Advanced Simulation, Jülich Research Centre and JARA, Jülich, Germany
Hans E. Plesser: Institute of Neuroscience and Medicine (INM-6) and Institute for Advanced Simulation (IAS-6) and JARA BRAIN Institute I, Jülich Research Centre, Jülich, Germany; Faculty of Science and Technology, Norwegian University of Life Sciences, Ås, Norway; Department of Biosciences, Centre for Integrative Neuroplasticity, University of Oslo, Oslo, Norway
Markus Diesmann: Institute of Neuroscience and Medicine (INM-6) and Institute for Advanced Simulation (IAS-6) and JARA BRAIN Institute I, Jülich Research Centre, Jülich, Germany; Department of Psychiatry, Psychotherapy and Psychosomatics, Medical Faculty, RWTH Aachen University, Aachen, Germany; Department of Physics, Faculty 1, RWTH Aachen University, Aachen, Germany