Last week, the
G-Node workshop on neuronal GPU computing took place at LMU Munich. I had the pleasure of organizing the scientific part, while Christian Kellner and Thomas Wachtler from
G-Node did an extremely good job taking care of local organization (with solid support from lovely Manuela Brandenburg).
We had a one-day symposium with talks, followed by a two-day hands-on developer workshop.
The first speaker of the symposium was
Romain Brette from
ENS Paris, who presented a graph-theoretical approach to optimizing the memory arrangement of neuronal networks for efficient simulation on GPUs.
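Romain did not spell out his algorithm in a way I could reproduce here, but a classic graph-theoretical trick in this spirit is a Cuthill-McKee-style BFS reordering, which renumbers neurons so that connected ones end up at nearby indices (and hence nearby memory addresses). A minimal pure-Python sketch of that general idea, not of Romain's actual method:

```python
from collections import deque

def bfs_order(adjacency):
    """Cuthill-McKee-style reordering: BFS from a low-degree node,
    visiting neighbours in order of degree, so that connected neurons
    receive nearby indices in the new numbering."""
    n = len(adjacency)
    visited, order = set(), []
    # Start from low-degree nodes first, covering all components.
    for start in sorted(range(n), key=lambda i: len(adjacency[i])):
        if start in visited:
            continue
        queue = deque([start])
        visited.add(start)
        while queue:
            node = queue.popleft()
            order.append(node)
            for nb in sorted(adjacency[node], key=lambda i: len(adjacency[i])):
                if nb not in visited:
                    visited.add(nb)
                    queue.append(nb)
    return order

# Two clusters whose members are interleaved in the original numbering...
adj = {0: [2, 4], 2: [0, 4], 4: [0, 2], 1: [3, 5], 3: [1, 5], 5: [1, 3]}
print(bfs_order(adj))  # ...become contiguous: [0, 2, 4, 1, 3, 5]
```

After such a reordering, the synapses of strongly interconnected neurons can be fetched with far fewer scattered memory transactions.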
Giovanni Idili from the OpenWorm project presented their work on a simulation of
C. elegans. The cool thing about that project is that the simulation is not restricted to the neuronal part of the worm, but actually links worm physics to worm physiology. Very interesting work, as it is one of the still very few approaches aiming at the embodiment of a simulated neuronal network. I also learnt a lot about software engineering, as Giovanni described their approach to linking the various simulations using a
service-oriented architecture based on
OSGi-bundles.
Afterwards,
Dave Higgins from ENS Paris gave an account of how he learnt to use
OpenCL the hard way. It was interesting to see what OpenCL requires you to do that
CUDA doesn't. Also, he somehow paved the way for the code generation talks by Thomas Nowotny and Damien Drix in the afternoon.
The last speaker of the morning session was Javier Baladron from
Olivier Faugeras' workgroup at INRIA Sophia-Antipolis. His project dealt with stochastic neuronal models, in which random number generation soon turned out to be the major bottleneck. They overcame it by using the
cuRAND library on a cluster of 14 (fourteen!) GPUs.
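Javier's fix illustrates a general pattern: instead of drawing random numbers one at a time inside the simulation loop, generate them in large device-side batches up front (which is exactly what cuRAND is good at). A NumPy sketch of the same batching idea, with a toy noise-driven update that is my own invention and not the model from the talk:

```python
import numpy as np

def simulate_batched(n_neurons=1000, n_steps=100, seed=42):
    """Drive a toy stochastic membrane update from a single
    pre-generated block of random numbers, mirroring batched
    device-side generation (as with cuRAND) rather than
    per-step calls into the generator."""
    rng = np.random.default_rng(seed)
    # One large batch up front instead of n_steps separate calls.
    noise = rng.standard_normal((n_steps, n_neurons))
    v = np.zeros(n_neurons)
    for t in range(n_steps):
        # Leaky update with additive noise (toy model).
        v += 0.1 * (-v) + 0.5 * noise[t]
    return v

v = simulate_batched()
print(v.shape)  # (1000,)
```

On a GPU the batch lives in device memory, so the expensive generator state handling is amortized over the whole block.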
After lunch,
Dan Goodman gave a short and to-the-point overview of the use of GPU computing in the
Brian simulator, including the integration of the
NeMo Simulator and the model fitting toolbox.
Andreas Fidjeland (Imperial College London), the developer of
NeMo, presented his algorithm for the efficient delivery of synaptic events on GPUs, considering memory layout and access patterns. He also showed some convincing benchmark data.
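The memory-layout concern Andreas addressed can be illustrated with a standard CSR-like synapse table, in which each neuron's outgoing synapses are stored contiguously, so that delivering its spikes reads one contiguous slice of memory. This is my own simplification for illustration, not NeMo's actual data structure:

```python
import numpy as np

def build_csr(synapses, n_neurons):
    """Group (pre, post, weight) synapses by presynaptic neuron into
    flat contiguous arrays, so spike delivery reads one slice per neuron."""
    order = np.argsort([s[0] for s in synapses], kind="stable")
    pre = np.array([synapses[i][0] for i in order])
    post = np.array([synapses[i][1] for i in order])
    weight = np.array([synapses[i][2] for i in order])
    # offsets[n] .. offsets[n + 1] delimit neuron n's outgoing synapses.
    offsets = np.searchsorted(pre, np.arange(n_neurons + 1))
    return offsets, post, weight

def deliver(spiking, offsets, post, weight, n_neurons):
    """Accumulate synaptic input from all spiking neurons."""
    input_current = np.zeros(n_neurons)
    for n in spiking:
        lo, hi = offsets[n], offsets[n + 1]
        np.add.at(input_current, post[lo:hi], weight[lo:hi])
    return input_current

synapses = [(0, 1, 0.5), (0, 2, 0.25), (2, 0, 1.0)]  # (pre, post, weight)
offsets, post, weight = build_csr(synapses, n_neurons=3)
print(deliver([0], offsets, post, weight, 3))  # neuron 1 gets 0.5, neuron 2 gets 0.25
```

On a GPU the contiguous slices are what allow warps to make coalesced reads; scattered per-synapse lookups would serialize the memory traffic.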
After a short coffee break,
Thomas Nowotny (
Sussex University) gave a very nice talk on his experience with neuronal GPU computing. Thomas's first encounter with neuronal GPU computing dates back to 2009 (or was it 2007?), and he has experienced all the improvements made at the software and hardware levels since then. It was also very instructive to see how he ended up doing code generation on GPUs instead of trying to pass arguments to kernels, somehow complementing Dave's talk from the morning.
Damien Drix from Eilif Muller's group at EPFL Lausanne showed an impressive code generation framework (including a nice GUI with sliders and all :) ) that converts
NeuroML into code to be compiled and executed on GPUs.
The final slot of the symposium belonged to
Pierre Yger (Imperial College London), who demonstrated the power of
PyNN for GPU-based neuronal simulations. He nicely laid out the advantage of being able to validate (or falsify!) your simulation results by running the exact same network on different simulators. One of the most impressive things he showed was how strongly some simulations were affected by factors like the timestep used for numerical integration, the integration algorithm itself, or even details of the implementation of models and simulators.