Parameterizing Large-Scale Neuroscience Simulations
A great deal of progress has been made in recent years on the design of biologically realistic neural network models and neuromorphic algorithms for solving computational problems in a brain-inspired way. Significant challenges remain, however, in handling the complexity of these algorithms.
To address the challenge of automating the construction of brain simulations, we and our collaborators in the Krasnow Institute's Computational Neuroanatomy Group and UC Irvine's Cognitive Anteater Robotics Lab propose an approach that leverages the optimization capabilities of evolutionary computation and takes advantage of the parallel nature of graphics processing units (GPUs). Because spiking neural network (SNN) simulation and fitness evaluation are time-consuming operations, optimization time can be significantly reduced if multiple SNNs run concurrently, each with distinct parameters. To automate the parameter-tuning process, one must define an objective function to be optimized; typically, this takes the form of minimizing the error between simulated and experimental data.
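The tuning loop described above can be sketched as a simple evolutionary optimizer. The sketch below is illustrative only: the `simulate` function is a hypothetical stand-in for a full SNN run (here a toy linear model), the `TARGET` data are made-up "experimental" firing rates, and the population size, mutation scale, and selection scheme are arbitrary choices rather than the actual method. Note that each individual's fitness evaluation is independent, which is what makes evaluating many candidate parameter sets concurrently on a GPU attractive.

```python
import random

# Made-up "experimental" firing rates (Hz) serving as the target data.
TARGET = [10.0, 25.0, 40.0]

def simulate(params):
    # Hypothetical stand-in for a GPU-based SNN simulation: a toy linear
    # model mapping a parameter vector to three output firing rates.
    return [params[0] * i + params[1] for i in range(1, 4)]

def fitness(params):
    # Objective: squared error between simulated and experimental data
    # (lower is better; the optimizer minimizes this).
    sim = simulate(params)
    return sum((s - t) ** 2 for s, t in zip(sim, TARGET))

def evolve(pop_size=20, generations=100, sigma=0.5, seed=0):
    rng = random.Random(seed)
    # Each individual is a candidate parameter vector; in a real tuning run
    # these would be synaptic weights, time constants, etc.
    pop = [[rng.uniform(-20, 20), rng.uniform(-20, 20)]
           for _ in range(pop_size)]
    for _ in range(generations):
        # In a parallel implementation, these fitness evaluations would run
        # concurrently, one SNN instance per candidate parameter set.
        pop.sort(key=fitness)
        parents = pop[: pop_size // 4]  # truncation selection with elitism
        children = []
        while len(parents) + len(children) < pop_size:
            p = rng.choice(parents)
            children.append([x + rng.gauss(0, sigma) for x in p])
        pop = parents + children
    return min(pop, key=fitness)

best = evolve()
```

With elitist selection the best fitness never worsens across generations, so the loop steadily reduces the error between simulated and target data.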