Science advances by building upon previous experiments, and in computing it is essential to reuse existing code to continue development on a given project. Reproducibility is a fundamental building block of science, and experimental reproducibility issues have recently been a matter of great concern. It may be surprising that reproducibility is also a concern in computational science. In this study, we used previously published code to investigate neural network activity and were unable to replicate our original results. This led us to examine the code in question, and we found that several distinct aspects of the computation, all attributable to floating-point arithmetic, caused these replicability issues. Furthermore, we uncovered other manifestations of this lack of replicability in other parts of the computation with this model. The simulated model is a standard system of ordinary differential equations, much like those commonly used in computational neuroscience. We therefore believe that other researchers in the field should be vigilant when using such models and should avoid drawing conclusions from calculations whose qualitative results can be substantially altered by non-reproducible circumstances.
Keywords: Floating-point precision; Neural network; Numerical simulation; Replicability.
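As a minimal illustration of the floating-point effects discussed above (not taken from the study itself), the sketch below shows that floating-point addition is not associative: merely reordering a sum changes its result, which is one way compiler optimizations, parallel reductions, or library updates can alter simulation trajectories.

```python
# Hypothetical example: IEEE 754 double-precision addition is not
# associative, so the grouping of operations affects the result.
a, b, c = 1e16, -1e16, 1.0

left = (a + b) + c   # (1e16 - 1e16) + 1.0 -> 1.0
right = a + (b + c)  # 1.0 is absorbed into -1e16, so the sum is 0.0

print(left, right)  # 1.0 0.0
```

In a chaotic or near-bifurcation dynamical system, a discrepancy of this size at one time step can grow until the qualitative behavior of the simulation diverges.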