Seeing is believing: Brain-inspired modular training for mechanistic interpretability

Z Liu, E Gan, M Tegmark - Entropy, 2023 - mdpi.com
We introduce Brain-Inspired Modular Training (BIMT), a method for making neural networks more modular and interpretable. Inspired by brains, BIMT embeds neurons in a geometric space and augments the loss function with a cost proportional to the length of each neuron connection. This draws on the idea of minimum connection cost in evolutionary biology, but we are the first to combine this idea with gradient-descent training of neural networks for interpretability. We demonstrate that BIMT discovers useful modular neural networks for many simple tasks, revealing compositional structures in symbolic formulas, interpretable decision boundaries and features for classification, and mathematical structure in algorithmic datasets. Qualitatively, BIMT-trained networks have modules readily identifiable by the naked eye, whereas regularly trained networks appear much more complicated. Quantitatively, we use Newman's method to compute the modularity of network graphs; BIMT achieves the highest modularity on all our test problems. A promising and ambitious future direction is to apply the proposed method to understand large models for vision, language, and science.
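The connection-cost penalty described above can be sketched in a few lines. This is a minimal illustration, not the paper's exact formulation: it assumes a single linear layer whose neurons are embedded on a 1-D line, and the specific coordinates, the L1 form of the penalty, and the regularization strength `lam` are all illustrative choices.

```python
import numpy as np

def connection_cost(weights, in_coords, out_coords):
    """Sum over all connections of |w_ij| times the geometric
    distance between the two neurons the connection links."""
    # dist[i, j] = distance between output neuron i and input neuron j
    dist = np.abs(out_coords[:, None] - in_coords[None, :])
    return np.sum(np.abs(weights) * dist)

# Toy layer: 3 input neurons and 2 output neurons placed on a line
# (hypothetical coordinates for illustration).
in_coords = np.array([0.0, 1.0, 2.0])
out_coords = np.array([0.5, 1.5])
W = np.array([[1.0, 0.0, 0.0],
              [0.0, 0.0, -2.0]])

lam = 0.1  # regularization strength (illustrative value)
task_loss = 0.0  # stand-in for the usual task loss
total_loss = task_loss + lam * connection_cost(W, in_coords, out_coords)
```

In a full training loop, `total_loss` would be backpropagated as usual, so gradient descent shrinks long-range weights more strongly than short-range ones, encouraging spatially local modules.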