MI-Mamba: A hybrid motor imagery electroencephalograph classification model with Mamba's global scanning

Ann N Y Acad Sci. 2025 Jan 22. doi: 10.1111/nyas.15288. Online ahead of print.

Abstract

Deep learning has revolutionized electroencephalograph (EEG) decoding, with convolutional neural networks (CNNs) being a predominant tool. However, CNNs struggle with long-term dependencies in sequential EEG data. Models such as long short-term memory networks and transformers improve performance but still face challenges with computational efficiency on long sequences. Mamba, a state space model-based method, excels at modeling long sequences. To overcome the limitations of existing EEG decoding models and exploit Mamba's potential in EEG analysis, we propose MI-Mamba, a model integrating a CNN with Mamba for motor imagery (MI) data decoding. MI-Mamba processes multi-channel EEG signals through a single convolutional layer to capture spatial features in the local temporal domain, followed by a Mamba module that models global temporal features. A fully connected layer-based classifier is used to derive the classification results. Evaluated on two public MI datasets, MI-Mamba achieves 80.59% accuracy on the four-class MI task of the BCI Competition IV 2a dataset and 84.42% on the two-class task of the BCI Competition IV 2b dataset, while reducing the parameter count by nearly a factor of six compared to the most advanced previous models. These results highlight MI-Mamba's effectiveness in MI decoding and its potential as a new backbone for general EEG decoding.
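The pipeline described above (one convolutional layer for local spatio-temporal features, a Mamba module scanning the full sequence, then a fully connected classifier) can be sketched as follows. This is a minimal illustrative sketch, not the authors' implementation: all array names and dimensions are hypothetical, and the Mamba block is stood in for by a plain fixed-parameter linear state-space recurrence (h_t = A h_{t-1} + B x_t, y_t = C h_t), whereas real Mamba uses input-dependent (selective) SSM parameters and a hardware-aware scan.

```python
import numpy as np

def mi_mamba_forward(x, conv_w, A, B, C, fc_w):
    """Sketch of the MI-Mamba-style pipeline (hypothetical shapes).

    x:      (channels, time)            multi-channel EEG segment
    conv_w: (features, channels, kernel) single conv layer weights
    A,B,C:  SSM matrices (stand-in for Mamba's selective scan)
    fc_w:   (classes, ssm_out)          fully connected classifier
    """
    # 1) Single convolutional layer: spatial features over a local
    #    temporal window (valid convolution across time).
    n_feat, n_ch, k = conv_w.shape
    T = x.shape[1] - k + 1
    feats = np.empty((n_feat, T))
    for f in range(n_feat):
        for t in range(T):
            feats[f, t] = np.sum(conv_w[f] * x[:, t:t + k])

    # 2) Global temporal modeling via a linear SSM recurrence
    #    (simplified placeholder for the Mamba module).
    h = np.zeros(A.shape[0])
    outputs = []
    for t in range(T):
        h = A @ h + B @ feats[:, t]   # state update
        outputs.append(C @ h)         # per-step output
    pooled = np.mean(outputs, axis=0)  # pool over time

    # 3) Fully connected classifier producing class logits.
    return fc_w @ pooled

# Example with toy shapes: 3 EEG channels, 20 time points, 4 classes.
rng = np.random.default_rng(0)
x = rng.standard_normal((3, 20))
conv_w = rng.standard_normal((4, 3, 5))
A = 0.9 * np.eye(8)                    # stable fixed dynamics
B = rng.standard_normal((8, 4))
C = rng.standard_normal((6, 8))
fc_w = rng.standard_normal((4, 6))
logits = mi_mamba_forward(x, conv_w, A, B, C, fc_w)
```

The sketch only conveys the data flow; in practice the SSM step is what gives the model near-linear scaling in sequence length, which is why it replaces attention here.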

Keywords: Mamba; brain-computer interface (BCI); deep learning; electroencephalograph (EEG); motor imagery (MI).