MS-CLSTM: Myoelectric Manipulator Gesture Recognition Based on Multi-Scale Feature Fusion CNN-LSTM Network

Biomimetics (Basel). 2024 Dec 23;9(12):784. doi: 10.3390/biomimetics9120784.

Abstract

Surface electromyography (sEMG) signals reflect the local electrical activity of muscle fibers and the synergistic action of the overall muscle group, making them useful for gesture control of myoelectric manipulators. In recent years, deep learning methods have increasingly been applied to sEMG gesture recognition due to their powerful automatic feature extraction capabilities. sEMG signals contain rich local details and global patterns, but single-scale convolutional networks are limited in their ability to capture both comprehensively, which restricts model performance. This paper proposes MS-CLSTM (MS Block-ResCBAM-Bi-LSTM), a deep learning model based on multi-scale feature fusion. The MS Block uses convolutional kernels of different scales to extract local details, global patterns, and inter-channel correlations from sEMG signals. The ResCBAM module, which integrates CBAM (Convolutional Block Attention Module) and Simple-ResNet, enhances attention to key gesture information while alleviating the overfitting common on small-sample datasets. Experimental results demonstrate that the MS-CLSTM model achieves recognition accuracies of 86.66% and 83.27% on the Ninapro DB2 and DB4 datasets, respectively, and reaches 89% accuracy in real-time myoelectric manipulator gesture prediction experiments. The proposed model exhibits superior performance in sEMG gesture recognition tasks, offering an effective solution for prosthetic hand control, robotic control, and other human-computer interaction applications.

Keywords: deep learning; multi-scale feature fusion; real-time prediction; sEMG gesture recognition.
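
Illustrative sketch. The abstract names the pipeline (MS Block, then ResCBAM, then Bi-LSTM) but gives no layer sizes, kernel scales, channel counts, or class counts, so the minimal PyTorch sketch below is an assumption of one plausible realization: kernel sizes 3/5/7, 64 channels per branch, a 128-unit Bi-LSTM, a 12-channel sEMG input window, and 50 gesture classes are all illustrative choices, not the authors' configuration.

# Hypothetical sketch of an MS-CLSTM-style model; hyperparameters are assumptions.
import torch
import torch.nn as nn


class MSBlock(nn.Module):
    """Multi-scale block: parallel 1D convolutions with different kernel sizes,
    concatenated to fuse local detail with broader temporal context."""
    def __init__(self, in_ch, out_ch, kernel_sizes=(3, 5, 7)):
        super().__init__()
        self.branches = nn.ModuleList([
            nn.Sequential(
                nn.Conv1d(in_ch, out_ch, k, padding=k // 2),
                nn.BatchNorm1d(out_ch),
                nn.ReLU(),
            )
            for k in kernel_sizes
        ])

    def forward(self, x):                      # x: (batch, channels, time)
        return torch.cat([b(x) for b in self.branches], dim=1)


class CBAM1d(nn.Module):
    """Simplified 1D CBAM: channel attention followed by temporal attention."""
    def __init__(self, ch, reduction=8):
        super().__init__()
        self.mlp = nn.Sequential(
            nn.Linear(ch, ch // reduction), nn.ReLU(), nn.Linear(ch // reduction, ch)
        )
        self.spatial = nn.Conv1d(2, 1, kernel_size=7, padding=3)

    def forward(self, x):                      # x: (batch, ch, time)
        avg = self.mlp(x.mean(dim=2))
        mx = self.mlp(x.amax(dim=2))
        x = x * torch.sigmoid(avg + mx).unsqueeze(2)        # channel attention
        s = torch.cat([x.mean(dim=1, keepdim=True),
                       x.amax(dim=1, keepdim=True)], dim=1)
        return x * torch.sigmoid(self.spatial(s))           # temporal attention


class ResCBAM(nn.Module):
    """Residual block with CBAM attention applied to the residual branch."""
    def __init__(self, ch):
        super().__init__()
        self.conv = nn.Sequential(
            nn.Conv1d(ch, ch, 3, padding=1), nn.BatchNorm1d(ch), nn.ReLU(),
            nn.Conv1d(ch, ch, 3, padding=1), nn.BatchNorm1d(ch),
        )
        self.cbam = CBAM1d(ch)

    def forward(self, x):
        return torch.relu(x + self.cbam(self.conv(x)))


class MSCLSTM(nn.Module):
    """MS Block -> ResCBAM -> Bi-LSTM -> classifier (structure per the abstract)."""
    def __init__(self, emg_channels=12, n_classes=50, width=64, lstm_hidden=128):
        super().__init__()
        self.ms = MSBlock(emg_channels, width)              # 3 branches -> 3*width channels
        self.res = ResCBAM(3 * width)
        self.lstm = nn.LSTM(3 * width, lstm_hidden, batch_first=True, bidirectional=True)
        self.fc = nn.Linear(2 * lstm_hidden, n_classes)

    def forward(self, x):                      # x: (batch, emg_channels, time)
        h = self.res(self.ms(x))
        h, _ = self.lstm(h.transpose(1, 2))    # to (batch, time, features)
        return self.fc(h[:, -1])               # last time step -> gesture logits


if __name__ == "__main__":
    logits = MSCLSTM()(torch.randn(2, 12, 200))   # 2 windows, 12 channels, 200 samples
    print(logits.shape)                           # torch.Size([2, 50])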