Electroencephalogram (EEG) emotion recognition plays an important role in human-computer interaction, and higher recognition accuracy improves the user experience. In recent years, domain adaptation methods from transfer learning have been used to build general emotion recognition models that cope with the domain differences among subjects and sessions. However, effectively reducing these domain differences remains challenging. In this paper, we propose a Multiple-Source Distribution Deep Adaptive Feature Norm Network for EEG emotion recognition, which reduces domain differences by improving the transferability of task-specific features. Specifically, the domain adaptation component of our model employs a three-layer network topology, inserts an Adaptive Feature Norm for self-supervised adjustment between layers, and combines a multiple-kernel selection approach for mean-embedding matching. The proposed method achieves the best classification performance on the SEED and SEED-IV datasets. On SEED, the average accuracies of the cross-subject and cross-session experiments are 85.01% and 91.93%, respectively. On SEED-IV, the average accuracy is 58.81% in the cross-subject experiments and 59.51% in the cross-session experiments. The experimental results demonstrate that our method effectively reduces domain differences and improves emotion recognition accuracy.
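Since the abstract only sketches the method, the following is a minimal illustrative implementation (not the authors' released code) of the two adaptation terms it names: a stepwise Adaptive Feature Norm penalty for self-supervised norm adjustment and a multiple-kernel MMD for mean-embedding matching. The function names, the bandwidth schedule, and the norm increment `delta_r` are assumptions made for the sketch.

```python
# Illustrative sketch of the two loss terms described in the abstract.
# Shapes, bandwidth heuristic, and delta_r are assumptions, not the paper's exact settings.
import torch


def adaptive_feature_norm(features: torch.Tensor, delta_r: float = 1.0) -> torch.Tensor:
    """Stepwise AFN: push each sample's feature norm slightly past its current value."""
    norms = features.norm(p=2, dim=1)
    target = norms.detach() + delta_r          # self-supervised target: no labels required
    return torch.mean((norms - target) ** 2)


def multi_kernel_mmd(source: torch.Tensor, target: torch.Tensor,
                     kernel_num: int = 5, kernel_mul: float = 2.0) -> torch.Tensor:
    """MMD with a mixture of Gaussian kernels at geometrically spaced bandwidths."""
    total = torch.cat([source, target], dim=0)
    dists = torch.cdist(total, total) ** 2      # pairwise squared Euclidean distances
    # Central bandwidth from the mean distance, then spread over kernel_num scales.
    bandwidth = dists.detach().mean() / (kernel_mul ** (kernel_num // 2))
    kernels = sum(torch.exp(-dists / (bandwidth * kernel_mul ** i))
                  for i in range(kernel_num))
    n = source.size(0)
    k_ss, k_tt, k_st = kernels[:n, :n], kernels[n:, n:], kernels[:n, n:]
    return k_ss.mean() + k_tt.mean() - 2.0 * k_st.mean()


if __name__ == "__main__":
    src = torch.randn(32, 64)                   # task-specific features from one source domain
    tgt = torch.randn(32, 64)                   # unlabeled target-domain features
    print(adaptive_feature_norm(src), multi_kernel_mmd(src, tgt))
```

In a multiple-source setup such as the one described here, one such MMD term would typically be computed per source-target pair and summed with the AFN penalty and the classification loss; the exact weighting is not specified in the abstract.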
Keywords: Domain adaptation; EEG; Emotion recognition; Transfer learning.