Interpreting CNN models for musical instrument recognition using multi-spectrogram heatmap analysis: a preliminary study

Front Artif Intell. 2024 Dec 18:7:1499913. doi: 10.3389/frai.2024.1499913. eCollection 2024.

Abstract

Introduction: Musical instrument recognition is a critical component of music information retrieval (MIR) that aims to identify and classify instruments in audio recordings. The task poses significant challenges due to the complexity and variability of musical signals.

Methods: In this study, we employed convolutional neural networks (CNNs) to analyze the contributions of six spectrogram representations (STFT, Log-Mel, MFCC, Chroma, Spectral Contrast, and Tonnetz) to the classification of ten different musical instruments. The NSynth dataset was used for training and evaluation. Feature importance and model interpretability were assessed through visual heatmap analysis and statistical metrics, including Difference Mean, KL Divergence, JS Divergence, and Earth Mover's Distance.
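For illustration, the six representations can be extracted with standard audio tooling. The sketch below assumes librosa and NSynth's 16 kHz mono audio; the analysis parameters (n_fft, hop_length, n_mels, n_mfcc) are illustrative placeholders, not the settings used in the paper.

```python
# Minimal sketch of the six spectrogram representations, assuming librosa.
# Parameter values are illustrative, not the paper's configuration.
import numpy as np
import librosa

y, sr = librosa.load(librosa.ex("trumpet"), sr=16000)  # any mono signal works

stft = np.abs(librosa.stft(y, n_fft=1024, hop_length=256))
log_mel = librosa.power_to_db(
    librosa.feature.melspectrogram(y=y, sr=sr, n_mels=128), ref=np.max
)
mfcc = librosa.feature.mfcc(y=y, sr=sr, n_mfcc=20)
chroma = librosa.feature.chroma_stft(y=y, sr=sr)
contrast = librosa.feature.spectral_contrast(y=y, sr=sr)
tonnetz = librosa.feature.tonnetz(y=librosa.effects.harmonic(y), sr=sr)
```

The four comparison metrics can likewise be sketched with SciPy. Here two heatmaps are flattened and normalized to probability distributions; treating "Difference Mean" as a mean absolute difference and computing EMD over the flattened bins are assumptions, and the paper's exact formulations may differ.

```python
# Hedged sketch of the four heatmap-comparison metrics using SciPy.
import numpy as np
from scipy.stats import entropy, wasserstein_distance
from scipy.spatial.distance import jensenshannon

def to_dist(h, eps=1e-12):
    """Flatten a heatmap and normalize it to a probability distribution."""
    p = np.asarray(h, dtype=float).ravel()
    p = np.clip(p, 0, None) + eps  # avoid zeros for the log-based divergences
    return p / p.sum()

def heatmap_metrics(a, b):
    p, q = to_dist(a), to_dist(b)
    return {
        "diff_mean": float(np.mean(np.abs(p - q))),        # assumed definition
        "kl_divergence": float(entropy(p, q)),             # D_KL(p || q)
        "js_divergence": float(jensenshannon(p, q) ** 2),  # square of the JS distance
        "emd": float(
            wasserstein_distance(np.arange(p.size), np.arange(q.size), p, q)
        ),
    }

# Example: compare two random 8x8 "heatmaps"
rng = np.random.default_rng(0)
print(heatmap_metrics(rng.random((8, 8)), rng.random((8, 8))))
```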

Results: Our findings highlight the strengths and limitations of each spectrogram type in capturing the distinctive features of different instruments. MFCC and Log-Mel spectrograms performed best across most instruments, while the remaining representations offered complementary insights into instrument-specific characteristics.

Discussion: This analysis informs the optimization of spectrogram-based approaches to musical instrument recognition, offering guidance for future model development and improving interpretability through combined statistical and visual analyses.

Keywords: convolutional neural networks; feature extraction; feature maps; heatmaps; music information retrieval; musical instrument recognition; pattern recognition; spectrogram analysis.

Grants and funding

The author(s) declare that no financial support was received for the research, authorship, and/or publication of this article.