Introduction: Musical instrument recognition is a critical component of music information retrieval (MIR), aimed at identifying and classifying instruments from audio recordings. This task poses significant challenges due to the complexity and variability of musical signals.
Methods: In this study, we employed convolutional neural networks (CNNs) to analyze the contributions of various spectrogram representations (STFT, Log-Mel, MFCC, Chroma, Spectral Contrast, and Tonnetz) to the classification of ten different musical instruments. The NSynth dataset was used for training and evaluation. Visual heatmap analysis and statistical metrics, including Difference Mean, KL Divergence, JS Divergence, and Earth Mover's Distance, were used to assess feature importance and model interpretability.
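The abstract does not give the exact formulas behind these comparison metrics, but their standard definitions can be sketched in a few lines of NumPy. This is a minimal illustrative sketch, assuming the heatmaps are flattened and normalized into probability distributions before comparison; the `to_prob` helper and the 1-D Earth Mover's Distance via cumulative sums are assumptions for illustration, not the authors' implementation.

```python
import numpy as np

def to_prob(x, eps=1e-12):
    """Flatten a heatmap and normalize it into a probability distribution."""
    p = np.asarray(x, dtype=float).ravel()
    p = np.clip(p, eps, None)  # avoid log(0) in the divergences below
    return p / p.sum()

def difference_mean(p, q):
    """Mean absolute difference between two distributions."""
    return np.mean(np.abs(p - q))

def kl_divergence(p, q):
    """Kullback-Leibler divergence KL(p || q), in nats."""
    return np.sum(p * np.log(p / q))

def js_divergence(p, q):
    """Jensen-Shannon divergence: a symmetrized, bounded variant of KL."""
    m = 0.5 * (p + q)
    return 0.5 * kl_divergence(p, m) + 0.5 * kl_divergence(q, m)

def earth_movers_distance(p, q):
    """1-D Earth Mover's Distance via cumulative distribution functions."""
    return np.sum(np.abs(np.cumsum(p) - np.cumsum(q)))

# Example: compare two toy 2x2 "heatmaps"
a = to_prob([[1.0, 2.0], [3.0, 4.0]])
b = to_prob([[4.0, 3.0], [2.0, 1.0]])
print(difference_mean(a, b), kl_divergence(a, b),
      js_divergence(a, b), earth_movers_distance(a, b))
```

For 2-D heatmaps, a true EMD would require solving a transport problem over the grid; the cumulative-sum form above only applies to the flattened 1-D case.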
Results: Our findings highlight the strengths and limitations of each spectrogram type in capturing distinctive features of different instruments. MFCC and Log-Mel spectrograms achieved superior classification performance across most instruments, while the remaining representations offered complementary insights into instrument-specific characteristics.
Discussion: This analysis offers guidance for optimizing spectrogram-based approaches to musical instrument recognition, informing future model development and improving interpretability through combined statistical and visual analyses.
Keywords: convolutional neural networks; feature extraction; feature maps; heatmaps; music information retrieval; musical instrument recognition; pattern recognition; spectrogram analysis.
Copyright © 2024 Chen, Ghobakhlou and Narayanan.