Computing nasalance with MFCCs and Convolutional Neural Networks

PLoS One. 2024 Dec 31;19(12):e0315452. doi: 10.1371/journal.pone.0315452. eCollection 2024.

Abstract

Nasalance is a valuable clinical biomarker for hypernasality. It is computed as the ratio of acoustic energy emitted through the nose to the total energy emitted through the mouth and nose (eNasalance). A new approach is proposed to compute nasalance using Convolutional Neural Networks (CNNs) trained with Mel-Frequency Cepstral Coefficients (mfccNasalance). mfccNasalance is evaluated by examining its accuracy: 1) when the training and test data are from the same or from different dialects; 2) with test data that differs in dynamicity (e.g. rapidly produced diadochokinetic syllables versus short words); and 3) using multiple CNN configurations (i.e. kernel shape and use of 1 × 1 pointwise convolution). Dual-channel Nasometer speech data were recorded from healthy speakers of different dialects: Costa Rica (more (+) nasal), and Spain and Chile (less (-) nasal). The input to the CNN models consisted of sequences of 39 MFCC vectors computed from 250 ms moving windows. The test data were recorded in Spain and included short words (-dynamic), sentences (+dynamic), and diadochokinetic syllables (+dynamic). The accuracy of a CNN model was defined as the Spearman correlation between the mfccNasalance for that model and the perceptual nasality scores of human experts. In the same-dialect condition, mfccNasalance was more accurate than eNasalance regardless of the CNN configuration; using a 1 × 1 kernel increased accuracy for +dynamic utterances (p < .001), though not for -dynamic utterances. Kernel shape had a significant impact only for -dynamic utterances (p < .001). In the different-dialect condition, the scores were significantly less accurate than in the same-dialect condition, particularly for Costa Rica-trained models. We conclude that mfccNasalance is a flexible and useful alternative to eNasalance. Future studies should explore how to optimize mfccNasalance by selecting the most appropriate CNN model as a function of the dynamicity of the target speech data.
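
For orientation, the quantities described in the abstract can be sketched in Python (the record itself provides no code). The snippet below assumes a dual-channel recording split into `nasal` and `oral` signals, the common 13 MFCC + delta + delta-delta (= 39 coefficient) layout, a 50 ms window hop, and 25 ms / 10 ms analysis frames inside each 250 ms window; these parameter choices are illustrative assumptions, not the authors' exact configuration.

```python
import numpy as np
import librosa
from scipy.stats import spearmanr


def e_nasalance(nasal: np.ndarray, oral: np.ndarray) -> float:
    """Energy-based nasalance (eNasalance): nasal energy over total energy.
    Computed over the whole signal for simplicity; clinical software typically
    averages frame-wise values and reports a percentage."""
    e_nasal = float(np.sum(nasal.astype(np.float64) ** 2))
    e_oral = float(np.sum(oral.astype(np.float64) ** 2))
    return e_nasal / (e_nasal + e_oral)


def mfcc_sequences(signal: np.ndarray, sr: int,
                   win_s: float = 0.25, hop_s: float = 0.05):
    """Return one 39-dim MFCC feature matrix per 250 ms moving window
    (13 MFCCs + deltas + delta-deltas; the 50 ms window hop is an assumption)."""
    hop_length = int(0.010 * sr)                        # 10 ms analysis hop
    mfcc = librosa.feature.mfcc(y=signal, sr=sr, n_mfcc=13,
                                n_fft=int(0.025 * sr), hop_length=hop_length)
    feats = np.vstack([mfcc,
                       librosa.feature.delta(mfcc),
                       librosa.feature.delta(mfcc, order=2)])   # shape (39, T)
    frames_per_win = int(win_s * sr / hop_length)       # ~25 frames per 250 ms
    frames_per_hop = int(hop_s * sr / hop_length)
    return [feats[:, i:i + frames_per_win]
            for i in range(0, feats.shape[1] - frames_per_win + 1, frames_per_hop)]


def accuracy(model_scores, perceptual_scores) -> float:
    """Accuracy as defined in the abstract: Spearman correlation between a
    model's nasalance scores and experts' perceptual nasality ratings."""
    rho, _p = spearmanr(model_scores, perceptual_scores)
    return rho
```

A CNN would then consume each 39 × frames window as a two-dimensional input, and the Spearman correlation serves as the per-model accuracy measure once both mfccNasalance predictions and expert ratings are available.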

MeSH terms

  • Adult
  • Chile
  • Costa Rica
  • Female
  • Humans
  • Language
  • Male
  • Neural Networks, Computer*
  • Nose / physiology
  • Speech / physiology
  • Speech Acoustics
  • Speech Production Measurement / methods

Grants and funding

This research was funded by the Spanish MINISTERIO DE CIENCIA, INNOVACIÓN Y UNIVERSIDADES, grant number PID2021-126366OB-I00. This funding was received by Enrique Nava and Ignacio Moreno-Torres. This research was also funded by the Spanish JUNTA DE ANDALUCIA, grant number UMA18FEDERJA021. This funding was received by Enrique Nava and Ignacio Moreno-Torres.