Individual Classification of Single-Trial EEG Traces to Discriminate Brain Responses to Speech with Different Signal-to-Noise Ratios

Annu Int Conf IEEE Eng Med Biol Soc. 2018 Jul:2018:987-990. doi: 10.1109/EMBC.2018.8512491.

Abstract

To gain knowledge of listening effort in adverse situations, it is important to know how the brain processes speech at different signal-to-noise ratios (SNRs). To investigate this, we conducted a study with 33 hearing-impaired individuals, whose electroencephalographic (EEG) signals were recorded while they listened to sentences presented in high and low levels of background noise. To discriminate between these two conditions, features were extracted from the 64-channel EEG recordings using the power spectrum obtained by a Fast Fourier Transform. Feature vectors were selected on an individual basis using the statistical R² approach. The selected features were then classified by a Support Vector Machine with a nonlinear kernel, and the classification results were validated using a leave-one-out strategy, yielding an average classification accuracy across all 33 subjects of 83% (SD = 6.4%). The most discriminative features were found in the high-beta (19-30 Hz) and gamma (30-45 Hz) bands. These results suggest that specific brain oscillations are involved in coping with background noise during speech stimuli, which may reflect differences in cognitive load between the low- and high-background-noise conditions.
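The pipeline described above (FFT band-power features, per-feature R² selection, a nonlinear-kernel SVM, and leave-one-out validation) can be sketched roughly as follows. This is a minimal illustration on synthetic data, not the authors' code: the sampling rate, trial counts, feature counts, and the use of an RBF kernel and squared point-biserial correlation for R² are all assumptions made for the sketch.

```python
import numpy as np
from sklearn.svm import SVC
from sklearn.model_selection import LeaveOneOut, cross_val_score

rng = np.random.default_rng(0)

# Synthetic stand-in for single-trial EEG epochs: (trials, channels, samples).
# The real study used 64 channels; a small toy set is used here.
fs = 256                                  # sampling rate (assumed)
n_trials, n_channels = 40, 8
X_raw = rng.standard_normal((n_trials, n_channels, fs))
y = np.repeat([0, 1], n_trials // 2)      # 0 = low noise, 1 = high noise
# Inject extra gamma-band power into one class so the toy data are separable.
t = np.arange(fs) / fs
X_raw[y == 1] += 0.8 * np.sin(2 * np.pi * 35 * t)

def band_power_features(X, fs, bands):
    """Mean FFT power per channel in each frequency band."""
    spec = np.abs(np.fft.rfft(X, axis=-1)) ** 2
    freqs = np.fft.rfftfreq(X.shape[-1], d=1.0 / fs)
    feats = []
    for lo, hi in bands:
        mask = (freqs >= lo) & (freqs < hi)
        feats.append(spec[..., mask].mean(axis=-1))
    return np.concatenate(feats, axis=-1)  # (trials, channels * n_bands)

# High-beta and gamma bands, as reported in the abstract.
bands = [(19, 30), (30, 45)]
X = band_power_features(X_raw, fs, bands)

# Per-feature R^2 against the class labels (squared point-biserial
# correlation here), keeping the most discriminative features.
r = np.array([np.corrcoef(X[:, j], y)[0, 1] for j in range(X.shape[1])])
top = np.argsort(r ** 2)[::-1][:6]
X_sel = X[:, top]

# Nonlinear-kernel SVM validated with a leave-one-out strategy.
clf = SVC(kernel="rbf", gamma="scale")
acc = cross_val_score(clf, X_sel, y, cv=LeaveOneOut()).mean()
print(f"LOO accuracy: {acc:.2f}")
```

Because the synthetic classes differ sharply in gamma power, the toy classifier separates them easily; on real single-trial EEG the per-subject feature selection step is what makes individual classification feasible.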

MeSH terms

  • Aged
  • Auditory Perception
  • Brain / physiology*
  • Electroencephalography*
  • Female
  • Humans
  • Male
  • Middle Aged
  • Noise
  • Signal-To-Noise Ratio*
  • Speech
  • Speech Perception*
  • Support Vector Machine