Speech identification in noise: Contribution of temporal, spectral, and visual speech cues

J Acoust Soc Am. 2009 Dec;126(6):3246-57. doi: 10.1121/1.3250425.

Abstract

This study investigated the degree to which two types of reduced auditory signals (cochlear implant simulations) and visual speech cues combined for speech identification. The auditory speech stimuli were filtered to retain either amplitude envelope cues alone or both amplitude envelope and spectral cues, and were presented with or without visual speech. In Experiment 1, IEEE sentences were presented in quiet and in noise. For presentation in quiet, speech identification was enhanced by the addition of both spectral and visual speech cues; due to a ceiling effect, the degree to which these effects combined could not be determined. In noise, these facilitation effects were more marked and were additive. Experiment 2 examined consonant and vowel identification in the context of CVC or VCV syllables presented in noise. For consonants, both spectral and visual speech cues facilitated identification, and these effects were additive. For vowels, the effect of the combined cues was underadditive: the effect of spectral cues was reduced when they were presented with visual speech cues. Analysis indicated that without visual speech, spectral cues facilitated the transmission of place and vowel-height information, whereas with visual speech they facilitated the transmission of lip-rounding information, with little impact on place information.
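
The "amplitude envelope only" stimuli described above are the kind of noise-vocoded speech commonly used as cochlear implant simulations. The article does not provide its processing code, so the Python sketch below is only a generic illustration of that technique; the channel count, band edges, filter order, and carrier choice are assumptions for the example, not the study's actual parameters.

```python
# Minimal noise-vocoder sketch (illustrative only; parameters are assumptions,
# not those used in the study): per-band amplitude envelopes modulate noise
# carriers, discarding spectral fine structure.
import numpy as np
from scipy.signal import butter, sosfilt, hilbert

def noise_vocode(signal, fs, n_channels=4, f_lo=100.0, f_hi=7000.0):
    """Keep only per-band amplitude envelopes of `signal`, carried by noise."""
    # Log-spaced band edges between f_lo and f_hi (an assumed banding scheme).
    edges = np.geomspace(f_lo, f_hi, n_channels + 1)
    out = np.zeros(len(signal))
    for lo, hi in zip(edges[:-1], edges[1:]):
        sos = butter(4, [lo, hi], btype="bandpass", fs=fs, output="sos")
        band = sosfilt(sos, signal)
        envelope = np.abs(hilbert(band))          # amplitude envelope of this band
        carrier = np.random.randn(len(signal))    # noise carrier, no spectral detail
        out += sosfilt(sos, envelope * carrier)   # re-filter so the product stays in band
    return out / (np.max(np.abs(out)) + 1e-12)    # normalize to avoid clipping

# Example: vocode one second of a synthetic, speech-like amplitude-modulated tone.
fs = 16000
t = np.arange(fs) / fs
speech_like = np.sin(2 * np.pi * 150 * t) * (1 + 0.5 * np.sin(2 * np.pi * 4 * t))
simulated = noise_vocode(speech_like, fs, n_channels=4)
```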
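
The statement that cues "facilitated the transmission of place information" points to a Miller and Nicely-style information transmission analysis of the consonant and vowel confusion matrices. The abstract gives no analysis details, so the sketch below is a hypothetical illustration of that general method: the confusion matrix, the two-way place grouping, and the function names are invented for the example.

```python
# Hedged sketch of feature information transmission from a confusion matrix
# (Miller & Nicely-style analysis); all data below are toy values.
import numpy as np

def transmitted_information(confusions):
    """Bits transmitted, estimated from a stimulus-by-response count matrix."""
    p = confusions / confusions.sum()
    px = p.sum(axis=1, keepdims=True)   # stimulus marginal
    py = p.sum(axis=0, keepdims=True)   # response marginal
    with np.errstate(divide="ignore", invalid="ignore"):
        terms = p * np.log2(p / (px * py))
    return np.nansum(terms)

def feature_transmission(confusions, feature_labels):
    """Collapse phoneme confusions onto a feature (e.g., place) and recompute."""
    labels = np.asarray(feature_labels)
    cats = np.unique(labels)
    collapsed = np.zeros((len(cats), len(cats)))
    for i, ci in enumerate(cats):
        for j, cj in enumerate(cats):
            collapsed[i, j] = confusions[np.ix_(labels == ci, labels == cj)].sum()
    return transmitted_information(collapsed)

# Toy 4-consonant confusion matrix (rows = presented, cols = responded) and a
# hypothetical two-category place grouping; numbers are illustrative only.
conf = np.array([[20, 5, 3, 2],
                 [4, 21, 2, 3],
                 [2, 3, 22, 3],
                 [1, 2, 4, 23]])
place = ["front", "front", "back", "back"]
print(feature_transmission(conf, place))  # bits of place information transmitted
```

In published analyses the transmitted information is typically normalized by the stimulus entropy to give a percentage of feature information transmitted, which is the form in which "transmission of place information" is usually reported.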

Publication types

  • Research Support, Non-U.S. Gov't

MeSH terms

  • Acoustic Stimulation
  • Cochlear Implants
  • Cues*
  • Humans
  • Lip
  • Noise*
  • Pattern Recognition, Physiological
  • Perceptual Masking
  • Phonetics
  • Psychoacoustics
  • Psycholinguistics
  • Recognition, Psychology
  • Speech
  • Speech Acoustics
  • Speech Perception*
  • Task Performance and Analysis
  • Visual Perception*