Emotional voices in context: a neurobiological model of multimodal affective information processing

Phys Life Rev. 2011 Dec;8(4):383-403. doi: 10.1016/j.plrev.2011.10.002. Epub 2011 Oct 19.

Abstract

Just as the eyes are often considered a gateway to the soul, the human voice offers a window through which we gain access to our fellow human beings' minds: their attitudes, intentions and feelings. Whether in talking or singing, crying or laughing, sighing or screaming, the sheer sound of a voice communicates a wealth of information that, in turn, may serve the observant listener as a valuable guidepost in social interaction. But how do human beings extract information from the tone of a voice? In an attempt to answer this question, the present article reviews empirical evidence detailing the cerebral processes that underlie our ability to decode emotional information from vocal signals. The review will focus primarily on two prominent classes of vocal emotion cues: laughter and speech prosody (i.e., the tone of voice while speaking). Following a brief introduction, behavioral as well as neuroimaging data will be summarized that allow us to outline the cerebral mechanisms associated with the decoding of emotional voice cues, as well as the influence of various context variables (e.g., co-occurring facial and verbal emotional signals, attention focus, person-specific parameters such as gender and personality) on the respective processes. Building on the presented evidence, a cerebral network model will be introduced that proposes a differential contribution of various cortical and subcortical brain structures to the processing of emotional voice signals, both in isolation and in the context of accompanying (facial and verbal) emotional cues.

Publication types

  • Review

MeSH terms

  • Animals
  • Brain / physiology
  • Emotions / physiology*
  • Humans
  • Models, Neurological*
  • Neurobiology / methods*
  • Nonverbal Communication / physiology
  • Nonverbal Communication / psychology
  • Voice / physiology*