Listening to an audio drama activates two processing networks, one for all sounds, another exclusively for speech

PLoS One. 2013 May 29;8(5):e64489. doi: 10.1371/journal.pone.0064489. Print 2013.

Abstract

Earlier studies have shown considerable intersubject synchronization of brain activity when subjects watch the same movie or listen to the same story. Here we investigated the across-subjects similarity of brain responses to speech and non-speech sounds in a continuous audio drama designed for blind people. Thirteen healthy adults listened to the audio drama for ∼19 min while their brain activity was measured with 3 T functional magnetic resonance imaging (fMRI). An intersubject-correlation (ISC) map, computed across the whole experiment to assess the stimulus-driven extrinsic brain network, indicated statistically significant ISC in the temporal, frontal, and parietal cortices, the cingulate cortex, and the amygdala. Group-level independent component (IC) analysis was used to decompose the brain signals into functionally coupled networks, and the dependence of the ICs on external stimuli was tested by comparing them with the ISC map. This procedure revealed four extrinsic ICs, of which two, covering non-overlapping areas of the auditory cortex, were modulated by both speech and non-speech sounds. The two other extrinsic ICs, one lateralized to the left hemisphere and the other to the right, were speech-related and comprised the superior and middle temporal gyri, the temporal poles, and the left angular and inferior orbital gyri. In areas of low ISC, four ICs defined as intrinsic fluctuated similarly to the time courses of either the speech-sound-related or the all-sounds-related extrinsic ICs. These ICs included the superior temporal gyrus, the anterior insula, and the frontal, parietal, and midline occipital cortices. Taken together, substantial intersubject synchronization of cortical activity was observed while subjects listened to the audio drama, and the results suggest that speech is processed in two separate networks, one dedicated exclusively to speech sounds and the other responding to both speech and non-speech sounds.
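For illustration only, the two analysis steps named in the abstract can be sketched in Python/NumPy. This is a minimal sketch under stated assumptions: it uses the common leave-one-out ISC variant and temporal-concatenation group ICA via scikit-learn's FastICA, with hypothetical function names and thresholds; the paper's exact preprocessing, ICA algorithm, and statistical testing are not reproduced here.

    import numpy as np
    from sklearn.decomposition import FastICA

    def isc_map(data):
        """Voxelwise leave-one-out intersubject correlation (one common
        ISC variant; an assumption, not necessarily the paper's recipe).

        data: (n_subjects, n_voxels, n_timepoints) preprocessed BOLD series.
        Returns (n_voxels,): for each voxel, the mean Pearson correlation
        between each subject's time course and the average time course
        of the remaining subjects.
        """
        n_subj, n_vox, n_t = data.shape
        # z-score each voxel time course so correlation reduces to a dot product
        z = (data - data.mean(-1, keepdims=True)) / data.std(-1, keepdims=True)
        isc = np.zeros(n_vox)
        for s in range(n_subj):
            loo = np.delete(z, s, axis=0).mean(axis=0)   # leave-one-out average
            loo = (loo - loo.mean(-1, keepdims=True)) / loo.std(-1, keepdims=True)
            isc += (z[s] * loo).sum(-1) / n_t            # Pearson r of z-scored series
        return isc / n_subj

    def group_ica_extrinsic(data, isc, n_components=20, isc_thresh=0.1):
        """Temporal-concatenation group spatial ICA, then a crude
        extrinsic/intrinsic labeling: an IC counts as 'extrinsic' if its
        strongest voxels fall in high-ISC cortex. (Both the component
        count and the threshold are illustrative assumptions.)
        """
        n_subj, n_vox, n_t = data.shape
        # stack subjects along time: rows = voxels, columns = concatenated time
        X = np.concatenate([d for d in data], axis=1)    # (n_vox, n_subj * n_t)
        ica = FastICA(n_components=n_components, random_state=0, max_iter=1000)
        maps = ica.fit_transform(X)                      # (n_vox, n_components) spatial maps
        tcs = ica.mixing_                                # (n_subj * n_t, n_components) time courses
        labels = []
        for c in range(n_components):
            m = np.abs(maps[:, c])
            top = m > np.percentile(m, 95)               # the IC's strongest voxels
            labels.append("extrinsic" if isc[top].mean() > isc_thresh else "intrinsic")
        return maps, tcs, labels

In this arrangement each IC is a spatially independent map with an associated group time course; the thresholded overlap with the ISC map stands in for the paper's statistical comparison of IC time courses and spatial maps against the stimulus-driven network.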

Publication types

  • Research Support, Non-U.S. Gov't

MeSH terms

  • Acoustic Stimulation
  • Adult
  • Analysis of Variance
  • Auditory Cortex / physiology*
  • Auditory Perception / physiology*
  • Brain / physiology
  • Brain Mapping
  • Female
  • Functional Laterality / physiology
  • Humans
  • Linear Models
  • Magnetic Resonance Imaging
  • Male
  • Phonetics
  • Sound*
  • Speech Perception / physiology
  • Speech*
  • Young Adult

Grants and funding

The study was supported by the Academy of Finland (National Centers of Excellence Program 2006–2011; grant #259752; grant #263800), European Research Council Advanced Grant #232946 (to R.H.), the aivoAALTO project of Aalto University, and the Päivikki and Sakari Sohlberg Foundation. The funders had no role in study design, data collection and analysis, decision to publish, or preparation of the manuscript.