Listening with generative models

Cognition. 2024 Dec;253:105874. doi: 10.1016/j.cognition.2024.105874. Epub 2024 Aug 30.

Abstract

Perception has long been envisioned to use an internal model of the world to explain the causes of sensory signals. However, such accounts have historically not been testable, typically requiring intractable search through the space of possible explanations. Using auditory scenes as a case study, we leveraged contemporary computational tools to infer explanations of sounds in a candidate internal generative model of the auditory world (ecologically inspired audio synthesizers). Model inferences accounted for many classic illusions. Unlike traditional accounts of auditory illusions, the model is applicable to any sound, and exhibited human-like perceptual organization for real-world sound mixtures. The combination of stimulus-computability and interpretable model structure enabled 'rich falsification', revealing additional assumptions about sound generation needed to account for perception. The results show how generative models can account for the perception of both classic illusions and everyday sensory signals, and illustrate the opportunities and challenges involved in incorporating them into theories of perception.
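To make the abstract's central idea concrete, here is a minimal sketch of analysis-by-synthesis with a generative model, not the authors' actual model or inference procedure: a toy sine-tone "synthesizer" serves as the internal model, and explanations of an observed two-tone mixture are found by brute-force search over candidate renderings, each scored with a Gaussian likelihood on the magnitude spectrum. All parameter values, function names, and the search strategy are illustrative assumptions.

    # Hypothetical analysis-by-synthesis sketch (not the paper's model or code).
    import numpy as np

    SR = 8000          # sample rate (Hz), illustrative
    DUR = 0.25         # duration of the toy scene (s)
    t = np.arange(int(SR * DUR)) / SR

    def synthesize(freqs):
        """Generative model: render a scene as a sum of equal-amplitude sine tones."""
        return sum(np.sin(2 * np.pi * f * t) for f in freqs)

    def features(x):
        """Observation features: magnitude spectrum (discards phase)."""
        return np.abs(np.fft.rfft(x))

    def log_likelihood(observed, candidate, sigma=1.0):
        """Gaussian log-likelihood of the observed features given a rendering."""
        diff = features(observed) - features(candidate)
        return -0.5 * np.sum((diff / sigma) ** 2)

    # "World": a mixture of two tones the listener must explain.
    observed = synthesize([440.0, 660.0])

    # Inference: exhaustive search over two-tone explanations (a stand-in for the
    # far richer inference over ecologically inspired synthesizers in the paper).
    candidate_freqs = np.arange(200.0, 1000.0, 20.0)
    best_expl, best_score = None, -np.inf
    for f1 in candidate_freqs:
        for f2 in candidate_freqs:
            if f2 <= f1:
                continue
            score = log_likelihood(observed, synthesize([f1, f2]))
            if score > best_score:
                best_expl, best_score = (f1, f2), score

    print(f"Inferred explanation: tones at {best_expl} Hz (log-likelihood {best_score:.1f})")

In this toy setup the inferred explanation is the frequency pair that best reproduces the observed spectrum; the paper's contribution is making this style of inference tractable and testable for realistic sounds and a much richer space of explanations.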

Keywords: Auditory scene analysis; Bayesian inference; Grouping; Illusions; Natural sounds; Perception; Perceptual organization; Probabilistic program; World model.

MeSH terms

  • Acoustic Stimulation
  • Auditory Perception* / physiology
  • Humans
  • Illusions / physiology
  • Models, Psychological