Contextual modulation of value signals in reward and punishment learning

Nat Commun. 2015 Aug 25;6:8096. doi: 10.1038/ncomms9096.

Abstract

Compared with reward seeking, punishment avoidance learning is less clearly understood at both the computational and neurobiological levels. Here we demonstrate, using computational modelling and fMRI in humans, that learning option values on a relative (context-dependent) scale offers a simple computational solution for avoidance learning. The context (or state) value sets the reference point to which an outcome should be compared before updating the option value. Consequently, in contexts with an overall negative expected value, successful punishment avoidance acquires a positive value, thus reinforcing the response. As revealed by post-learning assessment of option values, contextual influences are enhanced when subjects are informed about the result of the forgone alternative (counterfactual information). This is mirrored at the neural level by a shift in negative outcome encoding from the anterior insula to the ventral striatum, suggesting that value contextualization also limits the need to mobilize an opponent punishment learning system.
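The central computational claim is that option values are learned on a relative scale, with the context (state) value acting as the reference point to which each outcome is compared, so that avoiding a punishment in a negative-value context produces a positive teaching signal. The sketch below illustrates one way such a relative delta-rule learner could be written; the class name, learning rates, softmax temperature and update order are illustrative assumptions, not the model reported in the paper.

```python
# Illustrative sketch of relative (context-dependent) value learning.
# Parameter names and learning rates are assumptions for demonstration,
# not the authors' published model.

import math
import random


class ContextualLearner:
    """Delta-rule learner whose option values are updated on a relative scale:
    each outcome is first compared with the context (state) value, which acts
    as the reference point described in the abstract."""

    def __init__(self, n_options=2, alpha_q=0.3, alpha_v=0.3, beta=5.0):
        self.n_options = n_options
        self.alpha_q = alpha_q   # learning rate for option values
        self.alpha_v = alpha_v   # learning rate for the context value
        self.beta = beta         # softmax inverse temperature
        self.q = {}              # option values, keyed by (context, option)
        self.v = {}              # context (state) values

    def choose(self, context):
        """Softmax choice over the current context's option values."""
        qs = [self.q.get((context, a), 0.0) for a in range(self.n_options)]
        weights = [math.exp(self.beta * q) for q in qs]
        return random.choices(range(self.n_options), weights=weights)[0]

    def update(self, context, option, outcome):
        """Update the context value, then the chosen option on a relative scale."""
        v = self.v.get(context, 0.0)
        # The context value tracks the average outcome of the context.
        self.v[context] = v + self.alpha_v * (outcome - v)
        # Relative outcome: raw outcome minus the context's reference point.
        # In a punishment context (negative V), avoiding the loss (outcome 0)
        # yields a positive teaching signal that reinforces the response.
        relative_outcome = outcome - v
        q = self.q.get((context, option), 0.0)
        self.q[(context, option)] = q + self.alpha_q * (relative_outcome - q)


if __name__ == "__main__":
    random.seed(0)
    agent = ContextualLearner()
    # Punishment context: option 0 avoids the loss, option 1 is punished (-1)
    # on 75% of trials.
    for _ in range(200):
        a = agent.choose("punishment")
        outcome = 0.0 if a == 0 else (-1.0 if random.random() < 0.75 else 0.0)
        agent.update("punishment", a, outcome)
    print("context value:", round(agent.v["punishment"], 2))
    print("option values:", {k: round(val, 2) for k, val in agent.q.items()})
```

Running the demo in a simulated punishment context shows the avoidance option acquiring a positive value even though no outcome is ever better than zero, which is the behavioural signature the abstract attributes to value contextualization.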

Publication types

  • Research Support, Non-U.S. Gov't

MeSH terms

  • Adult
  • Avoidance Learning / physiology*
  • Bayes Theorem
  • Brain / physiology
  • Brain Mapping
  • Cerebral Cortex / physiology
  • Computer Simulation
  • Decision Making / physiology*
  • Female
  • Functional Neuroimaging
  • Humans
  • Image Processing, Computer-Assisted
  • Learning / physiology
  • Magnetic Resonance Imaging
  • Male
  • Models, Neurological
  • Prefrontal Cortex / physiology*
  • Punishment*
  • Reward*
  • Ventral Striatum / physiology*
  • Young Adult