Reference-point centering and range-adaptation enhance human reinforcement learning at the cost of irrational preferences

Nat Commun. 2018 Oct 29;9(1):4503. doi: 10.1038/s41467-018-06781-2.

Abstract

In economics and perceptual decision-making, contextual effects are well documented: decision weights are adjusted as a function of the distribution of stimuli. Yet, in the reinforcement learning literature, whether and how contextual information pertaining to decision states is integrated into learning algorithms has received comparably little attention. Here, we investigate reinforcement learning behavior and its computational substrates in a task where we orthogonally manipulate outcome valence and magnitude, resulting in systematic variations in state-values. Model comparison indicates that subjects' behavior is best accounted for by an algorithm which includes both reference-point dependence and range adaptation, two crucial features of state-dependent valuation. In addition, we find that state-dependent outcome valuation progressively emerges, is favored by increasing outcome information, and is correlated with explicit understanding of the task structure. Finally, our data clearly show that, while being locally adaptive (for instance in negative-valence and small-magnitude contexts), state-dependent valuation comes at the cost of seemingly irrational choices when options are extrapolated out of their original contexts.
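The two features the abstract highlights can be illustrated with a minimal delta-rule sketch: outcomes are centered on a running estimate of the state (context) value (reference-point dependence) and rescaled by a running estimate of the context's outcome range (range adaptation). The function name, the shared learning rate, and the exact tracking rules below are illustrative assumptions, not the authors' published model.

```python
def relative_update(q, v, r_range, outcome, alpha=0.5):
    """One learning step for a single option within a context (state).

    q        -- current option value
    v        -- running estimate of the context value (reference point)
    r_range  -- running estimate of the context's outcome range
    outcome  -- absolute outcome received
    alpha    -- learning rate (shared across updates; a simplifying assumption)
    """
    # Reference-point centering: subtract the context value.
    # Range adaptation: divide by the context's outcome range.
    relative_outcome = (outcome - v) / max(r_range, 1e-9)
    # Standard delta-rule update applied to the relative outcome.
    q = q + alpha * (relative_outcome - q)
    # Track the reference point and the range from observed outcomes.
    v = v + alpha * (outcome - v)
    r_range = r_range + alpha * (abs(outcome - v) - r_range)
    return q, v, r_range

# A single update in a fresh context with outcome 1.0:
q, v, r = relative_update(0.0, 0.0, 1.0, 1.0)
```

Because values are learned on a centered, rescaled scale, they are locally adaptive within each context but can produce inconsistent preferences when options learned in different contexts are compared directly, which is the irrationality the abstract describes.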

Publication types

  • Research Support, Non-U.S. Gov't

MeSH terms

  • Adolescent
  • Adult
  • Algorithms
  • Attention
  • Behavior / physiology
  • Computer Simulation
  • Decision Making / physiology
  • Female
  • Humans
  • Learning / physiology*
  • Male
  • Models, Neurological
  • Reference Values*
  • Reinforcement, Psychology*
  • Reward
  • Young Adult