Estimating Scale-Invariant Future in Continuous Time

Neural Comput. 2019 Apr;31(4):681-709. doi: 10.1162/neco_a_01171. Epub 2019 Feb 14.

Abstract

Natural learners must compute an estimate of future outcomes that follow from a stimulus in continuous time. Widely used reinforcement learning algorithms discretize continuous time and estimate either transition functions from one step to the next (model-based algorithms) or a scalar value of exponentially discounted future reward using the Bellman equation (model-free algorithms). An important drawback of model-based algorithms is that computational cost grows linearly with the amount of time to be simulated. An important drawback of model-free algorithms is the need to select a timescale for exponential discounting. We present a computational mechanism, developed based on work in psychology and neuroscience, for computing a scale-invariant timeline of future outcomes. This mechanism efficiently computes an estimate of inputs as a function of future time on a logarithmically compressed scale and can be used to generate a scale-invariant power-law-discounted estimate of expected future reward. The representation of future time retains information about what will happen when. The entire timeline can be constructed in a single parallel operation that generates concrete behavioral and neural predictions. This computational mechanism could be incorporated into future reinforcement learning algorithms.
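
The power-law discounting described in the abstract can be illustrated with a short sketch: a weighted combination of exponentially discounted estimates, computed at many logarithmically spaced timescales, behaves like a single power-law discount. The code below is not the authors' implementation; the parameter names (tau_min, tau_max, n_scales, alpha) and the toy reward schedule are illustrative assumptions, and it relies only on the standard identity that integrating s^(alpha-1) * exp(-s*t) over the decay rate s gives a quantity proportional to t^(-alpha).

    # Minimal sketch (assumed implementation, not the authors' code):
    # approximate power-law discounting ~ t^(-alpha) by combining
    # exponentially discounted estimates over log-spaced decay rates.
    import numpy as np

    def power_law_estimate(reward_times, reward_magnitudes,
                           tau_min=0.1, tau_max=100.0, n_scales=60, alpha=1.0):
        # Log-spaced decay rates s = 1/tau, covering timescales tau_min..tau_max.
        s = np.geomspace(1.0 / tau_max, 1.0 / tau_min, n_scales)
        # Exponentially discounted future-reward estimate at each rate:
        # V(s) = sum_k r_k * exp(-s * t_k)
        V = np.array([np.sum(reward_magnitudes * np.exp(-si * reward_times))
                      for si in s])
        # Weighting each rate by s^(alpha-1) makes the combined discount
        # behave like t^(-alpha) within the represented range of timescales.
        weights = s ** (alpha - 1.0) * np.gradient(s)
        return np.sum(weights * V) / np.sum(weights)

    # A unit reward expected t seconds ahead is discounted roughly as t^(-alpha)
    # rather than exponentially, for t inside the represented range.
    for t in (1.0, 10.0, 100.0):
        print(t, power_law_estimate(np.array([t]), np.array([1.0])))

This sketch covers only the scalar power-law-discounted readout; in the mechanism described in the paper, the same bank of exponentially discounted estimators also supports reconstructing a logarithmically compressed timeline that preserves information about what is expected to happen when.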

Publication types

  • Research Support, N.I.H., Extramural
  • Research Support, U.S. Gov't, Non-P.H.S.

MeSH terms

  • Animals
  • Anticipation, Psychological / physiology
  • Brain / physiology
  • Computer Simulation
  • Decision Making / physiology
  • Humans
  • Machine Learning*
  • Memory / physiology
  • Models, Neurological
  • Models, Psychological
  • Reinforcement, Psychology
  • Time
  • Time Perception / physiology