Markov processes follow from the principle of maximum caliber

J Chem Phys. 2012 Feb 14;136(6):064108. doi: 10.1063/1.3681941.

Abstract

Markov models are widely used to describe stochastic dynamics. Here, we show that Markov models follow directly from the dynamical principle of maximum caliber (Max Cal). Max Cal is a method of deriving dynamical models based on maximizing the path entropy subject to dynamical constraints. We consider three cases. First, we show that if the constraints (or data) are given in the form of singlet statistics (average occupation probabilities), then maximizing the caliber predicts a time-independent process modeled by independent, identically distributed random variables. Second, we show that if the constraints are given in the form of sequential pairwise statistics, then maximizing the caliber dictates that the kinetic process will be Markovian with a uniform initial distribution. Third, we show that if the initial distribution is known and is not uniform, the process that maximizes the path entropy is still Markovian. We give an example of how Max Cal can be used to discriminate between different dynamical models given data.
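The pairwise-constraint case can be sketched variationally as follows; the notation (path Γ, pair counts N_ab, multipliers λ_ab) is illustrative and not taken from the paper itself.

```latex
% Maximize the path entropy (caliber) over path probabilities p(\Gamma),
% where \Gamma = (x_0, x_1, \ldots, x_N), subject to normalization and to
% fixed expected counts \langle N_{ab} \rangle of sequential pairs (a,b):
C = -\sum_\Gamma p(\Gamma)\,\ln p(\Gamma)
    - \mu \Big( \sum_\Gamma p(\Gamma) - 1 \Big)
    - \sum_{a,b} \lambda_{ab} \Big( \sum_\Gamma N_{ab}(\Gamma)\,p(\Gamma)
      - \langle N_{ab} \rangle \Big)
% Setting \partial C / \partial p(\Gamma) = 0 yields
p(\Gamma) \propto \exp\Big( -\sum_{a,b} \lambda_{ab}\,N_{ab}(\Gamma) \Big)
          = \prod_{t=0}^{N-1} e^{-\lambda_{x_t x_{t+1}}}
```

The path probability factorizes into a product of one-step transition weights, which is exactly the defining property of a Markov chain.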
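As a hedged, self-contained sketch of discriminating between dynamical models from trajectory data (a generic illustration, not the paper's actual example): we simulate a two-state Markov chain and compare the maximum-likelihood fit of an i.i.d. model, which uses only singlet statistics, against a Markov model, which uses sequential pairwise statistics. The transition matrix and trajectory length are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

# Assumed 2-state transition matrix with memory (illustrative only).
T = np.array([[0.9, 0.1],
              [0.2, 0.8]])
n_steps = 5000

# Simulate the trajectory.
traj = np.empty(n_steps, dtype=int)
traj[0] = 0
for t in range(1, n_steps):
    traj[t] = rng.choice(2, p=T[traj[t - 1]])

# Model 1: i.i.d. process -- fit from singlet statistics
# (average occupation probabilities).
p = np.bincount(traj, minlength=2) / n_steps
ll_iid = np.sum(np.log(p[traj]))

# Model 2: Markov process -- fit from sequential pairwise statistics
# (empirical transition counts).
counts = np.zeros((2, 2))
for a, b in zip(traj[:-1], traj[1:]):
    counts[a, b] += 1
T_hat = counts / counts.sum(axis=1, keepdims=True)
ll_markov = np.log(p[traj[0]]) + sum(
    np.log(T_hat[a, b]) for a, b in zip(traj[:-1], traj[1:]))

# Data generated with memory should be fit much better by the
# Markov model than by the i.i.d. model.
print(ll_markov > ll_iid)
```

Because the simulated dynamics have memory, the pairwise (Markov) model attains a substantially higher log-likelihood than the singlet (i.i.d.) model; with models of different complexity, a penalized criterion such as BIC would make the comparison fairer.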

Publication types

  • Research Support, N.I.H., Extramural
  • Research Support, Non-U.S. Gov't

MeSH terms

  • Algorithms
  • Entropy
  • Markov Chains*
  • Models, Statistical