Qualitative contrast between knowledge-limited mixed-state and variable-resources models of visual change detection

J Exp Psychol Learn Mem Cogn. 2016 Oct;42(10):1507-1525. doi: 10.1037/xlm0000268. Epub 2016 Mar 7.

Abstract

We report an experiment designed to provide a qualitative contrast between knowledge-limited versions of mixed-state and variable-resources (VR) models of visual change detection. The key data pattern is that observers often respond “same” on big-change trials, while simultaneously being able to discriminate between same and small-change trials. The mixed-state model provides a natural account of this data pattern: With some probability, the observer is in a zero-memory state and is forced to guess. Thus, even on big-change trials, there is a significant probability that the observer will respond “same.” On other trials, the observer retains a memory of the probed study item, and these memory-based responses allow the observer to show above-chance discrimination between same and small-change trials. By contrast, we show that the knowledge-limited versions of the VR models are stymied by this simple pattern of results. In agreement with Keshvari, van den Berg, and Ma (2012, 2013), alternative knowledge-rich VR models that employ ideal-observer decision rules provide a significant improvement over the knowledge-limited VR models; however, extant versions of the knowledge-rich VR models still fall short of the descriptive mixed-state model in quantitative fit. We discuss the implications of the knowledge-rich assumptions posited in current versions of the VR models used to fit change-detection data.
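
To make the mixed-state account concrete, the following is a minimal Monte Carlo sketch, not the authors' fitted model. It assumes hypothetical illustrative parameters (a memory probability m, a guessing rate g, a comparison-noise SD sigma, and a response criterion, none of which are values from the paper) and shows how guessing on zero-memory trials yields frequent “same” responses even on big-change trials, while memory-based trials still support above-chance discrimination between same and small-change trials.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical illustrative parameters (not fitted values from the paper).
m = 0.6           # probability the probed item is retained in memory
g = 0.4           # probability of guessing "change" in the zero-memory state
sigma = 10.0      # SD of comparison noise on memory-based trials
criterion = 15.0  # respond "change" if the perceived difference exceeds this

def p_change(delta, n=100_000):
    """Monte Carlo estimate of P(respond "change") for a change of size delta."""
    in_memory = rng.random(n) < m
    # Memory-based trials: noisy comparison of probe against the stored item.
    perceived = np.abs(delta + rng.normal(0.0, sigma, n))
    memory_resp = perceived > criterion
    # Zero-memory trials: a blind guess, independent of the actual change.
    guess_resp = rng.random(n) < g
    return np.mean(np.where(in_memory, memory_resp, guess_resp))

for label, delta in [("same", 0.0), ("small change", 20.0), ("big change", 90.0)]:
    print(f"{label:>12}: P('change') = {p_change(delta):.2f}")
```

With these illustrative values, roughly a quarter of big-change trials still draw a “same” response (the zero-memory guesses), yet the “change” rate on small-change trials clearly exceeds the false-alarm rate on same trials, reproducing the key qualitative pattern described above.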

Publication types

  • Research Support, U.S. Gov't, Non-P.H.S.
  • Research Support, Non-U.S. Gov't

MeSH terms

  • Humans
  • Models, Psychological*
  • Photic Stimulation
  • Probability
  • Psychophysics
  • Signal Detection, Psychological*
  • Visual Perception*