Verbal shadowing and visual interference in spatial memory

PLoS One. 2013 Sep 3;8(9):e74177. doi: 10.1371/journal.pone.0074177. eCollection 2013.

Abstract

Spatial memory is thought to be organized along experienced views and allocentric reference axes. Memory access from different perspectives typically yields V-patterns for egocentric encoding (a monotonic decline in performance with increasing angular deviation from the experienced perspective) and W-patterns for axes encoding (better performance along perspectives parallel or orthogonal to a reference axis than along oblique perspectives). We showed that learning an object array under a verbal secondary task reduced W-patterns compared with learning without verbal shadowing. This suggests that axes encoding happened in a verbal format, for example, by rows and columns. Alternatively, general cognitive load from the secondary task may have prevented memorizing relative to a spatial axis. Independent of encoding, pointing with the surrounding room visible yielded stronger W-patterns than pointing with no room visible. This suggests that the visible room geometry interfered with the memorized room geometry. With verbal shadowing and without visual interference, only V-patterns remained; otherwise, V- and W-patterns were combined. Verbal encoding and visual interference explain when W-patterns can be expected alongside V-patterns and can thus help resolve the differing performance patterns observed across a wide range of experiments.

Publication types

  • Research Support, Non-U.S. Gov't

MeSH terms

  • Adolescent
  • Adult
  • Female
  • Humans
  • Male
  • Memory*
  • Models, Theoretical
  • Speech*
  • Visual Perception*
  • Young Adult

Grants and funding

This research was supported by the German Research Foundation (Grant ME 3476/2) and by the National Research Foundation of Korea’s World Class University program (Grant R31-10008). The funders had no role in study design, data collection and analysis, decision to publish, or preparation of the manuscript.