Humans are highly skilled at controlling their reaching movements, making fast, task-dependent corrections to unforeseen perturbations. To guide these corrections, the neural control system requires a continuous, instantaneous estimate of the current state of the arm and body in the world. According to Optimal Feedback Control theory, this estimate is multimodal: it is constructed by integrating forward motor predictions with sensory feedback, such as proprioceptive, visual, and vestibular information, modulated by context and shaped by past experience. But how can a multimodal estimate drive fast movement corrections, given that the sensory modalities involved have different processing delays, different coordinate representations, and different noise levels? We develop the hypothesis that the earliest online movement corrections are based on multiple single-modality state estimates rather than on one combined multimodal estimate. We review studies that have investigated online multimodal integration for reach control and offer suggestions for experiments to test for the existence of intramodal state estimates. If this hypothesis proves true, the framework of Optimal Feedback Control will need to be extended with a stage of intramodal state estimation that serves to drive short-latency movement corrections.
Keywords: feedback control; multimodal integration; online movement control; state estimation; vestibular organ.
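The multimodal state estimation described in the abstract is commonly formalized as reliability-weighted (inverse-variance) cue combination, in which each modality's contribution is weighted by its precision. As an illustrative sketch only (the numbers and function below are hypothetical and not taken from this work):

```python
def integrate_cues(estimates):
    """Maximum-likelihood fusion of independent Gaussian cues.

    estimates: list of (mean, variance) pairs, one per sensory modality
    (e.g. visual and proprioceptive estimates of hand position).
    Returns the inverse-variance-weighted mean and the fused variance,
    which is never larger than any single-cue variance.
    """
    precisions = [1.0 / var for _, var in estimates]
    total_precision = sum(precisions)
    fused_mean = sum(m * p for (m, _), p in zip(estimates, precisions)) / total_precision
    fused_var = 1.0 / total_precision
    return fused_mean, fused_var

# Hypothetical hand-position cues (cm): a reliable visual estimate
# and a noisier proprioceptive one.
fused_mean, fused_var = integrate_cues([(0.0, 1.0), (1.0, 4.0)])
# fused_mean = 0.2, fused_var = 0.8 -> less noisy than either cue alone
```

Note that the fused estimate is pulled toward the more reliable (lower-variance) cue, which is the standard account of why, for example, vision often dominates proprioception when both are available.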