The lateral intraparietal area (area LIP) contains a multimodal representation of extrapersonal space. To examine this representation further, we trained rhesus monkeys on a predictive-cueing task. During this task, monkeys shifted their gaze to a visual target whose location was predicted by the location of an auditory or visual cue. We found that, when the sensory cue was at the same location as the visual target, the monkeys' mean saccadic latency was shorter than when the sensory cue and the visual target were at different locations. This difference in mean saccadic latency was the same for auditory and visual cues. Although the monkeys used auditory and visual cues in a similar fashion, LIP neurons responded more to visual cues than to auditory cues. This modality-dependent activity was also seen during auditory and visual memory-guided saccades, but to a significantly greater extent than during the predictive-cueing task. Additionally, we found that the firing rate of LIP neurons was inversely correlated with saccadic latency. These findings further indicate that modality-dependent differences in LIP activity do not simply reflect differences in sensory processing but also reflect the cognitive and behavioral requirements of a task.