Visual search is aided by prior knowledge of the distinguishing features and probable locations of a sought-after target. However, how the human brain represents and integrates concurrent feature-based and spatial expectancies to guide visual search is currently not well understood. Specifically, it is unclear whether spatial and feature-based search information is initially represented in anatomically segregated regions, or at which level of processing expectancies regarding target features and locations may be integrated. To address these questions, we independently and parametrically varied the amount of spatial and feature-based (color) cue information concerning the identity of an upcoming visual search target while recording blood oxygenation level-dependent (BOLD) responses in human subjects. Search performance improved with the amount of spatial and feature-based cue information, and cue-related BOLD responses showed that, during preparation for visual search, spatial and feature cue information were represented additively in shared frontal, parietal, and cingulate regions. These data show that representations of spatial and feature-based search information are integrated in source regions of top-down biasing and oculomotor planning before search onset. The purpose of this anticipatory integration could lie in the generation of a "top-down salience map," a search template of primed target locations and features. Our results suggest that this role may be served by the intraparietal sulcus, which additively combined a spatially specific activation gain reflecting spatial cue information with a spatially global activation gain reflecting feature cue information.
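To make the additivity claim concrete, a minimal sketch of the implied model follows; the symbols $I_S$, $I_F$, and the $\beta$ weights are illustrative assumptions introduced here, not notation from the study. In a region carrying both signals, the cue-related BOLD response would be well described by

$$\mathrm{BOLD} \approx \beta_0 + \beta_S I_S + \beta_F I_F,$$

where $I_S$ and $I_F$ denote the parametric levels of spatial and feature-based cue information. Additivity corresponds to the absence of a significant interaction term $\beta_{SF}\, I_S I_F$: each source of cue information contributes an independent activation gain.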