We report a 53-year-old patient (AWF) with an acquired deficit of audiovisual speech integration, characterized by a perceived temporal mismatch between speech sounds and the sight of moving lips. AWF was less accurate on an auditory digit span task when he could see the speaker's face than when no visual information from the lower face was available. He was also slower to match words to pictures when viewing congruent lip movements than when viewing no lip movements or non-speech lip movements. Unlike normal controls, he showed no McGurk effect. We propose that multisensory binding of audiovisual language cues can be selectively disrupted.