Speechreading (gathering speech information from talkers' faces) supports speech perception when speech acoustics are degraded. Benefiting from speechreading, however, requires listeners to visually fixate talkers during face-to-face interactions. The purpose of this study was to test the hypothesis that preschool-aged children allocate their eye gaze to a talker when speech acoustics are degraded. We implemented a looking-while-listening paradigm to quantify children's eye gaze to an unfamiliar female talker and to two images of familiar objects presented on a screen while the children listened to speech. We tested 31 children (12 girls), ages 26-48 months, who had normal hearing (NH group, n = 19) or bilateral sensorineural hearing loss and used hearing devices (D/HH group, n = 12). Children's eye gaze was video-recorded as the talker verbally labeled one of the images, either in quiet or in the presence of an unfamiliar two-talker male speech masker. Children's eye gaze to the target image, the distractor image, and the female talker was coded offline every 33 ms by trained observers. Bootstrapped differences of time series (BDOTS) analyses and ternary plots were used to determine differences in visual fixations of the talker between listening conditions in the NH and D/HH groups. Results suggest that the NH group visually fixated the talker more in the masker condition than in quiet. We did not observe statistically discernible differences in visual fixations of the talker between the listening conditions for the D/HH group. Gaze patterns of the NH group in the masker condition resembled those of the D/HH group.
Keywords: Eye gaze; Hearing loss; Multisensory processing.
© 2025. The Psychonomic Society, Inc.