Learning Video Temporal Dynamics with Cross-Modal Attention for Robust Audio-Visual Speech Recognition

S Kim, K Jang, S Bae, H Kim, SY Yun - arXiv preprint arXiv:2407.03563, 2024 - arxiv.org
Audio-visual speech recognition (AVSR) aims to transcribe human speech using both audio and video modalities. In practical environments with noise-corrupted audio, the role of video information becomes crucial. However, prior works have primarily focused on enhancing audio features in AVSR, overlooking the importance of video features. In this study, we strengthen the video features by learning three temporal dynamics in video data: context order, playback direction, and the speed of video frames. Cross-modal attention modules are introduced to enrich video features with audio information so that speech variability can be taken into account when training on the video temporal dynamics. With this approach, we achieve state-of-the-art performance on the LRS2 and LRS3 AVSR benchmarks in noise-dominant settings. Our approach excels especially under babble and speech noise, indicating its ability to distinguish the speech signal to be recognized from lip movements in the video modality. We support the validity of our methodology with ablation experiments on the temporal dynamics losses and the cross-modal attention architecture design.
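The abstract describes two ingredients: cross-modal attention in which video features attend to audio features, and auxiliary objectives over three video temporal dynamics (context order, playback direction, frame speed). The sketch below is a minimal PyTorch illustration of that idea, assuming a standard multi-head attention fusion and simple clip-level classification heads; the module name, head sizes, and pooling choice are assumptions for illustration, not the authors' implementation.

```python
import torch
import torch.nn as nn

class CrossModalVideoEnhancer(nn.Module):
    """Illustrative sketch (not the paper's code): enrich video features with
    audio context via cross-modal attention, then predict three temporal-dynamics
    targets (context order, playback direction, frame speed) as auxiliary tasks."""

    def __init__(self, dim=512, num_heads=8, num_order_classes=6, num_speed_classes=3):
        super().__init__()
        # Video features act as queries; audio features provide keys/values.
        self.cross_attn = nn.MultiheadAttention(dim, num_heads, batch_first=True)
        self.norm = nn.LayerNorm(dim)
        # Auxiliary heads for the three temporal-dynamics pretext tasks
        # (class counts are hypothetical placeholders).
        self.order_head = nn.Linear(dim, num_order_classes)   # context-order permutation
        self.direction_head = nn.Linear(dim, 2)               # forward vs. reversed playback
        self.speed_head = nn.Linear(dim, num_speed_classes)   # playback-speed class

    def forward(self, video_feats, audio_feats):
        # video_feats, audio_feats: (batch, time, dim)
        attended, _ = self.cross_attn(query=video_feats, key=audio_feats, value=audio_feats)
        enriched = self.norm(video_feats + attended)           # residual fusion
        pooled = enriched.mean(dim=1)                          # temporal pooling for clip-level heads
        return enriched, {
            "order": self.order_head(pooled),
            "direction": self.direction_head(pooled),
            "speed": self.speed_head(pooled),
        }
```

In this reading, the auxiliary logits would be trained with cross-entropy losses against labels derived from how each clip was shuffled, reversed, or resampled, while the enriched video features feed the downstream AVSR decoder; the exact loss weighting and label construction are detailed in the paper itself.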