Audio-Visual Neural Syntax Acquisition

C.-I. J. Lai, F. Shi, P. Peng, Y. Kim, K. Gimpel, et al. 2023 IEEE Automatic Speech Recognition and Understanding Workshop (ASRU), 2023. ieeexplore.ieee.org
We study phrase structure induction from visually-grounded speech. The core idea is to first segment the speech waveform into sequences of word segments, and subsequently induce phrase structure using the inferred segment-level continuous representations. We present the Audio-Visual Neural Syntax Learner (AV-NSL) that learns phrase structure by listening to audio and looking at images, without ever being exposed to text. By training on paired images and spoken captions, AV-NSL exhibits the capability to infer meaningful phrase structures that are comparable to those derived by naturally-supervised text parsers, for both English and German. Our findings extend prior work in unsupervised language acquisition from speech and grounded grammar induction, and present one approach to bridge the gap between the two topics.
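The abstract describes a two-stage pipeline: first segment the waveform into word-level spans, then induce phrase structure from the segments' continuous representations. The sketch below only illustrates that overall shape, not the AV-NSL method itself: the given segment boundaries, the mean-pooling of frame features, and the greedy cosine-similarity merge of adjacent segments are all stand-in assumptions for illustration.

```python
# Hypothetical sketch of the two-stage pipeline: (1) pool frame-level speech
# features into word-segment representations, (2) induce a binary bracketing
# over the segments. Segmentation, pooling, and the greedy merge rule are
# illustrative assumptions, not the AV-NSL algorithm.
import numpy as np

def segment_features(frame_feats: np.ndarray, boundaries: list[int]) -> np.ndarray:
    """Mean-pool frame-level features (T, D) into one vector per word segment.

    boundaries: frame indices where each segment ends, e.g. [12, 30, 55].
    """
    segs, start = [], 0
    for end in boundaries:
        segs.append(frame_feats[start:end].mean(axis=0))
        start = end
    return np.stack(segs)  # (num_segments, D)

def induce_tree(seg_reprs: np.ndarray):
    """Greedily merge the most similar adjacent pair into a constituent.

    Returns a nested tuple (binary tree) over segment indices; a toy stand-in
    for inducing structure from continuous segment representations.
    """
    nodes = list(range(len(seg_reprs)))
    reprs = [v / (np.linalg.norm(v) + 1e-8) for v in seg_reprs]
    while len(nodes) > 1:
        # cosine similarity of each adjacent pair (vectors are unit-normalized)
        sims = [float(reprs[i] @ reprs[i + 1]) for i in range(len(nodes) - 1)]
        i = int(np.argmax(sims))                      # most similar adjacent pair
        nodes[i:i + 2] = [(nodes[i], nodes[i + 1])]   # merge into one constituent
        merged = (reprs[i] + reprs[i + 1]) / 2.0
        reprs[i:i + 2] = [merged / (np.linalg.norm(merged) + 1e-8)]
    return nodes[0]

# Toy usage: 4 word segments from a 60-frame utterance with 8-dim features.
rng = np.random.default_rng(0)
feats = rng.normal(size=(60, 8))
tree = induce_tree(segment_features(feats, boundaries=[12, 30, 45, 60]))
print(tree)  # e.g. ((0, 1), (2, 3)): a binary bracketing over word segments
```

In the paper's setting the segment representations are learned from paired images and spoken captions rather than given, and the tree induction is trained rather than a fixed greedy rule; the sketch only shows where those components plug in.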