Improving Self-supervised Pre-training using Accent-Specific Codebooks

D Prabhu, A Gupta, O Nitsure, P Jyothi… - arXiv preprint arXiv …, 2024 - arxiv.org
arXiv preprint arXiv:2407.03734, 2024 - arxiv.org
Speech accents present a serious challenge to the performance of state-of-the-art end-to-end Automatic Speech Recognition (ASR) systems. Even with self-supervised learning and pre-training of ASR models, accent invariance is seldom achieved. In this work, we propose an accent-aware adaptation technique for self-supervised learning that introduces a trainable set of accent-specific codebooks into the self-supervised architecture. These learnable codebooks enable the model to capture accent-specific information during pre-training, which is further refined during ASR fine-tuning. On the Mozilla Common Voice dataset, our proposed approach outperforms all other accent-adaptation approaches on both seen and unseen English accents, with up to 9% relative reduction in word error rate (WER).
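To make the idea of accent-specific codebooks concrete, below is a minimal sketch of how a trainable per-accent codebook could be attached to a self-supervised encoder. It assumes a wav2vec2/HuBERT-style encoder producing frame-level features and uses cross-attention to inject accent information, with a residual connection so the original features are preserved. The module name (AccentCodebookAdapter), parameter choices, and accent-ID interface are illustrative assumptions, not the authors' implementation.

```python
# Minimal sketch (PyTorch), assuming encoder frames of shape (batch, time, dim)
# and integer accent labels per utterance. This is NOT the paper's code; it only
# illustrates one way accent-specific codebooks could be wired in.
import torch
import torch.nn as nn


class AccentCodebookAdapter(nn.Module):
    """Adds accent-specific information to encoder frames by cross-attending
    to a learnable codebook selected per accent."""

    def __init__(self, num_accents: int, num_codes: int, dim: int, n_heads: int = 4):
        super().__init__()
        # One trainable codebook of `num_codes` vectors for each accent.
        self.codebooks = nn.Parameter(torch.randn(num_accents, num_codes, dim) * 0.02)
        self.attn = nn.MultiheadAttention(dim, n_heads, batch_first=True)
        self.norm = nn.LayerNorm(dim)

    def forward(self, frames: torch.Tensor, accent_ids: torch.Tensor) -> torch.Tensor:
        # frames: (batch, time, dim); accent_ids: (batch,) long tensor of accent labels.
        codes = self.codebooks[accent_ids]              # (batch, num_codes, dim)
        accent_info, _ = self.attn(query=frames, key=codes, value=codes)
        # Residual connection keeps the self-supervised features intact while
        # mixing in accent-conditioned information from the codebook.
        return self.norm(frames + accent_info)


if __name__ == "__main__":
    adapter = AccentCodebookAdapter(num_accents=8, num_codes=64, dim=256)
    frames = torch.randn(2, 100, 256)                   # dummy encoder outputs
    accent_ids = torch.tensor([0, 3])                    # dummy accent labels
    out = adapter(frames, accent_ids)
    print(out.shape)                                     # torch.Size([2, 100, 256])
```

In such a setup, the codebooks would be trained jointly with the self-supervised objective during pre-training and then kept trainable during ASR fine-tuning, matching the two-stage refinement described in the abstract.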