Romanization Encoding For Multilingual ASR

W Ding, F Jia, H Xu, Y Xi, J Lai, B Ginsburg
arXiv preprint arXiv:2407.04368, 2024 (arxiv.org)
We introduce romanization encoding for script-heavy languages to optimize multilingual and code-switching Automatic Speech Recognition (ASR) systems. By adopting romanization encoding alongside a balanced concatenated tokenizer within a FastConformer-RNNT framework equipped with a Roman2Char module, we significantly reduce vocabulary and output dimensions, enabling larger training batches and reduced memory consumption. Our method decouples acoustic modeling and language modeling, enhancing the flexibility and adaptability of the system. In our study, applying this method to Mandarin-English ASR resulted in a remarkable 63.51% vocabulary reduction and notable performance gains of 13.72% and 15.03% on SEAME code-switching benchmarks. Ablation studies on Mandarin-Korean and Mandarin-Japanese highlight our method's strong capability to address the complexities of other script-heavy languages, paving the way for more versatile and effective multilingual ASR systems.
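The vocabulary reduction the abstract reports comes from replacing per-character output units with a small shared roman alphabet. A minimal sketch of that idea, with a hypothetical toy mapping (not the paper's actual tokenizer or Roman2Char module):

```python
# Illustrative sketch: romanization encoding shrinks the ASR output
# vocabulary because thousands of distinct Chinese characters map onto
# roman letters plus tone digits. The mapping below is a hypothetical
# toy example, not the paper's implementation.
CHAR_TO_ROMAN = {
    "你": "ni3", "好": "hao3", "世": "shi4", "界": "jie4",
}

def romanize(text: str) -> list[str]:
    """Map each Chinese character to a roman token; pass other
    characters (e.g. code-switched English) through unchanged."""
    return [CHAR_TO_ROMAN.get(ch, ch) for ch in text]

# A character-level vocabulary needs one output unit per character;
# the romanized stream reuses ~26 letters plus tone digits, and a
# separate Roman2Char-style model restores the original script.
print(romanize("你好world"))  # ['ni3', 'hao3', 'w', 'o', 'r', 'l', 'd']
```

Because the acoustic model now predicts only roman tokens, mapping them back to characters becomes a separate language-modeling problem, which is the decoupling the abstract describes.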