Multi-Convformer: Extending Conformer with Multiple Convolution Kernels

D Prabhu, Y Peng, P Jyothi, S Watanabe - arXiv preprint arXiv:2407.03718, 2024 - arxiv.org
Convolutions have become essential in state-of-the-art end-to-end Automatic Speech Recognition (ASR) systems due to their efficient modelling of local context. Notably, their use in Conformers has led to superior performance compared to vanilla Transformer-based ASR systems. While components other than the convolution module in the Conformer have been re-examined, altering the convolution module itself has been far less explored. To this end, we introduce Multi-Convformer, which uses multiple convolution kernels within the convolution module of the Conformer in conjunction with gating. This helps in improved modelling of local dependencies at varying granularities. Our model rivals existing Conformer variants such as CgMLP and E-Branchformer in performance, while being more parameter-efficient. We empirically compare our approach with the Conformer and its variants across four different datasets and three different modelling paradigms and show up to 8% relative word error rate (WER) improvements.
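The abstract only sketches the architecture, but the core idea — several depthwise convolutions of different kernel sizes inside the Conformer convolution module, fused by a gate — can be illustrated in a few lines of PyTorch. The following is a minimal sketch, not the paper's implementation: the class name MultiConvModule, the choice of kernel sizes, and the per-frame softmax gate are all assumptions made for illustration.

```python
# Minimal sketch of a multi-kernel Conformer convolution module with gating.
# Names (MultiConvModule, kernel_sizes, gate) are illustrative, not the paper's.
import torch
import torch.nn as nn


class MultiConvModule(nn.Module):
    def __init__(self, d_model: int, kernel_sizes=(7, 15, 23, 31)):
        super().__init__()
        # Standard Conformer conv-module entry: pointwise expansion + GLU.
        self.pointwise_in = nn.Conv1d(d_model, 2 * d_model, kernel_size=1)
        self.glu = nn.GLU(dim=1)
        # One depthwise convolution per kernel size; odd kernels with
        # padding k // 2 keep the sequence length unchanged in each branch.
        self.depthwise = nn.ModuleList(
            nn.Conv1d(d_model, d_model, k, padding=k // 2, groups=d_model)
            for k in kernel_sizes
        )
        # Gate producing per-frame mixing weights over the kernel branches
        # (assumed form; the paper's gating may differ).
        self.gate = nn.Linear(d_model, len(kernel_sizes))
        self.norm = nn.BatchNorm1d(d_model)
        self.activation = nn.SiLU()
        self.pointwise_out = nn.Conv1d(d_model, d_model, kernel_size=1)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (batch, time, d_model)
        h = x.transpose(1, 2)                # (B, D, T)
        h = self.glu(self.pointwise_in(h))   # (B, D, T)
        # Run all kernel branches and stack: (B, D, T, K).
        branches = torch.stack([conv(h) for conv in self.depthwise], dim=-1)
        # Per-frame softmax weights over branches: (B, T, K).
        weights = torch.softmax(self.gate(x), dim=-1)
        # Weighted sum of branches back to (B, D, T).
        h = (branches * weights.unsqueeze(1)).sum(dim=-1)
        h = self.activation(self.norm(h))
        h = self.pointwise_out(h)
        return x + h.transpose(1, 2)         # residual connection
```

A per-frame softmax gate lets the module weight short- and long-range kernels differently at each time step, which is one plausible reading of "modelling of local dependencies at varying granularities"; it also adds only a small linear layer on top of the branch convolutions, consistent with the parameter-efficiency claim.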