CATT: Character-based Arabic Tashkeel Transformer

F Alasmary, O Zaafarani, A Ghannam - arXiv preprint arXiv:2407.03236, 2024 - arxiv.org
Tashkeel, or Arabic Text Diacritization (ATD), greatly enhances the comprehension of Arabic text by removing ambiguity and minimizing the risk of misinterpretation caused by its absence. It plays a crucial role in improving Arabic text processing, particularly in applications such as text-to-speech and machine translation. This paper introduces a new approach to training ATD models. First, we finetuned two transformers, encoder-only and encoder-decoder, initialized from a pretrained character-based BERT. Then, we applied the Noisy-Student approach to boost the performance of the best model. We evaluated our models alongside 11 commercial and open-source models using two manually labeled benchmark datasets: WikiNews and our CATT dataset. Our findings show that our top model surpasses all evaluated models by relative Diacritic Error Rates (DERs) of 30.83% and 35.21% on WikiNews and CATT, respectively, achieving state-of-the-art performance in ATD. In addition, we show that our model outperforms GPT-4-turbo on the CATT dataset by a relative DER of 9.36%. We open-source our CATT models and benchmark dataset for the research community (https://github.com/abjadai/catt).
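The abstract reports results as relative DER reductions. As a minimal sketch of what those numbers mean, assuming the standard definition of DER (the fraction of diacritizable characters whose predicted diacritic differs from the reference) and of relative reduction ((baseline − model) / baseline), the following illustrates the arithmetic. The authors' actual evaluation script lives in the linked repo; the function names here are hypothetical.

```python
# Illustrative sketch only; not the authors' evaluation code.

def der(ref_diacritics: list[str], hyp_diacritics: list[str]) -> float:
    """DER = (# mismatched diacritics) / (# diacritizable characters).
    Assumes both sequences are aligned character-by-character."""
    assert len(ref_diacritics) == len(hyp_diacritics)
    errors = sum(r != h for r, h in zip(ref_diacritics, hyp_diacritics))
    return errors / len(ref_diacritics)

def relative_der_reduction(baseline_der: float, model_der: float) -> float:
    """Relative improvement over a baseline, e.g. the 30.83% relative
    reduction reported on WikiNews."""
    return (baseline_der - model_der) / baseline_der

# Example: if the best competing model scored 2.0% DER and the new model
# 1.3%, the relative reduction is (2.0 - 1.3) / 2.0 = 35%.
print(relative_der_reduction(0.020, 0.013))  # -> 0.35 (i.e., 35% relative)
```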