Learning-From-Mistakes Prompting for Indigenous Language Translation

YC Liao, CJ Yu, CY Lin, HF Yun, YH Wang, HM Li, YC Fan - arXiv preprint arXiv:2407.13343, 2024 - arxiv.org
Using large language models, this paper presents techniques to improve translation for extremely low-resource indigenous languages. Our approaches are grounded in (1) a datastore containing a limited number of parallel translation examples, (2) the inherent capabilities of LLMs such as GPT-3.5, and (3) a word-level translation dictionary. We harness the potential of LLMs and in-context learning techniques in this setting to use LLMs as universal translators for extremely low-resource languages. Our methodology hinges on utilizing LLMs as language compilers for selected language pairs, hypothesizing that they can internalize syntactic structures to facilitate accurate translation. We introduce three techniques: KNN Prompting with Retrieved Prompting Context, Chain-of-Thought Prompting, and Learning-from-Mistakes Prompting, with the last method addressing past errors. The evaluation results suggest that, even with limited corpora, LLMs can translate extremely low-resource languages effectively when paired with proper prompting.
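As a rough illustration of the setup the abstract describes, the sketch below combines KNN-style example retrieval from a small parallel datastore, a word-level dictionary, and a learning-from-mistakes slot in a single prompt builder. It is a minimal sketch under stated assumptions: the datastore contents, the trigram-overlap retriever, the prompt wording, and the llm_complete stub are all illustrative stand-ins, not the authors' actual implementation.

```python
from collections import Counter

# Hypothetical datastore of parallel (source, target) pairs; the paper's
# real datastore, language pair, and dictionary are not reproduced here.
DATASTORE = [
    ("source example A", "target example A"),
    ("source example B", "target example B"),
    ("source example C", "target example C"),
]

# Hypothetical word-level translation dictionary.
DICTIONARY = {"wordA": "glossA", "wordB": "glossB"}


def trigrams(text: str) -> Counter:
    """Character trigram counts; a simple stand-in retrieval signal."""
    return Counter(text[i:i + 3] for i in range(max(len(text) - 2, 1)))


def similarity(a: str, b: str) -> float:
    """Trigram-overlap similarity in [0, 1] between two strings."""
    ta, tb = trigrams(a), trigrams(b)
    overlap = sum((ta & tb).values())
    total = max(sum(ta.values()), sum(tb.values()))
    return overlap / total if total else 0.0


def retrieve_examples(query: str, k: int = 3):
    """Pick the k datastore pairs whose source side is most similar (KNN)."""
    return sorted(DATASTORE, key=lambda pair: similarity(query, pair[0]),
                  reverse=True)[:k]


def build_prompt(query: str, past_mistakes=()):
    """Assemble a prompt from retrieved examples and dictionary glosses;
    the learning-from-mistakes variant also appends earlier wrong
    translations with their corrections."""
    lines = ["Translate the following sentence."]
    for src, tgt in retrieve_examples(query):
        lines.append(f"Example: {src} => {tgt}")
    lines.append("Dictionary: " +
                 ", ".join(f"{w}={g}" for w, g in DICTIONARY.items()))
    for wrong, corrected in past_mistakes:
        lines.append(f"Previous mistake: {wrong} | Correction: {corrected}")
    lines.append(f"Sentence: {query}")
    return "\n".join(lines)


def llm_complete(prompt: str) -> str:
    """Stub for an LLM call (e.g. GPT-3.5); replace with a real client."""
    raise NotImplementedError


if __name__ == "__main__":
    print(build_prompt("source example A wordA"))
```

The design point this sketch captures is that all three techniques share one prompt scaffold: retrieval chooses which parallel examples appear in context, and the learning-from-mistakes variant differs only in feeding previously wrong outputs and their corrections back into the same prompt.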