M5 -- A Diverse Benchmark to Assess the Performance of Large Multimodal Models Across Multilingual and Multicultural Vision-Language Tasks

F Schneider, S Sitaram - arXiv preprint arXiv:2407.03791, 2024 - arxiv.org
Since the release of ChatGPT, the field of Natural Language Processing has experienced rapid advancements, particularly in Large Language Models (LLMs) and their multimodal counterparts, Large Multimodal Models (LMMs). Despite their impressive capabilities, LLMs often exhibit significant performance disparities across different languages and cultural contexts, as demonstrated by various text-only benchmarks. However, current research lacks such benchmarks for multimodal visio-linguistic settings. This work fills this gap by introducing M5, the first comprehensive benchmark designed to evaluate LMMs on diverse vision-language tasks within a multilingual and multicultural context. M5 includes eight datasets covering five tasks and 41 languages, with a focus on underrepresented languages and culturally diverse images. Furthermore, we introduce two novel datasets, M5-VGR and M5-VLOD, including a new Visio-Linguistic Outlier Detection task, on which all evaluated open-source models fail to significantly surpass the random baseline. Through extensive evaluation and analyses, we highlight substantial task-agnostic performance disparities between high- and low-resource languages. Moreover, we show that larger models do not necessarily outperform smaller ones in a multilingual setting.