Promises and perils of using Transformer-based models for SE research

Neural Netw. 2024 Dec 24:184:107067. doi: 10.1016/j.neunet.2024.107067. Online ahead of print.

Abstract

Many Transformer-based pre-trained models for code have been developed and applied to code-related tasks. In this paper, we analyze 519 papers published on this topic during 2017-2023, examine the suitability of model architectures for different tasks, summarize their resource consumption, and assess the generalization ability of models across datasets. We examine three representative pre-trained models for code: CodeBERT, CodeGPT, and CodeT5, and conduct experiments on the four most frequently targeted software engineering tasks in the literature: Bug Fixing, Bug Detection, Code Summarization, and Code Search. We make four important empirical contributions to the field. First, we demonstrate that encoder-only models (CodeBERT) can outperform encoder-decoder models on general-purpose coding tasks, and showcase the capability of decoder-only models (CodeGPT) on certain generation tasks. Second, we study the most frequently used model-task combinations in the literature and find that less popular models can provide higher performance. Third, we find that CodeBERT is efficient on understanding tasks, whereas CodeT5's efficiency on generation tasks is unreliable because of its high resource consumption. Fourth, we report poor model generalization on the most popular Bug Fixing and Code Summarization benchmarks and datasets. We frame our contributions in terms of promises and perils, and document the numerous practical issues involved in advancing future research on Transformer-based models for code-related tasks.
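For readers unfamiliar with the three architecture families the study compares, the sketch below shows how such models are typically loaded and invoked with the Hugging Face transformers library. This is an illustrative assumption, not the paper's experimental setup: the checkpoint names are public hub releases (microsoft/codebert-base, microsoft/CodeGPT-small-py, Salesforce/codet5-base) that may differ from the exact checkpoints the authors fine-tuned, and the base models would still require task-specific fine-tuning to perform Bug Fixing, Bug Detection, Code Summarization, or Code Search.

```python
# Illustrative sketch only: the three architecture families examined in the paper,
# loaded from public Hugging Face checkpoints (assumed names, not the paper's setup).
import torch
from transformers import (
    AutoTokenizer,
    AutoModel,               # encoder-only (CodeBERT)
    AutoModelForCausalLM,    # decoder-only (CodeGPT)
    AutoModelForSeq2SeqLM,   # encoder-decoder (CodeT5)
)

code = "def add(a, b): return a + b"

# Encoder-only (CodeBERT): encode code into an embedding, e.g. for Code Search.
tok = AutoTokenizer.from_pretrained("microsoft/codebert-base")
enc = AutoModel.from_pretrained("microsoft/codebert-base")
with torch.no_grad():
    embedding = enc(**tok(code, return_tensors="pt")).last_hidden_state.mean(dim=1)

# Decoder-only (CodeGPT): continue a code prefix, e.g. for generation tasks.
tok = AutoTokenizer.from_pretrained("microsoft/CodeGPT-small-py")
dec = AutoModelForCausalLM.from_pretrained("microsoft/CodeGPT-small-py")
out = dec.generate(**tok("def add(a, b):", return_tensors="pt"), max_new_tokens=16)
print(tok.decode(out[0]))

# Encoder-decoder (CodeT5): map code to text, e.g. for Code Summarization
# (a base checkpoint; useful output requires fine-tuning on the task).
tok = AutoTokenizer.from_pretrained("Salesforce/codet5-base")
s2s = AutoModelForSeq2SeqLM.from_pretrained("Salesforce/codet5-base")
out = s2s.generate(**tok(code, return_tensors="pt"), max_new_tokens=20)
print(tok.decode(out[0], skip_special_tokens=True))
```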

Keywords: CodeBERT; CodeGPT; CodeT5; Transformer-based pre-trained models.

Publication types

  • Review