Enhancing Large Language Models for Secure Code Generation: A Dataset-driven Study on Vulnerability Mitigation

J Wang, L Cao, X Luo, Z Zhou, J Xie, A Jatowt, Y Cai
arXiv preprint arXiv:2310.16263, 2023. arxiv.org
Large language models (LLMs) have brought significant advancements to code generation, benefiting both novice and experienced developers. However, their training using unsanitized data from open-source repositories, like GitHub, introduces the risk of inadvertently propagating security vulnerabilities. To effectively mitigate this concern, this paper presents a comprehensive study focused on evaluating and enhancing code LLMs from a software security perspective. We introduce SecuCoGen (uploaded as supplemental material and to be made publicly available after publication), a meticulously curated dataset targeting 21 critical vulnerability types. SecuCoGen comprises 180 samples and serves as the foundation for conducting experiments on three crucial code-related tasks: code generation, code repair, and vulnerability classification, with a strong emphasis on security. Our experimental results reveal that existing models often overlook security concerns during code generation, leading to the generation of vulnerable code. To address this, we propose effective approaches to mitigate the security vulnerabilities and enhance the overall robustness of code generated by LLMs. Moreover, our study identifies weaknesses in existing models' ability to repair vulnerable code, even when provided with vulnerability information. Additionally, certain vulnerability types pose challenges for the models, hindering their performance in vulnerability classification. Based on these findings, we believe our study will have a positive impact on the software engineering community, inspiring the development of improved methods for training and utilizing LLMs, thereby leading to safer and more trustworthy model deployment.
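To make the concern concrete, the sketch below is a hypothetical illustration (not drawn from SecuCoGen or the paper's experiments) of the kind of insecure pattern a code LLM can emit when asked to look up a user by name, contrasted with a parameterized variant; SQL injection (CWE-89) is one of the widely tracked critical vulnerability types such a study targets. The table schema and function names are invented for the example.

```python
# Hypothetical example of an LLM-style insecure suggestion vs. a safer variant.
# Assumes a toy SQLite schema; names here are illustrative, not from the paper.
import sqlite3


def find_user_insecure(conn: sqlite3.Connection, username: str):
    # Vulnerable: user input is concatenated into the SQL text, so an input
    # like "x' OR '1'='1" changes the query's meaning (CWE-89, SQL injection).
    query = "SELECT id, username FROM users WHERE username = '" + username + "'"
    return conn.execute(query).fetchall()


def find_user_secure(conn: sqlite3.Connection, username: str):
    # Safer: a parameterized query keeps data separate from the SQL statement.
    query = "SELECT id, username FROM users WHERE username = ?"
    return conn.execute(query, (username,)).fetchall()


if __name__ == "__main__":
    conn = sqlite3.connect(":memory:")
    conn.execute("CREATE TABLE users (id INTEGER PRIMARY KEY, username TEXT)")
    conn.executemany("INSERT INTO users (username) VALUES (?)",
                     [("alice",), ("bob",)])
    malicious = "x' OR '1'='1"
    print(find_user_insecure(conn, malicious))  # returns every row
    print(find_user_secure(conn, malicious))    # returns no rows
```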