DART: Deep Adversarial Automated Red Teaming for LLM Safety

B Jiang, Y Jing, T Shen, Q Yang, D Xiong
arXiv preprint arXiv:2407.03876, 2024 - arxiv.org
Manual red teaming is a commonly used method to identify vulnerabilities in large language models (LLMs), but it is costly and unscalable. In contrast, automated red teaming uses a Red LLM to automatically generate adversarial prompts for the Target LLM, offering a scalable way to detect safety vulnerabilities. However, the difficulty of building a powerful automated Red LLM lies in the fact that the safety vulnerabilities of the Target LLM change dynamically as the Target LLM evolves. To mitigate this issue, we propose a Deep Adversarial Automated Red Teaming (DART) framework in which the Red LLM and the Target LLM interact deeply and dynamically with each other in an iterative manner. In each iteration, in order to generate as many successful attacks as possible, the Red LLM not only takes into account the responses from the Target LLM, but also adversarially adjusts its attacking directions by monitoring the global diversity of generated attacks across multiple iterations. Simultaneously, to explore the dynamically changing safety vulnerabilities of the Target LLM, we allow the Target LLM to enhance its safety via an active-learning-based data selection mechanism. Experimental results demonstrate that DART significantly reduces the safety risk of the Target LLM. In human evaluation on the Anthropic Harmless dataset, compared to the instruction-tuned Target LLM, DART reduces violation risks by 53.4%. We will release the datasets and code of DART soon.
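To make the iterative loop described above concrete, the following is a minimal sketch of how a DART-style interaction between the Red LLM and the Target LLM could be organized. All names here (red_generate, target_respond, is_unsafe, diversity_hint, select_for_training, finetune_target) are hypothetical placeholders standing in for the actual models, safety judge, and fine-tuning pipeline; this is an illustration of the loop under stated assumptions, not the authors' implementation.

import random

def red_generate(history, hint):
    # Hypothetical Red LLM call: propose adversarial prompts, conditioned on
    # previously collected attacks (history) and a summary of which attack
    # directions are already covered (hint), to keep attacks globally diverse.
    return [f"adversarial prompt {random.random():.3f} (hint: {hint})"
            for _ in range(4)]

def target_respond(prompt):
    # Hypothetical Target LLM call.
    return f"response to: {prompt}"

def is_unsafe(response):
    # Hypothetical safety judge deciding whether an attack succeeded;
    # a random outcome stands in for a real classifier or human label.
    return random.random() < 0.3

def diversity_hint(all_attacks):
    # Summarize the global diversity of attacks gathered across iterations.
    return f"{len(all_attacks)} prior successful attacks"

def select_for_training(successful_attacks):
    # Active-learning-style data selection: keep only the most informative
    # attacks; a random subset stands in for an uncertainty-based criterion.
    return random.sample(successful_attacks, k=min(2, len(successful_attacks)))

def finetune_target(training_examples):
    # Placeholder for safety fine-tuning of the Target LLM on selected data.
    print(f"  fine-tuning Target LLM on {len(training_examples)} examples")

all_attacks = []                      # successful attacks across all iterations
for iteration in range(3):
    print(f"iteration {iteration}")
    prompts = red_generate(all_attacks, diversity_hint(all_attacks))
    successful = []
    for p in prompts:
        r = target_respond(p)
        if is_unsafe(r):              # attack succeeded against current Target LLM
            successful.append((p, r))
    all_attacks.extend(successful)
    if successful:
        finetune_target(select_for_training(successful))

The two ideas mirrored here are that the Red LLM conditions on both the Target LLM's responses and the global diversity of attacks generated across iterations, while the Target LLM is periodically safety-tuned on an actively selected subset of the successful attacks.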