Steering Llama 2 via Contrastive Activation Addition

N Rimsky, N Gabrieli, J Schulz, M Tong… - arXiv preprint arXiv …, 2023 - arxiv.org
arXiv preprint arXiv:2312.06681, 2023
We introduce Contrastive Activation Addition (CAA), an innovative method for steering language models by modifying activations during their forward passes. CAA computes "steering vectors" by averaging the difference in residual stream activations between pairs of positive and negative examples of a particular behavior, such as factual versus hallucinatory responses. During inference, these steering vectors are added at all token positions after the user's prompt with either a positive or negative coefficient, allowing precise control over the degree of the targeted behavior. We evaluate CAA's effectiveness on Llama 2 Chat using both multiple-choice behavioral question datasets and open-ended generation tasks. We demonstrate that CAA significantly alters model behavior, outperforms traditional methods like finetuning and few-shot prompting, and minimally reduces capabilities. Moreover, by employing various activation-space interpretation methods, we gain deeper insight into CAA's mechanisms. CAA both accurately steers model outputs and sheds light on how high-level concepts are represented in large language models (LLMs).
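
The following is a minimal sketch of the procedure the abstract describes, not the authors' released implementation. It assumes a HuggingFace transformers decoder model; the model name, the choice of layer 13, the use of the final token position for vector extraction, and the contrastive pair shown are all illustrative assumptions, and for simplicity the steering hook adds the vector at every token position rather than only at positions after the user's prompt as the paper specifies.

```python
# Minimal sketch of CAA under the assumptions stated above; not the paper's code.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_name = "meta-llama/Llama-2-7b-chat-hf"  # assumption: any decoder LM works
tok = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(model_name)
model.eval()
layer = model.model.layers[13]  # assumption: a mid-network decoder layer

def residual_at_last_token(text: str) -> torch.Tensor:
    """Capture the residual-stream activation at the final token of `layer`."""
    captured = {}
    def hook(module, inputs, output):
        captured["h"] = output[0] if isinstance(output, tuple) else output
    handle = layer.register_forward_hook(hook)
    with torch.no_grad():
        model(**tok(text, return_tensors="pt"))
    handle.remove()
    return captured["h"][0, -1]  # shape: (hidden_size,)

# Contrastive pairs: the same prompt completed with and without the behavior.
pairs = [
    ("Q: Who wrote Hamlet? A: William Shakespeare.",  # positive (factual)
     "Q: Who wrote Hamlet? A: Charles Dickens."),     # negative (hallucinatory)
]

# Steering vector = mean difference of positive and negative activations.
steering = torch.stack(
    [residual_at_last_token(p) - residual_at_last_token(n) for p, n in pairs]
).mean(dim=0)

def steer(coefficient: float):
    """Add coefficient * steering to the layer's output.

    Simplification: the paper adds the vector only at token positions after
    the user's prompt; this hook adds it at every position.
    """
    def hook(module, inputs, output):
        if isinstance(output, tuple):
            return (output[0] + coefficient * steering,) + output[1:]
        return output + coefficient * steering
    return layer.register_forward_hook(hook)

handle = steer(+1.0)  # positive coefficient promotes the behavior, negative suppresses it
ids = tok("Q: Who discovered penicillin? A:", return_tensors="pt")
out = model.generate(**ids, max_new_tokens=40)
handle.remove()
print(tok.decode(out[0], skip_special_tokens=True))
```

Varying the sign and magnitude of `coefficient` trades off the strength of the effect against output quality, which corresponds to the signed, graded control the abstract claims.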