SeCoKD: Aligning Large Language Models for In-Context Learning with Fewer Shots

W Wang, H Yang, C Meinel - arXiv preprint arXiv:2406.14208, 2024 - arxiv.org
Previous studies have shown that demonstrations can significantly help Large Language Models (LLMs) perform better on given tasks. However, this so-called In-Context Learning (ICL) ability is very sensitive to the presented context, and dozens of demonstrations are often needed. In this work, we investigate whether we can reduce the number of shots while still maintaining competitive performance. We present SeCoKD, a self-Knowledge Distillation (KD) training framework that aligns the student model with a heavily prompted variation, thereby increasing the utilization of a single demonstration. We evaluate SeCoKD across three LLMs and six benchmarks, focusing mainly on reasoning tasks. Results show that our method outperforms the base model and Supervised Fine-Tuning (SFT), especially in zero-shot and one-shot settings, by 30% and 10%, respectively. Moreover, SeCoKD introduces few negative artifacts when evaluated on new tasks, making it more robust than SFT.
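The abstract describes a self-distillation setup in which the same model serves as both teacher and student: the teacher sees a heavily prompted (many-shot) input, and the student, given a zero- or one-shot input, is trained to match the teacher's output distribution. The sketch below is a minimal illustration under assumed details, not the paper's actual implementation: it assumes a Hugging Face style causal LM, distills only the next-token distribution rather than full answer sequences, and the function name `secokd_step` and the `temperature` parameter are hypothetical.

```python
import torch
import torch.nn.functional as F

def secokd_step(model, tokenizer, question, demonstrations, optimizer,
                temperature=2.0):
    """One hypothetical self-distillation step: the same model acts as
    teacher (many-shot prompt) and student (zero-shot prompt), and the
    student is trained to match the teacher's next-token distribution."""
    # Teacher input: the question preceded by the full demonstration set.
    teacher_prompt = "\n\n".join(demonstrations + [question])
    # Student input: the bare question (a one-shot variant would keep one demo).
    student_prompt = question

    teacher_ids = tokenizer(teacher_prompt, return_tensors="pt").input_ids
    student_ids = tokenizer(student_prompt, return_tensors="pt").input_ids

    # The teacher pass is frozen: no gradients flow through the prompted variant.
    with torch.no_grad():
        teacher_logits = model(teacher_ids).logits[:, -1, :]

    student_logits = model(student_ids).logits[:, -1, :]

    # KL divergence between temperature-softened distributions,
    # the standard knowledge-distillation objective.
    loss = F.kl_div(
        F.log_softmax(student_logits / temperature, dim=-1),
        F.softmax(teacher_logits / temperature, dim=-1),
        reduction="batchmean",
    ) * temperature ** 2

    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()
```

Because teacher and student share weights, each optimizer step nudges the model toward producing its many-shot behavior from a sparser prompt, which is consistent with the abstract's claim of increasing the utilization of a single demonstration.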