Meta-prompting Optimized Retrieval-augmented Generation

J Rodrigues, A Branco - arXiv preprint arXiv:2407.03955, 2024 - arxiv.org
Retrieval-augmented generation draws on content retrieved from external sources to improve the performance of large language models in downstream tasks. The sheer volume of retrieved content, the dispersion of its parts, or its lack of focus may nevertheless have a detrimental rather than a beneficial effect. To mitigate this issue and improve retrieval-augmented generation, we propose a method that refines the retrieved content before it is included in the prompt, resorting to meta-prompting optimization. Put to the empirical test on the demanding multi-hop question answering task of the StrategyQA dataset, the evaluation results indicate that this method outperforms a similar retrieval-augmented system lacking it by over 30%.
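The core idea of refining retrieved content before it enters the generation prompt can be sketched as follows. This is a minimal illustration, not the paper's implementation: the function names, the wording of the refinement meta-prompt, and the generic `llm` callable are all assumptions, and the paper's optimization loop over the meta-prompt itself is omitted.

```python
from typing import Callable, List

# Hypothetical meta-prompt asking the model to condense retrieved passages
# into a focused context for the question. The actual optimized meta-prompt
# in the paper is not reproduced here.
REFINE_META_PROMPT = (
    "Given the question below, rewrite the retrieved passages into a short, "
    "focused context that keeps only the information needed to answer it.\n"
    "Question: {question}\n"
    "Passages:\n{passages}\n"
    "Focused context:"
)


def refine_retrieved(question: str, passages: List[str],
                     llm: Callable[[str], str]) -> str:
    """Condense retrieved passages before they enter the final prompt."""
    prompt = REFINE_META_PROMPT.format(
        question=question, passages="\n".join(passages))
    return llm(prompt)


def answer(question: str, passages: List[str],
           llm: Callable[[str], str]) -> str:
    """Two-stage pipeline: refine the retrieved content, then generate."""
    context = refine_retrieved(question, passages, llm)
    final_prompt = f"Context: {context}\nQuestion: {question}\nAnswer:"
    return llm(final_prompt)
```

In this sketch `llm` stands in for any text-completion backend; the refinement call and the answering call could just as well use two different models.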