Applicability of large language models and generative models for legal case judgement summarization

A Deroy, K Ghosh, S Ghosh - Artificial Intelligence and Law, 2024 - Springer
Abstract
Automatic summarization of legal case judgements, which are known to be long and complex, has traditionally been attempted with extractive summarization models. In recent years, generative models, including abstractive summarization models and large language models (LLMs), have gained huge popularity. In this paper, we explore the applicability of such models for legal case judgement summarization. We applied various domain-specific abstractive summarization models and general-domain LLMs, as well as extractive summarization models, over two sets of legal case judgements, one from the United Kingdom (UK) Supreme Court and one from the Indian Supreme Court, and evaluated the quality of the generated summaries. We also performed experiments on a third dataset of legal documents of a different type: government reports from the United States. Results show that abstractive summarization models and LLMs generally perform better than the extractive methods as per traditional metrics for evaluating summary quality. However, detailed investigation shows the presence of inconsistencies and hallucinations in the outputs of the generative models, and we explore ways to reduce the hallucinations and inconsistencies in the summaries. Overall, the investigation suggests that further improvements are needed to enhance the reliability of abstractive models and LLMs for legal case judgement summarization. At present, a human-in-the-loop technique is more suitable for performing manual checks to identify inconsistencies in the generated summaries.
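The "traditional metrics for evaluating summary quality" in this line of work are typically ROUGE scores, which measure n-gram overlap between a generated summary and a reference summary. As a minimal sketch (the function name and the example sentences below are illustrative, not taken from the paper's datasets), ROUGE-1 F1 can be computed from clipped unigram counts using only the Python standard library:

```python
from collections import Counter

def rouge1_f1(reference: str, candidate: str) -> float:
    """ROUGE-1 F1: unigram overlap between a reference and a candidate summary."""
    ref = Counter(reference.lower().split())
    cand = Counter(candidate.lower().split())
    # Clipped matches: each unigram counts at most as often as it appears in both.
    overlap = sum((ref & cand).values())
    if overlap == 0:
        return 0.0
    precision = overlap / sum(cand.values())
    recall = overlap / sum(ref.values())
    return 2 * precision * recall / (precision + recall)

# Illustrative pair: a paraphrased candidate against a short reference
reference = "the court dismissed the appeal"
candidate = "the appeal was dismissed by the court"
score = rouge1_f1(reference, candidate)  # 5 overlapping unigrams out of 5 (ref) and 7 (cand)
```

Production evaluations would use a library implementation (e.g. the `rouge-score` package, which adds stemming and ROUGE-2/ROUGE-L), but the core computation is this overlap ratio. Note that such n-gram metrics cannot detect hallucinated content that happens to share vocabulary with the reference, which motivates the paper's additional consistency analysis.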