
CUB: Benchmarking Context Utilisation Techniques for Language Models

Main: 8 pages · 5 figures · 17 tables · Bibliography: 4 pages · Appendix: 15 pages
Abstract

Incorporating external knowledge is crucial for knowledge-intensive tasks, such as question answering and fact checking. However, language models (LMs) may ignore relevant information that contradicts outdated parametric memory, or be distracted by irrelevant contexts. While many context utilisation manipulation techniques (CMTs) that encourage or suppress context utilisation have recently been proposed to alleviate these issues, few have been systematically compared. In this paper, we develop CUB (Context Utilisation Benchmark) to help practitioners within retrieval-augmented generation (RAG) identify the best CMT for their needs. CUB allows for rigorous testing on three distinct context types, observed to capture key challenges in realistic context utilisation scenarios. With this benchmark, we evaluate seven state-of-the-art methods, representative of the main categories of CMTs, across three diverse datasets and tasks, applied to nine LMs. Our results show that most existing CMTs struggle to handle the full range of context types that may be encountered in real-world retrieval-augmented scenarios. Moreover, we find that many CMTs display inflated performance on simple synthesised datasets compared to more realistic datasets with naturally occurring samples. Altogether, our results demonstrate the need for holistic tests of CMTs and for the development of CMTs that can handle multiple context types.

@article{hagström2025_2505.16518,
  title={CUB: Benchmarking Context Utilisation Techniques for Language Models},
  author={Lovisa Hagström and Youna Kim and Haeun Yu and Sang-goo Lee and Richard Johansson and Hyunsoo Cho and Isabelle Augenstein},
  journal={arXiv preprint arXiv:2505.16518},
  year={2025}
}