ResearchTrend.AI

arXiv:2212.08037
Attributed Question Answering: Evaluation and Modeling for Attributed Large Language Models

15 December 2022
Bernd Bohnet
Vinh Q. Tran
Pat Verga
Roee Aharoni
Daniel Andor
Livio Baldini Soares
Massimiliano Ciaramita
Jacob Eisenstein
Kuzman Ganchev
Jonathan Herzig
Kai Hui
Tom Kwiatkowski
Ji Ma
Jianmo Ni
Lierni Sestorain Saralegui
Tal Schuster
William W. Cohen
Michael Collins
Dipanjan Das
Donald Metzler
Slav Petrov
Kellie Webster
Abstract

Large language models (LLMs) have shown impressive results while requiring little or no direct supervision. Further, there is mounting evidence that LLMs may have potential in information-seeking scenarios. We believe the ability of an LLM to attribute the text that it generates is likely to be crucial in this setting. We formulate and study Attributed QA as a key first step in the development of attributed LLMs. We propose a reproducible evaluation framework for the task and benchmark a broad set of architectures. We take human annotations as a gold standard and show that a correlated automatic metric is suitable for development. Our experimental work gives concrete answers to two key questions (How to measure attribution?, and How well do current state-of-the-art methods perform on attribution?), and gives some hints as to how to address a third (How to build LLMs with attribution?).
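As a rough illustration of the task formulation, the sketch below scores a system on the two axes the abstract describes: answer correctness and whether the cited passage supports the answer. All names here are hypothetical, and the `supports` callback is a stand-in for the paper's gold-standard human attribution judgments (or a correlated automatic metric); the literal-substring judge is only a toy placeholder, not the paper's method.

```python
from dataclasses import dataclass
from typing import Callable


@dataclass
class AttributedAnswer:
    """A prediction in Attributed QA: an answer string plus the
    passage claimed to support it (the attribution)."""
    answer: str
    attribution: str


def evaluate(
    predictions: dict[str, AttributedAnswer],
    gold_answers: dict[str, str],
    supports: Callable[[str, str, str], bool],
) -> dict[str, float]:
    """Score predictions on two axes: answer correctness (exact match,
    for simplicity) and attribution (does the cited passage support
    the answer, as judged by the pluggable `supports` callback?)."""
    n = len(predictions)
    correct = sum(
        pred.answer == gold_answers[q] for q, pred in predictions.items()
    )
    attributed = sum(
        supports(q, pred.answer, pred.attribution)
        for q, pred in predictions.items()
    )
    return {"answer_acc": correct / n, "attribution_rate": attributed / n}


def naive_supports(question: str, answer: str, passage: str) -> bool:
    """Toy attribution judge: the passage "supports" the answer iff the
    answer string literally appears in it. A real judge would be a human
    rater or an entailment model."""
    return answer.lower() in passage.lower()


preds = {
    "Who wrote Hamlet?": AttributedAnswer(
        answer="William Shakespeare",
        attribution="Hamlet is a tragedy written by William Shakespeare.",
    )
}
gold = {"Who wrote Hamlet?": "William Shakespeare"}
print(evaluate(preds, gold, naive_supports))
# -> {'answer_acc': 1.0, 'attribution_rate': 1.0}
```

The two scores are deliberately independent: a system can answer correctly while citing an irrelevant passage, which is exactly the failure mode an attribution metric is meant to expose.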
