FActScore: Fine-grained Atomic Evaluation of Factual Precision in Long Form Text Generation

23 May 2023
Sewon Min
Kalpesh Krishna
Xinxi Lyu
M. Lewis
Wen-tau Yih
Pang Wei Koh
Mohit Iyyer
Luke Zettlemoyer
Hannaneh Hajishirzi
    HILM
    ALM

Papers citing "FActScore: Fine-grained Atomic Evaluation of Factual Precision in Long Form Text Generation"

50 / 455 papers shown
Factcheck-Bench: Fine-Grained Evaluation Benchmark for Automatic Fact-checkers
Yuxia Wang
Revanth Gangi Reddy
Zain Muhammad Mujahid
Arnav Arora
Aleksandr Rubashevskii
...
Nadav Borenstein
Aditya Pillai
Isabelle Augenstein
Iryna Gurevych
Preslav Nakov
HILM
43
33
0
15 Nov 2023
Fine-tuning Language Models for Factuality
Katherine Tian
Eric Mitchell
Huaxiu Yao
Christopher D. Manning
Chelsea Finn
KELM
HILM
SyDa
19
167
0
14 Nov 2023
Extrinsically-Focused Evaluation of Omissions in Medical Summarization
Elliot Schumacher
Daniel Rosenthal
Varun Nair
Luladay Price
Geoffrey Tso
Anitha Kannan
19
2
0
14 Nov 2023
LLatrieval: LLM-Verified Retrieval for Verifiable Generation
Xiaonan Li
Changtai Zhu
Linyang Li
Zhangyue Yin
Tianxiang Sun
Xipeng Qiu
RALM
40
25
0
14 Nov 2023
A Survey on Hallucination in Large Language Models: Principles, Taxonomy, Challenges, and Open Questions
Lei Huang
Weijiang Yu
Weitao Ma
Weihong Zhong
Zhangyin Feng
...
Qianglong Chen
Weihua Peng
Xiaocheng Feng
Bing Qin
Ting Liu
LRM
HILM
44
732
0
09 Nov 2023
SEMQA: Semi-Extractive Multi-Source Question Answering
Tal Schuster
Á. Lelkes
Haitian Sun
Jai Gupta
Jonathan Berant
W. Cohen
Donald Metzler
36
13
0
08 Nov 2023
Sub-Sentence Encoder: Contrastive Learning of Propositional Semantic Representations
Sihao Chen
Hongming Zhang
Tong Chen
Ben Zhou
Wenhao Yu
Dian Yu
Baolin Peng
Hongwei Wang
Dan Roth
Dong Yu
SSL
23
13
0
07 Nov 2023
A Survey of Large Language Models Attribution
Dongfang Li
Zetian Sun
Xinshuo Hu
Zhenyu Liu
Ziyang Chen
Baotian Hu
Aiguo Wu
Min Zhang
HILM
21
49
0
07 Nov 2023
FAITHSCORE: Evaluating Hallucinations in Large Vision-Language Models
Liqiang Jing
Ruosen Li
Yunmo Chen
Mengzhao Jia
Xinya Du
MLLM
24
7
0
02 Nov 2023
LitCab: Lightweight Language Model Calibration over Short- and Long-form Responses
Xin Liu
Muhammad Khalifa
Lu Wang
ALM
31
18
0
30 Oct 2023
Davidsonian Scene Graph: Improving Reliability in Fine-grained Evaluation for Text-to-Image Generation
Jaemin Cho
Yushi Hu
Roopal Garg
Peter Anderson
Ranjay Krishna
Jason Baldridge
Mohit Bansal
Jordi Pont-Tuset
Su Wang
EGVM
33
66
0
27 Oct 2023
Language Models Hallucinate, but May Excel at Fact Verification
Jian Guan
Jesse Dodge
David Wadden
Minlie Huang
Hao Peng
LRM
HILM
34
28
0
23 Oct 2023
Large Language Models Help Humans Verify Truthfulness -- Except When They Are Convincingly Wrong
Chenglei Si
Navita Goyal
Sherry Tongshuang Wu
Chen Zhao
Shi Feng
Hal Daumé
Jordan L. Boyd-Graber
LRM
47
39
0
19 Oct 2023
Quantifying Self-diagnostic Atomic Knowledge in Chinese Medical Foundation Model: A Computational Analysis
Yaxin Fan
Feng Jiang
Benyou Wang
Peifeng Li
Haizhou Li
21
1
0
18 Oct 2023
Self-RAG: Learning to Retrieve, Generate, and Critique through Self-Reflection
Akari Asai
Zeqiu Wu
Yizhong Wang
Avirup Sil
Hannaneh Hajishirzi
RALM
176
638
0
17 Oct 2023
KGQuiz: Evaluating the Generalization of Encoded Knowledge in Large Language Models
Yuyang Bai
Shangbin Feng
Vidhisha Balachandran
Zhaoxuan Tan
Shiqi Lou
Tianxing He
Yulia Tsvetkov
ELM
40
2
0
15 Oct 2023
KCTS: Knowledge-Constrained Tree Search Decoding with Token-Level Hallucination Detection
Sehyun Choi
Tianqing Fang
Zhaowei Wang
Yangqiu Song
35
32
0
13 Oct 2023
Prometheus: Inducing Fine-grained Evaluation Capability in Language Models
Seungone Kim
Jamin Shin
Yejin Cho
Joel Jang
Shayne Longpre
...
Sangdoo Yun
Seongjin Shin
Sungdong Kim
James Thorne
Minjoon Seo
ALM
LM&MA
ELM
37
213
0
12 Oct 2023
Survey on Factuality in Large Language Models: Knowledge, Retrieval and Domain-Specificity
Cunxiang Wang
Xiaoze Liu
Yuanhao Yue
Xiangru Tang
Tianhang Zhang
...
Linyi Yang
Jindong Wang
Xing Xie
Zheng-Wei Zhang
Yue Zhang
HILM
KELM
51
184
0
11 Oct 2023
Beyond Factuality: A Comprehensive Evaluation of Large Language Models as Knowledge Generators
Liang Chen
Yang Deng
Yatao Bian
Zeyu Qin
Bingzhe Wu
Tat-Seng Chua
Kam-Fai Wong
HILM
ELM
60
43
0
11 Oct 2023
Teaching Language Models to Hallucinate Less with Synthetic Tasks
Erik Jones
Hamid Palangi
Clarisse Simoes
Varun Chandrasekaran
Subhabrata Mukherjee
Arindam Mitra
Ahmed Hassan Awadallah
Ece Kamar
HILM
21
24
0
10 Oct 2023
Factuality Challenges in the Era of Large Language Models
Isabelle Augenstein
Timothy Baldwin
Meeyoung Cha
Tanmoy Chakraborty
Giovanni Luca Ciampaglia
...
Rubén Míguez
Preslav Nakov
Dietram A. Scheufele
Shivam Sharma
Giovanni Zagni
HILM
36
41
0
08 Oct 2023
Knowledge Crosswords: Geometric Knowledge Reasoning with Large Language Models
Wenxuan Ding
Shangbin Feng
Yuhan Liu
Zhaoxuan Tan
Vidhisha Balachandran
Tianxing He
Yulia Tsvetkov
LRM
36
2
0
02 Oct 2023
BooookScore: A systematic exploration of book-length summarization in the era of LLMs
Yapei Chang
Kyle Lo
Tanya Goyal
Mohit Iyyer
ALM
21
106
0
01 Oct 2023
FELM: Benchmarking Factuality Evaluation of Large Language Models
Shiqi Chen
Yiran Zhao
Jinghan Zhang
Ethan Chern
Siyang Gao
Pengfei Liu
Junxian He
HILM
38
33
0
01 Oct 2023
STRONG -- Structure Controllable Legal Opinion Summary Generation
Yang Zhong
Diane Litman
ELM
AILaw
30
1
0
29 Sep 2023
Creating Trustworthy LLMs: Dealing with Hallucinations in Healthcare AI
Muhammad Aurangzeb Ahmad
Ilker Yaramis
Taposh Dutta Roy
LM&MA
HILM
27
34
0
26 Sep 2023
Attention Satisfies: A Constraint-Satisfaction Lens on Factual Errors of Language Models
Mert Yuksekgonul
Varun Chandrasekaran
Erik Jones
Suriya Gunasekar
Ranjita Naik
Hamid Palangi
Ece Kamar
Besmira Nushi
HILM
26
40
0
26 Sep 2023
Large Language Model Alignment: A Survey
Tianhao Shen
Renren Jin
Yufei Huang
Chuang Liu
Weilong Dong
Zishan Guo
Xinwei Wu
Yan Liu
Deyi Xiong
LM&MA
19
177
0
26 Sep 2023
Ragas: Automated Evaluation of Retrieval Augmented Generation
ES Shahul
Jithin James
Luis Espinosa-Anke
Steven Schockaert
91
177
0
26 Sep 2023
Calibrating LLM-Based Evaluator
Yuxuan Liu
Tianchi Yang
Shaohan Huang
Zihan Zhang
Haizhen Huang
Furu Wei
Weiwei Deng
Feng Sun
Qi Zhang
49
31
0
23 Sep 2023
LongDocFACTScore: Evaluating the Factuality of Long Document Abstractive Summarisation
Jennifer A Bishop
Qianqian Xie
Sophia Ananiadou
HILM
19
9
0
21 Sep 2023
Chain-of-Verification Reduces Hallucination in Large Language Models
S. Dhuliawala
M. Komeili
Jing Xu
Roberta Raileanu
Xian Li
Asli Celikyilmaz
Jason Weston
LRM
HILM
22
177
0
20 Sep 2023
Exploring the impact of low-rank adaptation on the performance, efficiency, and regularization of RLHF
Simeng Sun
Dhawal Gupta
Mohit Iyyer
27
17
0
16 Sep 2023
ExpertQA: Expert-Curated Questions and Attributed Answers
Chaitanya Malaviya
Subin Lee
Sihao Chen
Elizabeth Sieber
Mark Yatskar
Dan Roth
ELM
HILM
31
52
0
14 Sep 2023
Zero-shot Audio Topic Reranking using Large Language Models
Mengjie Qian
Rao Ma
Adian Liusie
Erfan Loweimi
Kate Knill
Mark J. F. Gales
29
1
0
14 Sep 2023
Cognitive Mirage: A Review of Hallucinations in Large Language Models
Hongbin Ye
Tong Liu
Aijia Zhang
Wei Hua
Weiqiang Jia
HILM
48
77
0
13 Sep 2023
Retrieving Evidence from EHRs with LLMs: Possibilities and Challenges
Hiba Ahsan
Denis Jered McInerney
Jisoo Kim
Christopher Potter
Geoffrey S. Young
Silvio Amir
Byron C. Wallace
27
12
0
08 Sep 2023
Zero-Resource Hallucination Prevention for Large Language Models
Junyu Luo
Cao Xiao
Fenglong Ma
HILM
29
16
0
06 Sep 2023
Siren's Song in the AI Ocean: A Survey on Hallucination in Large Language Models
Yue Zhang
Yafu Li
Leyang Cui
Deng Cai
Lemao Liu
...
Longyue Wang
A. Luu
Wei Bi
Freda Shi
Shuming Shi
RALM
LRM
HILM
48
522
0
03 Sep 2023
Halo: Estimation and Reduction of Hallucinations in Open-Source Weak Large Language Models
Mohamed S. Elaraby
Mengyin Lu
Jacob Dunn
Xueying Zhang
Yu Wang
Shizhu Liu
Pingchuan Tian
Yuping Wang
Yuxuan Wang
HILM
36
26
0
22 Aug 2023
Answering Unseen Questions With Smaller Language Models Using Rationale Generation and Dense Retrieval
Tim Hartill
Diana Benavides-Prado
Michael Witbrock
Patricia J. Riddle
ReLM
LRM
28
1
0
09 Aug 2023
Automatically Correcting Large Language Models: Surveying the landscape of diverse self-correction strategies
Liangming Pan
Michael Stephen Saxon
Wenda Xu
Deepak Nathani
Xinyi Wang
William Yang Wang
KELM
LRM
47
201
0
06 Aug 2023
On the Trustworthiness Landscape of State-of-the-art Generative Models: A Survey and Outlook
Mingyuan Fan
Chengyu Wang
Cen Chen
Yang Liu
Jun Huang
HILM
39
3
0
31 Jul 2023
FLASK: Fine-grained Language Model Evaluation based on Alignment Skill Sets
Seonghyeon Ye
Doyoung Kim
Sungdong Kim
Hyeonbin Hwang
Seungone Kim
Yongrae Jo
James Thorne
Juho Kim
Minjoon Seo
ALM
46
99
0
20 Jul 2023
Generating Benchmarks for Factuality Evaluation of Language Models
Dor Muhlgay
Ori Ram
Inbal Magar
Yoav Levine
Nir Ratner
Yonatan Belinkov
Omri Abend
Kevin Leyton-Brown
Amnon Shashua
Y. Shoham
HILM
33
91
0
13 Jul 2023
A Survey on Evaluation of Large Language Models
Yu-Chu Chang
Xu Wang
Jindong Wang
Yuanyi Wu
Linyi Yang
...
Yue Zhang
Yi-Ju Chang
Philip S. Yu
Qian Yang
Xingxu Xie
ELM
LM&MA
ALM
75
1,517
0
06 Jul 2023
Fine-Grained Human Feedback Gives Better Rewards for Language Model Training
Zeqiu Wu
Yushi Hu
Weijia Shi
Nouha Dziri
Alane Suhr
Prithviraj Ammanabrolu
Noah A. Smith
Mari Ostendorf
Hannaneh Hajishirzi
ALM
35
305
0
02 Jun 2023
Self-contradictory Hallucinations of Large Language Models: Evaluation, Detection and Mitigation
Niels Mündler
Jingxuan He
Slobodan Jenko
Martin Vechev
HILM
22
108
0
25 May 2023
Self-Checker: Plug-and-Play Modules for Fact-Checking with Large Language Models
Miaoran Li
Baolin Peng
Michel Galley
Jianfeng Gao
Zhu Zhang
LRM
HILM
KELM
37
28
0
24 May 2023