FactAlign: Long-form Factuality Alignment of Large Language Models
2 October 2024
Chao-Wei Huang, Yun-Nung Chen
HILM
ArXiv (abs: 2410.01691) · PDF · HTML · GitHub (18★)
Papers citing "FactAlign: Long-form Factuality Alignment of Large Language Models" (4 of 4 papers shown)
MedScore: Factuality Evaluation of Free-Form Medical Answers
Heyuan Huang, Alexandra DeLucia, Vijay Murari Tiyyala, Mark Dredze
HILM, MedIm · 24 May 2025

Atomic Consistency Preference Optimization for Long-Form Question Answering
Jingfeng Chen, Raghuveer Thirukovalluru, Junlin Wang, Kaiwei Luo, Bhuwan Dhingra
KELM, HILM · 14 May 2025

The FACTS Grounding Leaderboard: Benchmarking LLMs' Ability to Ground Responses to Long-Form Input
Alon Jacovi, Andrew Wang, Chris Alberti, Connie Tao, Jon Lipovetz, ..., Rachana Fellinger, Rui Wang, Zizhao Zhang, Sasha Goldshtein, Dipanjan Das
HILM, ALM · 06 Jan 2025

Length-Controlled AlpacaEval: A Simple Way to Debias Automatic Evaluators
Yann Dubois, Balázs Galambosi, Percy Liang, Tatsunori Hashimoto
ALM · 06 Apr 2024