FactAlign: Long-form Factuality Alignment of Large Language Models

2 October 2024
Chao-Wei Huang, Yun-Nung Chen
HILM
ArXiv (abs) · PDF · HTML · GitHub (18★)

Papers citing "FactAlign: Long-form Factuality Alignment of Large Language Models"

4 / 4 papers shown
MedScore: Factuality Evaluation of Free-Form Medical Answers
Heyuan Huang, Alexandra DeLucia, Vijay Murari Tiyyala, Mark Dredze
HILM, MedIm · 24 May 2025

Atomic Consistency Preference Optimization for Long-Form Question Answering
Jingfeng Chen, Raghuveer Thirukovalluru, Junlin Wang, Kaiwei Luo, Bhuwan Dhingra
KELM, HILM · 14 May 2025

The FACTS Grounding Leaderboard: Benchmarking LLMs' Ability to Ground Responses to Long-Form Input
Alon Jacovi, Andrew Wang, Chris Alberti, Connie Tao, Jon Lipovetz, ..., Rachana Fellinger, Rui Wang, Zizhao Zhang, Sasha Goldshtein, Dipanjan Das
HILM, ALM · 06 Jan 2025

Length-Controlled AlpacaEval: A Simple Way to Debias Automatic Evaluators
Yann Dubois, Balázs Galambosi, Percy Liang, Tatsunori Hashimoto
ALM · 06 Apr 2024