
FactAlign: Long-form Factuality Alignment of Large Language Models
2 October 2024
Chao-Wei Huang, Yun-Nung Chen
HILM
ArXiv · PDF · HTML

Papers citing "FactAlign: Long-form Factuality Alignment of Large Language Models"

2 / 2 papers shown
Atomic Consistency Preference Optimization for Long-Form Question Answering
Jingfeng Chen, Raghuveer Thirukovalluru, Junlin Wang, Kaiwei Luo, Bhuwan Dhingra
KELM, HILM
14 May 2025
The FACTS Grounding Leaderboard: Benchmarking LLMs' Ability to Ground Responses to Long-Form Input
Alon Jacovi, Andrew Wang, Chris Alberti, Connie Tao, Jon Lipovetz, ..., Rachana Fellinger, Rui Wang, Zizhao Zhang, Sasha Goldshtein, Dipanjan Das
HILM, ALM
06 Jan 2025