
Fine-Grained Human Feedback Gives Better Rewards for Language Model Training

2 June 2023
Zeqiu Wu
Yushi Hu
Weijia Shi
Nouha Dziri
Alane Suhr
Prithviraj Ammanabrolu
Noah A. Smith
Mari Ostendorf
Hannaneh Hajishirzi
    ALM

Papers citing "Fine-Grained Human Feedback Gives Better Rewards for Language Model Training"

50 / 254 papers shown
TLCR: Token-Level Continuous Reward for Fine-grained Reinforcement Learning from Human Feedback
Eunseop Yoon
Hee Suk Yoon
Soohwan Eom
Gunsoo Han
D. W. Nam
DaeJin Jo
Kyoung-Woon On
M. Hasegawa-Johnson
Sungwoong Kim
C. Yoo
ALM
38
15
0
23 Jul 2024
Boosting Reward Model with Preference-Conditional Multi-Aspect Synthetic Data Generation
Jiaming Shen
Ran Xu
Yennie Jun
Zhen Qin
Tianqi Liu
Carl Yang
Yi Liang
Simon Baumgartner
Michael Bendersky
SyDa
67
4
0
22 Jul 2024
Localizing and Mitigating Errors in Long-form Question Answering
Rachneet Sachdeva
Yixiao Song
Mohit Iyyer
Iryna Gurevych
HILM
52
0
0
16 Jul 2024
ANAH-v2: Scaling Analytical Hallucination Annotation of Large Language Models
Yuzhe Gu
Ziwei Ji
Wenwei Zhang
Chengqi Lyu
Dahua Lin
Kai Chen
HILM
42
5
0
05 Jul 2024
HAF-RM: A Hybrid Alignment Framework for Reward Model Training
Shujun Liu
Xiaoyu Shen
Yuhang Lai
Siyuan Wang
Shengbin Yue
Zengfeng Huang
Xuanjing Huang
Zhongyu Wei
31
1
0
04 Jul 2024
DogeRM: Equipping Reward Models with Domain Knowledge through Model Merging
Tzu-Han Lin
Chen An Li
Hung-yi Lee
Yun-Nung Chen
VLM
ALM
26
4
0
01 Jul 2024
Molecular Facts: Desiderata for Decontextualization in LLM Fact Verification
Anisha Gunjal
Greg Durrett
HILM
58
13
0
28 Jun 2024
Decoding-Time Language Model Alignment with Multiple Objectives
Ruizhe Shi
Yifang Chen
Yushi Hu
Alisa Liu
Hannaneh Hajishirzi
Noah A. Smith
Simon Du
49
31
0
27 Jun 2024
Understand What LLM Needs: Dual Preference Alignment for Retrieval-Augmented Generation
Guanting Dong
Yutao Zhu
Chenghao Zhang
Zechen Wang
Zhicheng Dou
Ji-Rong Wen
RALM
46
10
0
26 Jun 2024
Beyond Thumbs Up/Down: Untangling Challenges of Fine-Grained Feedback for Text-to-Image Generation
Katherine M. Collins
Najoung Kim
Yonatan Bitton
Verena Rieser
Shayegan Omidshafiei
...
Gang Li
Adrian Weller
Junfeng He
Deepak Ramachandran
Krishnamurthy Dvijotham
EGVM
47
3
0
24 Jun 2024
INDICT: Code Generation with Internal Dialogues of Critiques for Both Security and Helpfulness
Hung Le
Yingbo Zhou
Caiming Xiong
Silvio Savarese
Doyen Sahoo
52
2
0
23 Jun 2024
Hybrid Alignment Training for Large Language Models
Chenglong Wang
Hang Zhou
Kaiyan Chang
Bei Li
Yongyu Mu
Tong Xiao
Tongran Liu
Jingbo Zhu
43
4
0
21 Jun 2024
Q*: Improving Multi-step Reasoning for LLMs with Deliberative Planning
Chaojie Wang
Yanchen Deng
Zhiyi Lyu
Liang Zeng
Jujie He
Shuicheng Yan
Bo An
LRM
ReLM
42
52
0
20 Jun 2024
MACAROON: Training Vision-Language Models To Be Your Engaged Partners
Shujin Wu
Yi R. Fung
Sha Li
Yixin Wan
Kai-Wei Chang
Heng Ji
47
5
0
20 Jun 2024
FoRAG: Factuality-optimized Retrieval Augmented Generation for Web-enhanced Long-form Question Answering
Tianchi Cai
Zhiwen Tan
Xierui Song
Tao Sun
Jiyan Jiang
Yunqi Xu
Yinger Zhang
Jinjie Gu
32
5
0
19 Jun 2024
Towards Minimal Targeted Updates of Language Models with Targeted Negative Training
Lily H. Zhang
Rajesh Ranganath
Arya Tafvizi
36
1
0
19 Jun 2024
Self and Cross-Model Distillation for LLMs: Effective Methods for Refusal Pattern Alignment
Jie Li
Yi Liu
Chongyang Liu
Xiaoning Ren
Ling Shi
Weisong Sun
Yinxing Xue
37
0
0
17 Jun 2024
A Survey on Human Preference Learning for Large Language Models
Ruili Jiang
Kehai Chen
Xuefeng Bai
Zhixuan He
Juntao Li
Muyun Yang
Tiejun Zhao
Liqiang Nie
Min Zhang
49
8
0
17 Jun 2024
Humor in AI: Massive Scale Crowd-Sourced Preferences and Benchmarks for Cartoon Captioning
Jifan Zhang
Lalit P. Jain
Yang Guo
Jiayi Chen
Kuan Lok Zhou
...
Scott Sievert
Timothy T. Rogers
Kevin Jamieson
Robert Mankoff
Robert Nowak
39
5
0
15 Jun 2024
Unpacking DPO and PPO: Disentangling Best Practices for Learning from Preference Feedback
Hamish Ivison
Yizhong Wang
Jiacheng Liu
Zeqiu Wu
Valentina Pyatkin
Nathan Lambert
Noah A. Smith
Yejin Choi
Hannaneh Hajishirzi
46
41
0
13 Jun 2024
Chain of Preference Optimization: Improving Chain-of-Thought Reasoning in LLMs
Xuan Zhang
Chao Du
Tianyu Pang
Qian Liu
Wei Gao
Min Lin
LRM
AI4CE
44
34
0
13 Jun 2024
PAL: Pluralistic Alignment Framework for Learning from Heterogeneous Preferences
Daiwei Chen
Yi Chen
Aniket Rege
Ramya Korlakai Vinayak
46
17
0
12 Jun 2024
Discovering Preference Optimization Algorithms with and for Large Language Models
Chris Xiaoxuan Lu
Samuel Holt
Claudio Fanconi
Alex J. Chan
Jakob Foerster
M. Schaar
R. T. Lange
OffRL
40
16
0
12 Jun 2024
Language Models are Alignable Decision-Makers: Dataset and Application to the Medical Triage Domain
Brian Hu
Bill Ray
Alice Leung
Amy Summerville
David Joy
Christopher Funk
Arslan Basharat
33
2
0
10 Jun 2024
Towards Detecting LLMs Hallucination via Markov Chain-based Multi-agent Debate Framework
Xiaoxi Sun
Jinpeng Li
Yan Zhong
Dongyan Zhao
Rui Yan
LLMAG
HILM
29
5
0
05 Jun 2024
HYDRA: Model Factorization Framework for Black-Box LLM Personalization
Yuchen Zhuang
Haotian Sun
Yue Yu
Rushi Qiang
Qifan Wang
Chao Zhang
Bo Dai
AAML
53
15
0
05 Jun 2024
Aligning Large Language Models via Fine-grained Supervision
Dehong Xu
Liang Qiu
Minseok Kim
Faisal Ladhak
Jaeyoung Do
43
2
0
04 Jun 2024
Process-Driven Autoformalization in Lean 4
Jianqiao Lu
Zhengying Liu
Yingjia Wan
Yinya Huang
Haiming Wang
Zhicheng Yang
Jing Tang
Zhijiang Guo
AI4CE
45
16
0
04 Jun 2024
Dishonesty in Helpful and Harmless Alignment
Youcheng Huang
Jingkun Tang
Duanyu Feng
Zheng-Wei Zhang
Wenqiang Lei
Jiancheng Lv
Anthony G. Cohn
LLMSV
46
3
0
04 Jun 2024
Aligning Language Models with Demonstrated Feedback
Omar Shaikh
Michelle S. Lam
Joey Hejna
Yijia Shao
Michael S. Bernstein
Diyi Yang
ALM
36
24
0
02 Jun 2024
ANAH: Analytical Annotation of Hallucinations in Large Language Models
Ziwei Ji
Yuzhe Gu
Wenwei Zhang
Chengqi Lyu
Dahua Lin
Kai Chen
HILM
56
2
0
30 May 2024
NoiseBoost: Alleviating Hallucination with Noise Perturbation for Multimodal Large Language Models
Kai Wu
Boyuan Jiang
Zhengkai Jiang
Qingdong He
Donghao Luo
Shengzhi Wang
Qingwen Liu
Chengjie Wang
VLM
MLLM
32
3
0
30 May 2024
Enhancing Reinforcement Learning with Label-Sensitive Reward for Natural Language Understanding
Kuo Liao
Shuang Li
Meng Zhao
Liqun Liu
Mengge Xue
Zhenyu Hu
Honglin Han
Chengguo Yin
40
1
0
30 May 2024
Aligning to Thousands of Preferences via System Message Generalization
Seongyun Lee
Sue Hyun Park
Seungone Kim
Minjoon Seo
ALM
44
38
0
28 May 2024
Pragmatic Feature Preferences: Learning Reward-Relevant Preferences from Human Input
Andi Peng
Yuying Sun
Tianmin Shu
David Abel
46
3
0
23 May 2024
Hummer: Towards Limited Competitive Preference Dataset
Li Jiang
Yusen Wu
Junwu Xiong
Jingqing Ruan
Yichuan Ding
Qingpei Guo
Zujie Wen
Jun Zhou
Xiaotie Deng
34
6
0
19 May 2024
WildChat: 1M ChatGPT Interaction Logs in the Wild
Wenting Zhao
Xiang Ren
Jack Hessel
Claire Cardie
Yejin Choi
Yuntian Deng
44
180
0
02 May 2024
The Real, the Better: Aligning Large Language Models with Online Human Behaviors
Guanying Jiang
Lingyong Yan
Haibo Shi
Dawei Yin
33
2
0
01 May 2024
Monte Carlo Tree Search Boosts Reasoning via Iterative Preference Learning
Yuxi Xie
Anirudh Goyal
Wenyue Zheng
Min-Yen Kan
Timothy Lillicrap
Kenji Kawaguchi
Michael Shieh
ReLM
LRM
52
87
0
01 May 2024
RLHF from Heterogeneous Feedback via Personalization and Preference Aggregation
Chanwoo Park
Mingyang Liu
Dingwen Kong
Kaiqing Zhang
Asuman Ozdaglar
44
30
0
30 Apr 2024
DPO Meets PPO: Reinforced Token Optimization for RLHF
Han Zhong
Zikang Shan
Guhao Feng
Li Zhao
Di He
Jiang Bian
Liwei Wang
57
57
0
29 Apr 2024
InspectorRAGet: An Introspection Platform for RAG Evaluation
Kshitij P. Fadnis
Siva Sankalp Patel
O. Boni
Yannis Katsis
Sara Rosenthal
Benjamin Sznajder
Marina Danilevsky
40
2
0
26 Apr 2024
When to Trust LLMs: Aligning Confidence with Response Quality
Shuchang Tao
Liuyi Yao
Hanxing Ding
Yuexiang Xie
Qi Cao
Fei Sun
Jinyang Gao
Huawei Shen
Bolin Ding
37
15
0
26 Apr 2024
Reinforcement Retrieval Leveraging Fine-grained Feedback for Fact Checking News Claims with Black-Box LLM
Xuan Zhang
Wei Gao
LRM
KELM
40
8
0
26 Apr 2024
Small Language Models Need Strong Verifiers to Self-Correct Reasoning
Yunxiang Zhang
Muhammad Khalifa
Lajanugen Logeswaran
Jaekyeom Kim
Moontae Lee
Honglak Lee
Lu Wang
LRM
KELM
ReLM
31
31
0
26 Apr 2024
WorldValuesBench: A Large-Scale Benchmark Dataset for Multi-Cultural Value Awareness of Language Models
Wenlong Zhao
Debanjan Mondal
Niket Tandon
Danica Dillion
Kurt Gray
Yuling Gu
VLM
37
11
0
25 Apr 2024
Mapping Social Choice Theory to RLHF
Jessica Dai
Eve Fleisig
35
12
0
19 Apr 2024
Reuse Your Rewards: Reward Model Transfer for Zero-Shot Cross-Lingual Alignment
Zhaofeng Wu
Ananth Balashankar
Yoon Kim
Jacob Eisenstein
Ahmad Beirami
46
13
0
18 Apr 2024
Token-level Direct Preference Optimization
Yongcheng Zeng
Guoqing Liu
Weiyu Ma
Ning Yang
Haifeng Zhang
Jun Wang
24
42
0
18 Apr 2024
Social Choice Should Guide AI Alignment in Dealing with Diverse Human Feedback
Vincent Conitzer
Rachel Freedman
J. Heitzig
Wesley H. Holliday
Bob M. Jacobs
...
Eric Pacuit
Stuart Russell
Hailey Schoelkopf
Emanuel Tewolde
W. Zwicker
43
30
0
16 Apr 2024