
Learning to summarize from human feedback (arXiv:2009.01325)

2 September 2020
Nisan Stiennon
Long Ouyang
Jeff Wu
Daniel M. Ziegler
Ryan J. Lowe
Chelsea Voss
Alec Radford
Dario Amodei
Paul Christiano
ALM

Papers citing "Learning to summarize from human feedback"

50 / 1,440 papers shown
A Comprehensive Survey of Foundation Models in Medicine
Wasif Khan
Seowung Leem
Kyle B. See
Joshua K. Wong
Shaoting Zhang
R. Fang
AI4CE
LM&MA
VLM
105
18
0
17 Jan 2025
Contrastive Policy Gradient: Aligning LLMs on sequence-level scores in a supervised-friendly fashion
Yannis Flet-Berliac
Nathan Grinsztajn
Florian Strub
Bill Wu
Eugene Choi
...
Arash Ahmadian
Yash Chandak
M. G. Azar
Olivier Pietquin
Matthieu Geist
OffRL
64
5
0
17 Jan 2025
Beyond Reward Hacking: Causal Rewards for Large Language Model Alignment
Chaoqi Wang
Zhuokai Zhao
Yibo Jiang
Zhaorun Chen
Chen Zhu
...
Jiayi Liu
Lizhu Zhang
Xiangjun Fan
Hao Ma
Sinong Wang
77
3
0
17 Jan 2025
Foundation Models at Work: Fine-Tuning for Fairness in Algorithmic Hiring
Buse Sibel Korkmaz
Rahul Nair
Elizabeth M. Daly
Evangelos Anagnostopoulos
Christos Varytimidis
Antonio del Rio Chanona
40
0
0
13 Jan 2025
FocalPO: Enhancing Preference Optimizing by Focusing on Correct Preference Rankings
Tong Liu
Xiao Yu
Wenxuan Zhou
Jindong Gu
Volker Tresp
39
0
0
11 Jan 2025
MedCT: A Clinical Terminology Graph for Generative AI Applications in Healthcare
Ye Chen
Dongdong Huang
Haoyun Xu
Cong Fu
Lin Sheng
Qingli Zhou
Yuqiang Shen
Kai Wang
VLM
MedIm
45
0
0
11 Jan 2025
LLMs as Workers in Human-Computational Algorithms? Replicating Crowdsourcing Pipelines with LLMs
Tongshuang Wu
Haiyi Zhu
Maya Albayrak
Alexis Axon
Amanda Bertsch
...
Ying-Jui Tseng
Patricia Vaidos
Zhijin Wu
Wei Wu
Chenyang Yang
88
31
0
10 Jan 2025
Tailored-LLaMA: Optimizing Few-Shot Learning in Pruned LLaMA Models with Task-Specific Prompts
Danyal Aftab
Steven Davy
ALM
49
0
0
10 Jan 2025
Utility-inspired Reward Transformations Improve Reinforcement Learning Training of Language Models
Roberto-Rafael Maura-Rivero
Chirag Nagpal
Roma Patel
Francesco Visin
46
1
0
08 Jan 2025
Segmenting Text and Learning Their Rewards for Improved RLHF in Language Model
Yueqin Yin
Shentao Yang
Yujia Xie
Ziyi Yang
Yuting Sun
Hany Awadalla
Weizhu Chen
Mingyuan Zhou
50
0
0
07 Jan 2025
Improving GenIR Systems Based on User Feedback
Qingyao Ai
Zhicheng Dou
Min Zhang
147
0
0
06 Jan 2025
WizardMath: Empowering Mathematical Reasoning for Large Language Models via Reinforced Evol-Instruct
Haipeng Luo
Qingfeng Sun
Can Xu
Pu Zhao
Jian-Guang Lou
...
Xiubo Geng
Qingwei Lin
Shifeng Chen
Yansong Tang
Dongmei Zhang
OSLM
LRM
110
412
0
03 Jan 2025
Enhancing Preference-based Linear Bandits via Human Response Time
Shen Li
Yuyang Zhang
Zhaolin Ren
Claire Liang
Na Li
J. Shah
34
0
0
03 Jan 2025
Beyond Numeric Awards: In-Context Dueling Bandits with LLM Agents
Fanzeng Xia
Hao Liu
Yisong Yue
Tongxin Li
67
1
0
03 Jan 2025
An Overview and Discussion on Using Large Language Models for Implementation Generation of Solutions to Open-Ended Problems
Hashmath Shaik
Alex Doboli
OffRL
ELM
164
0
0
31 Dec 2024
Disentangling Preference Representation and Text Generation for Efficient Individual Preference Alignment
Jianfei Zhang
Jun Bai
Yangqiu Song
Yanmeng Wang
Rumei Li
Chenghua Lin
Wenge Rong
44
0
0
31 Dec 2024
A Comprehensive Survey of Large Language Models and Multimodal Large Language Models in Medicine
Hanguang Xiao
Feizhong Zhou
X. Liu
Tianqi Liu
Zhipeng Li
Xin Liu
Xiaoxuan Huang
AILaw
LM&MA
LRM
61
18
0
31 Dec 2024
Geometric-Averaged Preference Optimization for Soft Preference Labels
Hiroki Furuta
Kuang-Huei Lee
Shixiang Shane Gu
Y. Matsuo
Aleksandra Faust
Heiga Zen
Izzeddin Gur
58
7
0
31 Dec 2024
From Generalist to Specialist: A Survey of Large Language Models for Chemistry
Yang Han
Ziping Wan
Lu Chen
Kai Yu
Xin Chen
LM&MA
35
1
0
31 Dec 2024
Cannot or Should Not? Automatic Analysis of Refusal Composition in IFT/RLHF Datasets and Refusal Behavior of Black-Box LLMs
Alexander von Recum
Christoph Schnabl
Gabor Hollbeck
Silas Alberti
Philip Blinde
Marvin von Hagen
92
2
0
22 Dec 2024
REFA: Reference Free Alignment for multi-preference optimization
Taneesh Gupta
Rahul Madhavan
Xuchao Zhang
Chetan Bansal
Saravan Rajmohan
91
1
0
20 Dec 2024
Learning to Generate Research Idea with Dynamic Control
Ruochen Li
Liqiang Jing
Chi Han
Jiawei Zhou
Xinya Du
LRM
87
3
0
19 Dec 2024
Energy-Based Preference Model Offers Better Offline Alignment than the Bradley-Terry Preference Model
Yuzhong Hong
Hanshan Zhang
Junwei Bao
Hongfei Jiang
Yang Song
OffRL
79
2
0
18 Dec 2024
Dual Traits in Probabilistic Reasoning of Large Language Models
Shenxiong Li
Huaxia Rui
75
0
0
15 Dec 2024
Efficient Diversity-Preserving Diffusion Alignment via Gradient-Informed GFlowNets
Zhen Liu
Tim Z. Xiao
Weiyang Liu
Yoshua Bengio
Dinghuai Zhang
123
2
0
10 Dec 2024
MVReward: Better Aligning and Evaluating Multi-View Diffusion Models with Human Preferences
Weitao Wang
Haoran Xu
Yuxiao Yang
Zhifang Liu
Jun Meng
Haoqian Wang
EGVM
87
0
0
09 Dec 2024
CPTQuant -- A Novel Mixed Precision Post-Training Quantization Techniques for Large Language Models
Amitash Nanda
Sree Bhargavi Balija
D. Sahoo
MQ
64
0
0
03 Dec 2024
Time-Reversal Provides Unsupervised Feedback to LLMs
Yerram Varun
Rahul Madhavan
Sravanti Addepalli
A. Suggala
Karthikeyan Shanmugam
Prateek Jain
LRM
SyDa
64
0
0
03 Dec 2024
Detecting Memorization in Large Language Models
Eduardo Slonski
78
0
0
02 Dec 2024
VLRewardBench: A Challenging Benchmark for Vision-Language Generative Reward Models
Lei Li
Y. X. Wei
Zhihui Xie
Xuqing Yang
Yifan Song
...
Tianyu Liu
Sujian Li
Bill Yuchen Lin
Lingpeng Kong
Qiang Liu
CoGe
VLM
120
24
0
26 Nov 2024
Learning from Relevant Subgoals in Successful Dialogs using Iterative Training for Task-oriented Dialog Systems
Magdalena Kaiser
P. Ernst
György Szarvas
72
0
0
25 Nov 2024
Self-Generated Critiques Boost Reward Modeling for Language Models
Yue Yu
Zhengxing Chen
Aston Zhang
L Tan
Chenguang Zhu
...
Suchin Gururangan
Chao-Yue Zhang
Melanie Kambadur
Dhruv Mahajan
Rui Hou
LRM
ALM
96
16
0
25 Nov 2024
Automatic Evaluation for Text-to-image Generation: Task-decomposed Framework, Distilled Training, and Meta-evaluation Benchmark
Rong-Cheng Tu
Zi-Ao Ma
Tian Lan
Yuehao Zhao
Heyan Huang
Xian-Ling Mao
MLLM
VLM
EGVM
100
4
0
23 Nov 2024
Drowning in Documents: Consequences of Scaling Reranker Inference
Mathew Jacob
Erik Lindgren
Matei A. Zaharia
Michael Carbin
Omar Khattab
Andrew Drozdov
OffRL
76
4
0
18 Nov 2024
Search, Verify and Feedback: Towards Next Generation Post-training Paradigm of Foundation Models via Verifier Engineering
Xinyan Guan
Yanjiang Liu
Xinyu Lu
Boxi Cao
Xianpei Han
...
Le Sun
Jie Lou
Bowen Yu
Yunfan Lu
Hongyu Lin
ALM
86
2
0
18 Nov 2024
Learning Quantitative Automata Modulo Theories
Eric Hsiung
Swarat Chaudhuri
Joydeep Biswas
26
0
0
15 Nov 2024
Chain of Alignment: Integrating Public Will with Expert Intelligence for Language Model Alignment
Andrew Konya
Aviv Ovadya
K. J. Kevin Feng
Quan Ze Chen
Lisa Schirch
Colin Irwin
Amy X. Zhang
ALM
44
2
0
15 Nov 2024
Approximated Variational Bayesian Inverse Reinforcement Learning for Large Language Model Alignment
Yuang Cai
Yuyu Yuan
Jinsheng Shi
Qinhong Lin
46
0
0
14 Nov 2024
Beyond the Safety Bundle: Auditing the Helpful and Harmless Dataset
Khaoula Chehbouni
Jonathan Colaço-Carr
Yash More
Jackie CK Cheung
G. Farnadi
78
0
0
12 Nov 2024
CoPrompter: User-Centric Evaluation of LLM Instruction Alignment for Improved Prompt Engineering
Ishika Joshi
Simra Shahid
Shreeya Venneti
Manushree Vasu
Yantao Zheng
Yunyao Li
Balaji Krishnamurthy
Gromit Yeuk-Yin Chan
31
3
0
09 Nov 2024
Kwai-STaR: Transform LLMs into State-Transition Reasoners
Xingyu Lu
Yihan Hu
Changyi Liu
Tianke Zhang
Zhenyu Yang
...
Fan Yang
Tingting Gao
Di Zhang
Hai-Tao Zheng
Bin Wen
LRM
37
1
0
07 Nov 2024
From Novice to Expert: LLM Agent Policy Optimization via Step-wise Reinforcement Learning
Zhirui Deng
Zhicheng Dou
Yichen Zhu
Ruibin Xiong
Mang Wang
Xin Wu
44
6
0
06 Nov 2024
Sample-Efficient Alignment for LLMs
Zichen Liu
Changyu Chen
Chao Du
Wee Sun Lee
Min-Bin Lin
36
3
0
03 Nov 2024
Rule Based Rewards for Language Model Safety
Tong Mu
Alec Helyar
Johannes Heidecke
Joshua Achiam
Andrea Vallone
Ian Kivlichan
Molly Lin
Alex Beutel
John Schulman
Lilian Weng
ALM
44
36
0
02 Nov 2024
Token-level Proximal Policy Optimization for Query Generation
Yichen Ouyang
Lu Wang
Fangkai Yang
Pu Zhao
Chenghua Huang
...
Saravan Rajmohan
Weiwei Deng
Dongmei Zhang
Feng Sun
Qi Zhang
OffRL
151
3
0
01 Nov 2024
Active Preference-based Learning for Multi-dimensional Personalization
Minhyeon Oh
Seungjoon Lee
Jungseul Ok
31
1
0
01 Nov 2024
Grounding by Trying: LLMs with Reinforcement Learning-Enhanced Retrieval
Sheryl Hsu
Omar Khattab
Chelsea Finn
Archit Sharma
KELM
RALM
46
6
0
30 Oct 2024
VPO: Leveraging the Number of Votes in Preference Optimization
Jae Hyeon Cho
Minkyung Park
Byung-Jun Lee
22
1
0
30 Oct 2024
PrefPaint: Aligning Image Inpainting Diffusion Model with Human Preference
Kendong Liu
Zhiyu Zhu
Chuanhao Li
Hui Liu
H. Zeng
Junhui Hou
EGVM
43
2
0
29 Oct 2024
$f$-PO: Generalizing Preference Optimization with $f$-divergence Minimization
Jiaqi Han
Mingjian Jiang
Yuxuan Song
J. Leskovec
Stefano Ermon
56
3
0
29 Oct 2024