Learning to summarize from human feedback
arXiv: 2009.01325
2 September 2020
Nisan Stiennon, Long Ouyang, Jeff Wu, Daniel M. Ziegler, Ryan J. Lowe, Chelsea Voss, Alec Radford, Dario Amodei, Paul Christiano
[ALM]

Papers citing "Learning to summarize from human feedback"

Showing 50 of 1,548 citing papers:
Impact of Voice Fidelity on Decision Making: A Potential Dark Pattern? Mateusz Dubiel, Anastasia Sergeeva, Luis A. Leiva. 10 Feb 2024.
Corruption Robust Offline Reinforcement Learning with Human Feedback. Debmalya Mandal, Andi Nika, Parameswaran Kamalaruban, Adish Singla, Goran Radanović. 09 Feb 2024. [OffRL]
Feedback Loops With Language Models Drive In-Context Reward Hacking. Alexander Pan, Erik Jones, Meena Jagadeesan, Jacob Steinhardt. 09 Feb 2024. [KELM]
Aya Dataset: An Open-Access Collection for Multilingual Instruction Tuning. Shivalika Singh, Freddie Vargus, Daniel D'souza, Börje F. Karlsson, Abinaya Mahendiran, ..., Max Bartolo, Julia Kreutzer, Ahmet Üstün, Marzieh Fadaee, Sara Hooker. 09 Feb 2024.
Scalable Interactive Machine Learning for Future Command and Control. Anna Madison, Ellen R. Novoseller, Vinicius G. Goecks, Benjamin T. Files, Nicholas R. Waytowich, Alfred Yu, Vernon J. Lawhern, Steven Thurman, Christopher Kelshaw, Kaleb McDowell. 09 Feb 2024.
V-STaR: Training Verifiers for Self-Taught Reasoners. Arian Hosseini, Xingdi Yuan, Nikolay Malkin, Rameswar Panda, Alessandro Sordoni, Rishabh Agarwal. 09 Feb 2024. [ReLM, LRM]
Training Large Language Models for Reasoning through Reverse Curriculum Reinforcement Learning. Zhiheng Xi, Wenxiang Chen, Boyang Hong, Senjie Jin, Rui Zheng, ..., Xinbo Zhang, Peng Sun, Tao Gui, Qi Zhang, Xuanjing Huang. 08 Feb 2024. [LRM]
Generalized Preference Optimization: A Unified Approach to Offline Alignment. Yunhao Tang, Z. Guo, Zeyu Zheng, Daniele Calandriello, Rémi Munos, Mark Rowland, Pierre Harvey Richemond, Michal Valko, Bernardo Avila-Pires, Bilal Piot. 08 Feb 2024.
A Survey on Safe Multi-Modal Learning System. Tianyi Zhao, Liangliang Zhang, Yao Ma, Lu Cheng. 08 Feb 2024.
Pedagogical Alignment of Large Language Models. Shashank Sonkar, Kangqi Ni, Sapana Chaudhary, Richard G. Baraniuk. 07 Feb 2024. [AI4Ed]
Direct Language Model Alignment from Online AI Feedback. Shangmin Guo, Biao Zhang, Tianlin Liu, Tianqi Liu, Misha Khalman, ..., Thomas Mesnard, Yao-Min Zhao, Bilal Piot, Johan Ferret, Mathieu Blondel. 07 Feb 2024. [ALM]
TransLLaMa: LLM-based Simultaneous Translation System. Roman Koshkin, Katsuhito Sudoh, Satoshi Nakamura. 07 Feb 2024.
MusicRL: Aligning Music Generation to Human Preferences. Geoffrey Cideron, Sertan Girgin, Mauro Verzetti, Damien Vincent, Matej Kastelic, ..., Olivier Pietquin, Matthieu Geist, Léonard Hussenot, Neil Zeghidour, A. Agostinelli. 06 Feb 2024.
Harnessing the Plug-and-Play Controller by Prompting. Hao Wang, Lei Sha. 06 Feb 2024.
Personalized Language Modeling from Personalized Human Feedback. Xinyu Li, Zachary C. Lipton, Liu Leqi. 06 Feb 2024. [ALM]
Learning to Generate Explainable Stock Predictions using Self-Reflective Large Language Models. Kelvin J.L. Koa, Yunshan Ma, Ritchie Ng, Tat-Seng Chua. 06 Feb 2024. [AIFin, LLMAG]
Toward Human-AI Alignment in Large-Scale Multi-Player Games. Sugandha Sharma, Guy Davidson, Khimya Khetarpal, Anssi Kanervisto, Udit Arora, Katja Hofmann, Ida Momennejad. 05 Feb 2024.
Nevermind: Instruction Override and Moderation in Large Language Models. Edward Kim. 05 Feb 2024. [ALM]
Preference-Conditioned Language-Guided Abstraction. Andi Peng, Andreea Bobu, Belinda Z. Li, T. Sumers, Ilia Sucholutsky, Nishanth Kumar, Thomas Griffiths, Julie A. Shah. 05 Feb 2024.
Decoding-time Realignment of Language Models. Tianlin Liu, Shangmin Guo, Leonardo Bianco, Daniele Calandriello, Quentin Berthet, Felipe Llinares-López, Jessica Hoffmann, Lucas Dixon, Michal Valko, Mathieu Blondel. 05 Feb 2024. [AI4CE]
IllusionX: An LLM-powered mixed reality personal companion. Ramez Yousri, Zeyad Essam, Yehia Kareem, Youstina Sherief, Sherry Gamil, Soha Safwat. 04 Feb 2024.
BRAIn: Bayesian Reward-conditioned Amortized Inference for natural language generation from feedback. Gaurav Pandey, Yatin Nandwani, Tahira Naseem, Mayank Mishra, Guangxuan Xu, Dinesh Raghu, Sachindra Joshi, Asim Munawar, Ramón Fernández Astudillo. 04 Feb 2024. [BDL]
Diversity Measurement and Subset Selection for Instruction Tuning Datasets. Peiqi Wang, Songlin Yang, Zhen Guo, Matt Stallone, Yoon Kim, Polina Golland, Yikang Shen. 04 Feb 2024.
Preference Poisoning Attacks on Reward Model Learning. Junlin Wu, Jiong Wang, Chaowei Xiao, Chenguang Wang, Ning Zhang, Yevgeniy Vorobeychik. 02 Feb 2024. [AAML]
The RL/LLM Taxonomy Tree: Reviewing Synergies Between Reinforcement Learning and Large Language Models. M. Pternea, Prerna Singh, Abir Chakraborty, Y. Oruganti, M. Milletarí, Sayli Bapat, Kebei Jiang. 02 Feb 2024. [OffRL]
Foundation Model Sherpas: Guiding Foundation Models through Knowledge and Reasoning. D. Bhattacharjya, Junkyu Lee, Don Joven Agravante, Balaji Ganesan, Radu Marinescu. 02 Feb 2024. [LLMAG]
Rethinking the Role of Proxy Rewards in Language Model Alignment. Sungdong Kim, Minjoon Seo. 02 Feb 2024. [SyDa, ALM]
KTO: Model Alignment as Prospect Theoretic Optimization. Kawin Ethayarajh, Winnie Xu, Niklas Muennighoff, Dan Jurafsky, Douwe Kiela. 02 Feb 2024.
DTS-SQL: Decomposed Text-to-SQL with Small Large Language Models. Mohammadreza Pourreza, Davood Rafiei. 02 Feb 2024.
Plan-Grounded Large Language Models for Dual Goal Conversational Settings. Diogo Glória-Silva, Rafael Ferreira, Diogo Tavares, David Semedo, João Magalhães. 01 Feb 2024. [LLMAG]
Towards Efficient Exact Optimization of Language Model Alignment. Haozhe Ji, Cheng Lu, Yilin Niu, Pei Ke, Hongning Wang, Jun Zhu, Jie Tang, Minlie Huang. 01 Feb 2024.
Dense Reward for Free in Reinforcement Learning from Human Feedback. Alex J. Chan, Hao Sun, Samuel Holt, M. van der Schaar. 01 Feb 2024.
Transforming and Combining Rewards for Aligning Large Language Models. Zihao Wang, Chirag Nagpal, Jonathan Berant, Jacob Eisenstein, Alex D'Amour, Oluwasanmi Koyejo, Victor Veitch. 01 Feb 2024.
Efficient Exploration for LLMs. Vikranth Dwaracherla, S. Asghari, Botao Hao, Benjamin Van Roy. 01 Feb 2024. [LLMAG]
Iterative Data Smoothing: Mitigating Reward Overfitting and Overoptimization in RLHF. Banghua Zhu, Michael I. Jordan, Jiantao Jiao. 29 Jan 2024.
Mapping the Design Space of Teachable Social Media Feed Experiences. K. J. Kevin Feng, Xander Koo, Lawrence Tan, Amy Bruckman, David W. McDonald, Amy X. Zhang. 25 Jan 2024.
Towards Consistent Natural-Language Explanations via Explanation-Consistency Finetuning. Yanda Chen, Chandan Singh, Xiaodong Liu, Simiao Zuo, Bin Yu, He He, Jianfeng Gao. 25 Jan 2024. [LRM]
Instruction Fine-Tuning: Does Prompt Loss Matter? Mathew Huerta-Enochian, Seung Yong Ko. 24 Jan 2024.
Can AI Assistants Know What They Don't Know? Qinyuan Cheng, Tianxiang Sun, Xiangyang Liu, Wenwei Zhang, Zhangyue Yin, Shimin Li, Linyang Li, Zhengfu He, Kai Chen, Xipeng Qiu. 24 Jan 2024.
ARGS: Alignment as Reward-Guided Search. Maxim Khanov, Jirayu Burapacheep, Yixuan Li. 23 Jan 2024.
Improving Machine Translation with Human Feedback: An Exploration of Quality Estimation as a Reward Model. Zhiwei He, Xing Wang, Wenxiang Jiao, Zhuosheng Zhang, Rui Wang, Shuming Shi, Zhaopeng Tu. 23 Jan 2024. [ALM]
WARM: On the Benefits of Weight Averaged Reward Models. Alexandre Ramé, Nino Vieillard, Léonard Hussenot, Robert Dadashi, Geoffrey Cideron, Olivier Bachem, Johan Ferret. 22 Jan 2024.
Mastering Text-to-Image Diffusion: Recaptioning, Planning, and Generating with Multimodal LLMs. Ling Yang, Zhaochen Yu, Chenlin Meng, Minkai Xu, Stefano Ermon, Tengjiao Wang. 22 Jan 2024. [CoGe, DiffM]
Linear Alignment: A Closed-form Solution for Aligning Human Preferences without Tuning and Feedback. Songyang Gao, Qiming Ge, Wei Shen, Shihan Dou, Junjie Ye, ..., Yicheng Zou, Zhi Chen, Hang Yan, Qi Zhang, Dahua Lin. 21 Jan 2024.
Reinforcement learning for question answering in programming domain using public community scoring as a human feedback. Alexey Gorbatovski, Sergey Kovalchuk. 19 Jan 2024.
Self-Rewarding Language Models. Weizhe Yuan, Richard Yuanzhe Pang, Kyunghyun Cho, Xian Li, Sainbayar Sukhbaatar, Jing Xu, Jason Weston. 18 Jan 2024. [ReLM, SyDa, ALM, LRM]
Tuning Language Models by Proxy. Alisa Liu, Xiaochuang Han, Yizhong Wang, Yulia Tsvetkov, Yejin Choi, Noah A. Smith. 16 Jan 2024. [ALM]
EmoLLMs: A Series of Emotional Large Language Models and Annotation Tools for Comprehensive Affective Analysis. Zhiwei Liu, Kailai Yang, Tianlin Zhang, Qianqian Xie, Sophia Ananiadou. 16 Jan 2024.
Beyond Sparse Rewards: Enhancing Reinforcement Learning with Language Model Critique in Text Generation. Meng Cao, Lei Shu, Lei Yu, Yun Zhu, Nevan Wichers, Yinxiao Liu, Lei Meng. 14 Jan 2024. [OffRL, ALM]
Small Language Model Can Self-correct. Haixia Han, Jiaqing Liang, Jie Shi, Qi He, Yanghua Xiao. 14 Jan 2024. [LRM, SyDa, ReLM, KELM]