arXiv:2009.01325
Cited By
Learning to summarize from human feedback
2 September 2020
Nisan Stiennon
Long Ouyang
Jeff Wu
Daniel M. Ziegler
Ryan J. Lowe
Chelsea Voss
Alec Radford
Dario Amodei
Paul Christiano
ALM
Papers citing "Learning to summarize from human feedback" (showing 50 of 1,441)
Vision-Language Models as Success Detectors
Yuqing Du
Ksenia Konyushkova
Misha Denil
A. Raju
Jessica Landon
Felix Hill
Nando de Freitas
Serkan Cabi
MLLM
LRM
91
77
0
13 Mar 2023
ChatGPT Asks, BLIP-2 Answers: Automatic Questioning Towards Enriched Visual Descriptions
Deyao Zhu
Jun Chen
Kilichbek Haydarov
Xiaoqian Shen
Wenxuan Zhang
Mohamed Elhoseiny
MLLM
42
97
0
12 Mar 2023
Rewarding Chatbots for Real-World Engagement with Millions of Users
R. Irvine
D. Boubert
Vyas Raina
Adian Liusie
Ziyi Zhu
...
Valentin Assassi
Christie-Carol Beauchamp
Xiaoding Lu
Thomas Rialan
W. Beauchamp
ALM
30
37
0
10 Mar 2023
Personalisation within bounds: A risk taxonomy and policy framework for the alignment of large language models with personalised feedback
Hannah Rose Kirk
Bertie Vidgen
Paul Röttger
Scott A. Hale
41
100
0
09 Mar 2023
Learning the Legibility of Visual Text Perturbations
D. Seth
Rickard Stureborg
Danish Pruthi
Bhuwan Dhingra
AAML
54
4
0
09 Mar 2023
Visual ChatGPT: Talking, Drawing and Editing with Visual Foundation Models
Chenfei Wu
Sheng-Kai Yin
Weizhen Qi
Xiaodong Wang
Zecheng Tang
Nan Duan
MLLM
LRM
53
614
0
08 Mar 2023
Automatically Auditing Large Language Models via Discrete Optimization
Erik Jones
Anca Dragan
Aditi Raghunathan
Jacob Steinhardt
48
158
0
08 Mar 2023
A Comprehensive Survey of AI-Generated Content (AIGC): A History of Generative AI from GAN to ChatGPT
Yihan Cao
Siyu Li
Yixin Liu
Zhiling Yan
Yutong Dai
Philip S. Yu
Lichao Sun
38
508
0
07 Mar 2023
Zeroth-Order Optimization Meets Human Feedback: Provable Learning via Ranking Oracles
Zhiwei Tang
Dmitry Rybin
Tsung-Hui Chang
ALM
DiffM
39
26
0
07 Mar 2023
Preference Transformer: Modeling Human Preferences using Transformers for RL
Changyeon Kim
Jongjin Park
Jinwoo Shin
Honglak Lee
Pieter Abbeel
Kimin Lee
OffRL
41
62
0
02 Mar 2023
Active Reward Learning from Multiple Teachers
Peter Barnett
Rachel Freedman
Justin Svegliato
Stuart J. Russell
30
14
0
02 Mar 2023
Zero-Shot Cross-Lingual Summarization via Large Language Models
Jiaan Wang
Yunlong Liang
Fandong Meng
Beiqi Zou
Zhixu Li
Jianfeng Qu
Jie Zhou
ELM
29
28
0
28 Feb 2023
A Human-Centered Safe Robot Reinforcement Learning Framework with Interactive Behaviors
Shangding Gu
Alap Kshirsagar
Yali Du
Guang Chen
Jan Peters
Alois C. Knoll
34
14
0
25 Feb 2023
Reward Learning as Doubly Nonparametric Bandits: Optimal Design and Scaling Laws
Kush S. Bhatia
Wenshuo Guo
Jacob Steinhardt
27
0
0
23 Feb 2023
In What Languages are Generative Language Models the Most Formal? Analyzing Formality Distribution across Languages
Asim Ersoy
Gerson Vizcarra
T. Mayeesha
Benjamin Muller
28
2
0
23 Feb 2023
Aligning Text-to-Image Models using Human Feedback
Kimin Lee
Hao Liu
Moonkyung Ryu
Olivia Watkins
Yuqing Du
Craig Boutilier
Pieter Abbeel
Mohammad Ghavamzadeh
S. Gu
EGVM
53
256
0
23 Feb 2023
Not what you've signed up for: Compromising Real-World LLM-Integrated Applications with Indirect Prompt Injection
Kai Greshake
Sahar Abdelnabi
Shailesh Mishra
C. Endres
Thorsten Holz
Mario Fritz
SILM
49
439
0
23 Feb 2023
Language Model Crossover: Variation through Few-Shot Prompting
Elliot Meyerson
M. Nelson
Herbie Bradley
Adam Gaier
Arash Moradi
Amy K. Hoover
Joel Lehman
VLM
45
79
0
23 Feb 2023
Guiding Large Language Models via Directional Stimulus Prompting
Zekun Li
Baolin Peng
Pengcheng He
Michel Galley
Jianfeng Gao
Xi Yan
LLMAG
LRM
LM&Ro
40
95
0
22 Feb 2023
Machine Love
Joel Lehman
28
5
0
18 Feb 2023
Auditing large language models: a three-layered approach
Jakob Mökander
Jonas Schuett
Hannah Rose Kirk
Luciano Floridi
AILaw
MLAU
48
196
0
16 Feb 2023
Tuning computer vision models with task rewards
André Susano Pinto
Alexander Kolesnikov
Yuge Shi
Lucas Beyer
Xiaohua Zhai
VLM
27
40
0
16 Feb 2023
Aligning Language Models with Preferences through f-divergence Minimization
Dongyoung Go
Tomasz Korbak
Germán Kruszewski
Jos Rozen
Nahyeon Ryu
Marc Dymetman
35
70
0
16 Feb 2023
Augmented Language Models: a Survey
Grégoire Mialon
Roberto Dessì
Maria Lomeli
Christoforos Nalmpantis
Ramakanth Pasunuru
...
Jane Dwivedi-Yu
Asli Celikyilmaz
Edouard Grave
Yann LeCun
Thomas Scialom
LRM
KELM
47
368
0
15 Feb 2023
The Capacity for Moral Self-Correction in Large Language Models
Deep Ganguli
Amanda Askell
Nicholas Schiefer
Thomas I. Liao
Kamilė Lukošiūtė
...
Tom B. Brown
C. Olah
Jack Clark
Sam Bowman
Jared Kaplan
LRM
ReLM
45
159
0
15 Feb 2023
Synthesizing Human Gaze Feedback for Improved NLP Performance
Varun Khurana
Yaman Kumar Singla
Nora Hollenstein
R. Kumar
Balaji Krishnamurthy
13
15
0
11 Feb 2023
The Wisdom of Hindsight Makes Language Models Better Instruction Followers
Tianjun Zhang
Fangchen Liu
Justin Wong
Pieter Abbeel
Joseph E. Gonzalez
31
44
0
10 Feb 2023
Chain of Hindsight Aligns Language Models with Feedback
Hao Liu
Carmelo Sferrazza
Pieter Abbeel
ALM
28
117
0
06 Feb 2023
Grounding Large Language Models in Interactive Environments with Online Reinforcement Learning
Thomas Carta
Clément Romac
Thomas Wolf
Sylvain Lamprier
Olivier Sigaud
Pierre-Yves Oudeyer
LM&Ro
LLMAG
25
182
0
06 Feb 2023
Benchmarking Large Language Models for News Summarization
Tianyi Zhang
Faisal Ladhak
Esin Durmus
Percy Liang
Kathleen McKeown
Tatsunori B. Hashimoto
ELM
43
485
0
31 Jan 2023
Direct Preference-based Policy Optimization without Reward Modeling
Gaon An
Junhyeok Lee
Xingdong Zuo
Norio Kosaka
KyungHyun Kim
Hyun Oh Song
OffRL
32
26
0
30 Jan 2023
Truth Machines: Synthesizing Veracity in AI Language Models
Luke Munn
Liam Magee
Vanicka Arora
SyDa
HILM
31
28
0
28 Jan 2023
Large Language Models Are Latent Variable Models: Explaining and Finding Good Demonstrations for In-Context Learning
Xinyi Wang
Wanrong Zhu
Michael Stephen Saxon
Mark Steyvers
William Yang Wang
BDL
56
92
0
27 Jan 2023
Reinforcement Learning from Diverse Human Preferences
Wanqi Xue
Bo An
Shuicheng Yan
Zhongwen Xu
14
21
0
27 Jan 2023
Theoretical Analysis of Offline Imitation With Supplementary Dataset
Ziniu Li
Tian Xu
Y. Yu
Zhixun Luo
OffRL
35
2
0
27 Jan 2023
Principled Reinforcement Learning with Human Feedback from Pairwise or K-wise Comparisons
Banghua Zhu
Jiantao Jiao
Michael I. Jordan
OffRL
42
183
0
26 Jan 2023
Large Language Models as Fiduciaries: A Case Study Toward Robustly Communicating With Artificial Intelligence Through Legal Standards
John J. Nay
ELM
AILaw
29
15
0
24 Jan 2023
How Close is ChatGPT to Human Experts? Comparison Corpus, Evaluation, and Detection
Biyang Guo
Xin Zhang
Ziyuan Wang
Minqi Jiang
Jinran Nie
Yuxuan Ding
Jianwei Yue
Yupeng Wu
DeLMO
ELM
8
584
0
18 Jan 2023
On The Fragility of Learned Reward Functions
Lev McKinney
Yawen Duan
David M. Krueger
Adam Gleave
33
20
0
09 Jan 2023
Iterated Decomposition: Improving Science Q&A by Supervising Reasoning Processes
Justin Reppert
Ben Rachbach
Charlie George
Luke Stebbing
Ju-Seung Byun
Maggie Appleton
Andreas Stuhlmüller
ReLM
LRM
43
17
0
04 Jan 2023
Second Thoughts are Best: Learning to Re-Align With Human Values from Text Edits
Ruibo Liu
Chenyan Jia
Ge Zhang
Ziyu Zhuang
Tony X. Liu
Soroush Vosoughi
99
35
0
01 Jan 2023
Inclusive Artificial Intelligence
Dilip Arumugam
Shi Dong
Benjamin Van Roy
52
1
0
24 Dec 2022
Methodological reflections for AI alignment research using human feedback
Thilo Hagendorff
Sarah Fabi
21
6
0
22 Dec 2022
Critic-Guided Decoding for Controlled Text Generation
Minbeom Kim
Hwanhee Lee
Kang Min Yoo
Joonsuk Park
Hwaran Lee
Kyomin Jung
39
35
0
21 Dec 2022
Generating Multiple-Length Summaries via Reinforcement Learning for Unsupervised Sentence Summarization
Dongmin Hyun
Xiting Wang
Chanyoung Park
Xing Xie
Hwanjo Yu
19
7
0
21 Dec 2022
JASMINE: Arabic GPT Models for Few-Shot Learning
El Moatez Billah Nagoudi
Muhammad Abdul-Mageed
AbdelRahim Elmadany
Alcides Alcoba Inciarte
Md. Tawkat Islam Khondaker
33
7
0
21 Dec 2022
True Detective: A Deep Abductive Reasoning Benchmark Undoable for GPT-3 and Challenging for GPT-4
Maksym Del
Mark Fishel
RALM
ELM
ReLM
LRM
19
15
0
20 Dec 2022
On Improving Summarization Factual Consistency from Natural Language Feedback
Yixin Liu
Budhaditya Deb
Milagro Teruel
Aaron L Halfaker
Dragomir R. Radev
Ahmed Hassan Awadallah
HILM
29
35
0
20 Dec 2022
Human-in-the-loop Abstractive Dialogue Summarization
Jiaao Chen
Mohan Dodda
Diyi Yang
28
10
0
19 Dec 2022
Evaluating Human-Language Model Interaction
Mina Lee
Megha Srivastava
Amelia Hardy
John Thickstun
Esin Durmus
...
Hancheng Cao
Tony Lee
Rishi Bommasani
Michael S. Bernstein
Percy Liang
LM&MA
ALM
58
100
0
19 Dec 2022