Training Language Models with Language Feedback at Scale

28 March 2023
Jérémy Scheurer, Jon Ander Campos, Tomasz Korbak, Jun Shern Chan, Angelica Chen, Kyunghyun Cho, Ethan Perez
Tags: ALM

Papers citing "Training Language Models with Language Feedback at Scale"

50 / 102 papers shown

VideoCon: Robust Video-Language Alignment via Contrast Captions
Hritik Bansal, Yonatan Bitton, Idan Szpektor, Kai-Wei Chang, Aditya Grover
40 · 14 · 0 · 15 Nov 2023

A Closer Look at the Self-Verification Abilities of Large Language Models in Logical Reasoning
Ruixin Hong, Hongming Zhang, Xinyu Pang, Dong Yu, Changshui Zhang
Tags: LRM
52 · 23 · 0 · 14 Nov 2023

First Tragedy, then Parse: History Repeats Itself in the New Era of Large Language Models
Naomi Saphra, Eve Fleisig, Kyunghyun Cho, Adam Lopez
Tags: LRM
27 · 8 · 0 · 08 Nov 2023

This is not a Dataset: A Large Negation Benchmark to Challenge Large Language Models
Iker García-Ferrero, Begoña Altuna, J. Álvez, Itziar Gonzalez-Dios, German Rigau
20 · 15 · 0 · 24 Oct 2023

Teaching Language Models to Self-Improve through Interactive Demonstrations
Xiao Yu, Baolin Peng, Michel Galley, Jianfeng Gao, Zhou Yu
Tags: LRM, ReLM
35 · 19 · 0 · 20 Oct 2023

Vision-Language Models are Zero-Shot Reward Models for Reinforcement Learning
Juan Rocamonde, Victoriano Montesinos, Elvis Nava, Ethan Perez, David Lindner
Tags: VLM
33 · 76 · 0 · 19 Oct 2023

Understanding the Effects of RLHF on LLM Generalisation and Diversity
Robert Kirk, Ishita Mediratta, Christoforos Nalmpantis, Jelena Luketina, Eric Hambro, Edward Grefenstette, Roberta Raileanu
Tags: AI4CE, ALM
115 · 122 · 0 · 10 Oct 2023

JsonTuning: Towards Generalizable, Robust, and Controllable Instruction Tuning
Chang Gao, Wenxuan Zhang, Guizhen Chen, Wai Lam
55 · 5 · 0 · 04 Oct 2023

Reward Model Ensembles Help Mitigate Overoptimization
Thomas Coste, Usman Anwar, Robert Kirk, David M. Krueger
Tags: NoLa, ALM
25 · 116 · 0 · 04 Oct 2023

SELF: Self-Evolution with Language Feedback
Jianqiao Lu, Wanjun Zhong, Wenyong Huang, Yufei Wang, Qi Zhu, ..., Weichao Wang, Xingshan Zeng, Lifeng Shang, Xin Jiang, Qun Liu
Tags: LRM, SyDa
29 · 6 · 0 · 01 Oct 2023

Large Language Model Alignment: A Survey
Tianhao Shen, Renren Jin, Yufei Huang, Chuang Liu, Weilong Dong, Zishan Guo, Xinwei Wu, Yan Liu, Deyi Xiong
Tags: LM&MA
19 · 176 · 0 · 26 Sep 2023

PLMM: Personal Large Language Models on Mobile Devices
Yuanhao Gong
21 · 0 · 0 · 26 Sep 2023

ICLEF: In-Context Learning with Expert Feedback for Explainable Style Transfer
Arkadiy Saakyan, Smaranda Muresan
23 · 3 · 0 · 15 Sep 2023

Gradient Domain Diffusion Models for Image Synthesis
Yuanhao Gong
Tags: DiffM
17 · 5 · 0 · 05 Sep 2023

Peering Through Preferences: Unraveling Feedback Acquisition for Aligning Large Language Models
Hritik Bansal, John Dang, Aditya Grover
Tags: ALM
35 · 20 · 0 · 30 Aug 2023

Large Language Models Should Ask Clarifying Questions to Increase Confidence in Generated Code
Jiexi Wu
26 · 3 · 0 · 25 Aug 2023

From Instructions to Intrinsic Human Values -- A Survey of Alignment Goals for Big Models
Jing Yao, Xiaoyuan Yi, Xiting Wang, Jindong Wang, Xing Xie
Tags: ALM
27 · 42 · 0 · 23 Aug 2023

TSSR: A Truncated and Signed Square Root Activation Function for Neural Networks
Yuanhao Gong
19 · 1 · 0 · 09 Aug 2023

Automatically Correcting Large Language Models: Surveying the landscape of diverse self-correction strategies
Liangming Pan, Michael Stephen Saxon, Wenda Xu, Deepak Nathani, Xinyi Wang, William Yang Wang
Tags: KELM, LRM
47 · 201 · 0 · 06 Aug 2023

STL: A Signed and Truncated Logarithm Activation Function for Neural Networks
Yuanhao Gong
Tags: LLMSV
16 · 0 · 0 · 31 Jul 2023

Open Problems and Fundamental Limitations of Reinforcement Learning from Human Feedback
Stephen Casper, Xander Davies, Claudia Shi, T. Gilbert, Jérémy Scheurer, ..., Erdem Biyik, Anca Dragan, David M. Krueger, Dorsa Sadigh, Dylan Hadfield-Menell
Tags: ALM, OffRL
47 · 473 · 0 · 27 Jul 2023

Multilevel Large Language Models for Everyone
Yuanhao Gong
19 · 5 · 0 · 25 Jul 2023

Dynamic Large Language Models on Blockchains
Yuanhao Gong
22 · 11 · 0 · 20 Jul 2023

System-Level Natural Language Feedback
Weizhe Yuan, Kyunghyun Cho, Jason Weston
38 · 5 · 0 · 23 Jun 2023

Learning to Generate Better Than Your LLM
Jonathan D. Chang, Kianté Brantley, Rajkumar Ramamurthy, Dipendra Kumar Misra, Wen Sun
19 · 41 · 0 · 20 Jun 2023

Rewarded soups: towards Pareto-optimal alignment by interpolating weights fine-tuned on diverse rewards
Alexandre Ramé, Guillaume Couairon, Mustafa Shukor, Corentin Dancette, Jean-Baptiste Gaya, Laure Soulier, Matthieu Cord
Tags: MoMe
35 · 136 · 0 · 07 Jun 2023

Fine-Grained Human Feedback Gives Better Rewards for Language Model Training
Zeqiu Wu, Yushi Hu, Weijia Shi, Nouha Dziri, Alane Suhr, Prithviraj Ammanabrolu, Noah A. Smith, Mari Ostendorf, Hannaneh Hajishirzi
Tags: ALM
30 · 304 · 0 · 02 Jun 2023

Self-Verification Improves Few-Shot Clinical Information Extraction
Zelalem Gero, Chandan Singh, Hao Cheng, Tristan Naumann, Michel Galley, Jianfeng Gao, Hoifung Poon
40 · 53 · 0 · 30 May 2023

Training Socially Aligned Language Models on Simulated Social Interactions
Ruibo Liu, Ruixin Yang, Chenyan Jia, Ge Zhang, Denny Zhou, Andrew M. Dai, Diyi Yang, Soroush Vosoughi
Tags: ALM
37 · 45 · 0 · 26 May 2023

Investigating Table-to-Text Generation Capabilities of LLMs in Real-World Information Seeking Scenarios
Yilun Zhao, Haowei Zhang, Shengyun Si, Linyong Nan, Xiangru Tang, Arman Cohan
Tags: LMTD
24 · 12 · 0 · 24 May 2023

Improving Language Models via Plug-and-Play Retrieval Feedback
W. Yu, Zhihan Zhang, Zhenwen Liang, Meng Jiang, Ashish Sabharwal
Tags: KELM, LRM
28 · 71 · 0 · 23 May 2023

Learning from Mistakes via Cooperative Study Assistant for Large Language Models
Danqing Wang, Lei Li
32 · 6 · 0 · 23 May 2023

Goal-Driven Explainable Clustering via Language Descriptions
Zihan Wang, Jingbo Shang, Ruiqi Zhong
30 · 35 · 0 · 23 May 2023

Aligning Large Language Models through Synthetic Feedback
Sungdong Kim, Sanghwan Bae, Jamin Shin, Soyoung Kang, Donghyun Kwak, Kang Min Yoo, Minjoon Seo
Tags: ALM, SyDa
81 · 67 · 0 · 23 May 2023

AlpacaFarm: A Simulation Framework for Methods that Learn from Human Feedback
Yann Dubois, Xuechen Li, Rohan Taori, Tianyi Zhang, Ishaan Gulrajani, Jimmy Ba, Carlos Guestrin, Percy Liang, Tatsunori B. Hashimoto
Tags: ALM
45 · 542 · 0 · 22 May 2023

Continually Improving Extractive QA via Human Feedback
Ge Gao, Hung-Ting Chen, Yoav Artzi, Eunsol Choi
26 · 12 · 0 · 21 May 2023

Bridging the Gap: A Survey on Integrating (Human) Feedback for Natural Language Generation
Patrick Fernandes, Aman Madaan, Emmy Liu, António Farinhas, Pedro Henrique Martins, ..., José G. C. de Souza, Shuyan Zhou, Tongshuang Wu, Graham Neubig, André F. T. Martins
Tags: ALM
117 · 56 · 0 · 01 May 2023

RAFT: Reward rAnked FineTuning for Generative Foundation Model Alignment
Hanze Dong, Wei Xiong, Deepanshu Goyal, Yihan Zhang, Winnie Chow, Rui Pan, Shizhe Diao, Jipeng Zhang, Kashun Shum, Tong Zhang
Tags: ALM
18 · 404 · 0 · 13 Apr 2023

Improving Code Generation by Training with Natural Language Feedback
Angelica Chen, Jérémy Scheurer, Tomasz Korbak, Jon Ander Campos, Jun Shern Chan, Samuel R. Bowman, Kyunghyun Cho, Ethan Perez
Tags: SyDa, ALM, AI4CE
36 · 76 · 0 · 28 Mar 2023

Pretraining Language Models with Human Preferences
Tomasz Korbak, Kejian Shi, Angelica Chen, Rasika Bhalerao, C. L. Buckley, Jason Phang, Sam Bowman, Ethan Perez
Tags: ALM, SyDa
36 · 207 · 0 · 16 Feb 2023

When Life Gives You Lemons, Make Cherryade: Converting Feedback from Bad Responses into Good Labels
Weiyan Shi, Emily Dinan, Kurt Shuster, Jason Weston, Jing Xu
52 · 19 · 0 · 28 Oct 2022

Training language models to follow instructions with human feedback
Long Ouyang, Jeff Wu, Xu Jiang, Diogo Almeida, Carroll L. Wainwright, ..., Amanda Askell, Peter Welinder, Paul Christiano, Jan Leike, Ryan J. Lowe
Tags: OSLM, ALM
328 · 11,953 · 0 · 04 Mar 2022

Chain-of-Thought Prompting Elicits Reasoning in Large Language Models
Jason W. Wei, Xuezhi Wang, Dale Schuurmans, Maarten Bosma, Brian Ichter, F. Xia, Ed H. Chi, Quoc Le, Denny Zhou
Tags: LM&Ro, LRM, AI4CE, ReLM
389 · 8,495 · 0 · 28 Jan 2022

Text and Code Embeddings by Contrastive Pre-Training
Arvind Neelakantan, Tao Xu, Raul Puri, Alec Radford, Jesse Michael Han, ..., Tabarak Khan, Toki Sherbakov, Joanne Jang, Peter Welinder, Lilian Weng
Tags: SSL, AI4TS
232 · 422 · 0 · 24 Jan 2022

Fantastically Ordered Prompts and Where to Find Them: Overcoming Few-Shot Prompt Order Sensitivity
Yao Lu, Max Bartolo, Alastair Moore, Sebastian Riedel, Pontus Stenetorp
Tags: AILaw, LRM
279 · 1,124 · 0 · 18 Apr 2021

Interactive Learning from Activity Description
Khanh Nguyen, Dipendra Kumar Misra, Robert Schapire, Miroslav Dudík, Patrick Shafto
47 · 34 · 0 · 13 Feb 2021

Measuring Association Between Labels and Free-Text Rationales
Sarah Wiegreffe, Ana Marasović, Noah A. Smith
282 · 170 · 0 · 24 Oct 2020

Stanza: A Python Natural Language Processing Toolkit for Many Human Languages
Peng Qi, Yuhao Zhang, Yuhui Zhang, Jason Bolton, Christopher D. Manning
Tags: AI4TS
213 · 1,654 · 0 · 16 Mar 2020

Fine-Tuning Language Models from Human Preferences
Daniel M. Ziegler, Nisan Stiennon, Jeff Wu, Tom B. Brown, Alec Radford, Dario Amodei, Paul Christiano, G. Irving
Tags: ALM
286 · 1,595 · 0 · 18 Sep 2019

e-SNLI: Natural Language Inference with Natural Language Explanations
Oana-Maria Camburu, Tim Rocktaschel, Thomas Lukasiewicz, Phil Blunsom
Tags: LRM
260 · 620 · 0 · 04 Dec 2018