Training a Helpful and Harmless Assistant with Reinforcement Learning from Human Feedback (arXiv:2204.05862)

12 April 2022
Yuntao Bai, Andy Jones, Kamal Ndousse, Amanda Askell, Anna Chen, Nova DasSarma, Dawn Drain, Stanislav Fort, Deep Ganguli, Tom Henighan, Nicholas Joseph, Saurav Kadavath, Jackson Kernion, Tom Conerly, Sheer El-Showk, Nelson Elhage, Zac Hatfield-Dodds, Danny Hernandez, Tristan Hume, Scott R. Johnston, Shauna Kravec, Liane Lovitt, Neel Nanda, Catherine Olsson, Dario Amodei, Tom B. Brown, Jack Clark, Sam McCandlish, Chris Olah, Benjamin Mann, Jared Kaplan

Papers citing "Training a Helpful and Harmless Assistant with Reinforcement Learning from Human Feedback"

Showing 50 of 655 citing papers.

Transforming and Combining Rewards for Aligning Large Language Models
Zihao Wang, Chirag Nagpal, Jonathan Berant, Jacob Eisenstein, Alex D'Amour, Oluwasanmi Koyejo, Victor Veitch
01 Feb 2024

The Information of Large Language Model Geometry
Zhiquan Tan, Chenghai Li, Weiran Huang
01 Feb 2024

Tradeoffs Between Alignment and Helpfulness in Language Models with Steering Methods
Yotam Wolf, Noam Wies, Dorin Shteyman, Binyamin Rothberg, Yoav Levine, Amnon Shashua
Communities: LLMSV
29 Jan 2024

YODA: Teacher-Student Progressive Learning for Language Models
Jianqiao Lu, Wanjun Zhong, Yufei Wang, Zhijiang Guo, Qi Zhu, ..., Baojun Wang, Yasheng Wang, Lifeng Shang, Xin Jiang, Qun Liu
Communities: LRM
28 Jan 2024

ARGS: Alignment as Reward-Guided Search
Maxim Khanov, Jirayu Burapacheep, Yixuan Li
23 Jan 2024

Improving Machine Translation with Human Feedback: An Exploration of Quality Estimation as a Reward Model
Zhiwei He, Xing Wang, Wenxiang Jiao, Zhuosheng Zhang, Rui Wang, Shuming Shi, Zhaopeng Tu
Communities: ALM
23 Jan 2024

MLLM-Tool: A Multimodal Large Language Model For Tool Agent Learning
Chenyu Wang, Weixin Luo, Qianyu Chen, Haonan Mai, Jindi Guo, Sixun Dong, Xiaohua Xuan
Communities: MLLM, LLMAG
19 Jan 2024

Self-Rewarding Language Models
Weizhe Yuan, Richard Yuanzhe Pang, Kyunghyun Cho, Xian Li, Sainbayar Sukhbaatar, Jing Xu, Jason Weston
Communities: ReLM, SyDa, ALM, LRM
18 Jan 2024

Large Language Models are Null-Shot Learners
Pittawat Taveekitworachai, Febri Abdullah, Ruck Thawonmas
Communities: LRM
16 Jan 2024

Contrastive Perplexity for Controlled Generation: An Application in Detoxifying Large Language Models
Tassilo Klein, Moin Nabi
16 Jan 2024

Beyond Sparse Rewards: Enhancing Reinforcement Learning with Language Model Critique in Text Generation
Meng Cao, Lei Shu, Lei Yu, Yun Zhu, Nevan Wichers, Yinxiao Liu, Lei Meng
Communities: OffRL, ALM
14 Jan 2024

Theoretical guarantees on the best-of-n alignment policy
Ahmad Beirami, Alekh Agarwal, Jonathan Berant, Alex D'Amour, Jacob Eisenstein, Chirag Nagpal, Ananda Theertha Suresh
03 Jan 2024

Bypassing the Safety Training of Open-Source LLMs with Priming Attacks
Jason Vega, Isha Chaudhary, Changming Xu, Gagandeep Singh
Communities: AAML
19 Dec 2023

An Invitation to Deep Reinforcement Learning
Bernhard Jaeger, Andreas Geiger
Communities: OffRL, OOD
13 Dec 2023

Dr. Jekyll and Mr. Hyde: Two Faces of LLMs
Matteo Gioele Collu, Tom Janssen-Groesbeek, Stefanos Koffas, Mauro Conti, Stjepan Picek
06 Dec 2023

ULMA: Unified Language Model Alignment with Human Demonstration and Point-wise Preference
Tianchi Cai, Xierui Song, Jiyan Jiang, Fei Teng, Jinjie Gu, Guannan Zhang
Communities: ALM
05 Dec 2023

CDEval: A Benchmark for Measuring the Cultural Dimensions of Large Language Models
Yuhang Wang, Yanxu Zhu, Chao Kong, Shuyu Wei, Xiaoyuan Yi, Xing Xie, Jitao Sang
Communities: ALM, VLM, ELM
28 Nov 2023

Using Human Feedback to Fine-tune Diffusion Models without Any Reward Model
Kai Yang, Jian Tao, Jiafei Lyu, Chunjiang Ge, Jiaxin Chen, Qimai Li, Weihan Shen, Xiaolong Zhu, Xiu Li
Communities: EGVM
22 Nov 2023

LIMIT: Less Is More for Instruction Tuning Across Evaluation Paradigms
Aditi Jha, Sam Havens, Jeremey Dohmann, Alex Trott, Jacob P. Portes
Communities: ALM
22 Nov 2023

RLHFPoison: Reward Poisoning Attack for Reinforcement Learning with Human Feedback in Large Language Models
Jiong Wang, Junlin Wu, Muhao Chen, Yevgeniy Vorobeychik, Chaowei Xiao
Communities: AAML
16 Nov 2023

Predicting Text Preference Via Structured Comparative Reasoning
Jing Nathan Yan, Tianqi Liu, Justin T Chiu, Jiaming Shen, Zhen Qin, ..., Charumathi Lakshmanan, Y. Kurzion, Alexander M. Rush, Jialu Liu, Michael Bendersky
Communities: LRM
14 Nov 2023

FigStep: Jailbreaking Large Vision-Language Models via Typographic Visual Prompts
Yichen Gong, Delong Ran, Jinyuan Liu, Conglei Wang, Tianshuo Cong, Anyu Wang, Sisi Duan, Xiaoyun Wang
Communities: MLLM
09 Nov 2023

GRASP: A Disagreement Analysis Framework to Assess Group Associations in Perspectives
Vinodkumar Prabhakaran, Christopher Homan, Lora Aroyo, Aida Mostafazadeh Davani, Alicia Parrish, Alex S. Taylor, Mark Díaz, Ding Wang, Greg Serapio-García
09 Nov 2023

Unveiling Safety Vulnerabilities of Large Language Models
George Kour, Marcel Zalmanovici, Naama Zwerdling, Esther Goldbraich, Ora Nova Fandina, Ateret Anaby-Tavor, Orna Raz, Eitan Farchi
Communities: AAML
07 Nov 2023

Successor Features for Efficient Multisubject Controlled Text Generation
Meng Cao, Mehdi Fatemi, Jackie Chi Kit Cheung, Samira Shabanian
Communities: BDL
03 Nov 2023

Leveraging Large Language Models for Collective Decision-Making
Marios Papachristou, Longqi Yang, Chin-Chia Hsu
Communities: LLMAG
03 Nov 2023

The Impact of Preference Agreement in Reinforcement Learning from Human Feedback: A Case Study in Summarization
Sian Gooding, Hassan Mansoor
02 Nov 2023

Leveraging Word Guessing Games to Assess the Intelligence of Large Language Models
Tian Liang, Zhiwei He, Jen-tse Huang, Wenxuan Wang, Wenxiang Jiao, Rui Wang, Yujiu Yang, Zhaopeng Tu, Shuming Shi, Xing Wang
Communities: LLMAG
31 Oct 2023

Automatic Evaluation of Generative Models with Instruction Tuning
Shuhaib Mehri, Vered Shwartz
Communities: ELM, ALM
30 Oct 2023

MoCa: Measuring Human-Language Model Alignment on Causal and Moral Judgment Tasks
Allen Nie, Yuhui Zhang, Atharva Amdekar, Chris Piech, Tatsunori Hashimoto, Tobias Gerstenberg
30 Oct 2023

Improving Diversity of Demographic Representation in Large Language Models via Collective-Critiques and Self-Voting
Preethi Lahoti, Nicholas Blumm, Xiao Ma, Raghavendra Kotikalapudi, Sahitya Potluri, ..., Hansa Srinivasan, Ben Packer, Ahmad Beirami, Alex Beutel, Jilin Chen
25 Oct 2023

Instruct and Extract: Instruction Tuning for On-Demand Information Extraction
Yizhu Jiao, Ming Zhong, Sha Li, Ruining Zhao, Siru Ouyang, Heng Ji, Jiawei Han
24 Oct 2023

Did the Neurons Read your Book? Document-level Membership Inference for Large Language Models
Matthieu Meeus, Shubham Jain, Marek Rei, Yves-Alexandre de Montjoye
Communities: MIALM
23 Oct 2023

Active teacher selection for reinforcement learning from human feedback
Rachel Freedman, Justin Svegliato, Kyle H. Wray, Stuart J. Russell
23 Oct 2023

Towards Understanding Sycophancy in Language Models
Mrinank Sharma, Meg Tong, Tomasz Korbak, David Duvenaud, Amanda Askell, ..., Oliver Rausch, Nicholas Schiefer, Da Yan, Miranda Zhang, Ethan Perez
20 Oct 2023

Reliable Academic Conference Question Answering: A Study Based on Large Language Model
Zhiwei Huang, Long Jin, Junjie Wang, Mingchen Tu, Yin Hua, Zhiqiang Liu, Jiawei Meng, Hua-zeng Chen, Wen Zhang
19 Oct 2023

Group Preference Optimization: Few-Shot Alignment of Large Language Models
Siyan Zhao, John Dang, Aditya Grover
17 Oct 2023

Seeking Neural Nuggets: Knowledge Transfer in Large Language Models from a Parametric Perspective
Ming Zhong, Chenxin An, Weizhu Chen, Jiawei Han, Pengcheng He
17 Oct 2023

Compositional preference models for aligning LMs
Dongyoung Go, Tomasz Korbak, Germán Kruszewski, Jos Rozen, Marc Dymetman
17 Oct 2023

InstructTODS: Large Language Models for End-to-End Task-Oriented Dialogue Systems
Willy Chung, Samuel Cahyawijaya, Bryan Wilie, Holy Lovenia, Pascale Fung
13 Oct 2023

InstructRetro: Instruction Tuning post Retrieval-Augmented Pretraining
Wei Ping, Ming-Yu Liu, Lawrence C. McAfee, Peng Xu, Bo Li, Mohammad Shoeybi, Bryan Catanzaro
Communities: RALM
11 Oct 2023

The Past, Present and Better Future of Feedback Learning in Large Language Models for Subjective Human Preferences and Values
Hannah Rose Kirk, Andrew M. Bean, Bertie Vidgen, Paul Röttger, Scott A. Hale
Communities: ALM
11 Oct 2023

Constructive Large Language Models Alignment with Diverse Feedback
Tianshu Yu, Ting-En Lin, Yuchuan Wu, Min Yang, Fei Huang, Yongbin Li
Communities: ALM
10 Oct 2023

Large Language Models for Spatial Trajectory Patterns Mining
Zhengwu Zhang, Hossein Amiri, Zhenke Liu, Andreas Züfle, Liang Zhao
07 Oct 2023

A Long Way to Go: Investigating Length Correlations in RLHF
Prasann Singhal, Tanya Goyal, Jiacheng Xu, Greg Durrett
05 Oct 2023

JsonTuning: Towards Generalizable, Robust, and Controllable Instruction Tuning
Chang Gao, Wenxuan Zhang, Guizhen Chen, Wai Lam
04 Oct 2023

Reward Model Ensembles Help Mitigate Overoptimization
Thomas Coste, Usman Anwar, Robert Kirk, David M. Krueger
Communities: NoLa, ALM
04 Oct 2023

Reinforcement Learning from Automatic Feedback for High-Quality Unit Test Generation
Benjamin Steenhoek, Michele Tufano, Neel Sundaresan, Alexey Svyatkovskiy
Communities: OffRL, ALM
03 Oct 2023

Automatic Pair Construction for Contrastive Post-training
Canwen Xu, Corby Rosset, Ethan C. Chau, Luciano Del Corro, Shweti Mahajan, Julian McAuley, Jennifer Neville, Ahmed Hassan Awadallah, Nikhil Rao
Communities: ALM
03 Oct 2023

Dimensions of Disagreement: Unpacking Divergence and Misalignment in Cognitive Science and Artificial Intelligence
Kerem Oktar, Ilia Sucholutsky, Tania Lombrozo, Thomas Griffiths
Communities: AI4CE
03 Oct 2023