ResearchTrend.AI

© 2025 ResearchTrend.AI, All rights reserved.

Second Thoughts are Best: Learning to Re-Align With Human Values from Text Edits (arXiv:2301.00355)

Ruibo Liu, Chenyan Jia, Ge Zhang, Ziyu Zhuang, Tony X. Liu, Soroush Vosoughi
1 January 2023

Papers citing "Second Thoughts are Best: Learning to Re-Align With Human Values from Text Edits"

All 32 citing papers are listed below, newest first.
1. Augmented Adversarial Trigger Learning. Zhe Wang, Yanjun Qi. 16 Mar 2025.
2. Evaluating and Aligning Human Economic Risk Preferences in LLMs. Jiaheng Liu, Yi Yang, Kar Yan Tam. 09 Mar 2025.
3. Can AI writing be salvaged? Mitigating Idiosyncrasies and Improving Human-AI Alignment in the Writing Process through Edits. Tuhin Chakrabarty, Philippe Laban, C. Wu. 22 Sep 2024.
4. TLCR: Token-Level Continuous Reward for Fine-grained Reinforcement Learning from Human Feedback. Eunseop Yoon, Hee Suk Yoon, Soohwan Eom, Gunsoo Han, D. W. Nam, DaeJin Jo, Kyoung-Woon On, M. Hasegawa-Johnson, Sungwoong Kim, C. Yoo. 23 Jul 2024.
5. A Survey on Human Preference Learning for Large Language Models. Ruili Jiang, Kehai Chen, Xuefeng Bai, Zhixuan He, Juntao Li, Muyun Yang, Tiejun Zhao, Liqiang Nie, Min Zhang. 17 Jun 2024.
6. A Survey of Language-Based Communication in Robotics. William Hunt, Sarvapali D. Ramchurn, Mohammad D. Soorati. 06 Jun 2024.
7. Aligning LLM Agents by Learning Latent Preference from User Edits. Ge Gao, Alexey Taymanov, Eduardo Salinas, Paul Mineiro, Dipendra Kumar Misra. 23 Apr 2024.
8. RLHF Deciphered: A Critical Analysis of Reinforcement Learning from Human Feedback for LLMs. Shreyas Chaudhari, Pranjal Aggarwal, Vishvak Murahari, Tanmay Rajpurohit, Ashwin Kalyan, Karthik Narasimhan, Ameet Deshpande, Bruno Castro da Silva. 12 Apr 2024.
9. SC-Tune: Unleashing Self-Consistent Referential Comprehension in Large Vision Language Models. Tongtian Yue, Jie Cheng, Longteng Guo, Xingyuan Dai, Zijia Zhao, Xingjian He, Gang Xiong, Yisheng Lv, Jing Liu. 20 Mar 2024.
10. On the Essence and Prospect: An Investigation of Alignment Approaches for Big Models. Xinpeng Wang, Shitong Duan, Xiaoyuan Yi, Jing Yao, Shanlin Zhou, Zhihua Wei, Peng Zhang, Dongkuan Xu, Maosong Sun, Xing Xie. 07 Mar 2024.
11. TruthX: Alleviating Hallucinations by Editing Large Language Models in Truthful Space. Shaolei Zhang, Tian Yu, Yang Feng. 27 Feb 2024.
12. COPR: Continual Human Preference Learning via Optimal Policy Regularization. Han Zhang, Lin Gui, Yu Lei, Yuanzhao Zhai, Yehong Zhang, ..., Hui Wang, Yue Yu, Kam-Fai Wong, Bin Liang, Ruifeng Xu. 22 Feb 2024.
13. CMDAG: A Chinese Metaphor Dataset with Annotated Grounds as CoT for Boosting Metaphor Generation. Yujie Shao, Xinrong Yao, Xingwei Qu, Chenghua Lin, Shi Wang, Stephen W. Huang, Ge Zhang, Jie Fu. 20 Feb 2024.
14. Risk Taxonomy, Mitigation, and Assessment Benchmarks of Large Language Model Systems. Tianyu Cui, Yanling Wang, Chuanpu Fu, Yong Xiao, Sijia Li, ..., Junwu Xiong, Xinyu Kong, Zujie Wen, Ke Xu, Qi Li. 11 Jan 2024.
15. Beyond Imitation: Leveraging Fine-grained Quality Signals for Alignment. Geyang Guo, Ranchi Zhao, Tianyi Tang, Wayne Xin Zhao, Ji-Rong Wen. 07 Nov 2023.
16. The Past, Present and Better Future of Feedback Learning in Large Language Models for Subjective Human Preferences and Values. Hannah Rose Kirk, Andrew M. Bean, Bertie Vidgen, Paul Röttger, Scott A. Hale. 11 Oct 2023.
17. The Empty Signifier Problem: Towards Clearer Paradigms for Operationalising "Alignment" in Large Language Models. Hannah Rose Kirk, Bertie Vidgen, Paul Röttger, Scott A. Hale. 03 Oct 2023.
18. Large Language Model Alignment: A Survey. Tianhao Shen, Renren Jin, Yufei Huang, Chuang Liu, Weilong Dong, Zishan Guo, Xinwei Wu, Yan Liu, Deyi Xiong. 26 Sep 2023.
19. Subjective Crowd Disagreements for Subjective Data: Uncovering Meaningful CrowdOpinion with Population-level Learning. Tharindu Cyril Weerasooriya, Sarah K. K. Luger, Saloni Poddar, Ashiqur R. KhudaBukhsh, Christopher Homan. 07 Jul 2023.
20. Personalized Abstractive Summarization by Tri-agent Generation Pipeline. Md Aminul Haque Palash, Sourav Saha, Faria Afrin, Pengcheng He. 04 May 2023.
21. Enhancing Chain-of-Thoughts Prompting with Iterative Bootstrapping in Large Language Models. Jiashuo Sun, Yi Luo, Yeyun Gong, Chen Lin, Yelong Shen, Jian Guo, Nan Duan. 23 Apr 2023.
22. Personalisation within bounds: A risk taxonomy and policy framework for the alignment of large language models with personalised feedback. Hannah Rose Kirk, Bertie Vidgen, Paul Röttger, Scott A. Hale. 09 Mar 2023.
23. Towards Safer Generative Language Models: A Survey on Safety Risks, Evaluations, and Improvements. Jiawen Deng, Jiale Cheng, Hao Sun, Zhexin Zhang, Minlie Huang. 18 Feb 2023.
24. The Touché23-ValueEval Dataset for Identifying Human Values behind Arguments. Nailia Mirzakhmedova, Johannes Kiesel, Milad Alshomary, Maximilian Heinrich, Nicolas Handke, ..., Mohammad Ali Sadraei, Ehsaneddin Asgari, Lea Kawaletz, Henning Wachsmuth, Benno Stein. 31 Jan 2023.
25. Learning to Model Editing Processes. Machel Reid, Graham Neubig. 24 May 2022.
26. Training language models to follow instructions with human feedback. Long Ouyang, Jeff Wu, Xu Jiang, Diogo Almeida, Carroll L. Wainwright, ..., Amanda Askell, Peter Welinder, Paul Christiano, Jan Leike, Ryan J. Lowe. 04 Mar 2022.
27. Chain-of-Thought Prompting Elicits Reasoning in Large Language Models. Jason W. Wei, Xuezhi Wang, Dale Schuurmans, Maarten Bosma, Brian Ichter, F. Xia, Ed H. Chi, Quoc Le, Denny Zhou. 28 Jan 2022.
28. Multitask Prompted Training Enables Zero-Shot Task Generalization. Victor Sanh, Albert Webson, Colin Raffel, Stephen H. Bach, Lintang Sutawika, ..., T. Bers, Stella Biderman, Leo Gao, Thomas Wolf, Alexander M. Rush. 15 Oct 2021.
29. Unsolved Problems in ML Safety. Dan Hendrycks, Nicholas Carlini, John Schulman, Jacob Steinhardt. 28 Sep 2021.
30. Self-Diagnosis and Self-Debiasing: A Proposal for Reducing Corpus-Based Bias in NLP. Timo Schick, Sahana Udupa, Hinrich Schütze. 28 Feb 2021.
31. Text Editing by Command. Felix Faltings, Michel Galley, Gerold Hintz, Chris Brockett, Chris Quirk, Jianfeng Gao, Bill Dolan. 24 Oct 2020.
32. Fine-Tuning Language Models from Human Preferences. Daniel M. Ziegler, Nisan Stiennon, Jeff Wu, Tom B. Brown, Alec Radford, Dario Amodei, Paul Christiano, G. Irving. 18 Sep 2019.