ResearchTrend.AI
Consequences of Misaligned AI
Simon Zhuang, Dylan Hadfield-Menell
7 February 2021 · arXiv:2102.03896

Papers citing "Consequences of Misaligned AI"

24 papers shown.
Can Machine Learning Agents Deal with Hard Choices?
Kangyu Wang · 18 Apr 2025

Can Generative AI be Egalitarian?
Philip G. Feldman, James R. Foulds, Shimei Pan · 20 Jan 2025

Learning to Assist Humans without Inferring Rewards
Vivek Myers, Evan Ellis, Sergey Levine, Benjamin Eysenbach, Anca Dragan · 17 Jan 2025

Measuring Error Alignment for Decision-Making Systems
Binxia Xu, Antonis Bikakis, Daniel Onah, A. Vlachidis, Luke Dickens · 03 Jan 2025

RL, but don't do anything I wouldn't do
Michael K. Cohen, Marcus Hutter, Yoshua Bengio, Stuart J. Russell · 08 Oct 2024 · OffRL

Catastrophic Goodhart: regularizing RLHF with KL divergence does not mitigate heavy-tailed reward misspecification
Thomas Kwa, Drake Thomas, Adrià Garriga-Alonso · 19 Jul 2024

Offline Regularised Reinforcement Learning for Large Language Models Alignment
Pierre Harvey Richemond, Yunhao Tang, Daniel Guo, Daniele Calandriello, M. G. Azar, ..., Gil Shamir, Rishabh Joshi, Tianqi Liu, Rémi Munos, Bilal Piot · 29 May 2024 · OffRL

On the Essence and Prospect: An Investigation of Alignment Approaches for Big Models
Xinpeng Wang, Shitong Duan, Xiaoyuan Yi, Jing Yao, Shanlin Zhou, Zhihua Wei, Peng Zhang, Dongkuan Xu, Maosong Sun, Xing Xie · 07 Mar 2024 · OffRL

Distributional Preference Learning: Understanding and Accounting for Hidden Context in RLHF
Anand Siththaranjan, Cassidy Laidlaw, Dylan Hadfield-Menell · 13 Dec 2023

A Review of the Evidence for Existential Risk from AI via Misaligned Power-Seeking
Rose Hadshar · 27 Oct 2023

Active teacher selection for reinforcement learning from human feedback
Rachel Freedman, Justin Svegliato, K. H. Wray, Stuart J. Russell · 23 Oct 2023

Automatic Pair Construction for Contrastive Post-training
Canwen Xu, Corby Rosset, Ethan C. Chau, Luciano Del Corro, Shweti Mahajan, Julian McAuley, Jennifer Neville, Ahmed Hassan Awadallah, Nikhil Rao · 03 Oct 2023 · ALM

User Experience Design Professionals' Perceptions of Generative Artificial Intelligence
Jie Li, Hancheng Cao, Laura Lin, Youyang Hou, Ruihao Zhu, Abdallah El Ali · 26 Sep 2023

Benchmarks for Detecting Measurement Tampering
Fabien Roger, Ryan Greenblatt, Max Nadeau, Buck Shlegeris, Nate Thomas · 29 Aug 2023

VisAlign: Dataset for Measuring the Degree of Alignment between AI and Humans in Visual Perception
Jiyoung Lee, Seung Wook Kim, Seunghyun Won, Joonseok Lee, Marzyeh Ghassemi, James Thorne, Jaeseok Choi, O.-Kil Kwon, Edward Choi · 03 Aug 2023

Designing Fiduciary Artificial Intelligence
Sebastian Benthall, David Shekman · 27 Jul 2023

On The Fragility of Learned Reward Functions
Lev McKinney, Yawen Duan, David M. Krueger, Adam Gleave · 09 Jan 2023

Scaling Laws for Reward Model Overoptimization
Leo Gao, John Schulman, Jacob Hilton · 19 Oct 2022 · ALM

Defining and Characterizing Reward Hacking
Joar Skalse, Nikolaus H. R. Howe, Dmitrii Krasheninnikov, David M. Krueger · 27 Sep 2022

The Alignment Problem from a Deep Learning Perspective
Richard Ngo, Lawrence Chan, Sören Mindermann · 30 Aug 2022

Counterfactual harm
Jonathan G. Richens, R. Beard, Daniel H. Thompson · 27 Apr 2022

The Effects of Reward Misspecification: Mapping and Mitigating Misaligned Models
Alexander Pan, Kush S. Bhatia, Jacob Steinhardt · 10 Jan 2022

Impossibility Results in AI: A Survey
Mario Brčič, Roman V. Yampolskiy · 01 Sep 2021

Goal Misgeneralization in Deep Reinforcement Learning
L. Langosco, Jack Koch, Lee D. Sharkey, J. Pfau, Laurent Orseau, David M. Krueger · 28 May 2021