Impossibility and Uncertainty Theorems in AI Value Alignment (or why your AGI should not have a utility function)

31 December 2018
P. Eckersley

Papers citing "Impossibility and Uncertainty Theorems in AI Value Alignment (or why your AGI should not have a utility function)"

10 / 10 papers shown

1. Value Preferences Estimation and Disambiguation in Hybrid Participatory Systems
   Enrico Liscio, Luciano Cavalcante Siebert, Catholijn M. Jonker, P. Murukannaiah
   26 Feb 2024

2. Designing Fiduciary Artificial Intelligence
   Sebastian Benthall, David Shekman
   27 Jul 2023

3. "It is currently hodgepodge": Examining AI/ML Practitioners' Challenges during Co-production of Responsible AI Values
   R. Varanasi, Nitesh Goyal
   14 Jul 2023

4. Learning Zero-Shot Cooperation with Humans, Assuming Humans Are Biased
   Chao Yu, Jiaxuan Gao, Weiling Liu, Bo Xu, Hao Tang, Jiaqi Yang, Yu Wang, Yi Wu
   03 Feb 2023

5. Impossibility Results in AI: A Survey
   Mario Brčič, Roman V. Yampolskiy
   01 Sep 2021

6. Safe Imitation Learning via Fast Bayesian Reward Inference from Preferences
   Daniel S. Brown, Russell Coleman, R. Srinivasan, S. Niekum
   21 Feb 2020 · BDL

7. Unpredictability of AI
   Roman V. Yampolskiy
   29 May 2019

8. Augmented Utilitarianism for AGI Safety
   Nadisha-Marie Aliman, L. Kester
   02 Apr 2019

9. The Ethics of AI Ethics -- An Evaluation of Guidelines
   Thilo Hagendorff
   28 Feb 2019 · AI4TS

10. Fair prediction with disparate impact: A study of bias in recidivism prediction instruments
    Alexandra Chouldechova
    24 Oct 2016 · FaML