arXiv:1901.00064
Impossibility and Uncertainty Theorems in AI Value Alignment (or why your AGI should not have a utility function)
31 December 2018
P. Eckersley
Papers citing "Impossibility and Uncertainty Theorems in AI Value Alignment (or why your AGI should not have a utility function)" (10 of 10 papers shown):
Value Preferences Estimation and Disambiguation in Hybrid Participatory Systems
Enrico Liscio, Luciano Cavalcante Siebert, Catholijn M. Jonker, P. Murukannaiah (26 Feb 2024)

Designing Fiduciary Artificial Intelligence
Sebastian Benthall, David Shekman (27 Jul 2023)

"It is currently hodgepodge": Examining AI/ML Practitioners' Challenges during Co-production of Responsible AI Values
R. Varanasi, Nitesh Goyal (14 Jul 2023)

Learning Zero-Shot Cooperation with Humans, Assuming Humans Are Biased
Chao Yu, Jiaxuan Gao, Weiling Liu, Bo Xu, Hao Tang, Jiaqi Yang, Yu Wang, Yi Wu (03 Feb 2023)

Impossibility Results in AI: A Survey
Mario Brčič, Roman V. Yampolskiy (01 Sep 2021)

Safe Imitation Learning via Fast Bayesian Reward Inference from Preferences
Daniel S. Brown, Russell Coleman, R. Srinivasan, S. Niekum (21 Feb 2020) [BDL]

Unpredictability of AI
Roman V. Yampolskiy (29 May 2019)

Augmented Utilitarianism for AGI Safety
Nadisha-Marie Aliman, L. Kester (02 Apr 2019)

The Ethics of AI Ethics -- An Evaluation of Guidelines
Thilo Hagendorff (28 Feb 2019) [AI4TS]

Fair prediction with disparate impact: A study of bias in recidivism prediction instruments
Alexandra Chouldechova (24 Oct 2016) [FaML]