Augmented Utilitarianism for AGI Safety
Nadisha-Marie Aliman, L. Kester
arXiv:1904.01540, 2 April 2019
Papers citing "Augmented Utilitarianism for AGI Safety" (3 of 3 papers shown):

- Personal Universes: A Solution to the Multi-Agent Value Alignment Problem. Roman V. Yampolskiy. 01 Jan 2019.
- Impossibility and Uncertainty Theorems in AI Value Alignment (or why your AGI should not have a utility function). P. Eckersley. 31 Dec 2018.
- AGI Safety Literature Review. Tom Everitt, G. Lea, Marcus Hutter. 03 May 2018.