Augmented Utilitarianism for AGI Safety

2 April 2019
Nadisha-Marie Aliman, L. Kester
ArXiv (abs) · PDF · HTML

Papers citing "Augmented Utilitarianism for AGI Safety"

3 of 3 citing papers shown

Personal Universes: A Solution to the Multi-Agent Value Alignment Problem
Roman V. Yampolskiy
01 Jan 2019

Impossibility and Uncertainty Theorems in AI Value Alignment (or why your AGI should not have a utility function)
P. Eckersley
31 Dec 2018

AGI Safety Literature Review
Tom Everitt, G. Lea, Marcus Hutter
AI4CE
03 May 2018