ResearchTrend.AI
MORAL: Aligning AI with Human Norms through Multi-Objective Reinforced Active Learning

30 December 2021
M. Peschl, Arkady Zgonnikov, F. Oliehoek, Luciano Cavalcante Siebert
arXiv: 2201.00012

Papers citing "MORAL: Aligning AI with Human Norms through Multi-Objective Reinforced Active Learning"

2 of 2 citing papers shown
How to Find the Exact Pareto Front for Multi-Objective MDPs?
Yining Li, Peizhong Ju, Ness B. Shroff
21 Oct 2024
ARMCHAIR: integrated inverse reinforcement learning and model predictive control for human-robot collaboration
Angelo Caregnato-Neto, Luciano Cavalcante Siebert, Arkady Zgonnikov, Marcos Ricardo Omena de Albuquerque Máximo, R. J. Afonso
29 Feb 2024