arXiv: 2201.00012
MORAL: Aligning AI with Human Norms through Multi-Objective Reinforced Active Learning
30 December 2021
M. Peschl, Arkady Zgonnikov, F. Oliehoek, Luciano Cavalcante Siebert
Papers citing "MORAL: Aligning AI with Human Norms through Multi-Objective Reinforced Active Learning" (2 papers)
How to Find the Exact Pareto Front for Multi-Objective MDPs?
Yining Li, Peizhong Ju, Ness B. Shroff
21 Oct 2024
ARMCHAIR: integrated inverse reinforcement learning and model predictive control for human-robot collaboration
Angelo Caregnato-Neto, Luciano Cavalcante Siebert, Arkady Zgonnikov, Marcos Ricardo Omena de Albuquerque Máximo, R. J. Afonso
29 Feb 2024