Behavior Alignment via Reward Function Optimization
Dhawal Gupta, Yash Chandak, Scott M. Jordan, Philip S. Thomas, Bruno Castro da Silva
arXiv 2310.19007 · 29 October 2023
Papers citing "Behavior Alignment via Reward Function Optimization" (4 of 4 papers shown)
| Title | Authors | Tags | Date |
|---|---|---|---|
| ELEMENTAL: Interactive Learning from Demonstrations and Vision-Language Models for Reward Design in Robotics | Letian Chen, Nina Moorman, Matthew C. Gombolay | OffRL, LM&Ro | 27 Nov 2024 |
| Highly Efficient Self-Adaptive Reward Shaping for Reinforcement Learning | Haozhe Ma, Zhengding Luo, Thanh Vinh Vo, Kuankuan Sima, Tze-Yun Leong | | 06 Aug 2024 |
| Bilevel reinforcement learning via the development of hyper-gradient without lower-level convexity | Yan Yang, Bin Gao, Ya-xiang Yuan | | 30 May 2024 |
| Model-Agnostic Meta-Learning for Fast Adaptation of Deep Networks | Chelsea Finn, Pieter Abbeel, Sergey Levine | OOD | 09 Mar 2017 |