Weak Human Preference Supervision For Deep Reinforcement Learning

25 July 2020
Zehong Cao
Kaichiu Wong
Chin-Teng Lin
arXiv:2007.12904

Papers citing "Weak Human Preference Supervision For Deep Reinforcement Learning"

2 of 2 citing papers shown

DIPPER: Direct Preference Optimization to Accelerate Primitive-Enabled Hierarchical Reinforcement Learning
Utsav Singh, Souradip Chakraborty, Wesley A Suttle, Brian M. Sadler, Vinay P. Namboodiri, Amrit Singh Bedi
OffRL
03 Jan 2025

Skill Preferences: Learning to Extract and Execute Robotic Skills from Human Feedback
Xiaofei Wang, Kimin Lee, Kourosh Hakhamaneshi, Pieter Abbeel, Michael Laskin
11 Aug 2021