ResearchTrend.AI

Certifying Safety in Reinforcement Learning under Adversarial Perturbation Attacks

28 December 2022
Junlin Wu, Hussein Sibai, Yevgeniy Vorobeychik
    AAML

Papers citing "Certifying Safety in Reinforcement Learning under Adversarial Perturbation Attacks"

2 / 2 papers shown
Learning Interpretable Policies in Hindsight-Observable POMDPs through Partially Supervised Reinforcement Learning
Michael Lanier, Ying Xu, Nathan Jacobs, Chongjie Zhang, Yevgeniy Vorobeychik
14 Feb 2024

Deep Reinforcement Learning for Autonomous Driving: A Survey
B. R. Kiran, Ibrahim Sobh, V. Talpaert, Patrick Mannion, A. A. Sallab, S. Yogamani, P. Pérez
02 Feb 2020