An Inductive Synthesis Framework for Verifiable Reinforcement Learning

16 July 2019
He Zhu
Zikang Xiong
Stephen Magill
Suresh Jagannathan
arXiv: 1907.07273

Papers citing "An Inductive Synthesis Framework for Verifiable Reinforcement Learning"

19 / 19 papers shown
  • Inductive Generalization in Reinforcement Learning from Specifications. Vignesh Subramanian, Rohit Kushwah, Subhajit Roy, Suguman Bansal. 05 Jun 2024.
  • Synthesizing Programmatic Reinforcement Learning Policies with Large Language Model Guided Search. Max Liu, Chan-Hung Yu, Wei-Hsu Lee, Cheng-Wei Hung, Yen-Chun Chen, Shao-Hua Sun. 26 May 2024.
  • Probabilistic Model Checking of Stochastic Reinforcement Learning Policies. Dennis Gross, Helge Spieker. 27 Mar 2024.
  • Guiding Safe Exploration with Weakest Preconditions. Greg Anderson, Swarat Chaudhuri, Işıl Dillig. 28 Sep 2022.
  • MSVIPER: Improved Policy Distillation for Reinforcement-Learning-Based Robot Navigation. Aaron M. Roth, Jing Liang, Ram D. Sriram, Elham Tabassi, Tianyi Zhou. 19 Sep 2022.
  • GALOIS: Boosting Deep Reinforcement Learning via Generalizable Logic Synthesis. Yushi Cao, Zhiming Li, Tianpei Yang, Hao Zhang, Yan Zheng, Yi Li, Jianye Hao, Yang Liu. 27 May 2022.
  • A Review of Safe Reinforcement Learning: Methods, Theory and Applications. Shangding Gu, Longyu Yang, Yali Du, Guang Chen, Florian Walter, Jun Wang, Alois C. Knoll. 20 May 2022.
  • Programmatic Reward Design by Example. Weichao Zhou, Wenchao Li. 14 Dec 2021.
  • A Survey on AI Assurance. Feras A. Batarseh, Laura J. Freeman. 15 Nov 2021.
  • Learning Density Distribution of Reachable States for Autonomous Systems. Yue Meng, Dawei Sun, Zeng Qiu, Md Tawhid Bin Waez, Chuchu Fan. 14 Sep 2021.
  • Learning to Synthesize Programs as Interpretable and Generalizable Policies. Dweep Trivedi, Jesse Zhang, Shao-Hua Sun, Joseph J. Lim. 31 Aug 2021.
  • Self-Correcting Neural Networks For Safe Classification. Klas Leino, Aymeric Fromherz, Ravi Mangal, Matt Fredrikson, Bryan Parno, C. Păsăreanu. 23 Jul 2021.
  • Scalable Synthesis of Verified Controllers in Deep Reinforcement Learning. Zikang Xiong, Suresh Jagannathan. 20 Apr 2021.
  • NNV: The Neural Network Verification Tool for Deep Neural Networks and Learning-Enabled Cyber-Physical Systems. Hoang-Dung Tran, Xiaodong Yang, Diego Manzanas Lopez, Patrick Musau, L. V. Nguyen, Weiming Xiang, Stanley Bak, Taylor T. Johnson. 12 Apr 2020.
  • ART: Abstraction Refinement-Guided Training for Provably Correct Neural Networks. Xuankang Lin, He Zhu, R. Samanta, Suresh Jagannathan. 17 Jul 2019.
  • MoËT: Mixture of Expert Trees and its Application to Verifiable Reinforcement Learning. Marko Vasic, Andrija Petrović, Kaiyuan Wang, Mladen Nikolic, Rishabh Singh, S. Khurshid. 16 Jun 2019.
  • Reluplex: An Efficient SMT Solver for Verifying Deep Neural Networks. Guy Katz, Clark W. Barrett, D. Dill, Kyle D. Julian, Mykel Kochenderfer. 03 Feb 2017.
  • Safety Verification of Deep Neural Networks. Xiaowei Huang, Marta Kwiatkowska, Sen Wang, Min Wu. 21 Oct 2016.
  • Safe Exploration in Markov Decision Processes. T. Moldovan, Pieter Abbeel. 22 May 2012.