
ART: Abstraction Refinement-Guided Training for Provably Correct Neural Networks

arXiv:1907.10662
17 July 2019
Xuankang Lin
He Zhu
R. Samanta
Suresh Jagannathan
AAML

Papers citing "ART: Abstraction Refinement-Guided Training for Provably Correct Neural Networks"

37 / 37 papers shown
Provably-Safe Neural Network Training Using Hybrid Zonotope Reachability Analysis
Long Kiu Chung
Shreyas Kousik
501
0
0
22 Jan 2025
PyTorch: An Imperative Style, High-Performance Deep Learning Library
Adam Paszke
Sam Gross
Francisco Massa
Adam Lerer
James Bradbury
...
Sasank Chilamkurthy
Benoit Steiner
Lu Fang
Junjie Bai
Soumith Chintala
ODL
529
42,559
0
03 Dec 2019
An Inductive Synthesis Framework for Verifiable Reinforcement Learning
He Zhu
Zikang Xiong
Stephen Magill
Suresh Jagannathan
60
97
0
16 Jul 2019
Optimization and Abstraction: A Synergistic Approach for Analyzing Neural Network Robustness
Greg Anderson
Shankara Pailoor
Işıl Dillig
Swarat Chaudhuri
AAML
78
101
0
22 Apr 2019
Verification of Non-Linear Specifications for Neural Networks
Chongli Qin
Krishnamurthy Dvijotham
Brendan O'Donoghue
Rudy Bunel
Robert Stanforth
Sven Gowal
J. Uesato
G. Swirszcz
Pushmeet Kohli
AAML
60
44
0
25 Feb 2019
Semidefinite relaxations for certifying robustness to adversarial examples
Aditi Raghunathan
Jacob Steinhardt
Percy Liang
AAML
100
439
0
02 Nov 2018
Efficient Neural Network Robustness Certification with General Activation Functions
Huan Zhang
Tsui-Wei Weng
Pin-Yu Chen
Cho-Jui Hsieh
Luca Daniel
AAML
96
764
0
02 Nov 2018
On the Effectiveness of Interval Bound Propagation for Training Verifiably Robust Models
Sven Gowal
Krishnamurthy Dvijotham
Robert Stanforth
Rudy Bunel
Chongli Qin
J. Uesato
Relja Arandjelović
Timothy A. Mann
Pushmeet Kohli
AAML
84
558
0
30 Oct 2018
Efficient Formal Safety Analysis of Neural Networks
Shiqi Wang
Kexin Pei
Justin Whitehouse
Junfeng Yang
Suman Jana
AAML
70
404
0
19 Sep 2018
Training for Faster Adversarial Robustness Verification via Inducing ReLU Stability
Kai Y. Xiao
Vincent Tjeng
Nur Muhammad (Mahi) Shafiullah
Aleksander Madry
AAML, OOD
41
201
0
09 Sep 2018
Adversarially Regularising Neural NLI Models to Integrate Logical Background Knowledge
Pasquale Minervini
Sebastian Riedel
AAML, NAI, GAN
59
119
0
26 Aug 2018
Scaling provable adversarial defenses
Eric Wong
Frank R. Schmidt
J. H. Metzen
J. Zico Kolter
AAML
78
450
0
31 May 2018
Training verified learners with learned verifiers
Krishnamurthy Dvijotham
Sven Gowal
Robert Stanforth
Relja Arandjelović
Brendan O'Donoghue
J. Uesato
Pushmeet Kohli
OOD
65
169
0
25 May 2018
Formal Security Analysis of Neural Networks using Symbolic Intervals
Shiqi Wang
Kexin Pei
Justin Whitehouse
Junfeng Yang
Suman Jana
AAML
84
478
0
28 Apr 2018
Towards Fast Computation of Certified Robustness for ReLU Networks
Tsui-Wei Weng
Huan Zhang
Hongge Chen
Zhao Song
Cho-Jui Hsieh
Duane S. Boning
Inderjit S. Dhillon
Luca Daniel
AAML
108
695
0
25 Apr 2018
A Dual Approach to Scalable Verification of Deep Networks
Krishnamurthy Dvijotham
Robert Stanforth
Sven Gowal
Timothy A. Mann
Pushmeet Kohli
56
399
0
17 Mar 2018
Certified Defenses against Adversarial Examples
Aditi Raghunathan
Jacob Steinhardt
Percy Liang
AAML
113
969
0
29 Jan 2018
Deep Neural Networks as 0-1 Mixed Integer Linear Programs: A Feasibility Study
M. Fischetti
Jason Jo
50
81
0
17 Dec 2017
A Semantic Loss Function for Deep Learning with Symbolic Knowledge
Jingyi Xu
Zilu Zhang
Tal Friedman
Yitao Liang
Guy Van den Broeck
97
453
0
29 Nov 2017
Learning Explanatory Rules from Noisy Data
Richard Evans
Edward Grefenstette
122
487
0
13 Nov 2017
Provable defenses against adversarial examples via the convex outer adversarial polytope
Eric Wong
J. Zico Kolter
AAML
128
1,504
0
02 Nov 2017
Provably Minimally-Distorted Adversarial Examples
Nicholas Carlini
Guy Katz
Clark W. Barrett
D. Dill
AAML
73
89
0
29 Sep 2017
Safe Reinforcement Learning via Shielding
Mohammed Alshiekh
Roderick Bloem
Rüdiger Ehlers
Bettina Könighofer
S. Niekum
Ufuk Topcu
82
690
0
29 Aug 2017
An approach to reachability analysis for feed-forward ReLU neural networks
A. Lomuscio
Lalit Maganti
65
359
0
22 Jun 2017
Towards Deep Learning Models Resistant to Adversarial Attacks
Aleksander Madry
Aleksandar Makelov
Ludwig Schmidt
Dimitris Tsipras
Adrian Vladu
SILM, OOD
315
12,131
0
19 Jun 2017
Imposing Hard Constraints on Deep Networks: Promises and Limitations
Pablo Márquez-Neila
Mathieu Salzmann
Pascal Fua
PINN, UQ, CV
145
140
0
07 Jun 2017
DeepXplore: Automated Whitebox Testing of Deep Learning Systems
Kexin Pei
Yinzhi Cao
Junfeng Yang
Suman Jana
AAML
102
1,371
0
18 May 2017
Formal Verification of Piece-Wise Linear Feed-Forward Neural Networks
Rüdiger Ehlers
104
626
0
03 May 2017
Maximum Resilience of Artificial Neural Networks
Chih-Hong Cheng
Georg Nührenberg
Harald Ruess
AAML
115
284
0
28 Apr 2017
Reluplex: An Efficient SMT Solver for Verifying Deep Neural Networks
Guy Katz
Clark W. Barrett
D. Dill
Kyle D. Julian
Mykel Kochenderfer
AAML
318
1,874
0
03 Feb 2017
Harnessing Deep Neural Networks with Logic Rules
Zhiting Hu
Xuezhe Ma
Zhengzhong Liu
Eduard H. Hovy
Eric Xing
AI4CE, NAI
71
614
0
21 Mar 2016
Distillation as a Defense to Adversarial Perturbations against Deep Neural Networks
Nicolas Papernot
Patrick McDaniel
Xi Wu
S. Jha
A. Swami
AAML
113
3,077
0
14 Nov 2015
Constrained Convolutional Neural Networks for Weakly Supervised Segmentation
Deepak Pathak
Philipp Krahenbuhl
Trevor Darrell
SSeg
96
614
0
11 Jun 2015
Adam: A Method for Stochastic Optimization
Diederik P. Kingma
Jimmy Ba
ODL
2.0K
150,312
0
22 Dec 2014
Explaining and Harnessing Adversarial Examples
Ian Goodfellow
Jonathon Shlens
Christian Szegedy
AAML, GAN
282
19,107
0
20 Dec 2014
Deep Neural Networks are Easily Fooled: High Confidence Predictions for Unrecognizable Images
Anh Totti Nguyen
J. Yosinski
Jeff Clune
AAML
171
3,274
0
05 Dec 2014
A Reduction of Imitation Learning and Structured Prediction to No-Regret Online Learning
Stéphane Ross
Geoffrey J. Gordon
J. Andrew Bagnell
OffRL
236
3,232
0
02 Nov 2010