Explaining and Harnessing Adversarial Examples (arXiv:1412.6572)

20 December 2014
Ian Goodfellow, Jonathon Shlens, Christian Szegedy
AAML, GAN
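
The paper introduces the fast gradient sign method (FGSM), which perturbs an input along the sign of the loss gradient to produce an adversarial example. As a minimal illustrative sketch (not part of this listing), assuming a PyTorch classifier model, a batch of inputs x scaled to [0, 1], labels y, and a perturbation budget eps:

import torch
import torch.nn.functional as F

def fgsm_attack(model, x, y, eps):
    # Fast gradient sign method: x_adv = x + eps * sign(grad_x J(theta, x, y))
    x = x.clone().detach().requires_grad_(True)
    loss = F.cross_entropy(model(x), y)
    loss.backward()
    # Step each input component by eps in the direction that increases the loss,
    # then clamp back to the valid input range.
    x_adv = x + eps * x.grad.sign()
    return x_adv.clamp(0.0, 1.0).detach()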

Papers citing "Explaining and Harnessing Adversarial Examples"

Showing 50 of 8,361 citing papers.

QuSecNets: Quantization-based Defense Mechanism for Securing Deep Neural Network against Adversarial Attacks
Faiq Khalid, Hassan Ali, Hammad Tariq, Muhammad Abdullah Hanif, Semeen Rehman, Rehan Ahmed, Mohamed Bennai · AAML, MQ · 04 Nov 2018

Adversarial Gain
Peter Henderson, Koustuv Sinha, Nan Rosemary Ke, Joelle Pineau · AAML · 04 Nov 2018

Learning to Defend by Learning to Attack
Haoming Jiang, Zhehui Chen, Yuyang Shi, Bo Dai, T. Zhao · 03 Nov 2018

Semidefinite relaxations for certifying robustness to adversarial examples
Aditi Raghunathan, Jacob Steinhardt, Percy Liang · AAML · 02 Nov 2018

Efficient Neural Network Robustness Certification with General Activation Functions
Huan Zhang, Tsui-Wei Weng, Pin-Yu Chen, Cho-Jui Hsieh, Luca Daniel · AAML · 02 Nov 2018

Towards Adversarial Malware Detection: Lessons Learned from PDF-based Attacks
Davide Maiorca, Battista Biggio, Giorgio Giacinto · AAML · 02 Nov 2018

Stronger Data Poisoning Attacks Break Data Sanitization Defenses
Pang Wei Koh, Jacob Steinhardt, Percy Liang · 02 Nov 2018

Spectral Signatures in Backdoor Attacks
Brandon Tran, Jerry Li, Aleksander Madry · AAML · 01 Nov 2018

Improving Adversarial Robustness by Encouraging Discriminative Features
Chirag Agarwal, Anh Totti Nguyen, Dan Schonfeld · OOD · 01 Nov 2018

On the Geometry of Adversarial Examples
Marc Khoury, Dylan Hadfield-Menell · AAML · 01 Nov 2018

Excessive Invariance Causes Adversarial Vulnerability
J. Jacobsen, Jens Behrmann, R. Zemel, Matthias Bethge · AAML · 01 Nov 2018

When Not to Classify: Detection of Reverse Engineering Attacks on DNN Image Classifiers
Yujia Wang, David J. Miller, M. Schaar · AAML · 31 Oct 2018

Data Poisoning Attack against Unsupervised Node Embedding Methods
Mingjie Sun, Jian Tang, Huichen Li, Yue Liu, Chaowei Xiao, Yao-Liang Chen, Basel Alomair · GNN, AAML · 30 Oct 2018

On the Effectiveness of Interval Bound Propagation for Training Verifiably Robust Models
Sven Gowal, Krishnamurthy Dvijotham, Robert Stanforth, Rudy Bunel, Chongli Qin, J. Uesato, Relja Arandjelović, Timothy A. Mann, Pushmeet Kohli · AAML · 30 Oct 2018

Improved Network Robustness with Adversary Critic
Alexander Matyasko, Lap-Pui Chau · AAML · 30 Oct 2018

Adversarial Risk and Robustness: General Definitions and Implications for the Uniform Distribution
Dimitrios I. Diochnos, Saeed Mahloujifar, Mohammad Mahmoody · AAML · 29 Oct 2018

Adversarial Attacks on Stochastic Bandits
Kwang-Sung Jun, Lihong Li, Yuzhe Ma, Xiaojin Zhu · AAML · 29 Oct 2018

Logit Pairing Methods Can Fool Gradient-Based Attacks
Marius Mosbach, Maksym Andriushchenko, T. A. Trost, Matthias Hein, Dietrich Klakow · AAML · 29 Oct 2018

Failing Loudly: An Empirical Study of Methods for Detecting Dataset Shift
Stephan Rabanser, Stephan Günnemann, Zachary Chase Lipton · 29 Oct 2018

Rademacher Complexity for Adversarially Robust Generalization
Dong Yin, Kannan Ramchandran, Peter L. Bartlett · AAML · 29 Oct 2018

Robust Audio Adversarial Example for a Physical Attack
Hiromu Yakura, Jun Sakuma · AAML · 28 Oct 2018

Towards Robust Deep Neural Networks
Timothy E. Wang, Jack Gu, D. Mehta, Xiaojun Zhao, Edgar A. Bernal · OOD · 27 Oct 2018

Attack Graph Convolutional Networks by Adding Fake Nodes
Xiaoyun Wang, Minhao Cheng, Joe Eaton, Cho-Jui Hsieh, S. F. Wu · AAML, GNN · 25 Oct 2018

Interpreting Black Box Predictions using Fisher Kernels
Rajiv Khanna, Been Kim, Joydeep Ghosh, Oluwasanmi Koyejo · FAtt · 23 Oct 2018

Stochastic Substitute Training: A Gray-box Approach to Craft Adversarial Examples Against Gradient Obfuscation Defenses
Mohammad J. Hashemi, Greg Cusack, Eric Keller · AAML, SILM · 23 Oct 2018

The Faults in Our Pi Stars: Security Issues and Open Challenges in Deep Reinforcement Learning
Vahid Behzadan, Arslan Munir · 23 Oct 2018

One Bit Matters: Understanding Adversarial Examples as the Abuse of Redundancy
Jingkang Wang, R. Jia, Gerald Friedland, Yangqiu Song, C. Spanos · AAML · 23 Oct 2018

What can AI do for me: Evaluating Machine Learning Interpretations in Cooperative Play
Shi Feng, Jordan L. Boyd-Graber · HAI · 23 Oct 2018

Sparse DNNs with Improved Adversarial Robustness
Yiwen Guo, Chao Zhang, Changshui Zhang, Yurong Chen · AAML · 23 Oct 2018

Adversarial Risk Bounds via Function Transformation
Justin Khim, Po-Ling Loh · AAML · 22 Oct 2018

Compositional Verification for Autonomous Systems with Deep Learning Components
C. Păsăreanu, D. Gopinath, Huafeng Yu · 18 Oct 2018

Exploring Adversarial Examples in Malware Detection
Octavian Suciu, Scott E. Coull, Jeffrey Johns · AAML · 18 Oct 2018

A Training-based Identification Approach to VIN Adversarial Examples
Yingdi Wang, Wenjia Niu, Tong Chen, Yingxiao Xiang, Jingjing Liu, Gang Li, Jiqiang Liu · AAML, GAN · 18 Oct 2018

Provable Robustness of ReLU networks via Maximization of Linear Regions
Francesco Croce, Maksym Andriushchenko, Matthias Hein · 17 Oct 2018

Deep Reinforcement Learning
Yuxi Li · VLM, OffRL · 15 Oct 2018

An Optimal Control Approach to Sequential Machine Teaching
Laurent Lessard, Xuezhou Zhang, Xiaojin Zhu · 15 Oct 2018

Enhancing Stock Movement Prediction with Adversarial Training
Fuli Feng, Huimin Chen, Xiangnan He, Ji Ding, Maosong Sun, Tat-Seng Chua · AAML, AIFin, OOD · 13 Oct 2018

MeshAdv: Adversarial Meshes for Visual Recognition
Chaowei Xiao, Dawei Yang, Yue Liu, Jia Deng, M. Liu · AAML · 11 Oct 2018

Physics-Driven Regularization of Deep Neural Networks for Enhanced Engineering Design and Analysis
M. A. Nabian, Hadi Meidani · PINN, AI4CE · 11 Oct 2018

Characterizing Adversarial Examples Based on Spatial Consistency Information for Semantic Segmentation
Chaowei Xiao, Ruizhi Deng, Yue Liu, Feng Yu, M. Liu, Basel Alomair · AAML · 11 Oct 2018

Unbiased deep solvers for linear parametric PDEs
Marc Sabate Vidales, David Siska, Lukasz Szpruch · OOD · 11 Oct 2018

Secure Deep Learning Engineering: A Software Quality Assurance Perspective
Lei Ma, Felix Juefei Xu, Minhui Xue, Q. Hu, Sen Chen, Yue Liu, Yang Liu, Jianjun Zhao, Jianxiong Yin, Simon See · AAML · 10 Oct 2018

The Adversarial Attack and Detection under the Fisher Information Metric
Chenxiao Zhao, P. T. Fletcher, Mixue Yu, Chaomin Shen, Guixu Zhang, Yaxin Peng · AAML · 09 Oct 2018

Efficient Two-Step Adversarial Defense for Deep Neural Networks
Ting-Jui Chang, Yukun He, Peng Li · AAML · 08 Oct 2018

Combinatorial Attacks on Binarized Neural Networks
Elias Boutros Khalil, Amrita Gupta, B. Dilkina · AAML · 08 Oct 2018

Interpretable Convolutional Neural Networks via Feedforward Design
C.-C. Jay Kuo, Min Zhang, Siyang Li, Jiali Duan, Yueru Chen · 05 Oct 2018

Detecting DGA domains with recurrent neural networks and side information
Ryan R. Curtin, Andrew B. Gardner, Slawomir Grzonkowski, A. Kleymenov, Alejandro Mosquera · AAML · 04 Oct 2018

WAIC, but Why? Generative Ensembles for Robust Anomaly Detection
Hyun-Jae Choi, Eric Jang, Alexander A. Alemi · OODD · 02 Oct 2018

Adversarial Examples - A Complete Characterisation of the Phenomenon
A. Serban, E. Poll, Joost Visser · SILM, AAML · 02 Oct 2018

Large batch size training of neural networks with adversarial training and second-order information
Z. Yao, A. Gholami, Daiyaan Arfeen, Richard Liaw, Joseph E. Gonzalez, Kurt Keutzer, Michael W. Mahoney · ODL · 02 Oct 2018