© 2025 ResearchTrend.AI, All rights reserved.
Foolbox: A Python toolbox to benchmark the robustness of machine learning models
arXiv:1707.04131 · 13 July 2017
Jonas Rauber
Wieland Brendel
Matthias Bethge
AAML
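Foolbox benchmarks models against gradient-based attacks such as the fast gradient sign method (FGSM). As a self-contained illustration of what such an attack computes — a sketch in plain NumPy on a toy linear softmax classifier, not Foolbox's own API (all names and values below are illustrative) — consider:

```python
import numpy as np

def softmax(z):
    # Numerically stable softmax over the logit vector.
    z = z - z.max()
    e = np.exp(z)
    return e / e.sum()

def fgsm(W, b, x, y, eps):
    """One FGSM step: move x by eps in the direction of the sign of
    the input gradient of the cross-entropy loss, then clip to [0, 1]."""
    p = softmax(W @ x + b)
    onehot = np.zeros_like(p)
    onehot[y] = 1.0
    grad_x = W.T @ (p - onehot)  # d(cross-entropy)/dx for a linear model
    return np.clip(x + eps * np.sign(grad_x), 0.0, 1.0)

rng = np.random.default_rng(0)
W = rng.normal(size=(3, 8))      # toy 3-class linear classifier
b = np.zeros(3)
x = rng.uniform(size=8)          # toy "image" with features in [0, 1]
y = int(np.argmax(W @ x + b))    # attack the model's own prediction

x_adv = fgsm(W, b, x, y, eps=0.3)
```

The perturbation stays inside the L-infinity budget eps by construction; Foolbox's value is in running many such attacks (and stronger, iterative ones) against real models behind a uniform interface.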

Papers citing "Foolbox: A Python toolbox to benchmark the robustness of machine learning models"

45 / 95 papers shown
When Explainability Meets Adversarial Learning: Detecting Adversarial Examples using SHAP Signatures
Gil Fidel
Ron Bitton
A. Shabtai
FAtt
GAN
21
119
0
08 Sep 2019
Minimally distorted Adversarial Examples with a Fast Adaptive Boundary Attack
Francesco Croce
Matthias Hein
AAML
56
480
0
03 Jul 2019
Accurate, reliable and fast robustness evaluation
Wieland Brendel
Jonas Rauber
Matthias Kümmerer
Ivan Ustyuzhaninov
Matthias Bethge
AAML
OOD
24
113
0
01 Jul 2019
Defending Against Adversarial Examples with K-Nearest Neighbor
Chawin Sitawarin
David Wagner
AAML
11
29
0
23 Jun 2019
Adversarial Attack Generation Empowered by Min-Max Optimization
Jingkang Wang
Tianyun Zhang
Sijia Liu
Pin-Yu Chen
Jiacen Xu
M. Fardad
Yangqiu Song
AAML
35
35
0
09 Jun 2019
Label Universal Targeted Attack
Naveed Akhtar
M. Jalwana
Bennamoun
Ajmal Mian
AAML
25
5
0
27 May 2019
Enhancing Adversarial Defense by k-Winners-Take-All
Chang Xiao
Peilin Zhong
Changxi Zheng
AAML
27
98
0
25 May 2019
Taking Care of The Discretization Problem: A Comprehensive Study of the Discretization Problem and A Black-Box Adversarial Attack in Discrete Integer Domain
Lei Bu
Yuchao Duan
Fu Song
Zhe Zhao
AAML
42
18
0
19 May 2019
On the Connection Between Adversarial Robustness and Saliency Map Interpretability
Christian Etmann
Sebastian Lunz
Peter Maass
Carola-Bibiane Schönlieb
AAML
FAtt
33
159
0
10 May 2019
Test Selection for Deep Learning Systems
Wei Ma
Mike Papadakis
Anestis Tsakmalis
Maxime Cordy
Yves Le Traon
OOD
23
92
0
30 Apr 2019
Adversarial Defense Through Network Profiling Based Path Extraction
Yuxian Qiu
Jingwen Leng
Cong Guo
Quan Chen
Chong Li
Minyi Guo
Yuhao Zhu
AAML
24
51
0
17 Apr 2019
Curls & Whey: Boosting Black-Box Adversarial Attacks
Yucheng Shi
Siyu Wang
Yahong Han
AAML
34
116
0
02 Apr 2019
Benchmarking Neural Network Robustness to Common Corruptions and Perturbations
Dan Hendrycks
Thomas G. Dietterich
OOD
VLM
12
3,386
0
28 Mar 2019
Scaling up the randomized gradient-free adversarial attack reveals overestimation of robustness using established attacks
Francesco Croce
Jonas Rauber
Matthias Hein
AAML
20
31
0
27 Mar 2019
A geometry-inspired decision-based attack
Yujia Liu
Seyed-Mohsen Moosavi-Dezfooli
P. Frossard
AAML
24
52
0
26 Mar 2019
The LogBarrier adversarial attack: making effective use of decision boundary information
Chris Finlay
Aram-Alexandre Pooladian
Adam M. Oberman
AAML
39
25
0
25 Mar 2019
A Kernelized Manifold Mapping to Diminish the Effect of Adversarial Perturbations
Saeid Asgari Taghanaki
Kumar Abhishek
Shekoofeh Azizi
Ghassan Hamarneh
AAML
36
40
0
03 Mar 2019
RED-Attack: Resource Efficient Decision based Attack for Machine Learning
Faiq Khalid
Hassan Ali
Muhammad Abdullah Hanif
Semeen Rehman
Rehan Ahmed
Mohamed Bennai
AAML
36
14
0
29 Jan 2019
Cross-Entropy Loss and Low-Rank Features Have Responsibility for Adversarial Examples
Kamil Nar
Orhan Ocal
S. Shankar Sastry
Kannan Ramchandran
AAML
27
54
0
24 Jan 2019
Image Super-Resolution as a Defense Against Adversarial Attacks
Aamir Mustafa
Salman H. Khan
Munawar Hayat
Jianbing Shen
Ling Shao
AAML
SupR
32
171
0
07 Jan 2019
Defending Against Universal Perturbations With Shared Adversarial Training
Chaithanya Kumar Mummadi
Thomas Brox
J. H. Metzen
AAML
18
60
0
10 Dec 2018
A randomized gradient-free attack on ReLU networks
Francesco Croce
Matthias Hein
AAML
37
21
0
28 Nov 2018
MixTrain: Scalable Training of Verifiably Robust Neural Networks
Yue Zhang
Yizheng Chen
Ahmed Abdou
Mohsen Guizani
AAML
27
23
0
06 Nov 2018
SSCNets: Robustifying DNNs using Secure Selective Convolutional Filters
Hassan Ali
Faiq Khalid
Hammad Tariq
Muhammad Abdullah Hanif
Semeen Rehman
Rehan Ahmed
Mohamed Bennai
AAML
52
14
0
04 Nov 2018
QuSecNets: Quantization-based Defense Mechanism for Securing Deep Neural Network against Adversarial Attacks
Faiq Khalid
Hassan Ali
Hammad Tariq
Muhammad Abdullah Hanif
Semeen Rehman
Rehan Ahmed
Mohamed Bennai
AAML
MQ
40
37
0
04 Nov 2018
Improved robustness to adversarial examples using Lipschitz regularization of the loss
Chris Finlay
Adam M. Oberman
B. Abbasi
32
34
0
01 Oct 2018
Effects of Degradations on Deep Neural Network Architectures
Prasun Roy
Subhankar Ghosh
Saumik Bhattacharya
Umapada Pal
30
133
0
26 Jul 2018
Vulnerability Analysis of Chest X-Ray Image Classification Against Adversarial Attacks
Saeid Asgari Taghanaki
A. Das
Ghassan Hamarneh
MedIm
43
52
0
09 Jul 2018
Benchmarking Neural Network Robustness to Common Corruptions and Surface Variations
Dan Hendrycks
Thomas G. Dietterich
OOD
27
197
0
04 Jul 2018
Non-Negative Networks Against Adversarial Attacks
William Fleshman
Edward Raff
Jared Sylvester
Steven Forsyth
Mark McLean
AAML
27
41
0
15 Jun 2018
Hierarchical interpretations for neural network predictions
Chandan Singh
W. James Murdoch
Bin Yu
33
145
0
14 Jun 2018
PeerNets: Exploiting Peer Wisdom Against Adversarial Attacks
Jan Svoboda
Jonathan Masci
Federico Monti
M. Bronstein
Leonidas Guibas
AAML
GNN
38
41
0
31 May 2018
Laplacian Networks: Bounding Indicator Function Smoothness for Neural Network Robustness
Carlos Lassance
Vincent Gripon
Antonio Ortega
AAML
29
16
0
24 May 2018
Cautious Deep Learning
Yotam Hechtlinger
Barnabás Póczós
Larry A. Wasserman
40
63
0
24 May 2018
Towards the first adversarially robust neural network model on MNIST
Lukas Schott
Jonas Rauber
Matthias Bethge
Wieland Brendel
AAML
OOD
27
368
0
23 May 2018
A Simple Cache Model for Image Recognition
Emin Orhan
VLM
41
30
0
21 May 2018
Generalizability vs. Robustness: Adversarial Examples for Medical Imaging
Magdalini Paschali
Sailesh Conjeti
Fernando Navarro
Nassir Navab
OOD
MedIm
AAML
27
91
0
23 Mar 2018
Defending against Adversarial Attack towards Deep Neural Networks via Collaborative Multi-task Training
Derui Wang
Chaoran Li
S. Wen
Surya Nepal
Yang Xiang
AAML
41
29
0
14 Mar 2018
Retrieval-Augmented Convolutional Neural Networks for Improved Robustness against Adversarial Examples
Jake Zhao
Kyunghyun Cho
AAML
49
20
0
26 Feb 2018
Unravelling Robustness of Deep Learning based Face Recognition Against Adversarial Attacks
Gaurav Goswami
Nalini Ratha
Akshay Agarwal
Richa Singh
Mayank Vatsa
AAML
35
165
0
22 Feb 2018
Generalizable Adversarial Examples Detection Based on Bi-model Decision Mismatch
João Monteiro
Isabela Albuquerque
Zahid Akhtar
T. Falk
AAML
46
29
0
21 Feb 2018
First-order Adversarial Vulnerability of Neural Networks and Input Dimension
Carl-Johann Simon-Gabriel
Yann Ollivier
Léon Bottou
Bernhard Schölkopf
David Lopez-Paz
AAML
32
48
0
05 Feb 2018
Improving Network Robustness against Adversarial Attacks with Compact Convolution
Rajeev Ranjan
S. Sankaranarayanan
Carlos D. Castillo
Rama Chellappa
AAML
24
14
0
03 Dec 2017
AOGNets: Compositional Grammatical Architectures for Deep Learning
Xilai Li
Xi Song
Tianfu Wu
37
25
0
15 Nov 2017
Attacking Binarized Neural Networks
A. Galloway
Graham W. Taylor
M. Moussa
MQ
AAML
14
104
0
01 Nov 2017