Distilling Robust and Non-Robust Features in Adversarial Examples by Information Bottleneck

6 April 2022 · Junho Kim, Byung-Kwan Lee, Yong Man Ro · AAML
ArXiv (abs) · PDF · HTML

Papers citing "Distilling Robust and Non-Robust Features in Adversarial Examples by Information Bottleneck"

26 / 26 papers shown. Each entry lists the title, followed by authors · topic tags · listing counts · publication date.
Robust Feature Learning for Multi-Index Models in High Dimensions
  Alireza Mousavi-Hosseini, Adel Javanmard, Murat A. Erdogdu · OOD, AAML · 126 / 1 / 0 · 21 Oct 2024
Adversarial Training Can Provably Improve Robustness: Theoretical Analysis of Feature Learning Process Under Structured Data
  Binghui Li, Yuanzhi Li · OOD · 68 / 3 / 0 · 11 Oct 2024
Transferable Perturbations of Deep Feature Distributions
  Nathan Inkawhich, Kevin J. Liang, Lawrence Carin, Yiran Chen · AAML · 61 / 87 / 0 · 27 Apr 2020
Reliable evaluation of adversarial robustness with an ensemble of diverse parameter-free attacks
  Francesco Croce, Matthias Hein · AAML · 227 / 1,858 / 0 · 03 Mar 2020
Restricting the Flow: Information Bottlenecks for Attribution
  Karl Schulz, Leon Sixt, Federico Tombari, Tim Landgraf · FAtt · 61 / 190 / 0 · 02 Jan 2020
Minimally distorted Adversarial Examples with a Fast Adaptive Boundary Attack
  Francesco Croce, Matthias Hein · AAML · 101 / 490 / 0 · 03 Jul 2019
IoT Network Security from the Perspective of Adversarial Deep Learning
  Y. Sagduyu, Yi Shi, T. Erpek · AAML · 46 / 78 / 0 · 31 May 2019
Adversarial Examples Are Not Bugs, They Are Features
  Andrew Ilyas, Shibani Santurkar, Dimitris Tsipras, Logan Engstrom, Brandon Tran, Aleksander Madry · SILM · 93 / 1,844 / 0 · 06 May 2019
Understanding Neural Networks via Feature Visualization: A survey
  Anh Nguyen, J. Yosinski, Jeff Clune · FAtt · 76 / 161 / 0 · 18 Apr 2019
On instabilities of deep learning in image reconstruction - Does AI come at a cost?
  Vegard Antun, F. Renna, C. Poon, Ben Adcock, A. Hansen · 59 / 605 / 0 · 14 Feb 2019
Excessive Invariance Causes Adversarial Vulnerability
  J. Jacobsen, Jens Behrmann, R. Zemel, Matthias Bethge · AAML · 71 / 166 / 0 · 01 Nov 2018
Are adversarial examples inevitable?
  Ali Shafahi, Wenjie Huang, Christoph Studer, Soheil Feizi, Tom Goldstein · SILM · 74 / 283 / 0 · 06 Sep 2018
Robustness May Be at Odds with Accuracy
  Dimitris Tsipras, Shibani Santurkar, Logan Engstrom, Alexander Turner, Aleksander Madry · AAML · 108 / 1,782 / 0 · 30 May 2018
Adversarially Robust Generalization Requires More Data
  Ludwig Schmidt, Shibani Santurkar, Dimitris Tsipras, Kunal Talwar, Aleksander Madry · OOD, AAML · 157 / 795 / 0 · 30 Apr 2018
Net2Vec: Quantifying and Explaining how Concepts are Encoded by Filters in Deep Neural Networks
  Ruth C. Fong, Andrea Vedaldi · FAtt · 77 / 264 / 0 · 10 Jan 2018
ZOO: Zeroth Order Optimization based Black-box Attacks to Deep Neural Networks without Training Substitute Models
  Pin-Yu Chen, Huan Zhang, Yash Sharma, Jinfeng Yi, Cho-Jui Hsieh · AAML · 85 / 1,885 / 0 · 14 Aug 2017
Towards Deep Learning Models Resistant to Adversarial Attacks
  Aleksander Madry, Aleksandar Makelov, Ludwig Schmidt, Dimitris Tsipras, Adrian Vladu · SILM, OOD · 317 / 12,131 / 0 · 19 Jun 2017
Network Dissection: Quantifying Interpretability of Deep Visual Representations
  David Bau, Bolei Zhou, A. Khosla, A. Oliva, Antonio Torralba · MILM, FAtt · 158 / 1,523 / 1 · 19 Apr 2017
Adversarial Machine Learning at Scale
  Alexey Kurakin, Ian Goodfellow, Samy Bengio · AAML · 472 / 3,148 / 0 · 04 Nov 2016
Towards Evaluating the Robustness of Neural Networks
  Nicholas Carlini, D. Wagner · OOD, AAML · 282 / 8,583 / 0 · 16 Aug 2016
Wide Residual Networks
  Sergey Zagoruyko, N. Komodakis · 353 / 8,000 / 0 · 23 May 2016
Practical Black-Box Attacks against Machine Learning
  Nicolas Papernot, Patrick McDaniel, Ian Goodfellow, S. Jha, Z. Berkay Celik, A. Swami · MLAU, AAML · 75 / 3,682 / 0 · 08 Feb 2016
Explaining and Harnessing Adversarial Examples
  Ian Goodfellow, Jonathon Shlens, Christian Szegedy · AAML, GAN · 282 / 19,121 / 0 · 20 Dec 2014
Understanding Deep Image Representations by Inverting Them
  Aravindh Mahendran, Andrea Vedaldi · FAtt · 131 / 1,965 / 0 · 26 Nov 2014
Intriguing properties of neural networks
  Christian Szegedy, Wojciech Zaremba, Ilya Sutskever, Joan Bruna, D. Erhan, Ian Goodfellow, Rob Fergus · AAML · 282 / 14,963 / 1 · 21 Dec 2013
Visualizing and Understanding Convolutional Networks
  Matthew D. Zeiler, Rob Fergus · FAtt, SSL · 595 / 15,904 / 0 · 12 Nov 2013