
Interpretable Deep Learning under Fire (arXiv:1812.00891)

3 December 2018
Xinyang Zhang
Ningfei Wang
Hua Shen
S. Ji
Xiapu Luo
Ting Wang
    AAML
    AI4CE

Papers citing "Interpretable Deep Learning under Fire"

35 papers shown
Explainable Graph Neural Networks Under Fire (10 Jun 2024)
Zhong Li, Simon Geisler, Yuhang Wang, Stephan Günnemann, M. Leeuwen
AAML

SlowPerception: Physical-World Latency Attack against Visual Perception in Autonomous Driving (09 Jun 2024)
Chen Ma, Ningfei Wang, Zhengyu Zhao, Qi Alfred Chen, Chao Shen

From Attack to Defense: Insights into Deep Learning Security Measures in Black-Box Settings (03 May 2024)
Firuz Juraev, Mohammed Abuhamad, Eric Chan-Tin, George K. Thiruvathukal, Tamer Abuhmed
AAML

Stability of Explainable Recommendation (03 May 2024)
Sairamvinay Vijayaraghavan, Prasant Mohapatra
AAML

Are Classification Robustness and Explanation Robustness Really Strongly Correlated? An Analysis Through Input Loss Landscape (09 Mar 2024)
Tiejin Chen, Wenwang Huang, Linsey Pang, Dongsheng Luo, Hua Wei
OOD

SoK: Unintended Interactions among Machine Learning Defenses and Risks (07 Dec 2023)
Vasisht Duddu, S. Szyller, Nadarajah Asokan
AAML

Single-Class Target-Specific Attack against Interpretable Deep Learning Systems (12 Jul 2023)
Eldor Abdukhamidov, Mohammed Abuhamad, George K. Thiruvathukal, Hyoungshick Kim, Tamer Abuhmed
AAML

Valid P-Value for Deep Learning-Driven Salient Region (06 Jan 2023)
Daiki Miwa, Vo Nguyen Le Duy, I. Takeuchi
FAtt, AAML

"Real Attackers Don't Compute Gradients": Bridging the Gap Between Adversarial ML Research and Practice (29 Dec 2022)
Giovanni Apruzzese, Hyrum S. Anderson, Savino Dambra, D. Freeman, Fabio Pierazzi, Kevin A. Roundy
AAML

Interpretations Cannot Be Trusted: Stealthy and Effective Adversarial Perturbations against Interpretable Deep Learning (29 Nov 2022)
Eldor Abdukhamidov, Mohammed Abuhamad, Simon S. Woo, Eric Chan-Tin, Tamer Abuhmed
AAML

On the Robustness of Explanations of Deep Neural Network Models: A Survey (09 Nov 2022)
Amlan Jyoti, Karthik Balaji Ganesh, Manoj Gayala, Nandita Lakshmi Tunuguntla, Sandesh Kamath, V. Balasubramanian
XAI, FAtt, AAML

SoK: Explainable Machine Learning for Computer Security Applications (22 Aug 2022)
A. Nadeem, D. Vos, Clinton Cao, Luca Pajola, Simon Dieck, Robert Baumgartner, S. Verwer

Efficiently Training Low-Curvature Neural Networks (14 Jun 2022)
Suraj Srinivas, Kyle Matoba, Himabindu Lakkaraju, F. Fleuret
AAML

Backdooring Explainable Machine Learning (20 Apr 2022)
Maximilian Noppel, Lukas Peter, Christian Wressnegger
AAML

Debiased-CAM to mitigate systematic error with faithful visual explanations of machine learning (30 Jan 2022)
Wencan Zhang, Mariella Dimiccoli, Brian Y. Lim
FAtt

Low-Rank Constraints for Fast Inference in Structured Models (08 Jan 2022)
Justin T. Chiu, Yuntian Deng, Alexander M. Rush
BDL

TnT Attacks! Universal Naturalistic Adversarial Patches Against Deep Neural Network Systems (19 Nov 2021)
Bao Gia Doan, Minhui Xue, Shiqing Ma, Ehsan Abbasnejad, Damith C. Ranasinghe
AAML

DeepAID: Interpreting and Improving Deep Learning-based Anomaly Detection in Security Applications (23 Sep 2021)
Dongqi Han, Zhiliang Wang, Wenqi Chen, Ying Zhong, Su Wang, Han Zhang, Jiahai Yang, Xingang Shi, Xia Yin
AAML

Jointly Attacking Graph Neural Network and its Explanations (07 Aug 2021)
Wenqi Fan, Wei Jin, Xiaorui Liu, Han Xu, Xianfeng Tang, Suhang Wang, Qing Li, Jiliang Tang, Jianping Wang, Charu C. Aggarwal
AAML

Trustworthy AI: A Computational Perspective (12 Jul 2021)
Haochen Liu, Yiqi Wang, Wenqi Fan, Xiaorui Liu, Yaxin Li, Shaili Jain, Yunhao Liu, Anil K. Jain, Jiliang Tang
FaML

Invisible for both Camera and LiDAR: Security of Multi-Sensor Fusion based Perception in Autonomous Driving Under Physical-World Attacks (17 Jun 2021)
Yulong Cao*, Ningfei Wang*, Chaowei Xiao, Dawei Yang, Jin Fang, Ruigang Yang, Qi Alfred Chen, Mingyan D. Liu, Bo-wen Li
AAML

TrojanZoo: Towards Unified, Holistic, and Practical Evaluation of Neural Backdoors (16 Dec 2020)
Ren Pang, Zheng-Wei Zhang, Xiangshan Gao, Zhaohan Xi, S. Ji, Peng Cheng, Xiapu Luo, Ting Wang
AAML

Debiased-CAM to mitigate image perturbations with faithful visual explanations of machine learning (10 Dec 2020)
Wencan Zhang, Mariella Dimiccoli, Brian Y. Lim
FAtt

Deep Interpretable Classification and Weakly-Supervised Segmentation of Histology Images via Max-Min Uncertainty (14 Nov 2020)
Soufiane Belharbi, Jérôme Rony, Jose Dolz, Ismail Ben Ayed, Luke McCaffrey, Eric Granger

What Do You See? Evaluation of Explainable Artificial Intelligence (XAI) Interpretability through Neural Backdoors (22 Sep 2020)
Yi-Shan Lin, Wen-Chuan Lee, Z. Berkay Celik
XAI

Model extraction from counterfactual explanations (03 Sep 2020)
Ulrich Aïvodji, Alexandre Bolot, Sébastien Gambs
MIACV, MLAU

Can We Trust Your Explanations? Sanity Checks for Interpreters in Android Malware Analysis (13 Aug 2020)
Ming Fan, Wenying Wei, Xiaofei Xie, Yang Liu, X. Guan, Ting Liu
FAtt, AAML

A simple defense against adversarial attacks on heatmap explanations (13 Jul 2020)
Laura Rieger, Lars Kai Hansen
FAtt, AAML

Proper Network Interpretability Helps Adversarial Robustness in Classification (26 Jun 2020)
Akhilan Boopathy, Sijia Liu, Gaoyuan Zhang, Cynthia Liu, Pin-Yu Chen, Shiyu Chang, Luca Daniel
AAML, FAtt

Adversarial Infidelity Learning for Model Interpretation (09 Jun 2020)
Jian Liang, Bing Bai, Yuren Cao, Kun Bai, Fei Wang
AAML

Deep Weakly-Supervised Learning Methods for Classification and Localization in Histology Images: A Survey (08 Sep 2019)
Jérôme Rony, Soufiane Belharbi, Jose Dolz, Ismail Ben Ayed, Luke McCaffrey, Eric Granger

Interpreting Adversarial Examples by Activation Promotion and Suppression (03 Apr 2019)
Kaidi Xu, Sijia Liu, Gaoyuan Zhang, Mengshu Sun, Pu Zhao, Quanfu Fan, Chuang Gan, X. Lin
AAML, FAtt

On the (In)fidelity and Sensitivity for Explanations (27 Jan 2019)
Chih-Kuan Yeh, Cheng-Yu Hsieh, A. Suggala, David I. Inouye, Pradeep Ravikumar
FAtt

Model-Agnostic Meta-Learning for Fast Adaptation of Deep Networks (09 Mar 2017)
Chelsea Finn, Pieter Abbeel, Sergey Levine
OOD

Adversarial Machine Learning at Scale (04 Nov 2016)
Alexey Kurakin, Ian Goodfellow, Samy Bengio
AAML