ResearchTrend.AI

© 2025 ResearchTrend.AI, All rights reserved.

SoK: Explainable Machine Learning for Computer Security Applications

22 August 2022
A. Nadeem, D. Vos, Clinton Cao, Luca Pajola, Simon Dieck, Robert Baumgartner, S. Verwer
arXiv:2208.10605

Papers citing "SoK: Explainable Machine Learning for Computer Security Applications"

Showing 50 of 53 citing papers.
XG-NID: Dual-Modality Network Intrusion Detection using a Heterogeneous Graph Neural Network and Large Language Model
  Yasir Ali Farrukh, S. Wali, I. Khan, Nathaniel D. Bastian. 27 Aug 2024.

A Systematic Literature Review on Explainability for Machine/Deep Learning-based Software Engineering Research
  Sicong Cao, Xiaobing Sun, Ratnadira Widyasari, David Lo, Xiaoxue Wu, ..., Jiale Zhang, Bin Li, Wei Liu, Di Wu, Yixin Chen. 26 Jan 2024.

Learning State Machines to Monitor and Detect Anomalies on a Kubernetes Cluster
  Clinton Cao, Agathe Blaise, S. Verwer, Filippo Rebecchi. 28 Jun 2022.

The Role of Machine Learning in Cybersecurity
  Giovanni Apruzzese, Pavel Laskov, Edgardo Montes de Oca, Wissam Mallouli, Luis Brdalo Rapa, A. Grammatopoulos, Fabio Di Franco. 20 Jun 2022.

Attribution-based Explanations that Provide Recourse Cannot be Robust
  H. Fokkema, R. D. Heide, T. Erven [FAtt]. 31 May 2022.

The Disagreement Problem in Explainable Machine Learning: A Practitioner's Perspective
  Satyapriya Krishna, Tessa Han, Alex Gu, Steven Wu, S. Jabbari, Himabindu Lakkaraju. 03 Feb 2022.

Learning to be adversarially robust and differentially private
  Jamie Hayes, Borja Balle, M. P. Kumar [FedML]. 06 Jan 2022.

DeepAID: Interpreting and Improving Deep Learning-based Anomaly Detection in Security Applications
  Dongqi Han, Zhiliang Wang, Wenqi Chen, Ying Zhong, Su Wang, Han Zhang, Jiahai Yang, Xingang Shi, Xia Yin [AAML]. 23 Sep 2021.

FakeWake: Understanding and Mitigating Fake Wake-up Words of Voice Assistants
  Yanjiao Chen, Yijie Bai, Richard Mitev, Kaibo Wang, A. Sadeghi, Wenyuan Xu [AAML]. 21 Sep 2021.

Research trends, challenges, and emerging topics of digital forensics: A review of reviews
  Fran Casino, Thomas K. Dasaklis, G. Spathoulas, M. Anagnostopoulos, Amrita Ghosal, István Böröcz, A. Solanas, Mauro Conti, Constantinos Patsakis. 10 Aug 2021.

On the Importance of Domain-specific Explanations in AI-based Cybersecurity Systems (Technical Report)
  José Paredes, J. C. Teze, Gerardo Simari, Maria Vanina Martinez. 02 Aug 2021.

A Survey on Data-driven Software Vulnerability Assessment and Prioritization
  T. H. Le, Huaming Chen, Muhammad Ali Babar. 18 Jul 2021.

Vulnerability Detection with Fine-grained Interpretations
  Yi Li, Shaohua Wang, Tien N Nguyen [AAML]. 19 Jun 2021.

Exploiting Explanations for Model Inversion Attacks
  Xu Zhao, Wencan Zhang, Xiao Xiao, Brian Y. Lim [MIACV]. 26 Apr 2021.

To Trust or Not to Trust a Regressor: Estimating and Explaining Trustworthiness of Regression Predictions
  K. D. Bie, Ana Lucic, H. Haned [FAtt]. 14 Apr 2021.

Explainability-based Backdoor Attacks Against Graph Neural Networks
  Jing Xu, Minhui Xue, S. Picek. 08 Apr 2021.

Efficient Training of Robust Decision Trees Against Adversarial Examples
  D. Vos, S. Verwer [AAML]. 18 Dec 2020.

Towards falsifiable interpretability research
  Matthew L. Leavitt, Ari S. Morcos [AAML, AI4CE]. 22 Oct 2020.

What Do You See? Evaluation of Explainable Artificial Intelligence (XAI) Interpretability through Neural Backdoors
  Yi-Shan Lin, Wen-Chuan Lee, Z. Berkay Celik [XAI]. 22 Sep 2020.

Why an Android App is Classified as Malware? Towards Malware Classification Interpretation
  Bozhi Wu, Sen Chen, Cuiyun Gao, Lingling Fan, Yang Liu, W. Wen, Michael R. Lyu. 24 Apr 2020.

Towards Quantification of Explainability in Explainable Artificial Intelligence Methods
  Sheikh Rabiul Islam, W. Eberle, S. Ghafoor [XAI]. 22 Nov 2019.

Towards Self-Explainable Cyber-Physical Systems
  Mathias Blumreiter, Joel Greenyer, Javier Chiyah-Garcia, V. Klös, Maike Schwammberger, C. Sommer, Andreas Vogelsang, A. Wortmann. 13 Aug 2019.

The Price of Interpretability
  Dimitris Bertsimas, A. Delarue, Patrick Jaillet, Sébastien Martin. 08 Jul 2019.

Treant: Training Evasion-Aware Decision Trees
  Stefano Calzavara, Claudio Lucchese, Gabriele Tolomei, S. Abebe, S. Orlando [AAML]. 02 Jul 2019.

Explanations can be manipulated and geometry is to blame
  Ann-Kathrin Dombrowski, Maximilian Alber, Christopher J. Anders, M. Ackermann, K. Müller, Pan Kessel [AAML, FAtt]. 19 Jun 2019.

VizADS-B: Analyzing Sequences of ADS-B Images Using Explainable Convolutional LSTM Encoder-Decoder to Detect Cyber Attacks
  Sefi Akerman, Edan Habler, A. Shabtai. 19 Jun 2019.

Proposed Guidelines for the Responsible Use of Explainable Machine Learning
  Patrick Hall, Navdeep Gill, N. Schmidt [SILM, XAI, FaML]. 08 Jun 2019.

Evaluating Explanation Methods for Deep Learning in Security
  Alexander Warnecke, Dan Arp, Christian Wressnegger, Konrad Rieck [XAI, AAML, FAtt]. 05 Jun 2019.

Explainable Machine Learning for Scientific Insights and Discoveries
  R. Roscher, B. Bohn, Marco F. Duarte, Jochen Garcke [XAI]. 21 May 2019.

Robust Decision Trees Against Adversarial Examples
  Hongge Chen, Huan Zhang, Duane S. Boning, Cho-Jui Hsieh [AAML]. 27 Feb 2019.

Fairwashing: the risk of rationalization
  Ulrich Aïvodji, Hiromi Arai, O. Fortineau, Sébastien Gambs, Satoshi Hara, Alain Tapp [FaML]. 28 Jan 2019.

Explaining Vulnerabilities of Deep Learning to Adversarial Malware Binaries
  Christian Scano, Battista Biggio, Giovanni Lagorio, Fabio Roli, A. Armando [AAML]. 11 Jan 2019.

Interpretable Deep Learning under Fire
  Xinyang Zhang, Ningfei Wang, Hua Shen, S. Ji, Xiapu Luo, Ting Wang [AAML, AI4CE]. 03 Dec 2018.

Learning Finite State Representations of Recurrent Policy Networks
  Anurag Koul, S. Greydanus, Alan Fern. 29 Nov 2018.

An Adversarial Approach for Explainable AI in Intrusion Detection Systems
  Daniel L. Marino, Chathurika S. Wickramasinghe, Milos Manic [AAML]. 28 Nov 2018.

Sanity Checks for Saliency Maps
  Julius Adebayo, Justin Gilmer, M. Muelly, Ian Goodfellow, Moritz Hardt, Been Kim [FAtt, AAML, XAI]. 08 Oct 2018.

Verification for Machine Learning, Autonomy, and Neural Networks Survey
  Weiming Xiang, Patrick Musau, A. Wild, Diego Manzanas Lopez, Nathaniel P. Hamilton, Xiaodong Yang, Joel A. Rosenfeld, Taylor T. Johnson. 03 Oct 2018.

Automated Vulnerability Detection in Source Code Using Deep Representation Learning
  Rebecca L. Russell, Louis Y. Kim, Lei H. Hamilton, Tomo Lazovich, Jacob A. Harer, Onur Ozdemir, Paul M. Ellingwood, M. McConley. 11 Jul 2018.

Explainable Security
  Luca Vigano, Daniele Magazzeni [SILM]. 11 Jul 2018.

On the Robustness of Interpretability Methods
  David Alvarez-Melis, Tommi Jaakkola. 21 Jun 2018.

Recurrent Neural Network Attention Mechanisms for Interpretable System Log Anomaly Detection
  Andy Brown, Aaron Tuor, Brian Hutchinson, Nicole Nichols. 13 Mar 2018.

Wild Patterns: Ten Years After the Rise of Adversarial Machine Learning
  Battista Biggio, Fabio Roli [AAML]. 08 Dec 2017.

Extracting Automata from Recurrent Neural Networks Using Queries and Counterexamples
  Gail Weiss, Yoav Goldberg, Eran Yahav. 27 Nov 2017.

Improving the Adversarial Robustness and Interpretability of Deep Neural Networks by Regularizing their Input Gradients
  A. Ross, Finale Doshi-Velez [AAML]. 26 Nov 2017.

Interpretation of Neural Networks is Fragile
  Amirata Ghorbani, Abubakar Abid, James Zou [FAtt, AAML]. 29 Oct 2017.

Safe Reinforcement Learning via Shielding
  Mohammed Alshiekh, Roderick Bloem, Rüdiger Ehlers, Bettina Könighofer, S. Niekum, Ufuk Topcu. 29 Aug 2017.

Evasion Attacks against Machine Learning at Test Time
  Battista Biggio, Igino Corona, Davide Maiorca, B. Nelson, Nedim Srndic, Pavel Laskov, Giorgio Giacinto, Fabio Roli [AAML]. 21 Aug 2017.

Adversarial-Playground: A Visualization Suite Showing How Adversarial Examples Fool Deep Learning
  Andrew P. Norton, Yanjun Qi [AAML]. 01 Aug 2017.

DeltaPhish: Detecting Phishing Webpages in Compromised Websites
  Igino Corona, Battista Biggio, M. Contini, Luca Piras, Roberto Corda, Mauro Mereu, Guido Mureddu, Andrea Valenza, Fabio Roli. 02 Jul 2017.

Explanation in Artificial Intelligence: Insights from the Social Sciences
  Tim Miller [XAI]. 22 Jun 2017.