Explaining Anomalies Detected by Autoencoders Using SHAP
Liat Antwarg, Ronnie Mindlin Miller, Bracha Shapira, Lior Rokach
arXiv:1903.02407, 6 March 2019 [FAtt, TDI]
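
For context on the method the papers below build on: the cited work uses SHAP to attribute anomalies flagged by an autoencoder to the input features that caused them. The snippet that follows is a minimal, hypothetical sketch of that idea, not the authors' implementation: it uses the open-source shap package with an MLPRegressor as a stand-in autoencoder, and it explains the aggregate mean-squared reconstruction error per sample, a simplification of the paper's procedure (which applies SHAP to the individually highest-error reconstructed features). The toy data, model size, and parameter values are illustrative assumptions only.

import numpy as np
import shap                                        # pip install shap
from sklearn.neural_network import MLPRegressor
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(0)
X_train = rng.normal(size=(500, 6))                # toy "normal" records (assumed data)
X_anom = X_train[:5].copy()
X_anom[:, 2] += 6.0                                # inject anomalies into feature 2

scaler = StandardScaler().fit(X_train)
Xn, Xa = scaler.transform(X_train), scaler.transform(X_anom)

# Stand-in autoencoder: an MLP trained to reconstruct its own (scaled) input.
ae = MLPRegressor(hidden_layer_sizes=(3,), max_iter=2000, random_state=0)
ae.fit(Xn, Xn)

def anomaly_score(x):
    # Mean squared reconstruction error per sample, used as the anomaly score.
    return np.mean((ae.predict(x) - x) ** 2, axis=1)

# KernelSHAP attributes each flagged sample's score to its input features,
# using a subsample of normal points as the background distribution.
background = shap.sample(Xn, 50)
explainer = shap.KernelExplainer(anomaly_score, background)
shap_values = explainer.shap_values(Xa, nsamples=200)   # shape (5, 6)

# The perturbed feature (index 2) should typically receive the largest attribution.
print(np.argmax(np.abs(shap_values), axis=1))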

Papers citing "Explaining Anomalies Detected by Autoencoders Using SHAP" (31 papers shown)

• Can I trust my anomaly detection system? A case study based on explainable AI. Muhammad Rashid, E. Amparore, Enrico Ferrari, Damiano Verda. 29 Jul 2024.
• Approximating the Core via Iterative Coalition Sampling. I. Gemp, Marc Lanctot, Luke Marris, Yiran Mao, Edgar A. Duénez-Guzmán, ..., Michael Kaisers, Daniel Hennes, Kalesha Bullard, Kate Larson, Yoram Bachrach. 06 Feb 2024.
• Analyzing Key Users' behavior trends in Volunteer-Based Networks. Nofar Piterman, Tamar Makov, Michael Fire. 04 Oct 2023.
• Using Kernel SHAP XAI Method to optimize the Network Anomaly Detection Model. Khushnaseeb Roshan, Aasim Zafar. 31 Jul 2023.
• Detection of Sensor-To-Sensor Variations using Explainable AI. Sarah Seifi, Sebastian A. Schober, Cecilia Carbonelli, Lorenzo Servadei, Robert Wille. 19 Jun 2023.
• Unlocking Layer-wise Relevance Propagation for Autoencoders. Kenyu Kobayashi, Renata Khasanova, Arno Schneuwly, Felix Schmidt, Matteo Casserini. 21 Mar 2023. [FAtt]
• Interpretable Ensembles of Hyper-Rectangles as Base Models. A. Konstantinov, Lev V. Utkin. 15 Mar 2023.
• Explainable Artificial Intelligence and Cybersecurity: A Systematic Literature Review. C. Mendes, T. N. Rios. 27 Feb 2023.
• A Survey on Explainable Anomaly Detection. Zhong Li, Yuxuan Zhu, M. Leeuwen. 13 Oct 2022.
• Fine-grained Anomaly Detection in Sequential Data via Counterfactual Explanations. He Cheng, Depeng Xu, Shuhan Yuan, Xintao Wu. 09 Oct 2022. [AI4TS]
• Explaining Anomalies using Denoising Autoencoders for Financial Tabular Data. Timur Sattarov, Dayananda Herurkar, Jörn Hees. 21 Sep 2022.
• RESHAPE: Explaining Accounting Anomalies in Financial Statement Audits by enhancing SHapley Additive exPlanations. Ricardo Müller, Marco Schreyer, Timur Sattarov, Damian Borth. 19 Sep 2022. [AAML, MLAU]
• Explanation Method for Anomaly Detection on Mixed Numerical and Categorical Spaces. Iñigo López-Riobóo Botana, Carlos Eiras-Franco, Julio César Hernández Castro, Amparo Alonso-Betanzos. 09 Sep 2022.
• A general-purpose method for applying Explainable AI for Anomaly Detection. John Sipple, Abdou Youssef. 23 Jul 2022.
• Explainable Intrusion Detection Systems (X-IDS): A Survey of Current Methods, Challenges, and Opportunities. Subash Neupane, Jesse Ables, William Anderson, Sudip Mittal, Shahram Rahimi, I. Banicescu, Maria Seale. 13 Jul 2022. [AAML]
• Towards Responsible AI for Financial Transactions. Charl Maree, Jan Erik Modal, C. Omlin. 06 Jun 2022. [AAML]
• PIXAL: Anomaly Reasoning with Visual Analytics. Brian Montambault, C. Brumar, M. Behrisch, Remco Chang. 23 May 2022.
• Trustworthy Anomaly Detection: A Survey. Shuhan Yuan, Xintao Wu. 15 Feb 2022. [FaML]
• Utilizing XAI technique to improve autoencoder based model for computer network anomaly detection with shapley additive explanation(SHAP). Khushnaseeb Roshan, Aasim Zafar. 14 Dec 2021. [AAML]
• Coalitional Bayesian Autoencoders -- Towards explainable unsupervised deep learning. Bang Xiang Yong, Alexandra Brintrup. 19 Oct 2021.
• DeepAID: Interpreting and Improving Deep Learning-based Anomaly Detection in Security Applications. Dongqi Han, Zhiliang Wang, Wenqi Chen, Ying Zhong, Su Wang, Han Zhang, Jiahai Yang, Xingang Shi, Xia Yin. 23 Sep 2021. [AAML]
• An Imprecise SHAP as a Tool for Explaining the Class Probability Distributions under Limited Training Data. Lev V. Utkin, A. Konstantinov, Kirill Vishniakov. 16 Jun 2021. [FAtt]
• Explainable Machine Learning for Fraud Detection. I. Psychoula, A. Gutmann, Pradip Mainali, Sharon H. Lee, Paul Dunphy, F. Petitcolas. 13 May 2021. [FaML]
• Interpretation of multi-label classification models using shapley values. Shikun Chen. 21 Apr 2021. [FAtt, TDI]
• A new interpretable unsupervised anomaly detection method based on residual explanation. David F. N. Oliveira, L. Vismari, A. M. Nascimento, J. R. de Almeida, P. Cugnasca, J. Camargo, L. Almeida, Rafael Gripp, Marcelo M. Neves. 14 Mar 2021. [AAML]
• Ensembles of Random SHAPs. Lev V. Utkin, A. Konstantinov. 04 Mar 2021. [FAtt]
• Does the dataset meet your expectations? Explaining sample representation in image data. Dhasarathy Parthasarathy, Anton Johansson. 06 Dec 2020.
• On the Nature and Types of Anomalies: A Review of Deviations in Data. Ralph Foorthuis. 30 Jul 2020.
• Opportunities and Challenges in Explainable Artificial Intelligence (XAI): A Survey. Arun Das, P. Rad. 16 Jun 2020. [XAI]
• Shapley Values of Reconstruction Errors of PCA for Explaining Anomaly Detection. Naoya Takeishi. 08 Sep 2019. [FAtt]
• VizADS-B: Analyzing Sequences of ADS-B Images Using Explainable Convolutional LSTM Encoder-Decoder to Detect Cyber Attacks. Sefi Akerman, Edan Habler, A. Shabtai. 19 Jun 2019.