ResearchTrend.AI

A Survey Of Methods For Explaining Black Box Models
6 February 2018
Riccardo Guidotti, A. Monreale, Salvatore Ruggieri, Franco Turini, D. Pedreschi, F. Giannotti
XAI
arXiv: 1802.01933

Papers citing "A Survey Of Methods For Explaining Black Box Models"

50 / 1,104 papers shown
Explainable Bayesian Optimization
Tanmay Chakraborty, Christin Seifert, Christian Wirth
129 · 6 · 0 · 24 Jan 2024
Facing the Elephant in the Room: Visual Prompt Tuning or Full Finetuning?
Cheng Han, Qifan Wang, Yiming Cui, Wenguan Wang, Lifu Huang, Siyuan Qi, Dongfang Liu
VLM · 152 · 22 · 0 · 23 Jan 2024
The twin peaks of learning neural networks
Elizaveta Demyanenko, Christoph Feinauer, Enrico M. Malatesta, Luca Saglietti
62 · 0 · 0 · 23 Jan 2024
Unveiling the Human-like Similarities of Automatic Facial Expression Recognition: An Empirical Exploration through Explainable AI
F. X. Gaya-Morey, S. Ramis-Guarinos, Cristina Manresa-Yee, Jose Maria Buades Rubio
CVBM · 59 · 3 · 0 · 22 Jan 2024
Eclectic Rule Extraction for Explainability of Deep Neural Network based Intrusion Detection Systems
Jesse Ables, Nathaniel Childers, William Anderson, Sudip Mittal, Shahram Rahimi, I. Banicescu, Maria Seale
AAML · 46 · 0 · 0 · 18 Jan 2024
Deep spatial context: when attention-based models meet spatial regression
Paulina Tomaszewska, Elżbieta Sienkiewicz, Mai P. Hoang, Przemysław Biecek
48 · 1 · 0 · 18 Jan 2024
Even-if Explanations: Formal Foundations, Priorities and Complexity
Gianvincenzo Alfano, S. Greco, Domenico Mandaglio, Francesco Parisi, Reza Shahbazian, I. Trubitsyna
82 · 2 · 0 · 17 Jan 2024
Inductive Models for Artificial Intelligence Systems are Insufficient without Good Explanations
Udesh Habaraduwa
28 · 0 · 0 · 17 Jan 2024
MICA: Towards Explainable Skin Lesion Diagnosis via Multi-Level Image-Concept Alignment
Yequan Bie, Luyang Luo, Hao Chen
79 · 15 · 0 · 16 Jan 2024
Interpretable Concept Bottlenecks to Align Reinforcement Learning Agents
Quentin Delfosse, Sebastian Sztwiertnia, M. Rothermel, Wolfgang Stammer, Kristian Kersting
122 · 20 · 0 · 11 Jan 2024
Towards Explainable Artificial Intelligence (XAI): A Data Mining Perspective
Haoyi Xiong, Xuhong Li, Xiaofei Zhang, Jiamin Chen, Xinhao Sun, Yuchen Li, Zeyi Sun, Jundong Li
XAI · 136 · 9 · 0 · 09 Jan 2024
Towards Directive Explanations: Crafting Explainable AI Systems for Actionable Human-AI Interactions
Aditya Bhattacharya
69 · 9 · 0 · 29 Dec 2023
SoK: Taming the Triangle -- On the Interplays between Fairness, Interpretability and Privacy in Machine Learning
Julien Ferry, Ulrich Aïvodji, Sébastien Gambs, Marie-José Huguet, Mohamed Siala
FaML · 67 · 5 · 0 · 22 Dec 2023
Concept-based Explainable Artificial Intelligence: A Survey
Eleonora Poeta, Gabriele Ciravegna, Eliana Pastor, Tania Cerquitelli, Elena Baralis
LRM · XAI · 104 · 56 · 0 · 20 Dec 2023
On Early Detection of Hallucinations in Factual Question Answering
Ben Snyder, Marius Moisescu, Muhammad Bilal Zafar
HILM · 123 · 28 · 0 · 19 Dec 2023
Robust Stochastic Graph Generator for Counterfactual Explanations
Mario Alfonso Prado-Romero, Bardh Prenkaj, Giovanni Stilo
CML · 76 · 4 · 0 · 18 Dec 2023
Perceptual Musical Features for Interpretable Audio Tagging
Vassilis Lyberatos, Spyridon Kantarelis, Edmund Dervakos, Giorgos Stamou
49 · 7 · 0 · 18 Dec 2023
Entropy Causal Graphs for Multivariate Time Series Anomaly Detection
F. Febrinanto, Kristen Moore, Chandra Thapa, Mujie Liu, Vidya Saikrishna, Jiangang Ma, Xiwei Xu
CML · 75 · 3 · 0 · 15 Dec 2023
Evaluative Item-Contrastive Explanations in Rankings
Alessandro Castelnovo, Riccardo Crupi, Nicolo Mombelli, Gabriele Nanino, D. Regoli
XAI · ELM · 63 · 2 · 0 · 14 Dec 2023
Accelerating the Global Aggregation of Local Explanations
Alon Mor, Yonatan Belinkov, B. Kimelfeld
FAtt · 65 · 4 · 0 · 13 Dec 2023
Clash of the Explainers: Argumentation for Context-Appropriate Explanations
Leila Methnani, Virginia Dignum, Andreas Theodorou
23 · 0 · 0 · 12 Dec 2023
SurvBeNIM: The Beran-Based Neural Importance Model for Explaining the Survival Models
Lev V. Utkin, Danila Eremenko, A. Konstantinov
67 · 0 · 0 · 11 Dec 2023
Promoting Counterfactual Robustness through Diversity
Francesco Leofante, Nico Potyka
28 · 8 · 0 · 11 Dec 2023
FM-G-CAM: A Holistic Approach for Explainable AI in Computer Vision
Ravidu Suien Rammuni Silva, Jordan J. Bird
FAtt · 53 · 1 · 0 · 10 Dec 2023
SoK: Unintended Interactions among Machine Learning Defenses and Risks
Vasisht Duddu, S. Szyller, Nadarajah Asokan
AAML · 159 · 2 · 0 · 07 Dec 2023
Trust, distrust, and appropriate reliance in (X)AI: a survey of empirical evaluation of user trust
Roel W. Visser, Tobias M. Peters, Ingrid Scharlau, Barbara Hammer
41 · 7 · 0 · 04 Dec 2023
Interpretable Knowledge Tracing via Response Influence-based Counterfactual Reasoning
Jiajun Cui, Minghe Yu, Bo Jiang, Aimin Zhou, Jianyong Wang, Wei Zhang
77 · 4 · 0 · 01 Dec 2023
Image segmentation with traveling waves in an exactly solvable recurrent neural network
L. Liboni, Roberto C. Budzinski, Alexandra N. Busch, Sindy Löwe, Thomas Anderson Keller, Max Welling, L. Muller
53 · 11 · 0 · 28 Nov 2023
An exact mathematical description of computation with transient spatiotemporal dynamics in a complex-valued neural network
Roberto C. Budzinski, Alexandra N. Busch, Samuel Mestern, Erwan Martin, L. Liboni, F. Pasini, Ján Mináč, Todd Coleman, Wataru Inoue, L. Muller
53 · 4 · 0 · 28 Nov 2023
Machine Learning For An Explainable Cost Prediction of Medical Insurance
U. Orji, Elochukwu A. Ukwandu
58 · 35 · 0 · 23 Nov 2023
On the Relationship Between Interpretability and Explainability in Machine Learning
Benjamin Leblanc, Pascal Germain
FaML · 90 · 0 · 0 · 20 Nov 2023
Designing Interpretable ML System to Enhance Trust in Healthcare: A Systematic Review to Proposed Responsible Clinician-AI-Collaboration Framework
Elham Nasarian, R. Alizadehsani, U. Acharya, Kwok-Leung Tsui
71 · 57 · 0 · 18 Nov 2023
GAIA: Delving into Gradient-based Attribution Abnormality for Out-of-distribution Detection
Jinggang Chen, Junjie Li, Xiaoyang Qu, Jianzong Wang, Jiguang Wan, Jing Xiao
OODD · 67 · 10 · 0 · 16 Nov 2023
Model Agnostic Explainable Selective Regression via Uncertainty Estimation
Andrea Pugnana, Carlos Mougan, Dan Saattrup Nielsen
78 · 0 · 0 · 15 Nov 2023
Exploring Variational Auto-Encoder Architectures, Configurations, and Datasets for Generative Music Explainable AI
Nick Bryan-Kinns, Bingyuan Zhang, Songyan Zhao, Berker Banar
DRL · MGen · 102 · 13 · 0 · 14 Nov 2023
Feedforward neural networks as statistical models: Improving interpretability through uncertainty quantification
Andrew McInerney, Kevin Burke
AI4CE · 18 · 0 · 0 · 14 Nov 2023
MetaSymNet: A Dynamic Symbolic Regression Network Capable of Evolving into Arbitrary Formulations
Yanjie Li, Weijun Li, Lina Yu, Min Wu, Jinyi Liu, Wenqiang Li, Meilan Hao, Shu Wei, Yusong Deng
84 · 4 · 0 · 13 Nov 2023
Deep Natural Language Feature Learning for Interpretable Prediction
Felipe Urrutia, Cristian Buc, Valentin Barriere
75 · 2 · 0 · 09 Nov 2023
Does Explainable AI Have Moral Value?
Joshua L.M. Brand, Luca Nannini
XAI · 70 · 0 · 0 · 05 Nov 2023
Feature Attribution Explanations for Spiking Neural Networks
Elisa Nguyen, Meike Nauta, G. Englebienne, Christin Seifert
FAtt · AAML · LRM · 38 · 0 · 0 · 02 Nov 2023
Self-Influence Guided Data Reweighting for Language Model Pre-training
Megh Thakkar, Tolga Bolukbasi, Sriram Ganapathy, Shikhar Vashishth, Sarath Chandar, Partha P. Talukdar
MILM · 109 · 26 · 0 · 02 Nov 2023
Notion of Explainable Artificial Intelligence -- An Empirical Investigation from A Users Perspective
A. Haque, A. Najmul Islam, Patrick Mikalef
57 · 1 · 0 · 01 Nov 2023
Learning impartial policies for sequential counterfactual explanations using Deep Reinforcement Learning
E. Panagiotou, Eirini Ntoutsi
CML · OffRL · BDL · 68 · 0 · 0 · 01 Nov 2023
Exploring Practitioner Perspectives On Training Data Attribution Explanations
Elisa Nguyen, Evgenii Kortukov, Jean Y. Song, Seong Joon Oh
TDI · 560 · 1 · 0 · 31 Oct 2023
Hidden Conflicts in Neural Networks and Their Implications for Explainability
Adam Dejl, Hamed Ayoobi, Matthew Williams, Francesca Toni
FAtt · BDL · 133 · 3 · 0 · 31 Oct 2023
Explainable Artificial Intelligence (XAI) 2.0: A Manifesto of Open Challenges and Interdisciplinary Research Directions
Luca Longo, Mario Brcic, Federico Cabitza, Jaesik Choi, Roberto Confalonieri, ..., Andrés Páez, Wojciech Samek, Johannes Schneider, Timo Speith, Simone Stumpf
150 · 225 · 0 · 30 Oct 2023
How Well Do Feature-Additive Explainers Explain Feature-Additive Predictors?
Zachariah Carmichael, Walter J. Scheirer
FAtt · 68 · 4 · 0 · 27 Oct 2023
Using Slisemap to interpret physical data
Lauri Seppäläinen, Anton Björklund, V. Besel, Kai Puolamäki
67 · 1 · 0 · 24 Oct 2023
XTSC-Bench: Quantitative Benchmarking for Explainers on Time Series Classification
Jacqueline Höllig, Steffen Thoma, Florian Grimm
AI4TS · 60 · 1 · 0 · 23 Oct 2023
Does Your Model Think Like an Engineer? Explainable AI for Bearing Fault Detection with Deep Learning
Thomas Decker, Michael Lebacher, Volker Tresp
29 · 13 · 0 · 19 Oct 2023