On quantitative aspects of model interpretability

15 July 2020
An-phi Nguyen, María Rodríguez Martínez
arXiv:2007.07584

Papers citing "On quantitative aspects of model interpretability"

25 of 25 citing papers shown

A constraints-based approach to fully interpretable neural networks for detecting learner behaviors
Juan D. Pinto, Luc Paquette
10 Apr 2025

Axiomatic Explainer Globalness via Optimal Transport
Davin Hill, Josh Bone, A. Masoomi, Max Torop, Jennifer Dy
13 Mar 2025

Navigating the Maze of Explainable AI: A Systematic Approach to Evaluating Methods and Metrics
Lukas Klein, Carsten T. Lüth, U. Schlegel, Till J. Bungert, Mennatallah El-Assady, Paul F. Jäger
Tags: XAI, ELM
03 Jan 2025

Reconciling Privacy and Explainability in High-Stakes: A Systematic Inquiry
Supriya Manna, Niladri Sett
30 Dec 2024

A Fresh Look at Sanity Checks for Saliency Maps
Anna Hedström, Leander Weber, Sebastian Lapuschkin, Marina M.-C. Höhne
Tags: FAtt, LRM
03 May 2024

Global Counterfactual Directions
Bartlomiej Sobieski, P. Biecek
Tags: DiffM
18 Apr 2024

Towards Evaluating Explanations of Vision Transformers for Medical Imaging
Piotr Komorowski, Hubert Baniecki, P. Biecek
Tags: MedIm
12 Apr 2023

Less is More: The Influence of Pruning on the Explainability of CNNs
David Weber, F. Merkle, Pascal Schöttle, Stephan Schlögl, Martin Nocker
Tags: FAtt
17 Feb 2023

The Meta-Evaluation Problem in Explainable AI: Identifying Reliable Estimators with MetaQuantus
Anna Hedström, P. Bommer, Kristoffer K. Wickstrom, Wojciech Samek, Sebastian Lapuschkin, Marina M.-C. Höhne
14 Feb 2023

What Makes a Good Explanation?: A Harmonized View of Properties of Explanations
Zixi Chen, Varshini Subhash, Marton Havasi, Weiwei Pan, Finale Doshi-Velez
Tags: XAI, FAtt
10 Nov 2022

RESHAPE: Explaining Accounting Anomalies in Financial Statement Audits by enhancing SHapley Additive exPlanations
Ricardo Müller, Marco Schreyer, Timur Sattarov, Damian Borth
Tags: AAML, MLAU
19 Sep 2022

Evaluating the Explainers: Black-Box Explainable Machine Learning for Student Success Prediction in MOOCs
Vinitra Swamy, Bahar Radmehr, Natasa Krco, Mirko Marras, Tanja Kaser
Tags: FAtt, ELM
01 Jul 2022

Interpretation Quality Score for Measuring the Quality of interpretability methods
Sean Xie, Soroush Vosoughi, Saeed Hassanpour
Tags: XAI
24 May 2022

Enriching Artificial Intelligence Explanations with Knowledge Fragments
Jože M. Rožanec, Elena Trajkova, I. Novalija, Patrik Zajec, K. Kenda, B. Fortuna, Dunja Mladenić
12 Apr 2022

Explainability in Process Outcome Prediction: Guidelines to Obtain Interpretable and Faithful Models
Alexander Stevens, Johannes De Smedt
Tags: XAI, FaML
30 Mar 2022

XAI in the context of Predictive Process Monitoring: Too much to Reveal
Ghada Elkhawaga, Mervat Abuelkheir, M. Reichert
16 Feb 2022

Quantus: An Explainable AI Toolkit for Responsible Evaluation of Neural Network Explanations and Beyond
Anna Hedström, Leander Weber, Dilyara Bareeva, Daniel G. Krakowczyk, Franz Motzkus, Wojciech Samek, Sebastian Lapuschkin, Marina M.-C. Höhne
Tags: XAI, ELM
14 Feb 2022

A Survey on Methods and Metrics for the Assessment of Explainability under the Proposed AI Act
Francesco Sovrano, Salvatore Sapienza, M. Palmirani, F. Vitali
21 Oct 2021

An Objective Metric for Explainable AI: How and Why to Estimate the Degree of Explainability
Francesco Sovrano, F. Vitali
11 Sep 2021

Synthetic Benchmarks for Scientific Research in Explainable Machine Learning
Yang Liu, Sujay Khandagale, Colin White, Willie Neiswanger
23 Jun 2021

A Framework for Evaluating Post Hoc Feature-Additive Explainers
Zachariah Carmichael, Walter J. Scheirer
Tags: FAtt
15 Jun 2021

Pitfalls of Explainable ML: An Industry Perspective
Sahil Verma, Aditya Lahiri, John P. Dickerson, Su-In Lee
Tags: XAI
14 Jun 2021

Quantifying Explainers of Graph Neural Networks in Computational Pathology
Guillaume Jaume, Pushpak Pati, Behzad Bozorgtabar, Antonio Foncubierta-Rodríguez, Florinda Feroce, A. Anniciello, T. Rau, Jean-Philippe Thiran, M. Gabrani, O. Goksel
Tags: FAtt
25 Nov 2020

Exemplary Natural Images Explain CNN Activations Better than State-of-the-Art Feature Visualization
Judy Borowski, Roland S. Zimmermann, Judith Schepers, Robert Geirhos, Thomas S. A. Wallis, Matthias Bethge, Wieland Brendel
Tags: FAtt
23 Oct 2020

Towards A Rigorous Science of Interpretable Machine Learning
Finale Doshi-Velez, Been Kim
Tags: XAI, FaML
28 Feb 2017