ResearchTrend.AI
arXiv:1606.05386 · Cited By
Model-Agnostic Interpretability of Machine Learning
Marco Tulio Ribeiro, Sameer Singh, Carlos Guestrin
FAtt · FaML · 16 June 2016
Papers citing "Model-Agnostic Interpretability of Machine Learning"

50 / 118 papers shown
RanDeS: Randomized Delta Superposition for Multi-Model Compression
Hangyu Zhou, Aaron Gokaslan, Volodymyr Kuleshov, Bharath Hariharan
MoMe · 16 May 2025
neuralGAM: An R Package for Fitting Generalized Additive Neural Networks
Ines Ortega-Fernandez, Marta Sestelo
13 May 2025
Integrating Explainable AI in Medical Devices: Technical, Clinical and Regulatory Insights and Recommendations
Dima Alattal, Asal Khoshravan Azar, P. Myles, Richard Branson, Hatim Abdulhussein, Allan Tucker
10 May 2025
Retrieval Augmented Generation Evaluation for Health Documents
Mario Ceresa, Lorenzo Bertolini, Valentin Comte, Nicholas Spadaro, Barbara Raffael, ..., Sergio Consoli, Amalia Muñoz Piñeiro, Alex Patak, Maddalena Querci, Tobias Wiesenthal
RALM · 3DV · 07 May 2025
Diffusion Attribution Score: Evaluating Training Data Influence in Diffusion Models
Jinxu Lin, Linwei Tao, Minjing Dong, Chang Xu
TDI · 24 Oct 2024
Time Can Invalidate Algorithmic Recourse
Giovanni De Toni, Stefano Teso, Bruno Lepri, Andrea Passerini
10 Oct 2024
Bridging Today and the Future of Humanity: AI Safety in 2024 and Beyond
Shanshan Han
09 Oct 2024
Explainable Artificial Intelligence: A Survey of Needs, Techniques, Applications, and Future Direction
Melkamu Mersha, Khang Lam, Joseph Wood, Ali AlShami, Jugal Kalita
XAI · AI4TS · 30 Aug 2024
A prototype-based model for set classification
Mohammad Mohammadi, Sreejita Ghosh
VLM · 25 Aug 2024
CHILLI: A data context-aware perturbation method for XAI
Saif Anwar, Nathan Griffiths, A. Bhalerao, T. Popham
10 Jul 2024
Evaluating Human Alignment and Model Faithfulness of LLM Rationale
Mohsen Fayyaz, Fan Yin, Jiao Sun, Nanyun Peng
28 Jun 2024
CONFINE: Conformal Prediction for Interpretable Neural Networks
Linhui Huang, S. Lala, N. Jha
01 Jun 2024
Explaining Predictions by Characteristic Rules
Amr Alkhatib, Henrik Bostrom, Michalis Vazirgiannis
31 May 2024
Model Interpretation and Explainability: Towards Creating Transparency in Prediction Models
D. Kridel, Jacob Dineen, Daniel R. Dolk, David G. Castillo
31 May 2024
T-Explainer: A Model-Agnostic Explainability Framework Based on Gradients
Evandro S. Ortigossa, Fábio F. Dias, Brian Barr, Claudio T. Silva, L. G. Nonato
FAtt · 25 Apr 2024
Segmentation, Classification and Interpretation of Breast Cancer Medical Images using Human-in-the-Loop Machine Learning
David Vázquez-Lema, E. Mosqueira-Rey, Elena Hernández-Pereira, Carlos Fernández-Lozano, Fernando Seara-Romera, Jorge Pombo-Otero
LM&MA · 29 Mar 2024
Explainable Learning with Gaussian Processes
Kurt Butler, Guanchao Feng, Petar M. Djurić
11 Mar 2024
Succinct Interaction-Aware Explanations
Sascha Xu, Joscha Cuppers, Jilles Vreeken
FAtt · 08 Feb 2024
Improving the accuracy of freight mode choice models: A case study using the 2017 CFS PUF data set and ensemble learning techniques
Diyi Liu, Hyeonsup Lim, M. Uddin, Yuandong Liu, Lee D. Han, Ho-Ling Hwang, Shih-Miao Chin
01 Feb 2024
Validation, Robustness, and Accuracy of Perturbation-Based Sensitivity Analysis Methods for Time-Series Deep Learning Models
Zhengguang Wang
29 Jan 2024
Is K-fold cross validation the best model selection method for Machine Learning?
Juan M Gorriz, F. Segovia, J. Ramírez, A. Ortiz, J. Suckling
29 Jan 2024
Black-Box Access is Insufficient for Rigorous AI Audits
Stephen Casper, Carson Ezell, Charlotte Siegmann, Noam Kolt, Taylor Lynn Curtis, ..., Michael Gerovitch, David Bau, Max Tegmark, David M. Krueger, Dylan Hadfield-Menell
AAML · 25 Jan 2024
Real-time Neural Network Inference on Extremely Weak Devices: Agile Offloading with Explainable AI
Kai Huang, Wei Gao
21 Dec 2023
Navigating the Structured What-If Spaces: Counterfactual Generation via Structured Diffusion
Nishtha Madaan, Srikanta J. Bedathur
DiffM · 21 Dec 2023
Toward enriched Cognitive Learning with XAI
M. Nizami, Ulrike Kuhl, J. Alonso-Moral, Alessandro Bogliolo
19 Dec 2023
Towards Interpretable Classification of Leukocytes based on Deep Learning
S. Röhrl, Johannes Groll, M. Lengl, Simon Schumann, C. Klenk, D. Heim, Martin Knopp, Oliver Hayden, Klaus Diepold
24 Nov 2023
Intriguing Properties of Data Attribution on Diffusion Models
Xiaosen Zheng, Tianyu Pang, Chao Du, Jing Jiang, Min Lin
TDI · 01 Nov 2023
XAI-CLASS: Explanation-Enhanced Text Classification with Extremely Weak Supervision
Daniel Hajialigol, Hanwen Liu, Xuan Wang
VLM · 31 Oct 2023
Text2Topic: Multi-Label Text Classification System for Efficient Topic Detection in User Generated Content with Zero-Shot Capabilities
Fengjun Wang, Moran Beladev, Ofri Kleinfeld, Elina Frayerman, Tal Shachar, Eran Fainman, Karen Lastmann Assaraf, Sarai Mizrachi, Benjamin Wang
VLM · 23 Oct 2023
Making informed decisions in cutting tool maintenance in milling: A KNN-based model agnostic approach
Aditya M. Rahalkar, Om M. Khare, Abhishek D. Patange, Rohan N. Soman
23 Oct 2023
Explainable Depression Symptom Detection in Social Media
Eliseo Bao Souto, Anxo Perez, Javier Parapar
20 Oct 2023
Natural Example-Based Explainability: a Survey
Antonin Poché, Lucas Hervier, M. Bakkay
XAI · 05 Sep 2023
TRIVEA: Transparent Ranking Interpretation using Visual Explanation of Black-Box Algorithmic Rankers
Jun Yuan, Kaustav Bhattacharjee, A. Islam, Aritra Dasgupta
28 Aug 2023
Software Doping Analysis for Human Oversight
Sebastian Biewer, Kevin Baum, Sarah Sterz, Holger Hermanns, Sven Hetmank, Markus Langer, Anne Lauber-Rönsberg, Franz Lehr
11 Aug 2023
Diagnosis Uncertain Models For Medical Risk Prediction
A. Peysakhovich, Rich Caruana, Y. Aphinyanaphongs
29 Jun 2023
Designing Explainable Predictive Machine Learning Artifacts: Methodology and Practical Demonstration
Giacomo Welsch, Peter Kowalczyk
20 Jun 2023
Explaining black box text modules in natural language with language models
Chandan Singh, Aliyah R. Hsu, Richard Antonello, Shailee Jain, Alexander G. Huth, Bin-Xia Yu, Jianfeng Gao
MILM · 17 May 2023
Explainability in AI Policies: A Critical Review of Communications, Reports, Regulations, and Standards in the EU, US, and UK
L. Nannini, Agathe Balayn, A. Smith
20 Apr 2023
Feature Importance: A Closer Look at Shapley Values and LOCO
I. Verdinelli, Larry A. Wasserman
FAtt · TDI · 10 Mar 2023
Multi-resolution Interpretation and Diagnostics Tool for Natural Language Classifiers
P. Jalali, Nengfeng Zhou, Yufei Yu
AAML · 06 Mar 2023
Appropriate Reliance on AI Advice: Conceptualization and the Effect of Explanations
Max Schemmer, Niklas Kühl, Carina Benz, Andrea Bartos, G. Satzger
04 Feb 2023
Case-Base Neural Networks: survival analysis with time-varying, higher-order interactions
Jesse Islam, M. Turgeon, R. Sladek, S. Bhatnagar
CML · 16 Jan 2023
The State of the Art in Enhancing Trust in Machine Learning Models with the Use of Visualizations
Angelos Chatzimparmpas, R. Martins, I. Jusufi, K. Kucher, Fabrice Rossi, A. Kerren
FAtt · 22 Dec 2022
(Psycho-)Linguistic Features Meet Transformer Models for Improved Explainable and Controllable Text Simplification
Yu Qiao, Xiaofei Li, Daniel Wiechmann, E. Kerz
19 Dec 2022
Achieving Transparency in Distributed Machine Learning with Explainable Data Collaboration
A. Bogdanova, A. Imakura, T. Sakurai, Tomoya Fujii, Teppei Sakamoto, Hiroyuki Abe
FedML · 06 Dec 2022
"Explain it in the Same Way!" -- Model-Agnostic Group Fairness of Counterfactual Explanations
André Artelt, Barbara Hammer
FaML · 27 Nov 2022
A Detailed Study of Interpretability of Deep Neural Network based Top Taggers
Ayush Khot, Mark S. Neubauer, Avik Roy
AAML · 09 Oct 2022
Greybox XAI: a Neural-Symbolic learning framework to produce interpretable predictions for image classification
Adrien Bennetot, Gianni Franchi, Javier Del Ser, Raja Chatila, Natalia Díaz Rodríguez
AAML · 26 Sep 2022
Multi-level Explanation of Deep Reinforcement Learning-based Scheduling
Shaojun Zhang, Chen Wang, Albert Zomaya
OffRL · 18 Sep 2022
Making the black-box brighter: interpreting machine learning algorithm for forecasting drilling accidents
E. Gurina, Nikita Klyuchnikov, Ksenia Antipova, D. Koroteev
FAtt · 06 Sep 2022