ResearchTrend.AI
© 2025 ResearchTrend.AI, All rights reserved.

Interpretable machine learning: definitions, methods, and applications (arXiv:1901.04592)

14 January 2019
W. James Murdoch
Chandan Singh
Karl Kumbier
R. Abbasi-Asl
Bin Yu
    XAI
    HAI

Papers citing "Interpretable machine learning: definitions, methods, and applications"

50 / 329 papers shown
Zyxin is all you need: machine learning adherent cell mechanics
Matthew S. Schmitt
Jonathan Colen
S. Sala
J. Devany
Shailaja Seetharaman
M. Gardel
Patrick W. Oakes
Vincenzo Vitelli
AI4CE
01 Mar 2023

Structural Neural Additive Models: Enhanced Interpretable Machine Learning
Mattias Luber
Anton Thielmann
Benjamin Säfken
18 Feb 2023

Derandomized Novelty Detection with FDR Control via Conformal E-values
Meshi Bashari
Amir Epstein
Yaniv Romano
Matteo Sesia
14 Feb 2023

A novel approach to generate datasets with XAI ground truth to evaluate image models
Miquel Miró-Nicolau
Antoni Jaume-i-Capó
Gabriel Moyà Alcover
11 Feb 2023

Personalized Interpretable Classification
Zengyou He
Yifan Tang
Lianyu Hu
Yan Liu
06 Feb 2023

Benchmarking sparse system identification with low-dimensional chaos
A. Kaptanoglu
Lanyue Zhang
Zachary G. Nicolaou
Urban Fasel
Steven L. Brunton
04 Feb 2023
The Contextual Lasso: Sparse Linear Models via Deep Neural Networks
Ryan Thompson
Amir Dezfouli
Robert Kohn
02 Feb 2023

Explainable Deep Reinforcement Learning: State of the Art and Challenges
G. Vouros
XAI
24 Jan 2023

A Rigorous Uncertainty-Aware Quantification Framework Is Essential for Reproducible and Replicable Machine Learning Workflows
Line C. Pouchard
Kristofer G. Reyes
Francis J. Alexander
Byung-Jun Yoon
13 Jan 2023

XDQN: Inherently Interpretable DQN through Mimicking
A. Kontogiannis
G. Vouros
08 Jan 2023

A Theoretical Framework for AI Models Explainability with Application in Biomedicine
Matteo Rizzo
Alberto Veneri
A. Albarelli
Claudio Lucchese
Marco Nobile
Cristina Conati
XAI
29 Dec 2022
Interpretable ML for Imbalanced Data
Damien Dablain
C. Bellinger
Bartosz Krawczyk
D. Aha
Nitesh V. Chawla
15 Dec 2022

Improving Accuracy Without Losing Interpretability: A ML Approach for Time Series Forecasting
Yiqi Sun
Zheng Shi
Jianshen Zhang
Yongzhi Qi
Hao Hu
Zuo-jun Shen
AI4TS
13 Dec 2022

Quant 4.0: Engineering Quantitative Investment with Automated, Explainable and Knowledge-driven Artificial Intelligence
Jian Guo
Sai Wang
L. Ni
H. Shum
AIFin
13 Dec 2022

Minimax Optimal Estimation of Stability Under Distribution Shift
Hongseok Namkoong
Yuanzhe Ma
Peter Glynn
13 Dec 2022

Measuring the Driving Forces of Predictive Performance: Application to Credit Scoring
Hué Sullivan
Hurlin Christophe
Pérignon Christophe
Saurin Sébastien
12 Dec 2022
Truthful Meta-Explanations for Local Interpretability of Machine Learning Models
Ioannis Mollas
Nick Bassiliades
Grigorios Tsoumakas
07 Dec 2022

On the Pointwise Behavior of Recursive Partitioning and Its Implications for Heterogeneous Causal Effect Estimation
M. D. Cattaneo
Jason M. Klusowski
Peter M. Tian
19 Nov 2022

Deep learning methods for drug response prediction in cancer: predominant and emerging trends
A. Partin
Thomas Brettin
Yitan Zhu
Oleksandr Narykov
Austin R. Clyde
Jamie Overbeek
18 Nov 2022

Explainable, Domain-Adaptive, and Federated Artificial Intelligence in Medicine
A. Chaddad
Qizong Lu
Jiali Li
Y. Katib
R. Kateb
C. Tanougast
Ahmed Bouridane
Ahmed Abdulkadir
OOD
17 Nov 2022
Comparing Explanation Methods for Traditional Machine Learning Models Part 1: An Overview of Current Methods and Quantifying Their Disagreement
Montgomery Flora
Corey K. Potvin
A. McGovern
Shawn Handler
FAtt
16 Nov 2022

Using explainability to design physics-aware CNNs for solving subsurface inverse problems
J. Crocker
Krishna Kumar
B. Cox
16 Nov 2022

An Interpretable Hybrid Predictive Model of COVID-19 Cases using Autoregressive Model and LSTM
Yangyi Zhang
Sui Tang
Guo-Ding Yu
14 Nov 2022

Explainable Artificial Intelligence: Precepts, Methods, and Opportunities for Research in Construction
Peter E. D. Love
Weili Fang
J. Matthews
Stuart Porter
Hanbin Luo
L. Ding
XAI
12 Nov 2022

Cross-Subject Emotion Recognition with Sparsely-Labeled Peripheral Physiological Data Using SHAP-Explained Tree Ensembles
Feng Zhou
Tao Chen
Baiying Lei
05 Nov 2022
SoK: Modeling Explainability in Security Analytics for Interpretability, Trustworthiness, and Usability
Dipkamal Bhusal
Rosalyn Shin
Ajay Ashok Shewale
M. K. Veerabhadran
Michael Clifford
Sara Rampazzi
Nidhi Rastogi
FAtt
AAML
31 Oct 2022

Automatic Diagnosis of Myocarditis Disease in Cardiac MRI Modality using Deep Transformers and Explainable Artificial Intelligence
M. Jafari
A. Shoeibi
Navid Ghassemi
Jónathan Heras
Saiguang Ling
...
Shuihua Wang
R. Alizadehsani
Juan M Gorriz
U. Acharya
Hamid Alinejad-Rokny
MedIm
26 Oct 2022

Convergence Rates of Oblique Regression Trees for Flexible Function Libraries
M. D. Cattaneo
Rajita Chandak
Jason M. Klusowski
26 Oct 2022

BELIEF in Dependence: Leveraging Atomic Linearity in Data Bits for Rethinking Generalized Linear Models
Benjamin Brown
Kai Zhang
Xiao-Li Meng
19 Oct 2022
A Survey on Explainable Anomaly Detection
Zhong Li
Yuxuan Zhu
M. Leeuwen
13 Oct 2022

CLIP-PAE: Projection-Augmentation Embedding to Extract Relevant Features for a Disentangled, Interpretable, and Controllable Text-Guided Face Manipulation
Chenliang Zhou
Fangcheng Zhong
Cengiz Öztireli
CLIP
08 Oct 2022

PathFinder: Discovering Decision Pathways in Deep Neural Networks
Ozan Irsoy
Ethem Alpaydin
FAtt
01 Oct 2022

Towards Human-Compatible XAI: Explaining Data Differentials with Concept Induction over Background Knowledge
Cara L. Widmer
Md Kamruzzaman Sarker
Srikanth Nadella
Joshua L. Fiechter
I. Juvina
B. Minnery
Pascal Hitzler
Joshua Schwartz
M. Raymer
27 Sep 2022

Augmenting Interpretable Models with LLMs during Training
Chandan Singh
Armin Askari
R. Caruana
Jianfeng Gao
23 Sep 2022

Towards Faithful Model Explanation in NLP: A Survey
Qing Lyu
Marianna Apidianaki
Chris Callison-Burch
XAI
22 Sep 2022
Explaining Anomalies using Denoising Autoencoders for Financial Tabular Data
Timur Sattarov
Dayananda Herurkar
Jörn Hees
21 Sep 2022

EMaP: Explainable AI with Manifold-based Perturbations
Minh Nhat Vu
Huy Mai
My T. Thai
AAML
18 Sep 2022

Assessment of cognitive characteristics in intelligent systems and predictive ability
O. Kubryak
Sergey Kovalchuk
N. Bagdasaryan
16 Sep 2022

Regulating eXplainable Artificial Intelligence (XAI) May Harm Consumers
Behnam Mohammadi
Nikhil Malik
Timothy P. Derdenger
K. Srinivasan
07 Sep 2022

E Pluribus Unum Interpretable Convolutional Neural Networks
George Dimas
Eirini Cholopoulou
D. Iakovidis
10 Aug 2022

Differentially Private Counterfactuals via Functional Mechanism
Fan Yang
Qizhang Feng
Kaixiong Zhou
Jiahao Chen
Xia Hu
04 Aug 2022
Multi-modal volumetric concept activation to explain detection and classification of metastatic prostate cancer on PSMA-PET/CT
Rosa C.J. Kraaijveld
M. Philippens
W. Eppinga
Ina Jurgenliemk-Schulz
K. Gilhuijs
P. Kroon
Bas H. M. van der Velden
MedIm
04 Aug 2022

Topological structure of complex predictions
Meng Liu
T. Dey
D. Gleich
28 Jul 2022

TalkToModel: Explaining Machine Learning Models with Interactive Natural Language Conversations
Dylan Slack
Satyapriya Krishna
Himabindu Lakkaraju
Sameer Singh
08 Jul 2022

A systematic review of biologically-informed deep learning models for cancer: fundamental trends for encoding and interpreting oncology data
Magdalena Wysocka
Oskar Wysocki
Marie Zufferey
Dónal Landers
André Freitas
AI4CE
02 Jul 2022

Explanatory causal effects for model agnostic explanations
Jiuyong Li
Ha Xuan Tran
T. Le
Lin Liu
Kui Yu
Jixue Liu
CML
23 Jun 2022
OpenXAI: Towards a Transparent Evaluation of Model Explanations
Chirag Agarwal
Dan Ley
Satyapriya Krishna
Eshika Saxena
Martin Pawelczyk
Nari Johnson
Isha Puri
Marinka Zitnik
Himabindu Lakkaraju
XAI
22 Jun 2022

Extending Process Discovery with Model Complexity Optimization and Cyclic States Identification: Application to Healthcare Processes
Liubov Elkhovskaya
Alexander Kshenin
M. Balakhontceva
Sergey Kovalchuk
10 Jun 2022

EiX-GNN: Concept-level eigencentrality explainer for graph neural networks
Adrien Raison
Pascal Bourdon
David Helbert
07 Jun 2022

Disentangling Epistemic and Aleatoric Uncertainty in Reinforcement Learning
Bertrand Charpentier
Ransalu Senanayake
Mykel Kochenderfer
Stephan Günnemann
PER
UD
03 Jun 2022