Designing Inherently Interpretable Machine Learning Models
Agus Sudjianto, Aijun Zhang
arXiv:2111.01743, 2 November 2021
FaML
Papers citing "Designing Inherently Interpretable Machine Learning Models" (21 papers)
Explainable post-training bias mitigation with distribution-based fairness metrics. Ryan Franks, A. Miroshnikov. 01 Apr 2025.

Inherently Interpretable Tree Ensemble Learning. Zebin Yang, Agus Sudjianto, Xiaoming Li, Aijun Zhang. AI4CE. 24 Oct 2024.

Less Discriminatory Alternative and Interpretable XGBoost Framework for Binary Classification. Andrew Pangia, Agus Sudjianto, Aijun Zhang, Taufiquar Khan. FaML. 24 Oct 2024.

Space-scale Exploration of the Poor Reliability of Deep Learning Models: the Case of the Remote Sensing of Rooftop Photovoltaic Systems. Gabriel Kasmi, L. Dubus, Yves-Marie Saint Drenan, Philippe Blanc. 31 Jul 2024.

CHILLI: A data context-aware perturbation method for XAI. Saif Anwar, Nathan Griffiths, A. Bhalerao, T. Popham. 10 Jul 2024.

Are Logistic Models Really Interpretable? Danial Dervovic, Freddy Lecue, Nicolas Marchesotti, Daniele Magazzeni. 19 Jun 2024.

Explainable Interface for Human-Autonomy Teaming: A Survey. Xiangqi Kong, Yang Xing, Antonios Tsourdos, Ziyue Wang, Weisi Guo, Adolfo Perrusquía, Andreas Wikander. 04 May 2024.

Opening the Black Box: Towards inherently interpretable energy data imputation models using building physics insight. Antonio Liguori, Matias Quintana, Chun Fu, Clayton Miller, J. Frisch, C. Treeck. AI4CE. 28 Nov 2023.

A Comprehensive Review on Financial Explainable AI. Wei Jie Yeo, Wihan van der Heever, Rui Mao, Min Zhang, Ranjan Satapathy, G. Mengaldo. XAI, AI4TS. 21 Sep 2023.

Interpreting and generalizing deep learning in physics-based problems with functional linear models. Amirhossein Arzani, Lingxiao Yuan, P. Newell, Bei Wang. AI4CE. 10 Jul 2023.

Sound Explanation for Trustworthy Machine Learning. Kai Jia, Pasapol Saowakon, L. Appelbaum, Martin Rinard. XAI, FAtt, FaML. 08 Jun 2023.

Interpretable Machine Learning based on Functional ANOVA Framework: Algorithms and Comparisons. Linwei Hu, V. Nair, Agus Sudjianto, Aijun Zhang, Jie Chen. 25 May 2023.

PiML Toolbox for Interpretable Machine Learning Model Development and Diagnostics. Agus Sudjianto, Aijun Zhang, Zebin Yang, Yuhao Su, Ningzhou Zeng. 07 May 2023.

Semantics, Ontology and Explanation. G. Guizzardi, Nicola Guarino. 21 Apr 2023.

Interpretable (not just posthoc-explainable) heterogeneous survivor bias-corrected treatment effects for assignment of postdischarge interventions to prevent readmissions. Hongjing Xia, Joshua C. Chang, S. Nowak, Sonya Mahajan, R. Mahajan, Ted L. Chang, Carson C. Chow. 19 Apr 2023.

A Comparison of Modeling Preprocessing Techniques. Tosan Johnson, A. J. Liu, S. Raza, Aaron McGuire. 23 Feb 2023.

On marginal feature attributions of tree-based models. Khashayar Filom, A. Miroshnikov, Konstandinos Kotsiopoulos, Arjun Ravi Kannan. FAtt. 16 Feb 2023.

Explainable, Interpretable & Trustworthy AI for Intelligent Digital Twin: Case Study on Remaining Useful Life. Kazuma Kobayashi, S. B. Alam. 17 Jan 2023.

Autoencoded sparse Bayesian in-IRT factorization, calibration, and amortized inference for the Work Disability Functional Assessment Battery. Joshua C. Chang, Carson C. Chow, Julia Porcino. 20 Oct 2022.

Monotonic Neural Additive Models: Pursuing Regulated Machine Learning Models for Credit Scoring. Dangxing Chen, Weicheng Ye. FaML. 21 Sep 2022.

Interpretable (not just posthoc-explainable) medical claims modeling for discharge placement to prevent avoidable all-cause readmissions or death. Joshua C. Chang, Ted L. Chang, Carson C. Chow, R. Mahajan, Sonya Mahajan, Joe Maisog, Shashaank Vattikuti, Hongjing Xia. FAtt, OOD. 28 Aug 2022.