Neural Basis Models for Interpretability
arXiv:2205.14120 · 27 May 2022
Filip Radenovic, Abhimanyu Dubey, D. Mahajan · [FAtt]
Papers citing "Neural Basis Models for Interpretability" (33 of 33 papers shown)
Challenges in interpretability of additive models · Xinyu Zhang, Julien Martinelli, S. T. John · [AAML, AI4CE] · 14 Apr 2025
Beyond Black-Box Predictions: Identifying Marginal Feature Effects in Tabular Transformer Networks · Anton Thielmann, Arik Reuter, Benjamin Saefken · [LMTD] · 11 Apr 2025
Inherently Interpretable and Uncertainty-Aware Models for Online Learning in Cyber-Security Problems · Benjamin Kolicic, Alberto Caron, Chris Hicks, V. Mavroudis · [AI4CE] · 14 Nov 2024
BayesNAM: Leveraging Inconsistency for Reliable Explanations · Hoki Kim, Jinseong Park, Yujin Choi, Seungyun Lee, Jaewook Lee · [BDL] · 10 Nov 2024
Generalized Sparse Additive Model with Unknown Link Function · Peipei Yuan, Xinge You, H. Chen, Xuelin Zhang, Qinmu Peng · 08 Oct 2024
A Functional Extension of Semi-Structured Networks · David Rügamer, Bernard X. W. Liew, Zainab Altai, Almond Stöcker · 07 Oct 2024
ProtoNAM: Prototypical Neural Additive Models for Interpretable Deep Tabular Learning · Guangzhi Xiong, Sanchit Sinha, Aidong Zhang · 07 Oct 2024
GAMformer: In-Context Learning for Generalized Additive Models · Andreas Mueller, Julien N. Siems, Harsha Nori, David Salinas, Arber Zela, Rich Caruana, Frank Hutter · [AI4CE] · 06 Oct 2024
META-ANOVA: Screening interactions for interpretable machine learning · Daniel A. Serino, Marc L. Klasky, Chanmoo Park, Dongha Kim, Yongdai Kim · 02 Aug 2024
CAT: Interpretable Concept-based Taylor Additive Models · Viet Duong, Qiong Wu, Zhengyi Zhou, Hongjue Zhao, Chenxiang Luo, Eric Zavesky, Huaxiu Yao, Huajie Shao · [FAtt] · 25 Jun 2024
A Benchmarking Study of Kolmogorov-Arnold Networks on Tabular Data · Eleonora Poeta, F. Giobergia, Eliana Pastor, Tania Cerquitelli, Elena Baralis · 20 Jun 2024
Kolmogorov-Arnold Networks for Time Series: Bridging Predictive Power and Interpretability · Kunpeng Xu, Lifei Chen, Shengrui Wang · [AI4TS] · 04 Jun 2024
How Video Meetings Change Your Expression · Sumit Sarin, Utkarsh Mall, Purva Tendulkar, Carl Vondrick · [CVBM] · 03 Jun 2024
How Inverse Conditional Flows Can Serve as a Substitute for Distributional Regression · Lucas Kook, Chris Kolb, Philipp Schiele, Daniel Dold, Marcel Arpogaus, ..., Philipp F. M. Baumann, Philipp Kopper, Tobias Pielok, Emilio Dorigatti, David Rügamer · [BDL, AI4TS] · 08 May 2024
Shape Arithmetic Expressions: Advancing Scientific Discovery Beyond Closed-Form Equations · Krzysztof Kacprzyk, M. Schaar · 15 Apr 2024
Neural Additive Image Model: Interpretation through Interpolation · Arik Reuter, Anton Thielmann, Benjamin Saefken · [DiffM] · 06 Mar 2024
SurvBeNIM: The Beran-Based Neural Importance Model for Explaining the Survival Models · Lev V. Utkin, Danila Eremenko, A. Konstantinov · 11 Dec 2023
FocusLearn: Fully-Interpretable, High-Performance Modular Neural Networks for Time Series · Qiqi Su, Christos Kloukinas, Artur d'Avila Garcez · [AI4TS] · 28 Nov 2023
Multi-Objective Optimization of Performance and Interpretability of Tabular Supervised Machine Learning Models · Lennart Schneider, B. Bischl, Janek Thomas · 17 Jul 2023
Improving Neural Additive Models with Bayesian Principles · Kouroche Bouchiat, Alexander Immer, Hugo Yèche, Gunnar Rätsch, Vincent Fortuin · [BDL, MedIm] · 26 May 2023
Backpack Language Models · John Hewitt, John Thickstun, Christopher D. Manning, Percy Liang · [KELM] · 26 May 2023
Curve Your Enthusiasm: Concurvity Regularization in Differentiable Generalized Additive Models · Julien N. Siems, Konstantin Ditschuneit, Winfried Ripken, Alma Lindborg, Maximilian Schambach, Johannes Otterbach, Martin Genzel · 19 May 2023
NA²Q: Neural Attention Additive Model for Interpretable Multi-Agent Q-Learning · Zichuan Liu, Yuanyang Zhu, Chunlin Chen · 26 Apr 2023
UFO: A unified method for controlling Understandability and Faithfulness Objectives in concept-based explanations for CNNs · V. V. Ramaswamy, Sunnie S. Y. Kim, Ruth C. Fong, Olga Russakovsky · 27 Mar 2023
LEURN: Learning Explainable Univariate Rules with Neural Networks · Çağlar Aytekin · [FAtt] · 27 Mar 2023
Multi-dimensional concept discovery (MCD): A unifying framework with completeness guarantees · Johanna Vielhaben, Stefan Blücher, Nils Strodthoff · 27 Jan 2023
Neural Additive Models for Location Scale and Shape: A Framework for Interpretable Neural Regression Beyond the Mean · Anton Thielmann, René-Marcel Kruse, Thomas Kneib, Benjamin Säfken · 27 Jan 2023
Extending the Neural Additive Model for Survival Analysis with EHR Data · M. Peroni, Marharyta Kurban, Sun-Young Yang, Young Sun Kim, H. Kang, J. Song · 15 Nov 2022
Predicting Treatment Adherence of Tuberculosis Patients at Scale · Mihir Kulkarni, Satvik Golechha, Rishi Raj, J. Sreedharan, Ankit Bhardwaj, ..., Jayakrishna Kurada, S. Mattoo, R. Joshi, K. Rade, Alpa Raval · 05 Nov 2022
"Help Me Help the AI": Understanding How Explainability Can Support Human-AI Interaction
Sunnie S. Y. Kim
E. A. Watkins
Olga Russakovsky
Ruth C. Fong
Andrés Monroy-Hernández
38
107
0
02 Oct 2022
HIVE: Evaluating the Human Interpretability of Visual Explanations · Sunnie S. Y. Kim, Nicole Meister, V. V. Ramaswamy, Ruth C. Fong, Olga Russakovsky · 06 Dec 2021
Neural Oblivious Decision Ensembles for Deep Learning on Tabular Data · Sergei Popov, S. Morozov, Artem Babenko · [LMTD] · 13 Sep 2019
Aggregated Residual Transformations for Deep Neural Networks · Saining Xie, Ross B. Girshick, Piotr Dollár, Z. Tu, Kaiming He · 16 Nov 2016