ResearchTrend.AI

CAT: Interpretable Concept-based Taylor Additive Models
arXiv:2406.17931 · Versions: v1, v2, v3 (latest)
25 June 2024
Viet Duong, Qiong Wu, Zhengyi Zhou, Hongjue Zhao, Chenxiang Luo, Eric Zavesky, Huaxiu Yao, Huajie Shao
Topics: FAtt

Papers citing "CAT: Interpretable Concept-based Taylor Additive Models"

18 / 18 papers shown
Concept Embedding Models: Beyond the Accuracy-Explainability Trade-Off
M. Zarlenga, Pietro Barbiero, Gabriele Ciravegna, G. Marra, Francesco Giannini, ..., F. Precioso, S. Melacci, Adrian Weller, Pietro Lio, M. Jamnik
19 Sep 2022

Neural Basis Models for Interpretability
Filip Radenovic, Abhimanyu Dubey, D. Mahajan
Topics: FAtt
27 May 2022

Post-hoc Interpretability for Neural NLP: A Survey
Andreas Madsen, Siva Reddy, A. Chandar
Topics: XAI
10 Aug 2021

Promises and Pitfalls of Black-Box Concept Learning Models
Anita Mahinpei, Justin Clark, Isaac Lage, Finale Doshi-Velez, Weiwei Pan
24 Jun 2021

NODE-GAM: Neural Generalized Additive Model for Interpretable Deep Learning
C. Chang, R. Caruana, Anna Goldenberg
Topics: AI4CE
03 Jun 2021

Reliable Post hoc Explanations: Modeling Uncertainty in Explainability
Dylan Slack, Sophie Hilgard, Sameer Singh, Himabindu Lakkaraju
Topics: FAtt
11 Aug 2020

Neural Additive Models: Interpretable Machine Learning with Neural Nets
Rishabh Agarwal, Levi Melnick, Nicholas Frosst, Xuezhou Zhang, Ben Lengerich, R. Caruana, Geoffrey E. Hinton
29 Apr 2020

Concept Whitening for Interpretable Image Recognition
Zhi Chen, Yijie Bei, Cynthia Rudin
Topics: FAtt
05 Feb 2020

Fooling LIME and SHAP: Adversarial Attacks on Post hoc Explanation Methods
Dylan Slack, Sophie Hilgard, Emily Jia, Sameer Singh, Himabindu Lakkaraju
Topics: FAtt, AAML, MLAU
06 Nov 2019

Explainable Artificial Intelligence (XAI): Concepts, Taxonomies, Opportunities and Challenges toward Responsible AI
Alejandro Barredo Arrieta, Natalia Díaz Rodríguez, Javier Del Ser, Adrien Bennetot, Siham Tabik, ..., S. Gil-Lopez, Daniel Molina, Richard Benjamins, Raja Chatila, Francisco Herrera
Topics: XAI
22 Oct 2019

MaskGAN: Towards Diverse and Interactive Facial Image Manipulation
Cheng-Han Lee, Ziwei Liu, Lingyun Wu, Ping Luo
Topics: CVBM
27 Jul 2019

Global and Local Interpretability for Cardiac MRI Classification
J. Clough, Ilkay Oksuz, Esther Puyol-Antón, B. Ruijsink, A. King, Julia A. Schnabel
14 Jun 2019

Semantic bottleneck for computer vision tasks
Apostolos Modas, Seyed-Mohsen Moosavi-Dezfooli, P. Frossard
06 Nov 2018

Neural-Symbolic VQA: Disentangling Reasoning from Vision and Language Understanding
Kexin Yi, Jiajun Wu, Chuang Gan, Antonio Torralba, Pushmeet Kohli, J. Tenenbaum
Topics: NAI
04 Oct 2018

Disentangling by Factorising
Hyunjik Kim, A. Mnih
Topics: CoGe, OOD
16 Feb 2018

A Unified Approach to Interpreting Model Predictions
Scott M. Lundberg, Su-In Lee
Topics: FAtt
22 May 2017

Model-Agnostic Interpretability of Machine Learning
Marco Tulio Ribeiro, Sameer Singh, Carlos Guestrin
Topics: FAtt, FaML
16 Jun 2016

XGBoost: A Scalable Tree Boosting System
Tianqi Chen, Carlos Guestrin
09 Mar 2016