ResearchTrend.AI

This Looks Like That: Deep Learning for Interpretable Image Recognition
27 June 2018
Chaofan Chen, Oscar Li, Chaofan Tao, A. Barnett, Jonathan Su, Cynthia Rudin

Papers citing "This Looks Like That: Deep Learning for Interpretable Image Recognition"

50 / 602 papers shown
Concept-Centric Transformers: Enhancing Model Interpretability through Object-Centric Concept Learning within a Shared Global Workspace
Jinyung Hong, Keun Hee Park, Theodore P. Pavlic
25 May 2023

On the Impact of Knowledge Distillation for Model Interpretability
Hyeongrok Han, Siwon Kim, Hyun-Soo Choi, Sungroh Yoon
25 May 2023

Fantastic DNN Classifiers and How to Identify them without Data
Nathaniel R. Dean, D. Sarkar
24 May 2023

Towards credible visual model interpretation with path attribution
Naveed Akhtar, Muhammad A. A. K. Jalwana
Tags: FAtt
23 May 2023

What Symptoms and How Long? An Interpretable AI Approach for Depression Detection in Social Media
Junwei Kuang, Jiaheng Xie, Zhijun Yan
18 May 2023

FICNN: A Framework for the Interpretation of Deep Convolutional Neural Networks
Hamed Behzadi-Khormouji, José Oramas
17 May 2023

Tackling Interpretability in Audio Classification Networks with Non-negative Matrix Factorization
Jayneel Parekh, Sanjeel Parekh, Pavlo Mozharovskyi, Gaël Richard, Florence d'Alché-Buc
11 May 2023

Evaluating Post-hoc Interpretability with Intrinsic Interpretability
J. P. Amorim, P. Abreu, João A. M. Santos, Henning Muller
Tags: FAtt
04 May 2023
Discover and Cure: Concept-aware Mitigation of Spurious Correlation
Shirley Wu, Mert Yuksekgonul, Linjun Zhang, James Zou
01 May 2023

Interpretability of Machine Learning: Recent Advances and Future Prospects
Lei Gao, L. Guan
Tags: AAML
30 Apr 2023

Revealing Similar Semantics Inside CNNs: An Interpretable Concept-based Comparison of Feature Spaces
Georgii Mikriukov, Gesina Schwalbe, Christian Hellert, Korinna Bade
30 Apr 2023

TABLET: Learning From Instructions For Tabular Data
Dylan Slack, Sameer Singh
Tags: LMTD, ALM, RALM
25 Apr 2023

Learning Bottleneck Concepts in Image Classification
Bowen Wang, Liangzhi Li, Yuta Nakashima, Hajime Nagahara
Tags: SSL
20 Apr 2023

Evaluating the Robustness of Interpretability Methods through Explanation Invariance and Equivariance
Jonathan Crabbé, M. Schaar
Tags: AAML
13 Apr 2023

MProtoNet: A Case-Based Interpretable Model for Brain Tumor Classification with 3D Multi-parametric Magnetic Resonance Imaging
Yuanyuan Wei, Roger Tam, Xiaoying Tang
Tags: MedIm
13 Apr 2023

MLOps Spanning Whole Machine Learning Life Cycle: A Survey
Fang Zhengxin, Yuan Yi, Zhang Jingyu, Liu Yue, Mu Yuechen, ..., Xu Xiwei, Wang Jeff, Wang Chen, Zhang Shuai, Chen Shiping
13 Apr 2023
Localizing Model Behavior with Path Patching
Nicholas W. Goldowsky-Dill, Chris MacLeod, L. Sato, Aryaman Arora
12 Apr 2023

Deep Prototypical-Parts Ease Morphological Kidney Stone Identification and are Competitively Robust to Photometric Perturbations
Daniel Flores-Araiza, F. Lopez-Tiro, Jonathan El Beze, Jacques Hubert, M. González-Mendoza, G. Ochoa-Ruiz, C. Daul
Tags: MedIm, OOD
08 Apr 2023

Machine Learning with Requirements: a Manifesto
Eleonora Giunchiglia, F. Imrie, M. Schaar, Thomas Lukasiewicz
Tags: AI4TS, OffRL, VLM
07 Apr 2023

An Interpretable Loan Credit Evaluation Method Based on Rule Representation Learner
Zi-yu Chen, Xiaomeng Wang, Yuanjiang Huang, Tao Jia
03 Apr 2023

Why is plausibility surprisingly problematic as an XAI criterion?
Weina Jin, Xiaoxiao Li, Ghassan Hamarneh
30 Mar 2023

Are Data-driven Explanations Robust against Out-of-distribution Data?
Tang Li, Fengchun Qiao, Mengmeng Ma, Xiangkai Peng
Tags: OODD, OOD
29 Mar 2023

Analyzing Effects of Mixed Sample Data Augmentation on Model Interpretability
Soyoun Won, Sung-Ho Bae, Seong Tae Kim
26 Mar 2023

LINe: Out-of-Distribution Detection by Leveraging Important Neurons
Yong Hyun Ahn, Gyeong-Moon Park, Seong Tae Kim
Tags: OODD
24 Mar 2023
Take 5: Interpretable Image Classification with a Handful of Features
Thomas Norrenbrock, Marco Rudolph, Bodo Rosenhahn
Tags: FAtt
23 Mar 2023

Divide and Conquer: Answering Questions with Object Factorization and Compositional Reasoning
Shi Chen, Qi Zhao
18 Mar 2023

ICICLE: Interpretable Class Incremental Continual Learning
Dawid Rymarczyk, Joost van de Weijer, Bartosz Zieliński, Bartlomiej Twardowski
Tags: CLL
14 Mar 2023

Don't PANIC: Prototypical Additive Neural Network for Interpretable Classification of Alzheimer's Disease
Tom Nuno Wolf, Sebastian Polsterl, Christian Wachinger
Tags: FAtt
13 Mar 2023

A Test Statistic Estimation-based Approach for Establishing Self-interpretable CNN-based Binary Classifiers
S. Sengupta, M. Anastasio
Tags: MedIm
13 Mar 2023

Schema Inference for Interpretable Image Classification
Haofei Zhang, Mengqi Xue, Xiaokang Liu, Kaixuan Chen, Jie Song, Mingli Song
Tags: OCL
12 Mar 2023

Learning Human-Compatible Representations for Case-Based Decision Support
Han Liu, Yizhou Tian, Chacha Chen, Shi Feng, Yuxin Chen, Chenhao Tan
06 Mar 2023

Visualizing Transferred Knowledge: An Interpretive Model of Unsupervised Domain Adaptation
Wenxi Xiao, Zhengming Ding, Hongfu Liu
Tags: FAtt
04 Mar 2023

Finding the right XAI method -- A Guide for the Evaluation and Ranking of Explainable AI Methods in Climate Science
P. Bommer, M. Kretschmer, Anna Hedström, Dilyara Bareeva, Marina M.-C. Höhne
01 Mar 2023
Inherently Interpretable Multi-Label Classification Using Class-Specific Counterfactuals
Susu Sun, S. Woerner, Andreas Maier, Lisa M. Koch, Christian F. Baumgartner
Tags: FAtt
01 Mar 2023

TAX: Tendency-and-Assignment Explainer for Semantic Segmentation with Multi-Annotators
Yuan Cheng, Zu-Yun Shiau, Fu-En Yang, Yu-Chiang Frank Wang
19 Feb 2023

Invisible Users: Uncovering End-Users' Requirements for Explainable AI via Explanation Forms and Goals
Weina Jin, Jianyu Fan, D. Gromala, Philippe Pasquier, Ghassan Hamarneh
10 Feb 2023

Symbolic Metamodels for Interpreting Black-boxes Using Primitive Functions
Mahed Abroshan, Saumitra Mishra, Mohammad Mahdi Khalili
09 Feb 2023

Red Teaming Deep Neural Networks with Feature Synthesis Tools
Stephen Casper, Yuxiao Li, Jiawei Li, Tong Bu, Ke Zhang, K. Hariharan, Dylan Hadfield-Menell
Tags: AAML
08 Feb 2023

Stop overkilling simple tasks with black-box models and use transparent models instead
Matteo Rizzo, Matteo Marcuzzo, A. Zangari, A. Gasparetto, A. Albarelli
Tags: VLM
06 Feb 2023

Neural Insights for Digital Marketing Content Design
F. Kong, Yuan Li, Houssam Nassif, Tanner Fiez, Ricardo Henao, Shreya Chakrabarti
Tags: 3DV
02 Feb 2023

A Survey of Explainable AI in Deep Visual Modeling: Methods and Metrics
Naveed Akhtar
Tags: XAI, VLM
31 Jan 2023
BRAIxDet: Learning to Detect Malignant Breast Lesion with Incomplete Annotations
Yuanhong Chen, Yuyuan Liu, Chong Wang, M. Elliott, C. Kwok, ..., Yu Tian, Fengbei Liu, Helen Frazer, Davis J. McCarthy, Gustavo Carneiro
31 Jan 2023

NeSyFOLD: Neurosymbolic Framework for Interpretable Image Classification
Parth Padalkar, Huaduo Wang, Gopal Gupta
30 Jan 2023

ProtoSeg: Interpretable Semantic Segmentation with Prototypical Parts
Mikolaj Sacha, Dawid Rymarczyk, Lukasz Struski, Jacek Tabor, Bartosz Zieliński
Tags: VLM
28 Jan 2023

Multi-dimensional concept discovery (MCD): A unifying framework with completeness guarantees
Johanna Vielhaben, Stefan Blücher, Nils Strodthoff
27 Jan 2023

Sanity checks and improvements for patch visualisation in prototype-based image classification
Romain Xu-Darme, Georges Quénot, Zakaria Chihani, M. Rousset
20 Jan 2023

Learning Support and Trivial Prototypes for Interpretable Image Classification
Chong Wang, Yuyuan Liu, Yuanhong Chen, Fengbei Liu, Yu Tian, Davis J. McCarthy, Helen Frazer, G. Carneiro
08 Jan 2023

Explainability and Robustness of Deep Visual Classification Models
Jindong Gu
Tags: AAML
03 Jan 2023

Hierarchical Explanations for Video Action Recognition
Sadaf Gulshad, Teng Long, Nanne van Noord
Tags: FAtt
01 Jan 2023

A Theoretical Framework for AI Models Explainability with Application in Biomedicine
Matteo Rizzo, Alberto Veneri, A. Albarelli, Claudio Lucchese, Marco Nobile, Cristina Conati
Tags: XAI
29 Dec 2022