This Looks Like That: Deep Learning for Interpretable Image Recognition
Chaofan Chen, Oscar Li, Chaofan Tao, A. Barnett, Jonathan Su, Cynthia Rudin
arXiv:1806.10574 · 27 June 2018

Papers citing "This Looks Like That: Deep Learning for Interpretable Image Recognition" (showing 50 of 602)

Invertible Concept-based Explanations for CNN Models with Non-negative Concept Activation Vectors
Ruihan Zhang, Prashan Madumal, Tim Miller, Krista A. Ehinger, Benjamin I. P. Rubinstein · FAtt · 27 Jun 2020

Crop Yield Prediction Integrating Genotype and Weather Variables Using Deep Learning
Johnathon Shook, Tryambak Gangopadhyay, Linjian Wu, Baskar Ganapathysubramanian, S. Sarkar, Asheesh K. Singh · 24 Jun 2020

Towards Robust Fine-grained Recognition by Maximal Separation of Discriminative Features
K. K. Nakka, Mathieu Salzmann · AAML · 10 Jun 2020

Evaluation of Similarity-based Explanations
Kazuaki Hanawa, Sho Yokoi, Satoshi Hara, Kentaro Inui · XAI · 08 Jun 2020

A Generic and Model-Agnostic Exemplar Synthetization Framework for Explainable AI
Antonio Bărbălău, Adrian Cosma, Radu Tudor Ionescu, Marius Popescu · 06 Jun 2020

Kernel Self-Attention in Deep Multiple Instance Learning
Dawid Rymarczyk, Adriana Borowa, Jacek Tabor, Bartosz Zieliński · SSL · 25 May 2020

Focus Longer to See Better: Recursively Refined Attention for Fine-Grained Image Classification
Prateek Shroff, Tianlong Chen, Yunchao Wei, Zhangyang Wang · 22 May 2020

An analysis on the use of autoencoders for representation learning: fundamentals, learning task case studies, explainability and challenges
D. Charte, F. Charte, M. J. D. Jesus, Francisco Herrera · SSL, OOD · 21 May 2020

Interpretable and Accurate Fine-grained Recognition via Region Grouping
Zixuan Huang, Yin Li · 21 May 2020

P2ExNet: Patch-based Prototype Explanation Network
Dominique Mercier, Andreas Dengel, Sheraz Ahmed · 05 May 2020

Evaluating Explainable AI: Which Algorithmic Explanations Help Users Predict Model Behavior?
Peter Hase, Mohit Bansal · FAtt · 04 May 2020

Explainable Deep Learning: A Field Guide for the Uninitiated
Gabrielle Ras, Ning Xie, Marcel van Gerven, Derek Doran · AAML, XAI · 30 Apr 2020

Semi-Lexical Languages -- A Formal Basis for Unifying Machine Learning and Symbolic Reasoning in Computer Vision
Briti Gangopadhyay, S. Hazra, P. Dasgupta · NAI · 25 Apr 2020

Adversarial Attacks and Defenses: An Interpretation Perspective
Ninghao Liu, Mengnan Du, Ruocheng Guo, Huan Liu, Xia Hu · AAML · 23 Apr 2020

CoInGP: Convolutional Inpainting with Genetic Programming
D. Jakobović, L. Manzoni, L. Mariot, S. Picek, Mauro Castelli · 23 Apr 2020

Games for Fairness and Interpretability
Eric Chu, Nabeel Gillani, S. Makini · FaML · 20 Apr 2020

Neural encoding and interpretation for high-level visual cortices based on fMRI using image caption features
Kai Qiao, Chi Zhang, Jian Chen, Linyuan Wang, Li Tong, Bin Yan · 26 Mar 2020

Using Wavelets to Analyze Similarities in Image-Classification Datasets
Roozbeh Yousefzadeh · 24 Feb 2020

Bayes-TrEx: a Bayesian Sampling Approach to Model Transparency by Example
Serena Booth, Yilun Zhou, Ankit J. Shah, J. Shah · BDL · 19 Feb 2020

Explainable Deep Modeling of Tabular Data using TableGraphNet
G. Terejanu, Jawad Chowdhury, Rezaur Rashid, Asif J. Chowdhury · LMTD, FAtt · 12 Feb 2020

Self-explaining AI as an alternative to interpretable AI
Daniel C. Elton · 12 Feb 2020

Concept Whitening for Interpretable Image Recognition
Zhi Chen, Yijie Bei, Cynthia Rudin · FAtt · 05 Feb 2020

DANCE: Enhancing saliency maps using decoys
Y. Lu, Wenbo Guo, Xinyu Xing, William Stafford Noble · AAML · 03 Feb 2020

Localizing Interpretable Multi-scale informative Patches Derived from Media Classification Task
Chuanguang Yang, Zhulin An, Xiaolong Hu, Hui Zhu, Yongjun Xu · 31 Jan 2020

Black Box Explanation by Learning Image Exemplars in the Latent Feature Space
Riccardo Guidotti, A. Monreale, Stan Matwin, D. Pedreschi · FAtt · 27 Jan 2020

Making deep neural networks right for the right scientific reasons by interacting with their explanations
P. Schramowski, Wolfgang Stammer, Stefano Teso, Anna Brugger, Xiaoting Shao, Hans-Georg Luigs, Anne-Katrin Mahlein, Kristian Kersting · 15 Jan 2020

Universal Differential Equations for Scientific Machine Learning
Christopher Rackauckas, Yingbo Ma, Julius Martensen, Collin Warner, K. Zubov, R. Supekar, Dominic J. Skinner, Ali Ramadhan, Alan Edelman · AI4CE · 13 Jan 2020

On Interpretability of Artificial Neural Networks: A Survey
Fenglei Fan, Jinjun Xiong, Mengzhou Li, Ge Wang · AAML, AI4CE · 08 Jan 2020

Deep neural network models for computational histopathology: A survey
C. Srinidhi, Ozan Ciga, Anne L. Martel · AI4CE · 28 Dec 2019

Balancing the Tradeoff Between Clustering Value and Interpretability
Sandhya Saisubramanian, Sainyam Galhotra, S. Zilberstein · 17 Dec 2019

On Completeness-aware Concept-Based Explanations in Deep Neural Networks
Chih-Kuan Yeh, Been Kim, Sercan Ö. Arik, Chun-Liang Li, Tomas Pfister, Pradeep Ravikumar · FAtt · 17 Oct 2019

Interpretable and Steerable Sequence Learning via Prototypes
Yao Ming, Panpan Xu, Huamin Qu, Liu Ren · AI4TS · 23 Jul 2019

Interpretable Image Recognition with Hierarchical Prototypes
Peter Hase, Chaofan Chen, Oscar Li, Cynthia Rudin · VLM · 25 Jun 2019

Understanding artificial intelligence ethics and safety
David Leslie · FaML, AI4TS · 11 Jun 2019

The Secrets of Machine Learning: Ten Things You Wish You Had Known Earlier to be More Effective at Data Analysis
Cynthia Rudin, David Carlson · HAI · 04 Jun 2019

Analyzing the Interpretability Robustness of Self-Explaining Models
Haizhong Zheng, Earlence Fernandes, A. Prakash · AAML, LRM · 27 May 2019

The Twin-System Approach as One Generic Solution for XAI: An Overview of ANN-CBR Twins for Explaining Deep Learning
Mark T. Keane, Eoin M. Kenny · 20 May 2019

How Case Based Reasoning Explained Neural Networks: An XAI Survey of Post-Hoc Explanation-by-Example in ANN-CBR Twins
Mark T. Keane, Eoin M. Kenny · 17 May 2019

Hybrid Predictive Model: When an Interpretable Model Collaborates with a Black-box Model
Tong Wang, Qihang Lin · 10 May 2019

AI Enabling Technologies: A Survey
V. Gadepally, Justin A. Goodwin, J. Kepner, Albert Reuther, Hayley Reynolds, S. Samsi, Jonathan Su, David Martinez · 08 May 2019

Explaining Deep Classification of Time-Series Data with Learned Prototypes
Alan H. Gee, Diego Garcia-Olano, Joydeep Ghosh, D. Paydarfar · AI4TS · 18 Apr 2019

A Categorisation of Post-hoc Explanations for Predictive Models
John Mitros, Brian Mac Namee · XAI, CML · 04 Apr 2019

Interpreting Neural Networks Using Flip Points
Roozbeh Yousefzadeh, D. O’Leary · AAML, FAtt · 21 Mar 2019

Approximating CNNs with Bag-of-local-Features models works surprisingly well on ImageNet
Wieland Brendel, Matthias Bethge · SSL, FAtt · 20 Mar 2019

ProtoAttend: Attention-Based Prototypical Learning
Sercan Ö. Arik, Tomas Pfister · 17 Feb 2019

Explaining Explanations to Society
Leilani H. Gilpin, Cecilia Testart, Nathaniel Fruchter, Julius Adebayo · XAI · 19 Jan 2019

Stop Explaining Black Box Machine Learning Models for High Stakes Decisions and Use Interpretable Models Instead
Cynthia Rudin · ELM, FaML · 26 Nov 2018

Embedding Deep Networks into Visual Explanations
Zhongang Qi, Saeed Khorram, Fuxin Li · 15 Sep 2017

Methods for Interpreting and Understanding Deep Neural Networks
G. Montavon, Wojciech Samek, K. Müller · FaML · 24 Jun 2017

Learning Optimized Risk Scores
Berk Ustun, Cynthia Rudin · 01 Oct 2016