On the Value of Labeled Data and Symbolic Methods for Hidden Neuron Activation Analysis
arXiv:2404.13567 · 21 April 2024
Abhilekha Dalal, R. Rayan, Adrita Barua, Eugene Y. Vasserman, Md Kamruzzaman Sarker, Pascal Hitzler
Papers citing "On the Value of Labeled Data and Symbolic Methods for Hidden Neuron Activation Analysis" (32 papers shown)
1. Aligning Generalisation Between Humans and Machines
   Filip Ilievski, Barbara Hammer, F. V. Harmelen, Benjamin Paassen, S. Saralajew, ..., Vered Shwartz, Gabriella Skitalinskaya, Clemens Stachl, Gido M. van de Ven, T. Villmann
   23 Nov 2024 · 204 · 1 · 0

2. Label-Free Concept Bottleneck Models
   Tuomas P. Oikarinen, Subhro Das, Lam M. Nguyen, Tsui-Wei Weng
   12 Apr 2023 · 57 · 167 · 0

3. GPT-4 Technical Report [LLMAG, MLLM]
   OpenAI, Josh Achiam, Steven Adler, Sandhini Agarwal, Lama Ahmad, ..., Shengjia Zhao, Tianhao Zheng, Juntang Zhuang, William Zhuk, Barret Zoph
   15 Mar 2023 · 625 · 13,788 · 0

4. Towards Human-Compatible XAI: Explaining Data Differentials with Concept Induction over Background Knowledge
   Cara L. Widmer, Md Kamruzzaman Sarker, Srikanth Nadella, Joshua L. Fiechter, I. Juvina, B. Minnery, Pascal Hitzler, Joshua Schwartz, M. Raymer
   27 Sep 2022 · 64 · 7 · 0

5. Concept Activation Regions: A Generalized Framework For Concept-Based Explanations
   Jonathan Crabbé, M. Schaar
   22 Sep 2022 · 67 · 48 · 0

6. CLIP-Dissect: Automatic Description of Neuron Representations in Deep Vision Networks [VLM]
   Tuomas P. Oikarinen, Tsui-Wei Weng
   23 Apr 2022 · 27 · 83 · 1

7. EXplainable Neural-Symbolic Learning (X-NeSyL) methodology to fuse deep learning representations with expert knowledge graphs: the MonuMAI cultural heritage use case
   Natalia Díaz Rodríguez, Alberto Lamas, Jules Sanchez, Gianni Franchi, Ivan Donadello, Siham Tabik, David Filliat, P. Cruz, Rosana Montes, Francisco Herrera
   24 Apr 2021 · 102 · 77 · 0

8. Learning Transferable Visual Models From Natural Language Supervision [CLIP, VLM]
   Alec Radford, Jong Wook Kim, Chris Hallacy, Aditya A. Ramesh, Gabriel Goh, ..., Amanda Askell, Pamela Mishkin, Jack Clark, Gretchen Krueger, Ilya Sutskever
   26 Feb 2021 · 681 · 28,659 · 0

9. Understanding the Role of Individual Units in a Deep Neural Network [GAN]
   David Bau, Jun-Yan Zhu, Hendrik Strobelt, Àgata Lapedriza, Bolei Zhou, Antonio Torralba
   10 Sep 2020 · 42 · 446 · 0

10. Invertible Concept-based Explanations for CNN Models with Non-negative Concept Activation Vectors [FAtt]
    Ruihan Zhang, Prashan Madumal, Tim Miller, Krista A. Ehinger, Benjamin I. P. Rubinstein
    27 Jun 2020 · 26 · 100 · 0

11. Fooling LIME and SHAP: Adversarial Attacks on Post hoc Explanation Methods [FAtt, AAML, MLAU]
    Dylan Slack, Sophie Hilgard, Emily Jia, Sameer Singh, Himabindu Lakkaraju
    06 Nov 2019 · 59 · 813 · 0

12. Efficient Concept Induction for Description Logics
    Md Kamruzzaman Sarker, Pascal Hitzler
    08 Dec 2018 · 29 · 38 · 0

13. Unified Perceptual Parsing for Scene Understanding [OCL, VOS]
    Tete Xiao, Yingcheng Liu, Bolei Zhou, Yuning Jiang, Jian Sun
    26 Jul 2018 · 116 · 1,859 · 0

14. On the Robustness of Interpretability Methods
    David Alvarez-Melis, Tommi Jaakkola
    21 Jun 2018 · 52 · 524 · 0

15. Interpretability Beyond Feature Attribution: Quantitative Testing with Concept Activation Vectors (TCAV) [FAtt]
    Been Kim, Martin Wattenberg, Justin Gilmer, Carrie J. Cai, James Wexler, F. Viégas, Rory Sayres
    30 Nov 2017 · 164 · 1,828 · 0

16. Interpreting Deep Visual Representations via Network Dissection [FAtt, MILM]
    Bolei Zhou, David Bau, A. Oliva, Antonio Torralba
    15 Nov 2017 · 50 · 324 · 0

17. The (Un)reliability of saliency methods [FAtt, XAI]
    Pieter-Jan Kindermans, Sara Hooker, Julius Adebayo, Maximilian Alber, Kristof T. Schütt, Sven Dähne, D. Erhan, Been Kim
    02 Nov 2017 · 79 · 683 · 0

18. Counterfactual Explanations without Opening the Black Box: Automated Decisions and the GDPR [MLAU]
    Sandra Wachter, Brent Mittelstadt, Chris Russell
    01 Nov 2017 · 81 · 2,332 · 0

19. Explaining Trained Neural Networks with Semantic Web Technologies: First Steps [3DV]
    Md Kamruzzaman Sarker, Ning Xie, Derek Doran, M. Raymer, Pascal Hitzler
    11 Oct 2017 · 31 · 64 · 0

20. What Does Explainable AI Really Mean? A New Conceptualization of Perspectives [XAI]
    Derek Doran, Sarah Schulz, Tarek R. Besold
    02 Oct 2017 · 63 · 438 · 0

21. SmoothGrad: removing noise by adding noise [FAtt, ODL]
    D. Smilkov, Nikhil Thorat, Been Kim, F. Viégas, Martin Wattenberg
    12 Jun 2017 · 181 · 2,215 · 0

22. A Unified Approach to Interpreting Model Predictions [FAtt]
    Scott M. Lundberg, Su-In Lee
    22 May 2017 · 546 · 21,613 · 0

23. Learning Important Features Through Propagating Activation Differences [FAtt]
    Avanti Shrikumar, Peyton Greenside, A. Kundaje
    10 Apr 2017 · 123 · 3,848 · 0

24. Grad-CAM: Why did you say that? [FAtt]
    Ramprasaath R. Selvaraju, Abhishek Das, Ramakrishna Vedantam, Michael Cogswell, Devi Parikh, Dhruv Batra
    22 Nov 2016 · 43 · 469 · 0

25. Grad-CAM: Visual Explanations from Deep Networks via Gradient-based Localization [FAtt]
    Ramprasaath R. Selvaraju, Michael Cogswell, Abhishek Das, Ramakrishna Vedantam, Devi Parikh, Dhruv Batra
    07 Oct 2016 · 216 · 19,796 · 0

26. Semantic Understanding of Scenes through the ADE20K Dataset [SSeg]
    Bolei Zhou, Hang Zhao, Xavier Puig, Tete Xiao, Sanja Fidler, Adela Barriuso, Antonio Torralba
    18 Aug 2016 · 329 · 1,850 · 0

27. Identity Mappings in Deep Residual Networks
    Kaiming He, Xinming Zhang, Shaoqing Ren, Jian Sun
    16 Mar 2016 · 288 · 10,149 · 0

28. "Why Should I Trust You?": Explaining the Predictions of Any Classifier [FAtt, FaML]
    Marco Tulio Ribeiro, Sameer Singh, Carlos Guestrin
    16 Feb 2016 · 587 · 16,828 · 0

29. Deep Residual Learning for Image Recognition [MedIm]
    Kaiming He, Xinming Zhang, Shaoqing Ren, Jian Sun
    10 Dec 2015 · 1.4K · 192,638 · 0

30. Rethinking the Inception Architecture for Computer Vision [3DV, BDL]
    Christian Szegedy, Vincent Vanhoucke, Sergey Ioffe, Jonathon Shlens, Z. Wojna
    02 Dec 2015 · 495 · 27,231 · 0

31. Very Deep Convolutional Networks for Large-Scale Image Recognition [FAtt, MDE]
    Karen Simonyan, Andrew Zisserman
    04 Sep 2014 · 943 · 99,991 · 0

32. Deep Inside Convolutional Networks: Visualising Image Classification Models and Saliency Maps [FAtt]
    Karen Simonyan, Andrea Vedaldi, Andrew Zisserman
    20 Dec 2013 · 194 · 7,252 · 0