Semantically Interpretable Activation Maps: what-where-how explanations within CNNs
Diego Marcos, Sylvain Lobry, D. Tuia
arXiv:1909.08442, 18 September 2019
Tags: FAtt, MILM
Papers citing "Semantically Interpretable Activation Maps: what-where-how explanations within CNNs" (8 of 8 papers shown)
Causal Intersectionality and Dual Form of Gradient Descent for Multimodal Analysis: a Case Study on Hateful Memes
Yosuke Miyanishi, Minh Le Nguyen. 19 Aug 2023.

ICICLE: Interpretable Class Incremental Continual Learning
Dawid Rymarczyk, Joost van de Weijer, Bartosz Zieliński, Bartlomiej Twardowski. Tags: CLL. 14 Mar 2023.

ProtoSeg: Interpretable Semantic Segmentation with Prototypical Parts
Mikolaj Sacha, Dawid Rymarczyk, Lukasz Struski, Jacek Tabor, Bartosz Zieliński. Tags: VLM. 28 Jan 2023.

A Review on Explainability in Multimodal Deep Neural Nets
Gargi Joshi, Rahee Walambe, K. Kotecha. 17 May 2021.

Towards a Collective Agenda on AI for Earth Science Data Analysis
D. Tuia, R. Roscher, Jan Dirk Wegner, Nathan Jacobs, Xiaoxiang Zhu, Gustau Camps-Valls. Tags: AI4CE. 11 Apr 2021.

Contextual Semantic Interpretability
Diego Marcos, Ruth C. Fong, Sylvain Lobry, Rémi Flamary, Nicolas Courty, D. Tuia. Tags: SSL. 18 Sep 2020.

Methods for Interpreting and Understanding Deep Neural Networks
G. Montavon, Wojciech Samek, K. Müller. Tags: FaML. 24 Jun 2017.

Do semantic parts emerge in Convolutional Neural Networks?
Abel Gonzalez-Garcia, Davide Modolo, V. Ferrari. 13 Jul 2016.