ResearchTrend.AI
Revealing Similar Semantics Inside CNNs: An Interpretable Concept-based Comparison of Feature Spaces
arXiv:2305.07663
30 April 2023
Georgii Mikriukov, Gesina Schwalbe, Christian Hellert, Korinna Bade
Papers citing "Revealing Similar Semantics Inside CNNs: An Interpretable Concept-based Comparison of Feature Spaces" (20 / 20 papers shown)

  1. HINT: Hierarchical Neuron Concept Explainer — Andong Wang, Wei-Ning Lee, Xiaojuan Qi (27 Mar 2022)
  2. Concept Embedding Analysis: A Review — Gesina Schwalbe (25 Mar 2022)
  3. Enabling Verification of Deep Neural Networks in Perception Tasks Using Fuzzy Logic and Concept Embeddings — Gesina Schwalbe, Christian Wirth, Ute Schmid [AAML] (03 Jan 2022)
  4. Now You See Me (CME): Concept-based Model Extraction — Dmitry Kazhdan, B. Dimanov, M. Jamnik, Pietro Lio, Adrian Weller (25 Oct 2020)
  5. TIDE: A General Toolbox for Identifying Object Detection Errors — Daniel Bolya, Sean Foley, James Hays, Judy Hoffman (18 Aug 2020)
  6. Concept Bottleneck Models — Pang Wei Koh, Thao Nguyen, Y. S. Tang, Stephen Mussmann, Emma Pierson, Been Kim, Percy Liang (09 Jul 2020)
  7. Invertible Concept-based Explanations for CNN Models with Non-negative Concept Activation Vectors — Ruihan Zhang, Prashan Madumal, Tim Miller, Krista A. Ehinger, Benjamin I. P. Rubinstein [FAtt] (27 Jun 2020)
  8. Explainable Artificial Intelligence (XAI): Concepts, Taxonomies, Opportunities and Challenges toward Responsible AI — Alejandro Barredo Arrieta, Natalia Díaz Rodríguez, Javier Del Ser, Adrien Bennetot, Siham Tabik, ..., S. Gil-Lopez, Daniel Molina, Richard Benjamins, Raja Chatila, Francisco Herrera [XAI] (22 Oct 2019)
  9. Semantics for Global and Local Interpretation of Deep Neural Networks — Jindong Gu, Volker Tresp [AI4CE] (21 Oct 2019)
  10. Searching for MobileNetV3 — Andrew G. Howard, Mark Sandler, Grace Chu, Liang-Chieh Chen, Bo Chen, ..., Yukun Zhu, Ruoming Pang, Vijay Vasudevan, Quoc V. Le, Hartwig Adam (06 May 2019)
  11. Understanding Neural Networks via Feature Visualization: A survey — Anh Nguyen, J. Yosinski, Jeff Clune [FAtt] (18 Apr 2019)
  12. Unmasking Clever Hans Predictors and Assessing What Machines Really Learn — Sebastian Lapuschkin, S. Wäldchen, Alexander Binder, G. Montavon, Wojciech Samek, K. Müller (26 Feb 2019)
  13. YOLOv3: An Incremental Improvement — Joseph Redmon, Ali Farhadi [ObjD] (08 Apr 2018)
  14. Net2Vec: Quantifying and Explaining how Concepts are Encoded by Filters in Deep Neural Networks — Ruth C. Fong, Andrea Vedaldi [FAtt] (10 Jan 2018)
  15. Interpretability Beyond Feature Attribution: Quantitative Testing with Concept Activation Vectors (TCAV) — Been Kim, Martin Wattenberg, Justin Gilmer, Carrie J. Cai, James Wexler, F. Viégas, Rory Sayres [FAtt] (30 Nov 2017)
  16. Interpreting CNN Knowledge via an Explanatory Graph — Quanshi Zhang, Ruiming Cao, Feng Shi, Ying Nian Wu, Song-Chun Zhu [FAtt, GNN, SSL] (05 Aug 2017)
  17. Grad-CAM: Visual Explanations from Deep Networks via Gradient-based Localization — Ramprasaath R. Selvaraju, Michael Cogswell, Abhishek Das, Ramakrishna Vedantam, Devi Parikh, Dhruv Batra [FAtt] (07 Oct 2016)
  18. European Union regulations on algorithmic decision-making and a "right to explanation" — B. Goodman, Seth Flaxman [FaML, AILaw] (28 Jun 2016)
  19. Model-Agnostic Interpretability of Machine Learning — Marco Tulio Ribeiro, Sameer Singh, Carlos Guestrin [FAtt, FaML] (16 Jun 2016)
  20. Learning Deep Features for Discriminative Localization — Bolei Zhou, A. Khosla, Àgata Lapedriza, A. Oliva, Antonio Torralba [SSL, SSeg, FAtt] (14 Dec 2015)