ResearchTrend.AI
© 2025 ResearchTrend.AI, All rights reserved.

Understanding the Role of Individual Units in a Deep Neural Network
David Bau, Jun-Yan Zhu, Hendrik Strobelt, Àgata Lapedriza, Bolei Zhou, Antonio Torralba
GAN
10 September 2020

Papers citing "Understanding the Role of Individual Units in a Deep Neural Network"

31 / 81 papers shown
Analyzing the Effects of Handling Data Imbalance on Learned Features from Medical Images by Looking Into the Models
Ashkan Khakzar, Yawei Li, Yang Zhang, Mirac Sanisoglu, Seong Tae Kim, Mina Rezaei, Bernd Bischl, Nassir Navab
04 Apr 2022
Navigating Neural Space: Revisiting Concept Activation Vectors to Overcome Directional Divergence
Frederik Pahde, Maximilian Dreyer, Leander Weber, Moritz Weckbecker, Christopher J. Anders, Thomas Wiegand, Wojciech Samek, Sebastian Lapuschkin
07 Feb 2022
Deeply Explain CNN via Hierarchical Decomposition
Ming-Ming Cheng, Peng-Tao Jiang, Linghao Han, Liang Wang, Philip Torr
FAtt
23 Jan 2022
A Latent-Variable Model for Intrinsic Probing
Karolina Stańczak, Lucas Torroba Hennigen, Adina Williams, Ryan Cotterell, Isabelle Augenstein
20 Jan 2022
Interpretable Low-Resource Legal Decision Making
R. Bhambhoria, Hui Liu, Samuel Dahan, Xiao-Dan Zhu
ELM AILaw
01 Jan 2022
PCACE: A Statistical Approach to Ranking Neurons for CNN Interpretability
Sílvia Casacuberta, Esra Suel, Seth Flaxman
FAtt
31 Dec 2021
Explainable Artificial Intelligence Methods in Combating Pandemics: A Systematic Review
F. Giuste, Wenqi Shi, Yuanda Zhu, Tarun Naren, Monica Isgut, Ying Sha, L. Tong, Mitali S. Gupte, May D. Wang
23 Dec 2021
Ensembling Off-the-shelf Models for GAN Training
Nupur Kumari, Richard Y. Zhang, Eli Shechtman, Jun-Yan Zhu
16 Dec 2021
GAM Changer: Editing Generalized Additive Models with Interactive Visualization
Zijie J. Wang, Alex Kale, Harsha Nori, P. Stella, M. Nunnally, Duen Horng Chau, Mihaela Vorvoreanu, Jennifer Wortman Vaughan, R. Caruana
KELM
06 Dec 2021
HIVE: Evaluating the Human Interpretability of Visual Explanations
Sunnie S. Y. Kim, Nicole Meister, V. V. Ramaswamy, Ruth C. Fong, Olga Russakovsky
06 Dec 2021
Editing a classifier by rewriting its prediction rules
Shibani Santurkar, Dimitris Tsipras, Mahalaxmi Elango, David Bau, Antonio Torralba, A. Madry
KELM
02 Dec 2021
HyperInverter: Improving StyleGAN Inversion via Hypernetwork
Tan M. Dinh, Anh Tran, Rang Nguyen, Binh-Son Hua
01 Dec 2021
MixACM: Mixup-Based Robustness Transfer via Distillation of Activated Channel Maps
Muhammad Awais, Fengwei Zhou, Chuanlong Xie, Jiawei Li, Sung-Ho Bae, Zhenguo Li
AAML
09 Nov 2021
Task Guided Compositional Representation Learning for ZDA
Shuang Liu, Mete Ozay
OOD
13 Sep 2021
IFBiD: Inference-Free Bias Detection
Ignacio Serna, Daniel DeAlcala, Aythami Morales, Julian Fierrez, J. Ortega-Garcia
CVBM
09 Sep 2021
NeuroCartography: Scalable Automatic Visual Summarization of Concepts in Deep Neural Networks
Haekyu Park, Nilaksh Das, Rahul Duggal, Austin P. Wright, Omar Shaikh, Fred Hohman, Duen Horng Chau
HAI
29 Aug 2021
Interpreting Face Inference Models using Hierarchical Network Dissection
Divyang Teotia, Àgata Lapedriza, Sarah Ostadabbas
CVBM
23 Aug 2021
Leveraging Sparse Linear Layers for Debuggable Deep Networks
Eric Wong, Shibani Santurkar, A. Madry
FAtt
11 May 2021
Nine Potential Pitfalls when Designing Human-AI Co-Creative Systems
Daniel Buschek, Lukas Mecke, Florian Lehmann, Hai Dang
01 Apr 2021
Neuron Coverage-Guided Domain Generalization
Chris Xing Tian, Haoliang Li, Xiaofei Xie, Yang Liu, Shiqi Wang
27 Feb 2021
Intuitively Assessing ML Model Reliability through Example-Based Explanations and Editing Model Inputs
Harini Suresh, Kathleen M. Lewis, John Guttag, Arvind Satyanarayan
FAtt
17 Feb 2021
Understanding Failures of Deep Networks via Robust Feature Extraction
Sahil Singla, Besmira Nushi, S. Shah, Ece Kamar, Eric Horvitz
FAtt
03 Dec 2020
FACEGAN: Facial Attribute Controllable rEenactment GAN
S. Tripathy, Arno Solin, Esa Rahtu
CVBM
09 Nov 2020
Role Taxonomy of Units in Deep Neural Networks
Yang Zhao, Hao Zhang, Xiuyuan Hu
02 Nov 2020
Exemplary Natural Images Explain CNN Activations Better than State-of-the-Art Feature Visualization
Judy Borowski, Roland S. Zimmermann, Judith Schepers, Robert Geirhos, Thomas S. A. Wallis, Matthias Bethge, Wieland Brendel
FAtt
23 Oct 2020
Meta-trained agents implement Bayes-optimal agents
Vladimir Mikulik, Grégoire Delétang, Tom McGrath, Tim Genewein, Miljan Martic, Shane Legg, Pedro A. Ortega
OOD FedML
21 Oct 2020
Quantifying Explainability of Saliency Methods in Deep Neural Networks with a Synthetic Dataset
Erico Tjoa, Cuntai Guan
XAI FAtt
07 Sep 2020
SensitiveLoss: Improving Accuracy and Fairness of Face Representations with Discrimination-Aware Deep Learning
Ignacio Serna, Aythami Morales, Julian Fierrez, Manuel Cebrian, Nick Obradovich, Iyad Rahwan
FaML CVBM
22 Apr 2020
Selectivity considered harmful: evaluating the causal impact of class selectivity in DNNs
Matthew L. Leavitt, Ari S. Morcos
03 Mar 2020
On Interpretability of Artificial Neural Networks: A Survey
Fenglei Fan, Jinjun Xiong, Mengzhou Li, Ge Wang
AAML AI4CE
08 Jan 2020
Revisiting the Importance of Individual Units in CNNs via Ablation
Bolei Zhou, Yiyou Sun, David Bau, Antonio Torralba
FAtt
07 Jun 2018