ResearchTrend.AI

arXiv:1710.10577 · Cited By
Examining CNN Representations with respect to Dataset Bias

29 October 2017 · Quanshi Zhang, Wenguan Wang, Song-Chun Zhu
Tags: SSL, FAtt

Papers citing "Examining CNN Representations with respect to Dataset Bias" (19 papers)

  1. Faithful Counterfactual Visual Explanations (FCVE)
     Bismillah Khan, Syed Ali Tariq, Tehseen Zia, Muhammad Ahsan, David Windridge
     12 Jan 2025

  2. Towards Counterfactual and Contrastive Explainability and Transparency of DCNN Image Classifiers
     Syed Ali Tariq, Tehseen Zia, Mubeen Ghafoor
     Tags: AAML · 12 Jan 2025

  3. Zone Evaluation: Revealing Spatial Bias in Object Detection
     Zhaohui Zheng, Yuming Chen, Qibin Hou, Xiang Li, Ping Wang, Ming-Ming Cheng
     Tags: ObjD · 20 Oct 2023

  4. Discovering and Explaining the Non-Causality of Deep Learning in SAR ATR
     Wei-Jang Li, Wei Yang, Li Liu, Wenpeng Zhang, Yong-Jin Liu
     03 Apr 2023

  5. Concept Evolution in Deep Learning Training: A Unified Interpretation Framework and Discoveries
     Haekyu Park, Seongmin Lee, Benjamin Hoover, Austin P. Wright, Omar Shaikh, Rahul Duggal, Nilaksh Das, Kevin Wenliang Li, Judy Hoffman, Duen Horng Chau
     30 Mar 2022

  6. Concept Embedding Analysis: A Review
     Gesina Schwalbe
     25 Mar 2022

  7. IFBiD: Inference-Free Bias Detection
     Ignacio Serna, Daniel DeAlcala, Aythami Morales, Julian Fierrez, J. Ortega-Garcia
     Tags: CVBM · 09 Sep 2021

  8. Taxonomy of Machine Learning Safety: A Survey and Primer
     Sina Mohseni, Haotao Wang, Zhiding Yu, Chaowei Xiao, Zhangyang Wang, J. Yadawa
     09 Jun 2021

  9. Deep Learning for Political Science
     Kakia Chatsiou, Slava Jankin
     Tags: AI4CE · 13 May 2020

  10. Deceptive AI Explanations: Creation and Detection
      Johannes Schneider, Christian Meske, Michalis Vlachos
      21 Jan 2020

  11. On Interpretability of Artificial Neural Networks: A Survey
      Fenglei Fan, Jinjun Xiong, Mengzhou Li, Ge Wang
      Tags: AAML, AI4CE · 08 Jan 2020

  12. Directing DNNs Attention for Facial Attribution Classification using Gradient-weighted Class Activation Mapping
      Xi Yang, Bojian Wu, Issei Sato, Takeo Igarashi
      Tags: CVBM · 02 May 2019

  13. Rectified Decision Trees: Towards Interpretability, Compression and Empirical Soundness
      Jiawang Bai, Yiming Li, Jiawei Li, Yong Jiang, Shutao Xia
      14 Mar 2019

  14. Interpretable CNNs for Object Classification
      Quanshi Zhang, Xin Eric Wang, Ying Nian Wu, Huilin Zhou, Song-Chun Zhu
      08 Jan 2019

  15. Mining Interpretable AOG Representations from Convolutional Networks via Active Question Answering
      Quanshi Zhang, Ruiming Cao, Ying Nian Wu, Song-Chun Zhu
      18 Dec 2018

  16. Explaining Neural Networks Semantically and Quantitatively
      Runjin Chen, Hao Chen, Ge Huang, Jie Ren, Quanshi Zhang
      Tags: FAtt · 18 Dec 2018

  17. Counterfactuals uncover the modular structure of deep generative models
      M. Besserve, Arash Mehrjou, Rémy Sun, Bernhard Schölkopf
      Tags: DRL, BDL, DiffM · 08 Dec 2018

  18. A Multidisciplinary Survey and Framework for Design and Evaluation of Explainable AI Systems
      Sina Mohseni, Niloofar Zarei, Eric D. Ragan
      28 Nov 2018

  19. Visual Interpretability for Deep Learning: a Survey
      Quanshi Zhang, Song-Chun Zhu
      Tags: FaML, HAI · 02 Feb 2018