
Towards Interpretable R-CNN by Unfolding Latent Structures
arXiv:1711.05226 (v2, latest)
Tianfu Wu, Wei Sun, Xilai Li, Xi Song, Yangqiu Song
14 November 2017 (ObjD)

Papers citing "Towards Interpretable R-CNN by Unfolding Latent Structures" (12 papers):
LAP: An Attention-Based Module for Concept Based Self-Interpretation and Knowledge Injection in Convolutional Neural Networks
Rassa Ghavami Modegh, Ahmadali Salimi, Alireza Dizaji, Hamid R. Rabiee
27 Jan 2022 (FAtt)
Carrying out CNN Channel Pruning in a White Box
Yuxin Zhang, Mingbao Lin, Chia-Wen Lin, Jie Chen, Feiyue Huang, Yongjian Wu, Yonghong Tian, Rongrong Ji
24 Apr 2021 (VLM)
Qualitative Investigation in Explainable Artificial Intelligence: A Bit More Insight from Social Science
Adam J. Johs, Denise E. Agosto, Rosina O. Weber
13 Nov 2020
A Comprehensive Survey of Machine Learning Applied to Radar Signal Processing
Ping Lang, Xiongjun Fu, M. Martorella, Jian Dong, Rui Qin, Xianpeng Meng, M. Xie
29 Sep 2020
Explainability in Deep Reinforcement Learning
Alexandre Heuillet, Fabien Couthouis, Natalia Díaz Rodríguez
15 Aug 2020 (XAI)
CheXplain: Enabling Physicians to Explore and Understand Data-Driven, AI-Enabled Medical Imaging Analysis
Yao Xie, Melody Chen, David Kao, Ge Gao, Xiang 'Anthony' Chen
15 Jan 2020
On Interpretability of Artificial Neural Networks: A Survey
Fenglei Fan, Jinjun Xiong, Mengzhou Li, Ge Wang
08 Jan 2020 (AAML, AI4CE)
Towards a Unified Evaluation of Explanation Methods without Ground Truth
Hao Zhang, Jiayi Chen, Haotian Xue, Quanshi Zhang
20 Nov 2019 (XAI)
Variational Saccading: Efficient Inference for Large Resolution Images
Jason Ramapuram, M. Diephuis, Frantzeska Lavda, Russ Webb, Alexandros Kalousis
08 Dec 2018
Network Transplanting
Quanshi Zhang, Yu Yang, Ying Nian Wu, Song-Chun Zhu
26 Apr 2018 (OOD)
Explanation Methods in Deep Learning: Users, Values, Concerns and Challenges
Gabrielle Ras, Marcel van Gerven, W. Haselager
20 Mar 2018 (XAI)
Visual Interpretability for Deep Learning: a Survey
Quanshi Zhang, Song-Chun Zhu
02 Feb 2018 (FaML, HAI)