
Explaining the Unexplained: A CLass-Enhanced Attentive Response (CLEAR) Approach to Understanding Deep Neural Networks

13 April 2017 · arXiv: 1704.04133
Devinder Kumar, Alexander Wong, Graham W. Taylor

Papers citing "Explaining the Unexplained: A CLass-Enhanced Attentive Response (CLEAR) Approach to Understanding Deep Neural Networks"

4 / 4 papers shown
1. Under the Hood of Neural Networks: Characterizing Learned Representations by Functional Neuron Populations and Network Ablations
   Richard Meyes, Constantin Waubert de Puiseau, Andres Felipe Posada-Moreno, Tobias Meisen
   AI4CE · 02 Apr 2020

2. Democratisation of Usable Machine Learning in Computer Vision
   R. Bond, A. Koene, A. Dix, J. Boger, M. Mulvenna, M. Galushka, Brendon Bradley, Fiona Browne, Hui Wang, A. Wong
   18 Feb 2019

3. SISC: End-to-end Interpretable Discovery Radiomics-Driven Lung Cancer Prediction via Stacked Interpretable Sequencing Cells
   Vignesh Sankar, Devinder Kumar, David A Clausi, Graham W. Taylor, Alexander Wong
   15 Jan 2019

4. Visual Interpretability for Deep Learning: a Survey
   Quanshi Zhang, Song-Chun Zhu
   FaML · HAI · 02 Feb 2018