LFI-CAM: Learning Feature Importance for Better Visual Explanation
arXiv:2105.00937 · 3 May 2021
Kwang Hee Lee, Chaewon Park, J. Oh, Nojun Kwak
Topics: FAtt

Papers citing "LFI-CAM: Learning Feature Importance for Better Visual Explanation"

6 / 6 papers shown
1. Overview of Class Activation Maps for Visualization Explainability
   Anh Pham Thi Minh · HAI, FAtt · 25 Sep 2023
2. Towards a Praxis for Intercultural Ethics in Explainable AI
   Chinasa T. Okolo · 24 Apr 2023
3. Object-ABN: Learning to Generate Sharp Attention Maps for Action Recognition
   Tomoya Nitta, Tsubasa Hirakawa, H. Fujiyoshi, Toru Tamaki · 27 Jul 2022
4. Editing Out-of-domain GAN Inversion via Differential Activations
   Haorui Song, Yong Du, Tianyi Xiang, Junyu Dong, Jing Qin, Shengfeng He · DiffM · 17 Jul 2022
5. Methods for Interpreting and Understanding Deep Neural Networks
   G. Montavon, Wojciech Samek, K. Müller · FaML · 24 Jun 2017
6. Aggregated Residual Transformations for Deep Neural Networks
   Saining Xie, Ross B. Girshick, Piotr Dollár, Z. Tu, Kaiming He · 16 Nov 2016