Towards Self-Explainability of Deep Neural Networks with Heatmap Captioning and Large-Language Models (arXiv: 2304.02202)

5 April 2023
Osman Tursun, Simon Denman, Sridha Sridharan, Clinton Fookes
Tags: ViT, VLM

Papers citing "Towards Self-Explainability of Deep Neural Networks with Heatmap Captioning and Large-Language Models"

6 / 6 papers shown

Part-based Quantitative Analysis for Heatmaps
Osman Tursun, Sinan Kalkan, Simon Denman, Sridha Sridharan, Clinton Fookes
22 May 2024

Explaining Autonomy: Enhancing Human-Robot Interaction through Explanation Generation with Large Language Models
David Sobrín-Hidalgo, Miguel Ángel González Santamarta, Ángel Manuel Guerrero Higueras, Francisco J. Rodríguez-Lera, Vicente Matellán Olivera
06 Feb 2024

The Impact of Imperfect XAI on Human-AI Decision-Making
Katelyn Morrison, Philipp Spitzer, Violet Turri, Michelle C. Feng, Niklas Kühl, Adam Perer
25 Jul 2023

Saliency Map Verbalization: Comparing Feature Importance Representations from Model-free and Instruction-based Methods
Nils Feldhus, Leonhard Hennig, Maximilian Dustin Nasert, Christopher Ebert, Robert Schwarzenberg, Sebastian Möller
Tags: FAtt
13 Oct 2022

Training language models to follow instructions with human feedback
Long Ouyang, Jeff Wu, Xu Jiang, Diogo Almeida, Carroll L. Wainwright, ..., Amanda Askell, Peter Welinder, Paul Christiano, Jan Leike, Ryan J. Lowe
Tags: OSLM, ALM
04 Mar 2022

HIVE: Evaluating the Human Interpretability of Visual Explanations
Sunnie S. Y. Kim, Nicole Meister, V. V. Ramaswamy, Ruth C. Fong, Olga Russakovsky
06 Dec 2021