Evaluating the Faithfulness of Saliency-based Explanations for Deep Learning Models for Temporal Colour Constancy

15 November 2022
Matteo Rizzo, Cristina Conati, Daesik Jang, Hui Hu
Topics: FAtt

Papers citing "Evaluating the Faithfulness of Saliency-based Explanations for Deep Learning Models for Temporal Colour Constancy"

6 papers shown

Attention Mechanisms in Computer Vision: A Survey
Meng-Hao Guo, Tianhan Xu, Jiangjiang Liu, Zheng-Ning Liu, Peng-Tao Jiang, Tai-Jiang Mu, Song-Hai Zhang, Ralph Robert Martin, Ming-Ming Cheng, Shimin Hu
15 Nov 2021

Aligning Faithful Interpretations with their Social Attribution
Alon Jacovi, Yoav Goldberg
01 Jun 2020

Attention is not not Explanation
Sarah Wiegreffe, Yuval Pinter
Topics: XAI, AAML, FAtt
13 Aug 2019

Attention is not Explanation
Sarthak Jain, Byron C. Wallace
Topics: FAtt
26 Feb 2019

Explainable Prediction of Medical Codes from Clinical Text
J. Mullenbach, Sarah Wiegreffe, J. Duke, Jimeng Sun, Jacob Eisenstein
Topics: FAtt
15 Feb 2018

Neural Machine Translation by Jointly Learning to Align and Translate
Dzmitry Bahdanau, Kyunghyun Cho, Yoshua Bengio
Topics: AIMat
01 Sep 2014