Saliency Learning: Teaching the Model Where to Pay Attention
arXiv:1902.08649
22 February 2019
Reza Ghaeini, Xiaoli Z. Fern, Hamed Shahbazi, Prasad Tadepalli
Topics: FAtt, XAI

Papers citing "Saliency Learning: Teaching the Model Where to Pay Attention"

11 / 11 papers shown
On Behalf of the Stakeholders: Trends in NLP Model Interpretability in the Era of LLMs
Nitay Calderon, Roi Reichart
27 Jul 2024

Explanation Regularisation through the Lens of Attributions
Pedro Ferreira, Wilker Aziz, Ivan Titov
23 Jul 2024

Exploring the Trade-off Between Model Performance and Explanation Plausibility of Text Classifiers Using Human Rationales
Lucas Resck, Marcos M. Raimundo, Jorge Poco
03 Apr 2024

SCAAT: Improving Neural Network Interpretability via Saliency Constrained Adaptive Adversarial Training
Rui Xu, Wenkang Qin, Peixiang Huang, Hao Wang, Lin Luo
Topics: FAtt, AAML
09 Nov 2023

Interpretability-Aware Vision Transformer
Yao Qiang, Chengyin Li, Prashant Khanduri, D. Zhu
Topics: ViT
14 Sep 2023

Going Beyond XAI: A Systematic Survey for Explanation-Guided Learning
Yuyang Gao, Siyi Gu, Junji Jiang, S. Hong, Dazhou Yu, Liang Zhao
07 Dec 2022

XMD: An End-to-End Framework for Interactive Explanation-Based Debugging of NLP Models
Dong-Ho Lee, Akshen Kadakia, Brihi Joshi, Aaron Chan, Ziyi Liu, ..., Takashi Shibuya, Ryosuke Mitani, Toshiyuki Sekiya, Jay Pujara, Xiang Ren
Topics: LRM
30 Oct 2022

SuMe: A Dataset Towards Summarizing Biomedical Mechanisms
Mohaddeseh Bastan, N. Shankar, Mihai Surdeanu, Niranjan Balasubramanian
10 May 2022

Improving Deep Learning Interpretability by Saliency Guided Training
Aya Abdelsalam Ismail, H. C. Bravo, S. Feizi
Topics: FAtt
29 Nov 2021

Diagnostics-Guided Explanation Generation
Pepa Atanasova, J. Simonsen, Christina Lioma, Isabelle Augenstein
Topics: LRM, FAtt
08 Sep 2021

Towards Robust Classification Model by Counterfactual and Invariant Data Generation
C. Chang, George Adam, Anna Goldenberg
Topics: OOD, CML
02 Jun 2021