ResearchTrend.AI
Evaluating the Faithfulness of Importance Measures in NLP by Recursively Masking Allegedly Important Tokens and Retraining

15 October 2021 · arXiv:2110.08412
Andreas Madsen, Nicholas Meade, Vaibhav Adlakha, Siva Reddy

Papers citing "Evaluating the Faithfulness of Importance Measures in NLP by Recursively Masking Allegedly Important Tokens and Retraining"

10 / 10 papers shown
From Critique to Clarity: A Pathway to Faithful and Personalized Code Explanations with Large Language Models
Zexing Xu, Zhuang Luo, Yichuan Li, Kyumin Lee, S. Rasoul Etesami
38 · 0 · 0 · 28 Jan 2025

F-Fidelity: A Robust Framework for Faithfulness Evaluation of Explainable AI
Xu Zheng, Farhad Shirani, Zhuomin Chen, Chaohao Lin, Wei Cheng, Wenbo Guo, Dongsheng Luo
Topics: AAML
38 · 0 · 0 · 03 Oct 2024

Evaluating Saliency Explanations in NLP by Crowdsourcing
Xiaotian Lu, Jiyi Li, Zhen Wan, Xiaofeng Lin, Koh Takeuchi, Hisashi Kashima
Topics: XAI, FAtt, LRM
27 · 1 · 0 · 17 May 2024

Easy to Decide, Hard to Agree: Reducing Disagreements Between Saliency Methods
Josip Jukić, Martin Tutek, Jan Snajder
Topics: FAtt
21 · 0 · 0 · 15 Nov 2022

Measuring the Mixing of Contextual Information in the Transformer
Javier Ferrando, Gerard I. Gállego, Marta R. Costa-jussà
26 · 49 · 0 · 08 Mar 2022

Human Interpretation of Saliency-based Explanation Over Text
Hendrik Schuff, Alon Jacovi, Heike Adel, Yoav Goldberg, Ngoc Thang Vu
Topics: MILM, XAI, FAtt
144 · 39 · 0 · 27 Jan 2022

When less is more: Simplifying inputs aids neural network understanding
R. Schirrmeister, Rosanne Liu, Sara Hooker, T. Ball
24 · 5 · 0 · 14 Jan 2022

"Will You Find These Shortcuts?" A Protocol for Evaluating the Faithfulness of Input Salience Methods for Text Classification
Jasmijn Bastings, Sebastian Ebert, Polina Zablotskaia, Anders Sandholm, Katja Filippova
115 · 75 · 0 · 14 Nov 2021

A Framework for Learning to Request Rich and Contextually Useful Information from Humans
Khanh Nguyen, Yonatan Bisk, Hal Daumé
47 · 16 · 0 · 14 Oct 2021

Towards A Rigorous Science of Interpretable Machine Learning
Finale Doshi-Velez, Been Kim
Topics: XAI, FaML
257 · 3,684 · 0 · 28 Feb 2017