ResearchTrend.AI
© 2025 ResearchTrend.AI, All rights reserved.

"Will You Find These Shortcuts?" A Protocol for Evaluating the Faithfulness of Input Salience Methods for Text Classification

14 November 2021
Jasmijn Bastings, Sebastian Ebert, Polina Zablotskaia, Anders Sandholm, Katja Filippova

Papers citing '"Will You Find These Shortcuts?" A Protocol for Evaluating the Faithfulness of Input Salience Methods for Text Classification'

14 / 14 papers shown
Short-circuiting Shortcuts: Mechanistic Investigation of Shortcuts in Text Classification
Leon Eshuijs, Shihan Wang, Antske Fokkens (09 May 2025)
How Do Artificial Intelligences Think? The Three Mathematico-Cognitive Factors of Categorical Segmentation Operated by Synthetic Neurons
Michael Pichat, William Pogrund, Armanush Gasparian, Paloma Pichat, Samuel Demarchi, Michael Veillet-Guillem (26 Dec 2024)
Counterfactuals As a Means for Evaluating Faithfulness of Attribution Methods in Autoregressive Language Models
Sepehr Kamahi, Yadollah Yaghoobzadeh (21 Aug 2024)
Accelerating the Global Aggregation of Local Explanations [FAtt]
Alon Mor, Yonatan Belinkov, B. Kimelfeld (13 Dec 2023)
Out-of-Distribution Generalization in Text Classification: Past, Present, and Future [OOD]
Linyi Yang, Y. Song, Xuan Ren, Chenyang Lyu, Yidong Wang, Lingqiao Liu, Jindong Wang, Jennifer Foster, Yue Zhang (23 May 2023)
Assessing the Impact of Sequence Length Learning on Classification Tasks for Transformer Encoder Models
Jean-Thomas Baillargeon, Luc Lamontagne (16 Dec 2022)
Easy to Decide, Hard to Agree: Reducing Disagreements Between Saliency Methods [FAtt]
Josip Jukić, Martin Tutek, Jan Šnajder (15 Nov 2022)
Finding Dataset Shortcuts with Grammar Induction
Dan Friedman, Alexander Wettig, Danqi Chen (20 Oct 2022)
Auditing Visualizations: Transparency Methods Struggle to Detect Anomalous Behavior [AAML]
Jean-Stanislas Denain, Jacob Steinhardt (27 Jun 2022)
Measuring the Mixing of Contextual Information in the Transformer
Javier Ferrando, Gerard I. Gállego, Marta R. Costa-jussà (08 Mar 2022)
When less is more: Simplifying inputs aids neural network understanding
R. Schirrmeister, Rosanne Liu, Sara Hooker, T. Ball (14 Jan 2022)
Evaluating the Faithfulness of Importance Measures in NLP by Recursively Masking Allegedly Important Tokens and Retraining
Andreas Madsen, Nicholas Meade, Vaibhav Adlakha, Siva Reddy (15 Oct 2021)
Are We Modeling the Task or the Annotator? An Investigation of Annotator Bias in Natural Language Understanding Datasets
Mor Geva, Yoav Goldberg, Jonathan Berant (21 Aug 2019)
Hypothesis Only Baselines in Natural Language Inference
Adam Poliak, Jason Naradowsky, Aparajita Haldar, Rachel Rudinger, Benjamin Van Durme (02 May 2018)