ResearchTrend.AI

arXiv: 1909.09251 · Cited By
AllenNLP Interpret: A Framework for Explaining Predictions of NLP Models

19 September 2019
Eric Wallace
Jens Tuyls
Junlin Wang
Sanjay Subramanian
Matt Gardner
Sameer Singh
    MILM
arXiv · PDF · HTML

Papers citing "AllenNLP Interpret: A Framework for Explaining Predictions of NLP Models"

40 / 40 papers shown
On Behalf of the Stakeholders: Trends in NLP Model Interpretability in the Era of LLMs
Nitay Calderon
Roi Reichart
42
13
0
27 Jul 2024
Explaining Text Similarity in Transformer Models
Alexandros Vasileiou
Oliver Eberle
43
7
0
10 May 2024
Evaluating Webcam-based Gaze Data as an Alternative for Human Rationale Annotations
Stephanie Brandl
Oliver Eberle
Tiago F. R. Ribeiro
Anders Søgaard
Nora Hollenstein
40
1
0
29 Feb 2024
Explaining Math Word Problem Solvers
Abby Newcomb
Jugal Kalita
18
1
0
24 Jul 2023
SPARSEFIT: Few-shot Prompting with Sparse Fine-tuning for Jointly Generating Predictions and Natural Language Explanations
Jesus Solano
Oana-Maria Camburu
Pasquale Minervini
20
1
0
22 May 2023
IFAN: An Explainability-Focused Interaction Framework for Humans and NLP Models
Edoardo Mosca
Daryna Dementieva
Tohid Ebrahim Ajdari
Maximilian Kummeth
Kirill Gringauz
Yutong Zhou
Georg Groh
24
8
0
06 Mar 2023
Tell Model Where to Attend: Improving Interpretability of Aspect-Based Sentiment Classification via Small Explanation Annotations
Zhenxiao Cheng
Jie Zhou
Wen Wu
Qin Chen
Liang He
32
3
0
21 Feb 2023
Azimuth: Systematic Error Analysis for Text Classification
Gabrielle Gauthier Melançon
Orlando Marquez Ayala
Lindsay D. Brin
Chris Tyler
Frederic Branchaud-Charron
Joseph Marinier
Karine Grande
Dieu-Thu Le
16
3
0
16 Dec 2022
How Long Is Enough? Exploring the Optimal Intervals of Long-Range Clinical Note Language Modeling
Samuel Cahyawijaya
Bryan Wilie
Holy Lovenia
Huang Zhong
Mingqian Zhong
Yuk-Yu Nancy Ip
Pascale Fung
LM&MA
25
2
0
25 Oct 2022
Universal and Independent: Multilingual Probing Framework for Exhaustive Model Interpretation and Evaluation
O. Serikov
Vitaly Protasov
E. Voloshina
V. Knyazkova
Tatiana Shavrina
35
3
0
24 Oct 2022
What the DAAM: Interpreting Stable Diffusion Using Cross Attention
Raphael Tang
Linqing Liu
Akshat Pandey
Zhiying Jiang
Gefei Yang
K. Kumar
Pontus Stenetorp
Jimmy J. Lin
Ferhan Ture
34
167
0
10 Oct 2022
PainPoints: A Framework for Language-based Detection of Chronic Pain and Expert-Collaborative Text-Summarization
S. Fadnavis
Amit Dhurandhar
R. Norel
Jenna M. Reinen
C. Agurto
E. Secchettin
V. Schweiger
Giovanni Perini
Guillermo Cecchi
34
1
0
14 Sep 2022
Interpreting BERT-based Text Similarity via Activation and Saliency Maps
Itzik Malkiel
Dvir Ginzburg
Oren Barkan
Avi Caciularu
Jonathan Weill
Noam Koenigstein
36
20
0
13 Aug 2022
ferret: a Framework for Benchmarking Explainers on Transformers
Giuseppe Attanasio
Eliana Pastor
C. Bonaventura
Debora Nozza
33
30
0
02 Aug 2022
Mediators: Conversational Agents Explaining NLP Model Behavior
Nils Feldhus
A. Ravichandran
Sebastian Möller
43
16
0
13 Jun 2022
Challenges in Applying Explainability Methods to Improve the Fairness of NLP Models
Esma Balkir
S. Kiritchenko
I. Nejadgholi
Kathleen C. Fraser
21
36
0
08 Jun 2022
The Solvability of Interpretability Evaluation Metrics
Yilun Zhou
J. Shah
76
8
0
18 May 2022
LM-Debugger: An Interactive Tool for Inspection and Intervention in Transformer-Based Language Models
Mor Geva
Avi Caciularu
Guy Dar
Paul Roit
Shoval Sadde
Micah Shlain
Bar Tamir
Yoav Goldberg
KELM
35
27
0
26 Apr 2022
It Takes Two Flints to Make a Fire: Multitask Learning of Neural Relation and Explanation Classifiers
Zheng Tang
Mihai Surdeanu
27
6
0
25 Apr 2022
Grad-SAM: Explaining Transformers via Gradient Self-Attention Maps
Oren Barkan
Edan Hauon
Avi Caciularu
Ori Katz
Itzik Malkiel
Omri Armstrong
Noam Koenigstein
34
37
0
23 Apr 2022
First is Better Than Last for Language Data Influence
Chih-Kuan Yeh
Ankur Taly
Mukund Sundararajan
Frederick Liu
Pradeep Ravikumar
TDI
34
20
0
24 Feb 2022
Interpreting Language Models with Contrastive Explanations
Kayo Yin
Graham Neubig
MILM
23
78
0
21 Feb 2022
XAI for Transformers: Better Explanations through Conservative Propagation
Ameen Ali
Thomas Schnake
Oliver Eberle
G. Montavon
Klaus-Robert Muller
Lior Wolf
FAtt
15
89
0
15 Feb 2022
Measure and Improve Robustness in NLP Models: A Survey
Xuezhi Wang
Haohan Wang
Diyi Yang
139
130
0
15 Dec 2021
LMdiff: A Visual Diff Tool to Compare Language Models
Hendrik Strobelt
Benjamin Hoover
Arvind Satyanarayan
Sebastian Gehrmann
VLM
37
19
0
02 Nov 2021
Interpreting Deep Learning Models in Natural Language Processing: A Review
Xiaofei Sun
Diyi Yang
Xiaoya Li
Tianwei Zhang
Yuxian Meng
Han Qiu
Guoyin Wang
Eduard H. Hovy
Jiwei Li
19
44
0
20 Oct 2021
MoEfication: Transformer Feed-forward Layers are Mixtures of Experts
Zhengyan Zhang
Yankai Lin
Zhiyuan Liu
Peng Li
Maosong Sun
Jie Zhou
MoE
29
117
0
05 Oct 2021
T3-Vis: a visual analytic framework for Training and fine-Tuning Transformers in NLP
Raymond Li
Wen Xiao
Lanjun Wang
Hyeju Jang
Giuseppe Carenini
ViT
31
23
0
31 Aug 2021
On the Lack of Robust Interpretability of Neural Text Classifiers
Muhammad Bilal Zafar
Michele Donini
Dylan Slack
Cédric Archambeau
Sanjiv Ranjan Das
K. Kenthapadi
AAML
11
21
0
08 Jun 2021
Making Attention Mechanisms More Robust and Interpretable with Virtual Adversarial Training
Shunsuke Kitada
Hitoshi Iyatomi
AAML
28
8
0
18 Apr 2021
ExplainaBoard: An Explainable Leaderboard for NLP
Pengfei Liu
Jinlan Fu
Yanghua Xiao
Weizhe Yuan
Shuaichen Chang
Junqi Dai
Yixin Liu
Zihuiwen Ye
Zi-Yi Dou
Graham Neubig
XAI
LRM
ELM
28
54
0
13 Apr 2021
FastIF: Scalable Influence Functions for Efficient Model Interpretation and Debugging
Han Guo
Nazneen Rajani
Peter Hase
Joey Tianyi Zhou
Caiming Xiong
TDI
41
102
0
31 Dec 2020
Generating Plausible Counterfactual Explanations for Deep Transformers in Financial Text Classification
Linyi Yang
Eoin M. Kenny
T. L. J. Ng
Yi Yang
Barry Smyth
Ruihai Dong
15
70
0
23 Oct 2020
Why do you think that? Exploring Faithful Sentence-Level Rationales Without Supervision
Max Glockner
Ivan Habernal
Iryna Gurevych
LRM
27
25
0
07 Oct 2020
Does the Whole Exceed its Parts? The Effect of AI Explanations on Complementary Team Performance
Gagan Bansal
Tongshuang Wu
Joyce Zhou
Raymond Fok
Besmira Nushi
Ece Kamar
Marco Tulio Ribeiro
Daniel S. Weld
42
583
0
26 Jun 2020
Explaining Black Box Predictions and Unveiling Data Artifacts through Influence Functions
Xiaochuang Han
Byron C. Wallace
Yulia Tsvetkov
MILM
FAtt
AAML
TDI
23
165
0
14 May 2020
RICA: Evaluating Robust Inference Capabilities Based on Commonsense Axioms
Pei Zhou
Rahul Khanna
Seyeon Lee
Bill Yuchen Lin
Daniel E. Ho
Jay Pujara
Xiang Ren
ReLM
21
36
0
02 May 2020
TextAttack: A Framework for Adversarial Attacks, Data Augmentation, and Adversarial Training in NLP
John X. Morris
Eli Lifland
Jin Yong Yoo
J. E. Grigsby
Di Jin
Yanjun Qi
SILM
27
69
0
29 Apr 2020
Reevaluating Adversarial Examples in Natural Language
John X. Morris
Eli Lifland
Jack Lanchantin
Yangfeng Ji
Yanjun Qi
SILM
AAML
20
111
0
25 Apr 2020
CrossCheck: Rapid, Reproducible, and Interpretable Model Evaluation
Dustin L. Arendt
Zhuanyi Huang
Prasha Shrestha
Ellyn Ayton
M. Glenski
Svitlana Volkova
27
8
0
16 Apr 2020