Interpretation of NLP models through input marginalization
arXiv: 2010.13984 · 27 October 2020
Siwon Kim, Jihun Yi, Eunji Kim, Sungroh Yoon
Tags: MILM, FAtt
Papers citing "Interpretation of NLP models through input marginalization" (18 papers shown)
- Counterfactuals As a Means for Evaluating Faithfulness of Attribution Methods in Autoregressive Language Models. Sepehr Kamahi, Yadollah Yaghoobzadeh. 21 Aug 2024.
- InfFeed: Influence Functions as a Feedback to Improve the Performance of Subjective Tasks. Somnath Banerjee, Maulindu Sarkar, Punyajoy Saha, Binny Mathew, Animesh Mukherjee. 22 Feb 2024. Tags: TDI.
- Interpreting Sentiment Composition with Latent Semantic Tree. Zhongtao Jiang, Yuanzhe Zhang, Cao Liu, Jiansong Chen, Jun Zhao, Kang Liu. 31 Aug 2023. Tags: CoGe.
- A Multi-Grained Self-Interpretable Symbolic-Neural Model For Single/Multi-Labeled Text Classification. Xiang Hu, Xinyu Kong, Kewei Tu. 06 Mar 2023. Tags: MILM, BDL.
- BMX: Boosting Natural Language Generation Metrics with Explainability. Christoph Leiter, Hoang-Quan Nguyen, Steffen Eger. 20 Dec 2022. Tags: ELM.
- How explainable are adversarially-robust CNNs? Mehdi Nourelahi, Lars Kotthoff, Peijie Chen, Anh Totti Nguyen. 25 May 2022. Tags: AAML, FAtt.
- The Solvability of Interpretability Evaluation Metrics. Yilun Zhou, J. Shah. 18 May 2022.
- Necessity and Sufficiency for Explaining Text Classifiers: A Case Study in Hate Speech Detection. Esma Balkir, I. Nejadgholi, Kathleen C. Fraser, S. Kiritchenko. 06 May 2022. Tags: FAtt.
- Learning to Scaffold: Optimizing Model Explanations for Teaching. Patrick Fernandes, Marcos Vinícius Treviso, Danish Pruthi, André F. T. Martins, Graham Neubig. 22 Apr 2022. Tags: FAtt.
- Towards Explainable Evaluation Metrics for Natural Language Generation. Christoph Leiter, Piyawat Lertvittayakumjorn, M. Fomicheva, Wei Zhao, Yang Gao, Steffen Eger. 21 Mar 2022. Tags: AAML, ELM.
- Deconfounding to Explanation Evaluation in Graph Neural Networks. Yingmin Wu, Xiang Wang, An Zhang, Xia Hu, Fuli Feng, Xiangnan He, Tat-Seng Chua. 21 Jan 2022. Tags: FAtt, CML.
- Double Trouble: How to not explain a text classifier's decisions using counterfactuals synthesized by masked language models? Thang M. Pham, Trung H. Bui, Long Mai, Anh Totti Nguyen. 22 Oct 2021.
- Interpreting Deep Learning Models in Natural Language Processing: A Review. Xiaofei Sun, Diyi Yang, Xiaoya Li, Tianwei Zhang, Yuxian Meng, Han Qiu, Guoyin Wang, Eduard H. Hovy, Jiwei Li. 20 Oct 2021.
- Jointly Attacking Graph Neural Network and its Explanations. Wenqi Fan, Wei Jin, Xiaorui Liu, Han Xu, Xianfeng Tang, Suhang Wang, Qing Li, Jiliang Tang, Jianping Wang, Charu C. Aggarwal. 07 Aug 2021. Tags: AAML.
- On Sample Based Explanation Methods for NLP: Efficiency, Faithfulness, and Semantic Evaluation. Wei Zhang, Ziming Huang, Yada Zhu, Guangnan Ye, Xiaodong Cui, Fan Zhang. 09 Jun 2021.
- The Out-of-Distribution Problem in Explainability and Search Methods for Feature Importance Explanations. Peter Hase, Harry Xie, Joey Tianyi Zhou. 01 Jun 2021. Tags: OODD, LRM, FAtt.
- Flexible Instance-Specific Rationalization of NLP Models. G. Chrysostomou, Nikolaos Aletras. 16 Apr 2021.
- Contrastive Explanations for Model Interpretability. Alon Jacovi, Swabha Swayamdipta, Shauli Ravfogel, Yanai Elazar, Yejin Choi, Yoav Goldberg. 02 Mar 2021.