ERASER: A Benchmark to Evaluate Rationalized NLP Models
arXiv: 1911.03429 · 8 November 2019
Jay DeYoung, Sarthak Jain, Nazneen Rajani, Eric P. Lehman, Caiming Xiong, R. Socher, Byron C. Wallace
Papers citing "ERASER: A Benchmark to Evaluate Rationalized NLP Models" (50 of 138 papers shown)
MetaLogic: Logical Reasoning Explanations with Fine-Grained Structure
Yinya Huang, Hongming Zhang, Ruixin Hong, Xiaodan Liang, Changshui Zhang, Dong Yu
LRM · 62 · 7 · 0 · 22 Oct 2022

StyLEx: Explaining Style Using Human Lexical Annotations
Shirley Anugrah Hayati, Kyumin Park, Dheeraj Rajagopal, Lyle Ungar, Dongyeop Kang
28 · 3 · 0 · 14 Oct 2022

Controlling Bias Exposure for Fair Interpretable Predictions
Zexue He, Yu-Xiang Wang, Julian McAuley, Bodhisattwa Prasad Majumder
22 · 19 · 0 · 14 Oct 2022

On the Explainability of Natural Language Processing Deep Models
Julia El Zini, M. Awad
29 · 82 · 0 · 13 Oct 2022

CORE: A Retrieve-then-Edit Framework for Counterfactual Data Generation
Tanay Dixit, Bhargavi Paranjape, Hannaneh Hajishirzi, Luke Zettlemoyer
SyDa · 146 · 23 · 0 · 10 Oct 2022

Honest Students from Untrusted Teachers: Learning an Interpretable Question-Answering Pipeline from a Pretrained Language Model
Jacob Eisenstein, D. Andor, Bernd Bohnet, Michael Collins, David M. Mimno
LRM · 191 · 24 · 0 · 05 Oct 2022

WildQA: In-the-Wild Video Question Answering
Santiago Castro, Naihao Deng, Pingxuan Huang, Mihai Burzo, Rada Mihalcea
70 · 7 · 0 · 14 Sep 2022

ferret: a Framework for Benchmarking Explainers on Transformers
Giuseppe Attanasio, Eliana Pastor, C. Bonaventura, Debora Nozza
33 · 30 · 0 · 02 Aug 2022

An Interpretability Evaluation Benchmark for Pre-trained Language Models
Ya-Ming Shen, Lijie Wang, Ying Chen, Xinyan Xiao, Jing Liu, Hua-Hong Wu
37 · 4 · 0 · 28 Jul 2022

FRAME: Evaluating Rationale-Label Consistency Metrics for Free-Text Rationales
Aaron Chan, Shaoliang Nie, Liang Tan, Xiaochang Peng, Hamed Firooz, Maziar Sanjabi, Xiang Ren
47 · 9 · 0 · 02 Jul 2022

The NLP Sandbox: an efficient model-to-data system to enable federated and unbiased evaluation of clinical NLP models
Yao Yan, Thomas Yu, Kathleen Muenzen, Sijia Liu, Connor Boyle, ..., Bradley Taylor, James A Eddy, J. Guinney, S. Mooney, T. Schaffter
MedIm · 36 · 0 · 0 · 28 Jun 2022

BAGEL: A Benchmark for Assessing Graph Neural Network Explanations
Mandeep Rathee, Thorben Funke, Avishek Anand, Megha Khosla
44 · 14 · 0 · 28 Jun 2022

Mediators: Conversational Agents Explaining NLP Model Behavior
Nils Feldhus, A. Ravichandran, Sebastian Möller
35 · 16 · 0 · 13 Jun 2022

Resolving the Human Subjects Status of Machine Learning's Crowdworkers
Divyansh Kaushik, Zachary Chase Lipton, A. London
25 · 2 · 0 · 08 Jun 2022

Challenges in Applying Explainability Methods to Improve the Fairness of NLP Models
Esma Balkir, S. Kiritchenko, I. Nejadgholi, Kathleen C. Fraser
21 · 36 · 0 · 08 Jun 2022

How explainable are adversarially-robust CNNs?
Mehdi Nourelahi, Lars Kotthoff, Peijie Chen, Anh Totti Nguyen
AAML, FAtt · 22 · 8 · 0 · 25 May 2022

Learning to Ignore Adversarial Attacks
Yiming Zhang, Yan Zhou, Samuel Carton, Chenhao Tan
51 · 2 · 0 · 23 May 2022

KOLD: Korean Offensive Language Dataset
Young-kuk Jeong, Juhyun Oh, Jaimeen Ahn, Jongwon Lee, Jihyung Moon, Sungjoon Park, Alice H. Oh
57 · 25 · 0 · 23 May 2022

A Fine-grained Interpretability Evaluation Benchmark for Neural NLP
Lijie Wang, Yaozong Shen, Shu-ping Peng, Shuai Zhang, Xinyan Xiao, Hao Liu, Hongxuan Tang, Ying Chen, Hua-Hong Wu, Haifeng Wang
ELM · 19 · 21 · 0 · 23 May 2022

Necessity and Sufficiency for Explaining Text Classifiers: A Case Study in Hate Speech Detection
Esma Balkir, I. Nejadgholi, Kathleen C. Fraser, S. Kiritchenko
FAtt · 35 · 27 · 0 · 06 May 2022

Emp-RFT: Empathetic Response Generation via Recognizing Feature Transitions between Utterances
Wongyung Kim, Youbin Ahn, Donghyun Kim, Kyong-Ho Lee
42 · 16 · 0 · 06 May 2022

METGEN: A Module-Based Entailment Tree Generation Framework for Answer Explanation
Ruixin Hong, Hongming Zhang, Xintong Yu, Changshui Zhang
ReLM, LRM · 32 · 32 · 0 · 05 May 2022

CAVES: A Dataset to facilitate Explainable Classification and Summarization of Concerns towards COVID Vaccines
Soham Poddar, Azlaan Mustafa Samad, Rajdeep Mukherjee, Niloy Ganguly, Saptarshi Ghosh
17 · 26 · 0 · 28 Apr 2022

Can Rationalization Improve Robustness?
Howard Chen, Jacqueline He, Karthik Narasimhan, Danqi Chen
AAML · 23 · 40 · 0 · 25 Apr 2022

Learning to Scaffold: Optimizing Model Explanations for Teaching
Patrick Fernandes, Marcos Vinícius Treviso, Danish Pruthi, André F. T. Martins, Graham Neubig
FAtt · 25 · 22 · 0 · 22 Apr 2022

Re-Examining Human Annotations for Interpretable NLP
Cheng-Han Chiang, Hung-yi Lee
FAtt, XAI · 17 · 6 · 0 · 10 Apr 2022

E-KAR: A Benchmark for Rationalizing Natural Language Analogical Reasoning
Jiangjie Chen, Rui Xu, Ziquan Fu, Wei Shi, Zhongqiao Li, Xinbo Zhang, Changzhi Sun, Lei Li, Yanghua Xiao, Hao Zhou
ELM · 23 · 35 · 0 · 16 Mar 2022

Measuring the Mixing of Contextual Information in the Transformer
Javier Ferrando, Gerard I. Gállego, Marta R. Costa-jussà
26 · 49 · 0 · 08 Mar 2022

DermX: an end-to-end framework for explainable automated dermatological diagnosis
Raluca Jalaboi, F. Faye, Mauricio Orbes-Arteaga, D. Jørgensen, Ole Winther, A. Galimzianova
MedIm · 16 · 17 · 0 · 14 Feb 2022

RerrFact: Reduced Evidence Retrieval Representations for Scientific Claim Verification
Ashish Rana, Deepanshu Khanna, Tirthankar Ghosal, Muskaan Singh, Harpreet Singh, P. Rana
12 · 5 · 0 · 05 Feb 2022

Making a (Counterfactual) Difference One Rationale at a Time
Michael J. Plyler, Michal Green, Min Chi
21 · 11 · 0 · 13 Jan 2022

Explain, Edit, and Understand: Rethinking User Study Design for Evaluating Model Explanations
Siddhant Arora, Danish Pruthi, Norman M. Sadeh, William W. Cohen, Zachary Chase Lipton, Graham Neubig
FAtt · 40 · 38 · 0 · 17 Dec 2021

UNIREX: A Unified Learning Framework for Language Model Rationale Extraction
Aaron Chan, Maziar Sanjabi, Lambert Mathias, L Tan, Shaoliang Nie, Xiaochang Peng, Xiang Ren, Hamed Firooz
41 · 41 · 0 · 16 Dec 2021

Generating Fluent Fact Checking Explanations with Unsupervised Post-Editing
Shailza Jolly, Pepa Atanasova, Isabelle Augenstein
36 · 13 · 0 · 13 Dec 2021

What to Learn, and How: Toward Effective Learning from Rationales
Samuel Carton, Surya Kanoria, Chenhao Tan
43 · 22 · 0 · 30 Nov 2021

Improving Deep Learning Interpretability by Saliency Guided Training
Aya Abdelsalam Ismail, H. C. Bravo, S. Feizi
FAtt · 20 · 80 · 0 · 29 Nov 2021

Towards Interpretable and Reliable Reading Comprehension: A Pipeline Model with Unanswerability Prediction
Kosuke Nishida, Kyosuke Nishida, Itsumi Saito, Sen Yoshida
38 · 7 · 0 · 17 Nov 2021

Understanding Interlocking Dynamics of Cooperative Rationalization
Mo Yu, Yang Zhang, Shiyu Chang, Tommi Jaakkola
20 · 41 · 0 · 26 Oct 2021

Double Trouble: How to not explain a text classifier's decisions using counterfactuals synthesized by masked language models?
Thang M. Pham, Trung H. Bui, Long Mai, Anh Totti Nguyen
21 · 7 · 0 · 22 Oct 2021

Interpreting Deep Learning Models in Natural Language Processing: A Review
Xiaofei Sun, Diyi Yang, Xiaoya Li, Tianwei Zhang, Yuxian Meng, Han Qiu, Guoyin Wang, Eduard H. Hovy, Jiwei Li
17 · 44 · 0 · 20 Oct 2021

The Irrationality of Neural Rationale Models
Yiming Zheng, Serena Booth, J. Shah, Yilun Zhou
35 · 16 · 0 · 14 Oct 2021

The Eval4NLP Shared Task on Explainable Quality Estimation: Overview and Results
M. Fomicheva, Piyawat Lertvittayakumjorn, Wei-Ye Zhao, Steffen Eger, Yang Gao
ELM · 24 · 39 · 0 · 08 Oct 2021

Trustworthy AI: From Principles to Practices
Bo-wen Li, Peng Qi, Bo Liu, Shuai Di, Jingen Liu, Jiquan Pei, Jinfeng Yi, Bowen Zhou
119 · 355 · 0 · 04 Oct 2021

Decision-Focused Summarization
Chao-Chun Hsu, Chenhao Tan
39 · 16 · 0 · 14 Sep 2021

Summarize-then-Answer: Generating Concise Explanations for Multi-hop Reading Comprehension
Naoya Inoue, H. Trivedi, Steven K. Sinha, Niranjan Balasubramanian, Kentaro Inui
58 · 14 · 0 · 14 Sep 2021

AutoTriggER: Label-Efficient and Robust Named Entity Recognition with Auxiliary Trigger Extraction
Dong-Ho Lee, Ravi Kiran Selvam, Sheikh Muhammad Sarwar, Bill Yuchen Lin, Fred Morstatter, Jay Pujara, Elizabeth Boschee, James Allan, Xiang Ren
31 · 2 · 0 · 10 Sep 2021

Diagnostics-Guided Explanation Generation
Pepa Atanasova, J. Simonsen, Christina Lioma, Isabelle Augenstein
LRM, FAtt · 38 · 6 · 0 · 08 Sep 2021

Counterfactual Evaluation for Explainable AI
Yingqiang Ge, Shuchang Liu, Zelong Li, Shuyuan Xu, Shijie Geng, Yunqi Li, Juntao Tan, Fei Sun, Yongfeng Zhang
CML · 38 · 13 · 0 · 05 Sep 2021

Combining Transformers with Natural Language Explanations
Federico Ruggeri, Marco Lippi, Paolo Torroni
25 · 1 · 0 · 02 Sep 2021

Enjoy the Salience: Towards Better Transformer-based Faithful Explanations with Word Salience
G. Chrysostomou, Nikolaos Aletras
32 · 16 · 0 · 31 Aug 2021