Learning to Faithfully Rationalize by Construction (arXiv 2005.00115)
30 April 2020
Sarthak Jain, Sarah Wiegreffe, Yuval Pinter, Byron C. Wallace

Papers citing "Learning to Faithfully Rationalize by Construction" (50 papers shown)

Adversarial Cooperative Rationalization: The Risk of Spurious Correlations in Even Clean Datasets. Wei Liu, Zhongyu Niu, Lang Gao, Zhiying Deng, Jun Wang, Haozhao Wang, Ruixuan Li. 04 May 2025
On Behalf of the Stakeholders: Trends in NLP Model Interpretability in the Era of LLMs. Nitay Calderon, Roi Reichart. 27 Jul 2024
Exploring the Trade-off Between Model Performance and Explanation Plausibility of Text Classifiers Using Human Rationales. Lucas Resck, Marcos M. Raimundo, Jorge Poco. 03 Apr 2024
ALMANACS: A Simulatability Benchmark for Language Model Explainability. Edmund Mills, Shiye Su, Stuart J. Russell, Scott Emmons. 20 Dec 2023
Enhancing the Rationale-Input Alignment for Self-explaining Rationalization. Wei Liu, Yining Qi, Jun Wang, Zhiying Deng, Yuankai Zhang, Chengwei Wang, Ruixuan Li. 07 Dec 2023
Decoupled Rationalization with Asymmetric Learning Rates: A Flexible Lipschitz Restraint. Wei Liu, Jun Wang, Yining Qi, Rui Li, Yang Qiu, Yuankai Zhang, Jie Han, Yixiong Zou. 23 May 2023
Consistent Multi-Granular Rationale Extraction for Explainable Multi-hop Fact Verification. Jiasheng Si, Yingjie Zhu, Deyu Zhou. 16 May 2023 [AAML]
MGR: Multi-generator Based Rationalization. Wei Liu, Yining Qi, Jun Wang, Rui Li, Xinyang Li, Yuankai Zhang, Yang Qiu. 08 May 2023
Finding the Needle in a Haystack: Unsupervised Rationale Extraction from Long Text Classifiers. Kamil Bujel, Andrew Caines, H. Yannakoudakis, Marek Rei. 14 Mar 2023 [AI4TS]
Going Beyond XAI: A Systematic Survey for Explanation-Guided Learning. Yuyang Gao, Siyi Gu, Junji Jiang, S. Hong, Dazhou Yu, Liang Zhao. 07 Dec 2022
Exploring Faithful Rationale for Multi-hop Fact Verification via Salience-Aware Graph Learning. Jiasheng Si, Yingjie Zhu, Deyu Zhou. 02 Dec 2022
SOLD: Sinhala Offensive Language Dataset. Tharindu Ranasinghe, Isuri Anuradha, Damith Premasiri, Kanishka Silva, Hansi Hettiarachchi, Lasitha Uyangodage, Marcos Zampieri. 01 Dec 2022
Easy to Decide, Hard to Agree: Reducing Disagreements Between Saliency Methods. Josip Jukić, Martin Tutek, Jan Snajder. 15 Nov 2022 [FAtt]
Calibration Meets Explanation: A Simple and Effective Approach for Model Confidence Estimates. Dongfang Li, Baotian Hu, Qingcai Chen. 06 Nov 2022
Robustifying Sentiment Classification by Maximally Exploiting Few Counterfactuals. Maarten De Raedt, Fréderic Godin, Chris Develder, Thomas Demeester. 21 Oct 2022
StyLEx: Explaining Style Using Human Lexical Annotations. Shirley Anugrah Hayati, Kyumin Park, Dheeraj Rajagopal, Lyle Ungar, Dongyeop Kang. 14 Oct 2022
Explanations from Large Language Models Make Small Reasoners Better. Shiyang Li, Jianshu Chen, Yelong Shen, Zhiyu Zoey Chen, Xinlu Zhang, ..., Jingu Qian, Baolin Peng, Yi Mao, Wenhu Chen, Xifeng Yan. 13 Oct 2022 [ReLM, LRM]
Dual Decomposition of Convex Optimization Layers for Consistent Attention in Medical Images. Tom Ron, M. Weiler-Sagie, Tamir Hazan. 06 Jun 2022 [FAtt, MedIm]
Argumentative Explanations for Pattern-Based Text Classifiers. Piyawat Lertvittayakumjorn, Francesca Toni. 22 May 2022
Can Rationalization Improve Robustness? Howard Chen, Jacqueline He, Karthik R. Narasimhan, Danqi Chen. 25 Apr 2022 [AAML]
Grad-SAM: Explaining Transformers via Gradient Self-Attention Maps. Oren Barkan, Edan Hauon, Avi Caciularu, Ori Katz, Itzik Malkiel, Omri Armstrong, Noam Koenigstein. 23 Apr 2022
ProtoTEx: Explaining Model Decisions with Prototype Tensors. Anubrata Das, Chitrank Gupta, Venelin Kovatchev, Matthew Lease, Junjie Li. 11 Apr 2022
Towards Explainable Evaluation Metrics for Natural Language Generation. Christoph Leiter, Piyawat Lertvittayakumjorn, M. Fomicheva, Wei-Ye Zhao, Yang Gao, Steffen Eger. 21 Mar 2022 [AAML, ELM]
Making a (Counterfactual) Difference One Rationale at a Time. Michael J. Plyler, Michal Green, Min Chi. 13 Jan 2022
Unifying Model Explainability and Robustness for Joint Text Classification and Rationale Extraction. Dongfang Li, Baotian Hu, Qingcai Chen, Tujie Xu, Jingcong Tao, Yunan Zhang. 20 Dec 2021
UNIREX: A Unified Learning Framework for Language Model Rationale Extraction. Aaron Chan, Maziar Sanjabi, Lambert Mathias, L Tan, Shaoliang Nie, Xiaochang Peng, Xiang Ren, Hamed Firooz. 16 Dec 2021
What to Learn, and How: Toward Effective Learning from Rationales. Samuel Carton, Surya Kanoria, Chenhao Tan. 30 Nov 2021
Self-Interpretable Model with Transformation Equivariant Interpretation. Yipei Wang, Xiaoqian Wang. 09 Nov 2021
Interpreting Deep Learning Models in Natural Language Processing: A Review. Xiaofei Sun, Diyi Yang, Xiaoya Li, Tianwei Zhang, Yuxian Meng, Han Qiu, Guoyin Wang, Eduard H. Hovy, Jiwei Li. 20 Oct 2021
The Irrationality of Neural Rationale Models. Yiming Zheng, Serena Booth, J. Shah, Yilun Zhou. 14 Oct 2021
The Eval4NLP Shared Task on Explainable Quality Estimation: Overview and Results. M. Fomicheva, Piyawat Lertvittayakumjorn, Wei-Ye Zhao, Steffen Eger, Yang Gao. 08 Oct 2021 [ELM]
Decision-Focused Summarization. Chao-Chun Hsu, Chenhao Tan. 14 Sep 2021
AutoTriggER: Label-Efficient and Robust Named Entity Recognition with Auxiliary Trigger Extraction. Dong-Ho Lee, Ravi Kiran Selvam, Sheikh Muhammad Sarwar, Bill Yuchen Lin, Fred Morstatter, Jay Pujara, Elizabeth Boschee, James Allan, Xiang Ren. 10 Sep 2021
Enjoy the Salience: Towards Better Transformer-based Faithful Explanations with Word Salience. G. Chrysostomou, Nikolaos Aletras. 31 Aug 2021
DuTrust: A Sentiment Analysis Dataset for Trustworthiness Evaluation. Lijie Wang, Hao Liu, Shu-ping Peng, Hongxuan Tang, Xinyan Xiao, Ying-Cong Chen, Hua Wu, Haifeng Wang. 30 Aug 2021
Are Training Resources Insufficient? Predict First Then Explain! Myeongjun Jang, Thomas Lukasiewicz. 29 Aug 2021 [LRM]
Translation Error Detection as Rationale Extraction. M. Fomicheva, Lucia Specia, Nikolaos Aletras. 27 Aug 2021
ProoFVer: Natural Logic Theorem Proving for Fact Verification. Amrith Krishna, Sebastian Riedel, Andreas Vlachos. 25 Aug 2021
Rationalization through Concepts. Diego Antognini, Boi Faltings. 11 May 2021 [FAtt]
Improving the Faithfulness of Attention-based Explanations with Task-specific Information for Text Classification. G. Chrysostomou, Nikolaos Aletras. 06 May 2021
Do Feature Attribution Methods Correctly Attribute Features? Yilun Zhou, Serena Booth, Marco Tulio Ribeiro, J. Shah. 27 Apr 2021 [FAtt, XAI]
Flexible Instance-Specific Rationalization of NLP Models. G. Chrysostomou, Nikolaos Aletras. 16 Apr 2021
Explaining NLP Models via Minimal Contrastive Editing (MiCE). Alexis Ross, Ana Marasović, Matthew E. Peters. 27 Dec 2020
Learning to Rationalize for Nonmonotonic Reasoning with Distant Supervision. Faeze Brahman, Vered Shwartz, Rachel Rudinger, Yejin Choi. 14 Dec 2020 [LRM]
Self-Explaining Structures Improve NLP Models. Zijun Sun, Chun Fan, Qinghong Han, Xiaofei Sun, Yuxian Meng, Fei Wu, Jiwei Li. 03 Dec 2020 [MILM, XAI, LRM, FAtt]
Multi-document Summarization via Deep Learning Techniques: A Survey. Congbo Ma, W. Zhang, Mingyu Guo, Hu Wang, Quan Z. Sheng. 10 Nov 2020
Weakly- and Semi-supervised Evidence Extraction. Danish Pruthi, Bhuwan Dhingra, Graham Neubig, Zachary Chase Lipton. 03 Nov 2020
Why do you think that? Exploring Faithful Sentence-Level Rationales Without Supervision. Max Glockner, Ivan Habernal, Iryna Gurevych. 07 Oct 2020 [LRM]
Explaining Black Box Predictions and Unveiling Data Artifacts through Influence Functions. Xiaochuang Han, Byron C. Wallace, Yulia Tsvetkov. 14 May 2020 [MILM, FAtt, AAML, TDI]
Towards A Rigorous Science of Interpretable Machine Learning. Finale Doshi-Velez, Been Kim. 28 Feb 2017 [XAI, FaML]