UNIREX: A Unified Learning Framework for Language Model Rationale Extraction
arXiv:2112.08802 · 16 December 2021
Aaron Chan, Maziar Sanjabi, Lambert Mathias, L. Tan, Shaoliang Nie, Xiaochang Peng, Xiang Ren, Hamed Firooz
Papers citing "UNIREX: A Unified Learning Framework for Language Model Rationale Extraction" (33 papers shown)
Adversarial Cooperative Rationalization: The Risk of Spurious Correlations in Even Clean Datasets
W. Liu, Zhongyu Niu, Lang Gao, Zhiying Deng, Jun Wang, H. Wang, Ruixuan Li (04 May 2025)

Noiser: Bounded Input Perturbations for Attributing Large Language Models
Mohammad Reza Ghasemi Madani, Aryo Pradipta Gema, Gabriele Sarti, Yu Zhao, Pasquale Minervini, Andrea Passerini (03 Apr 2025) [AAML]

Faithfulness of LLM Self-Explanations for Commonsense Tasks: Larger Is Better, and Instruction-Tuning Allows Trade-Offs but Not Pareto Dominance
Noah Y. Siegel, N. Heess, Maria Perez-Ortiz, Oana-Maria Camburu (17 Mar 2025) [LRM]

Breaking Free from MMI: A New Frontier in Rationalization by Probing Input Utilization
W. Liu, Zhiying Deng, Zhongyu Niu, Jun Wang, Haozhao Wang, Zhigang Zeng, Ruixuan Li (08 Mar 2025)

Is the MMI Criterion Necessary for Interpretability? Degenerating Non-causal Features to Plain Noise for Self-Rationalization
Wei Liu, Zhiying Deng, Zhongyu Niu, Jun Wang, Haozhao Wang, YuanKai Zhang, Ruixuan Li (08 Oct 2024)

Explanation Regularisation through the Lens of Attributions
Pedro Ferreira, Wilker Aziz, Ivan Titov (23 Jul 2024)

CAVE: Controllable Authorship Verification Explanations
Sahana Ramnath, Kartik Pandey, Elizabeth Boschee, Xiang Ren (24 Jun 2024)

Exploring the Trade-off Between Model Performance and Explanation Plausibility of Text Classifiers Using Human Rationales
Lucas Resck, Marcos M. Raimundo, Jorge Poco (03 Apr 2024)

Using Interpretation Methods for Model Enhancement
Zhuo Chen, Chengyue Jiang, Kewei Tu (02 Apr 2024)

Towards Faithful Explanations: Boosting Rationalization with Shortcuts Discovery
Linan Yue, Qi Liu, Yichao Du, Li Wang, Weibo Gao, Yanqing An (12 Mar 2024)

Enhancing the Rationale-Input Alignment for Self-explaining Rationalization
Wei Liu, Haozhao Wang, Jun Wang, Zhiying Deng, Yuankai Zhang, Chengwei Wang, Ruixuan Li (07 Dec 2023)

TextGenSHAP: Scalable Post-hoc Explanations in Text Generation with Long Documents
James Enouen, Hootan Nakhost, Sayna Ebrahimi, Sercan Ö. Arik, Yan Liu, Tomas Pfister (03 Dec 2023)

Proto-lm: A Prototypical Network-Based Framework for Built-in Interpretability in Large Language Models
Sean Xie, Soroush Vosoughi, Saeed Hassanpour (03 Nov 2023)

REFER: An End-to-end Rationale Extraction Framework for Explanation Regularization
Mohammad Reza Ghasemi Madani, Pasquale Minervini (22 Oct 2023)

D-Separation for Causal Self-Explanation
Wei Liu, Jun Wang, Haozhao Wang, Rui Li, Zhiying Deng, YuanKai Zhang, Yang Qiu (23 Sep 2023)

Explainability for Large Language Models: A Survey
Haiyan Zhao, Hanjie Chen, Fan Yang, Ninghao Liu, Huiqi Deng, Hengyi Cai, Shuaiqiang Wang, Dawei Yin, Mengnan Du (02 Sep 2023) [LRM]

Towards Trustworthy Explanation: On Causal Rationalization
Wenbo Zhang, Tong Wu, Yunlong Wang, Yong Cai, Hengrui Cai (25 Jun 2023) [CML]

Give Me More Details: Improving Fact-Checking with Latent Retrieval
Xuming Hu, Guan-Huei Wu, Zhijiang Guo, Philip S. Yu (25 May 2023) [HILM]

Decoupled Rationalization with Asymmetric Learning Rates: A Flexible Lipschitz Restraint
Wei Liu, Jun Wang, Haozhao Wang, Rui Li, Yang Qiu, Yuankai Zhang, Jie Han, Yixiong Zou (23 May 2023)

MGR: Multi-generator Based Rationalization
Wei Liu, Haozhao Wang, Jun Wang, Rui Li, Xinyang Li, Yuankai Zhang, Yang Qiu (08 May 2023)

Are Human Explanations Always Helpful? Towards Objective Evaluation of Human Natural Language Explanations
Bingsheng Yao, Prithviraj Sen, Lucian Popa, James A. Hendler, Dakuo Wang (04 May 2023) [XAI, ELM, FAtt]

Think Rationally about What You See: Continuous Rationale Extraction for Relation Extraction
Xuming Hu, Zhaochen Hong, Chenwei Zhang, Irwin King, Philip S. Yu (02 May 2023)

ExClaim: Explainable Neural Claim Verification Using Rationalization
Sai Gurrapu, Lifu Huang, Feras A. Batarseh (21 Jan 2023) [AAML]

Rationalization for Explainable NLP: A Survey
Sai Gurrapu, Ajay Kulkarni, Lifu Huang, Ismini Lourentzou, Laura J. Freeman, Feras A. Batarseh (21 Jan 2023)

PINTO: Faithful Language Reasoning Using Prompt-Generated Rationales
Peifeng Wang, Aaron Chan, Filip Ilievski, Muhao Chen, Xiang Ren (03 Nov 2022) [LRM, ReLM]

XMD: An End-to-End Framework for Interactive Explanation-Based Debugging of NLP Models
Dong-Ho Lee, Akshen Kadakia, Brihi Joshi, Aaron Chan, Ziyi Liu, ..., Takashi Shibuya, Ryosuke Mitani, Toshiyuki Sekiya, Jay Pujara, Xiang Ren (30 Oct 2022) [LRM]

REV: Information-Theoretic Evaluation of Free-Text Rationales
Hanjie Chen, Faeze Brahman, Xiang Ren, Yangfeng Ji, Yejin Choi, Swabha Swayamdipta (10 Oct 2022)

FRAME: Evaluating Rationale-Label Consistency Metrics for Free-Text Rationales
Aaron Chan, Shaoliang Nie, Liang Tan, Xiaochang Peng, Hamed Firooz, Maziar Sanjabi, Xiang Ren (02 Jul 2022)

ER-Test: Evaluating Explanation Regularization Methods for Language Models
Brihi Joshi, Aaron Chan, Ziyi Liu, Shaoliang Nie, Maziar Sanjabi, Hamed Firooz, Xiang Ren (25 May 2022) [AAML]

CrossFit: A Few-shot Learning Challenge for Cross-task Generalization in NLP
Qinyuan Ye, Bill Yuchen Lin, Xiang Ren (18 Apr 2021)

Big Bird: Transformers for Longer Sequences
Manzil Zaheer, Guru Guruganesh, Kumar Avinava Dubey, Joshua Ainslie, Chris Alberti, ..., Philip Pham, Anirudh Ravula, Qifan Wang, Li Yang, Amr Ahmed (28 Jul 2020) [VLM]

e-SNLI: Natural Language Inference with Natural Language Explanations
Oana-Maria Camburu, Tim Rocktäschel, Thomas Lukasiewicz, Phil Blunsom (04 Dec 2018) [LRM]

Towards A Rigorous Science of Interpretable Machine Learning
Finale Doshi-Velez, Been Kim (28 Feb 2017) [XAI, FaML]