ResearchTrend.AI
© 2025 ResearchTrend.AI, All rights reserved.

Rationalizing Neural Predictions (arXiv:1606.04155)
Tao Lei, Regina Barzilay, Tommi Jaakkola
13 June 2016

Papers citing "Rationalizing Neural Predictions"

50 / 327 papers shown
LAMP: Extracting Locally Linear Decision Surfaces from LLM World Models
Ryan Chen, Youngmin Ko, Zeyu Zhang, Catherine Cho, Sunny Chung, Mauro Giuffré, Dennis L. Shung, Bradly C. Stadie
17 May 2025

Prediction via Shapley Value Regression
Amr Alkhatib, Roman Bresson, Henrik Bostrom, Michalis Vazirgiannis
TDI, FAtt
07 May 2025

Adversarial Cooperative Rationalization: The Risk of Spurious Correlations in Even Clean Datasets
Wen Liu, Zhongyu Niu, Lang Gao, Zhiying Deng, Jun Wang, Haobo Wang, Ruixuan Li
04 May 2025

AI Awareness
Xianrui Li, Haoyuan Shi, Rongwu Xu, Wei Xu
25 Apr 2025

Learning to Discover Regulatory Elements for Gene Expression Prediction
Xingyu Su, Haiyang Yu, D. Zhi, Shuiwang Ji
19 Feb 2025

B-cos LM: Efficiently Transforming Pre-trained Language Models for Improved Explainability
Yifan Wang, Sukrut Rao, Ji-Ung Lee, Mayank Jobanputra, Vera Demberg
18 Feb 2025

BEExAI: Benchmark to Evaluate Explainable AI
Samuel Sithakoul, Sara Meftah, Clément Feutry
29 Jul 2024

On Behalf of the Stakeholders: Trends in NLP Model Interpretability in the Era of LLMs
Nitay Calderon, Roi Reichart
27 Jul 2024

CAVE: Controllable Authorship Verification Explanations
Sahana Ramnath, Kartik Pandey, Elizabeth Boschee, Xiang Ren
24 Jun 2024

Talking Nonsense: Probing Large Language Models' Understanding of Adversarial Gibberish Inputs
Valeriia Cherepanova, James Zou
AAML
26 Apr 2024

Towards Faithful Explanations: Boosting Rationalization with Shortcuts Discovery
Linan Yue, Qi Liu, Yichao Du, Li Wang, Weibo Gao, Yanqing An
12 Mar 2024

ReAGent: A Model-agnostic Feature Attribution Method for Generative Language Models
Zhixue Zhao, Boxuan Shan
01 Feb 2024

Interpretable-by-Design Text Understanding with Iteratively Generated Concept Bottleneck
Josh Magnus Ludan, Qing Lyu, Yue Yang, Liam Dugan, Mark Yatskar, Chris Callison-Burch
30 Oct 2023

Interpretable Graph Neural Networks for Tabular Data
Amr Alkhatib, Sofiane Ennadir, Henrik Bostrom, Michalis Vazirgiannis
LMTD
17 Aug 2023

Query Understanding in the Age of Large Language Models
Avishek Anand, Venktesh V, Abhijit Anand, Vinay Setty
LRM
28 Jun 2023

Encoding Time-Series Explanations through Self-Supervised Model Behavior Consistency
Owen Queen, Thomas Hartvigsen, Teddy Koker, Huan He, Theodoros Tsiligkaridis, Marinka Zitnik
AI4TS
03 Jun 2023

Explanation Graph Generation via Generative Pre-training over Synthetic Graphs
H. Cui, Sha Li, Yu Zhang, Qi Shi
01 Jun 2023

Unsupervised Selective Rationalization with Noise Injection
Adam Storek, Melanie Subbiah, Kathleen McKeown
27 May 2023

Give Me More Details: Improving Fact-Checking with Latent Retrieval
Xuming Hu, Guan-Huei Wu, Zhijiang Guo, Philip S. Yu
HILM
25 May 2023

Abductive Commonsense Reasoning Exploiting Mutually Exclusive Explanations
Wenting Zhao, Justin T. Chiu, Claire Cardie, Alexander M. Rush
LRM
24 May 2023

Decoupled Rationalization with Asymmetric Learning Rates: A Flexible Lipschitz Restraint
Wei Liu, Jun Wang, Yining Qi, Rui Li, Yang Qiu, Yuankai Zhang, Jie Han, Yixiong Zou
23 May 2023

Distilling ChatGPT for Explainable Automated Student Answer Assessment
Jiazheng Li, Lin Gui, Yuxiang Zhou, David West, Cesare Aloisi, Yulan He
22 May 2023

Consistent Multi-Granular Rationale Extraction for Explainable Multi-hop Fact Verification
Jiasheng Si, Yingjie Zhu, Deyu Zhou
AAML
16 May 2023

ZARA: Improving Few-Shot Self-Rationalization for Small Language Models
Wei-Lin Chen, An-Zi Yen, Cheng-Kuang Wu, Hen-Hsen Huang, Hsin-Hsi Chen
ReLM, LRM
12 May 2023

COCKATIEL: COntinuous Concept ranKed ATtribution with Interpretable ELements for explaining neural net classifiers on NLP tasks
Fanny Jourdan, Agustin Picard, Thomas Fel, Laurent Risser, Jean-Michel Loubes, Nicholas M. Asher
11 May 2023

ExClaim: Explainable Neural Claim Verification Using Rationalization
Sai Gurrapu, Lifu Huang, Feras A. Batarseh
AAML
21 Jan 2023

Rationalizing Predictions by Adversarial Information Calibration
Lei Sha, Oana-Maria Camburu, Thomas Lukasiewicz
15 Jan 2023

Going Beyond XAI: A Systematic Survey for Explanation-Guided Learning
Yuyang Gao, Siyi Gu, Junji Jiang, S. Hong, Dazhou Yu, Liang Zhao
07 Dec 2022

Explainability as statistical inference
Hugo Senetaire, Damien Garreau, J. Frellsen, Pierre-Alexandre Mattei
FAtt
06 Dec 2022

Exploring Faithful Rationale for Multi-hop Fact Verification via Salience-Aware Graph Learning
Jiasheng Si, Yingjie Zhu, Deyu Zhou
02 Dec 2022

SOLD: Sinhala Offensive Language Dataset
Tharindu Ranasinghe, Isuri Anuradha, Damith Premasiri, Kanishka Silva, Hansi Hettiarachchi, Lasitha Uyangodage, Marcos Zampieri
01 Dec 2022

AutoCAD: Automatically Generating Counterfactuals for Mitigating Shortcut Learning
Jiaxin Wen, Yeshuang Zhu, Jinchao Zhang, Jie Zhou, Minlie Huang
CML, AAML
29 Nov 2022

Unsupervised Explanation Generation via Correct Instantiations
Sijie Cheng, Zhiyong Wu, Jiangjie Chen, Zhixing Li, Yang Liu, Lingpeng Kong
ReLM, LRM
21 Nov 2022

Easy to Decide, Hard to Agree: Reducing Disagreements Between Saliency Methods
Josip Jukić, Martin Tutek, Jan Snajder
FAtt
15 Nov 2022

GLUE-X: Evaluating Natural Language Understanding Models from an Out-of-distribution Generalization Perspective
Linyi Yang, Shuibai Zhang, Libo Qin, Yafu Li, Yidong Wang, Hanmeng Liu, Jindong Wang, Xingxu Xie, Yue Zhang
ELM
15 Nov 2022

Towards Human-Centred Explainability Benchmarks For Text Classification
Viktor Schlegel, Erick Mendez Guzman, Riza Batista-Navarro
10 Nov 2022

Why Is It Hate Speech? Masked Rationale Prediction for Explainable Hate Speech Detection
Jiyun Kim, Byounghan Lee, Kyung-ah Sohn
01 Nov 2022

R²F: A General Retrieval, Reading and Fusion Framework for Document-level Natural Language Inference
Hao Wang, Yixin Cao, Yangguang Li, Zhen Huang, Kun Wang, Jing Shao
FedML
22 Oct 2022

On the Impact of Temporal Concept Drift on Model Explanations
Zhixue Zhao, G. Chrysostomou, Kalina Bontcheva, Nikolaos Aletras
17 Oct 2022

Controlling Bias Exposure for Fair Interpretable Predictions
Zexue He, Yu Wang, Julian McAuley, Bodhisattwa Prasad Majumder
14 Oct 2022

InterFair: Debiasing with Natural Language Feedback for Fair Interpretable Predictions
Bodhisattwa Prasad Majumder, Zexue He, Julian McAuley
14 Oct 2022

Self-explaining deep models with logic rule reasoning
Seungeon Lee, Xiting Wang, Sungwon Han, Xiaoyuan Yi, Xing Xie, M. Cha
NAI, ReLM, LRM
13 Oct 2022

On the Explainability of Natural Language Processing Deep Models
Julia El Zini, M. Awad
13 Oct 2022

Honest Students from Untrusted Teachers: Learning an Interpretable Question-Answering Pipeline from a Pretrained Language Model
Jacob Eisenstein, D. Andor, Bernd Bohnet, Michael Collins, David M. Mimno
LRM
05 Oct 2022

SIMPLE: A Gradient Estimator for k-Subset Sampling
Kareem Ahmed, Zhe Zeng, Mathias Niepert, Guy Van den Broeck
BDL
04 Oct 2022

Towards Faithful Model Explanation in NLP: A Survey
Qing Lyu, Marianna Apidianaki, Chris Callison-Burch
XAI
22 Sep 2022

CometKiwi: IST-Unbabel 2022 Submission for the Quality Estimation Shared Task
Ricardo Rei, Marcos Vinícius Treviso, Nuno M. Guerreiro, Chrysoula Zerva, Ana C. Farinha, ..., T. Glushkova, Duarte M. Alves, A. Lavie, Luísa Coheur, André F. T. Martins
13 Sep 2022

The Role of Explanatory Value in Natural Language Processing
Kees van Deemter
XAI
13 Sep 2022

Adaptive Perturbation-Based Gradient Estimation for Discrete Latent Variable Models
Pasquale Minervini, Luca Franceschi, Mathias Niepert
11 Sep 2022

A Survey on Measuring and Mitigating Reasoning Shortcuts in Machine Reading Comprehension
Xanh Ho, Johannes Mario Meissner, Saku Sugawara, Akiko Aizawa
OffRL
05 Sep 2022