ResearchTrend.AI

Unifying Model Explainability and Robustness for Joint Text Classification and Rationale Extraction
arXiv:2112.10424 · 20 December 2021
Dongfang Li, Baotian Hu, Qingcai Chen, Tujie Xu, Jingcong Tao, Yunan Zhang

Papers citing "Unifying Model Explainability and Robustness for Joint Text Classification and Rationale Extraction"

23 papers shown
  • Masked Conditional Random Fields for Sequence Labeling
    Tianwen Wei, Jianwei Qi, Shenghuang He, Songtao Sun
    19 Mar 2021 · 37 · 18 · 0
  • Learning from the Best: Rationalizing Prediction by Adversarial Information Calibration
    Lei Sha, Oana-Maria Camburu, Thomas Lukasiewicz
    16 Dec 2020 · 162 · 37 · 0
  • Weakly- and Semi-supervised Evidence Extraction
    Danish Pruthi, Bhuwan Dhingra, Graham Neubig, Zachary Chase Lipton
    03 Nov 2020 · 54 · 23 · 0
  • Posterior Differential Regularization with f-divergence for Improving Model Robustness
    Hao Cheng, Xiaodong Liu, L. Pereira, Yaoliang Yu, Jianfeng Gao
    23 Oct 2020 · 265 · 31 · 0
  • Evaluating and Characterizing Human Rationales
    Samuel Carton, Anirudh Rathore, Chenhao Tan
    09 Oct 2020 · 53 · 49 · 0
  • Beyond Accuracy: Behavioral Testing of NLP models with CheckList
    Marco Tulio Ribeiro, Tongshuang Wu, Carlos Guestrin, Sameer Singh
    08 May 2020 · ELM · 208 · 1,103 · 0
  • An Information Bottleneck Approach for Controlling Conciseness in Rationale Extraction
    Bhargavi Paranjape, Mandar Joshi, John Thickstun, Hannaneh Hajishirzi, Luke Zettlemoyer
    01 May 2020 · 57 · 101 · 0
  • Learning to Faithfully Rationalize by Construction
    Sarthak Jain, Sarah Wiegreffe, Yuval Pinter, Byron C. Wallace
    30 Apr 2020 · 75 · 163 · 0
  • Adversarial Robustness on In- and Out-Distribution Improves Explainability
    Maximilian Augustin, Alexander Meinke, Matthias Hein
    20 Mar 2020 · OOD · 154 · 102 · 0
  • SMART: Robust and Efficient Fine-Tuning for Pre-trained Natural Language Models through Principled Regularized Optimization
    Haoming Jiang, Pengcheng He, Weizhu Chen, Xiaodong Liu, Jianfeng Gao, T. Zhao
    08 Nov 2019 · 86 · 561 · 0
  • Is BERT Really Robust? A Strong Baseline for Natural Language Attack on Text Classification and Entailment
    Di Jin, Zhijing Jin, Qiufeng Wang, Peter Szolovits
    27 Jul 2019 · SILM · AAML · 174 · 1,077 · 0
  • On the Connection Between Adversarial Robustness and Saliency Map Interpretability
    Christian Etmann, Sebastian Lunz, Peter Maass, Carola-Bibiane Schönlieb
    10 May 2019 · AAML · FAtt · 58 · 161 · 0
  • Inferring Which Medical Treatments Work from Reports of Clinical Trials
    Eric P. Lehman, Jay DeYoung, Regina Barzilay, Byron C. Wallace
    02 Apr 2019 · 87 · 116 · 0
  • On Evaluation of Adversarial Perturbations for Sequence-to-Sequence Models
    Paul Michel, Xian Li, Graham Neubig, J. Pino
    15 Mar 2019 · AAML · 65 · 136 · 0
  • TextBugger: Generating Adversarial Text Against Real-world Applications
    Jinfeng Li, S. Ji, Tianyu Du, Bo Li, Ting Wang
    13 Dec 2018 · SILM · AAML · 208 · 738 · 0
  • e-SNLI: Natural Language Inference with Natural Language Explanations
    Oana-Maria Camburu, Tim Rocktaschel, Thomas Lukasiewicz, Phil Blunsom
    04 Dec 2018 · LRM · 408 · 637 · 0
  • On the Robustness of Interpretability Methods
    David Alvarez-Melis, Tommi Jaakkola
    21 Jun 2018 · 76 · 526 · 0
  • Explainable Artificial Intelligence: Understanding, Visualizing and Interpreting Deep Learning Models
    Wojciech Samek, Thomas Wiegand, K. Müller
    28 Aug 2017 · XAI · VLM · 72 · 1,189 · 0
  • Towards Deep Learning Models Resistant to Adversarial Attacks
    Aleksander Madry, Aleksandar Makelov, Ludwig Schmidt, Dimitris Tsipras, Adrian Vladu
    19 Jun 2017 · SILM · OOD · 304 · 12,063 · 0
  • Right for the Right Reasons: Training Differentiable Models by Constraining their Explanations
    A. Ross, M. C. Hughes, Finale Doshi-Velez
    10 Mar 2017 · FAtt · 120 · 589 · 0
  • The Mythos of Model Interpretability
    Zachary Chase Lipton
    10 Jun 2016 · FaML · 180 · 3,699 · 0
  • "Why Should I Trust You?": Explaining the Predictions of Any Classifier
    Marco Tulio Ribeiro, Sameer Singh, Carlos Guestrin
    16 Feb 2016 · FAtt · FaML · 1.2K · 16,976 · 0
  • Explaining and Harnessing Adversarial Examples
    Ian Goodfellow, Jonathon Shlens, Christian Szegedy
    20 Dec 2014 · AAML · GAN · 274 · 19,049 · 0