ResearchTrend.AI
arXiv: 1910.03065

Make Up Your Mind! Adversarial Generation of Inconsistent Natural Language Explanations

7 October 2019
Oana-Maria Camburu
Brendan Shillingford
Pasquale Minervini
Thomas Lukasiewicz
Phil Blunsom
    AAML, GAN

Papers citing "Make Up Your Mind! Adversarial Generation of Inconsistent Natural Language Explanations"

Showing 23 of 73 citing papers.
From Anecdotal Evidence to Quantitative Evaluation Methods: A Systematic Review on Evaluating Explainable AI
Meike Nauta
Jan Trienes
Shreyasi Pathak
Elisa Nguyen
Michelle Peters
Yasmin Schmitt
Jorg Schlotterer
M. V. Keulen
C. Seifert
ELM, XAI
178
422
0
20 Jan 2022
Few-Shot Out-of-Domain Transfer Learning of Natural Language Explanations in a Label-Abundant Setup
Yordan Yordanov
Vid Kocijan
Thomas Lukasiewicz
Oana-Maria Camburu
87
19
0
12 Dec 2021
Tribrid: Stance Classification with Neural Inconsistency Detection
Song Yang
Jacopo Urbani
41
6
0
14 Sep 2021
Are Training Resources Insufficient? Predict First Then Explain!
Myeongjun Jang
Thomas Lukasiewicz
LRM
59
7
0
29 Aug 2021
Accurate, yet inconsistent? Consistency Analysis on Language Understanding Models
Myeongjun Jang
D. Kwon
Thomas Lukasiewicz
71
13
0
15 Aug 2021
Knowledge-Grounded Self-Rationalization via Extractive and Natural Language Explanations
Bodhisattwa Prasad Majumder
Oana-Maria Camburu
Thomas Lukasiewicz
Julian McAuley
96
36
0
25 Jun 2021
Prompting Contrastive Explanations for Commonsense Reasoning Tasks
Bhargavi Paranjape
Julian Michael
Marjan Ghazvininejad
Luke Zettlemoyer
Hannaneh Hajishirzi
ReLM, LRM
76
68
0
12 Jun 2021
A Review on Explainability in Multimodal Deep Neural Nets
Gargi Joshi
Rahee Walambe
K. Kotecha
138
142
0
17 May 2021
e-ViL: A Dataset and Benchmark for Natural Language Explanations in Vision-Language Tasks
Maxime Kayser
Oana-Maria Camburu
Leonard Salewski
Cornelius Emde
Virginie Do
Zeynep Akata
Thomas Lukasiewicz
VLM
102
101
0
08 May 2021
Do Natural Language Explanations Represent Valid Logical Arguments? Verifying Entailment in Explainable NLI Gold Standards
Marco Valentino
Ian Pratt-Hartman
André Freitas
XAI, LRM
80
11
0
05 May 2021
Explainability-aided Domain Generalization for Image Classification
Robin M. Schmidt
FAtt, OOD
51
1
0
05 Apr 2021
Local Interpretations for Explainable Natural Language Processing: A Survey
Siwen Luo
Hamish Ivison
S. Han
Josiah Poon
MILM
120
50
0
20 Mar 2021
Token-Modification Adversarial Attacks for Natural Language Processing: A Survey
Tom Roth
Yansong Gao
A. Abuadbba
Surya Nepal
Wei Liu
AAML
106
12
0
01 Mar 2021
Teach Me to Explain: A Review of Datasets for Explainable Natural Language Processing
Sarah Wiegreffe
Ana Marasović
XAI
78
146
0
24 Feb 2021
Measuring and Improving Consistency in Pretrained Language Models
Yanai Elazar
Nora Kassner
Shauli Ravfogel
Abhilasha Ravichander
Eduard H. Hovy
Hinrich Schütze
Yoav Goldberg
HILM
335
371
0
01 Feb 2021
FiD-Ex: Improving Sequence-to-Sequence Models for Extractive Rationale Generation
Kushal Lakhotia
Bhargavi Paranjape
Asish Ghoshal
Wen-tau Yih
Yashar Mehdad
Srini Iyer
63
28
0
31 Dec 2020
LIREx: Augmenting Language Inference with Relevant Explanation
Xinyan Zhao
V. Vydiswaran
LRM
115
40
0
16 Dec 2020
Explaining Deep Neural Networks
Oana-Maria Camburu
XAI, FAtt
108
26
0
04 Oct 2020
A Survey on Explainability in Machine Reading Comprehension
Mokanarangan Thayaparan
Marco Valentino
André Freitas
FaML
108
49
0
01 Oct 2020
QED: A Framework and Dataset for Explanations in Question Answering
Matthew Lamm
J. Palomaki
Chris Alberti
D. Andor
Eunsol Choi
Livio Baldini Soares
Michael Collins
70
69
0
08 Sep 2020
NILE : Natural Language Inference with Faithful Natural Language Explanations
Sawan Kumar
Partha P. Talukdar
XAI, LRM
113
163
0
25 May 2020
Explainable Deep Learning: A Field Guide for the Uninitiated
Gabrielle Ras
Ning Xie
Marcel van Gerven
Derek Doran
AAML, XAI
111
379
0
30 Apr 2020
Generating Natural Adversarial Examples
Zhengli Zhao
Dheeru Dua
Sameer Singh
GAN, AAML
194
601
0
31 Oct 2017