ResearchTrend.AI

Explaining the Road Not Taken
Hua Shen, Ting-Hao 'Kenneth' Huang
27 March 2021 · arXiv:2103.14973
FAtt, XAI

Papers citing "Explaining the Road Not Taken"

12 papers shown
Explaining NLP Models via Minimal Contrastive Editing (MiCE)
Alexis Ross, Ana Marasović, Matthew E. Peters
27 Dec 2020
The elephant in the interpretability room: Why use attention as explanation when we have saliency methods?
Jasmijn Bastings, Katja Filippova
XAI, LRM
12 Oct 2020
FIND: Human-in-the-Loop Debugging Deep Text Classifiers
Piyawat Lertvittayakumjorn, Lucia Specia, Francesca Toni
10 Oct 2020
How Useful Are the Machine-Generated Interpretations to General Users? A Human Evaluation on Guessing the Incorrectly Predicted Labels
Hua Shen, Ting-Hao 'Kenneth' Huang
FAtt, HAI
26 Aug 2020
Explaining Black Box Predictions and Unveiling Data Artifacts through Influence Functions
Xiaochuang Han, Byron C. Wallace, Yulia Tsvetkov
MILM, FAtt, AAML, TDI
14 May 2020
ViCE: Visual Counterfactual Explanations for Machine Learning Models
Oscar Gomez, Steffen Holter, Jun Yuan, E. Bertini
AAML
05 Mar 2020
Questioning the AI: Informing Design Practices for Explainable AI User Experiences
Q. V. Liao, D. Gruen, Sarah Miller
08 Jan 2020
Learning the Difference that Makes a Difference with Counterfactually-Augmented Data
Divyansh Kaushik, Eduard H. Hovy, Zachary Chase Lipton
CML
26 Sep 2019
Self-Critical Reasoning for Robust Visual Question Answering
Jialin Wu, Raymond J. Mooney
OOD, NAI
24 May 2019
e-SNLI: Natural Language Inference with Natural Language Explanations
Oana-Maria Camburu, Tim Rocktäschel, Thomas Lukasiewicz, Phil Blunsom
LRM
04 Dec 2018
Understanding Black-box Predictions via Influence Functions
Pang Wei Koh, Percy Liang
TDI
14 Mar 2017
Interpreting Neural Networks to Improve Politeness Comprehension
Malika Aubakirova, Joey Tianyi Zhou
FAtt, MILM
09 Oct 2016