ConvXAI: Delivering Heterogeneous AI Explanations via Conversations to Support Human-AI Scientific Writing

16 May 2023
Hua Shen, Chieh-Yang Huang, Tongshuang Wu, Ting-Hao 'Kenneth' Huang

Papers citing "ConvXAI: Delivering Heterogeneous AI Explanations via Conversations to Support Human-AI Scientific Writing"

33 / 33 papers shown
Co-Writing with AI, on Human Terms: Aligning Research with User Demands Across the Writing Process
Mohi Reza, Jeb Mitchell, Peter Dushniku, Nathan Laundry, J. Williams, Anastasia Kuzminykh
16 Apr 2025 · 1 citation

Are Shortest Rationales the Best Explanations for Human Understanding?
Hua Shen, Tongshuang Wu, Wenbo Guo, Ting-Hao 'Kenneth' Huang
16 Mar 2022 · FAtt · 11 citations

CoAuthor: Designing a Human-AI Collaborative Writing Dataset for Exploring Language Model Capabilities
Mina Lee, Percy Liang, Qian Yang
18 Jan 2022 · HAI · 373 citations

Interpretable Directed Diversity: Leveraging Model Explanations for Iterative Crowd Ideation
Yunlong Wang, Priyadarshini Venkatesh, Brian Y. Lim
21 Sep 2021 · 20 citations

Wordcraft: a Human-AI Collaborative Editor for Story Writing
Andy Coenen, Luke Davis, Daphne Ippolito, Emily Reif, Ann Yuan
15 Jul 2021 · LLMAG · 71 citations

Explanation-Based Human Debugging of NLP Models: A Survey
Piyawat Lertvittayakumjorn, Francesca Toni
30 Apr 2021 · LRM · 79 citations

RECAST: Enabling User Recourse and Interpretability of Toxicity Detection Models with Interactive Visualization
Austin P. Wright, Omar Shaikh, Haekyu Park, Will Epperson, Muhammed Ahmed, Stephane Pinel, Duen Horng Chau, Diyi Yang
08 Feb 2021 · 22 citations

Can We Automate Scientific Reviewing?
Weizhe Yuan, Pengfei Liu, Graham Neubig
30 Jan 2021 · 88 citations

Dissonance Between Human and Machine Understanding
Zijian Zhang, Jaspreet Singh, U. Gadiraju, Avishek Anand
18 Jan 2021 · 74 citations

Human Evaluation of Spoken vs. Visual Explanations for Open-Domain QA
Ana Valeria González, Gagan Bansal, Angela Fan, Robin Jia, Yashar Mehdad, Srini Iyer
30 Dec 2020 · AAML · 24 citations

Explaining NLP Models via Minimal Contrastive Editing (MiCE)
Alexis Ross, Ana Marasović, Matthew E. Peters
27 Dec 2020 · 122 citations

Interpretation of NLP models through input marginalization
Siwon Kim, Jihun Yi, Eunji Kim, Sungroh Yoon
27 Oct 2020 · MILM, FAtt · 60 citations

The elephant in the interpretability room: Why use attention as explanation when we have saliency methods?
Jasmijn Bastings, Katja Filippova
12 Oct 2020 · XAI, LRM · 177 citations

How Useful Are the Machine-Generated Interpretations to General Users? A Human Evaluation on Guessing the Incorrectly Predicted Labels
Hua Shen, Ting-Hao 'Kenneth' Huang
26 Aug 2020 · FAtt, HAI · 56 citations

Does the Whole Exceed its Parts? The Effect of AI Explanations on Complementary Team Performance
Gagan Bansal, Tongshuang Wu, Joyce Zhou, Raymond Fok, Besmira Nushi, Ece Kamar, Marco Tulio Ribeiro, Daniel S. Weld
26 Jun 2020 · 596 citations

DeBERTa: Decoding-enhanced BERT with Disentangled Attention
Pengcheng He, Xiaodong Liu, Jianfeng Gao, Weizhu Chen
05 Jun 2020 · AAML · 2,737 citations

Evaluating Explainable AI: Which Algorithmic Explanations Help Users Predict Model Behavior?
Peter Hase, Mohit Bansal
04 May 2020 · FAtt · 303 citations

Towards a Human-like Open-Domain Chatbot
Daniel De Freitas, Minh-Thang Luong, David R. So, Jamie Hall, Noah Fiedel, ..., Zi Yang, Apoorv Kulshreshtha, Gaurav Nemade, Yifeng Lu, Quoc V. Le
27 Jan 2020 · 938 citations

Explainable Active Learning (XAL): An Empirical Study of How Local Explanations Impact Annotator Experience
Bhavya Ghai, Q. V. Liao, Yunfeng Zhang, Rachel K. E. Bellamy, Klaus Mueller
24 Jan 2020 · 29 citations

Questioning the AI: Informing Design Practices for Explainable AI User Experiences
Q. V. Liao, D. Gruen, Sarah Miller
08 Jan 2020 · 716 citations

AllenNLP Interpret: A Framework for Explaining Predictions of NLP Models
Eric Wallace, Jens Tuyls, Junlin Wang, Sanjay Subramanian, Matt Gardner, Sameer Singh
19 Sep 2019 · MILM · 138 citations

Sentence-BERT: Sentence Embeddings using Siamese BERT-Networks
Nils Reimers, Iryna Gurevych
27 Aug 2019 · 12,226 citations

Attention is not not Explanation
Sarah Wiegreffe, Yuval Pinter
13 Aug 2019 · XAI, AAML, FAtt · 909 citations

Explain Yourself! Leveraging Language Models for Commonsense Reasoning
Nazneen Rajani, Bryan McCann, Caiming Xiong, R. Socher
06 Jun 2019 · ReLM, LRM · 565 citations

Do Neural Dialog Systems Use the Conversation History Effectively? An Empirical Study
Chinnadhurai Sankar, Sandeep Subramanian, C. Pal, A. Chandar, Yoshua Bengio
04 Jun 2019 · 121 citations

SciBERT: A Pretrained Language Model for Scientific Text
Iz Beltagy, Kyle Lo, Arman Cohan
26 Mar 2019 · 2,974 citations

e-SNLI: Natural Language Inference with Natural Language Explanations
Oana-Maria Camburu, Tim Rocktäschel, Thomas Lukasiewicz, Phil Blunsom
04 Dec 2018 · LRM · 638 citations

CoQA: A Conversational Question Answering Challenge
Siva Reddy, Danqi Chen, Christopher D. Manning
21 Aug 2018 · RALM, HAI · 1,205 citations

Datasheets for Datasets
Timnit Gebru, Jamie Morgenstern, Briana Vecchione, Jennifer Wortman Vaughan, Hanna M. Wallach, Hal Daumé, Kate Crawford
23 Mar 2018 · 2,184 citations

Explainable AI: Beware of Inmates Running the Asylum Or: How I Learnt to Stop Worrying and Love the Social and Behavioural Sciences
Tim Miller, Piers Howe, L. Sonenberg
02 Dec 2017 · AI4TS, SyDa · 373 citations

Explanation in Artificial Intelligence: Insights from the Social Sciences
Tim Miller
22 Jun 2017 · XAI · 4,265 citations

ParlAI: A Dialog Research Software Platform
Alexander H. Miller, Will Feng, Adam Fisch, Jiasen Lu, Dhruv Batra, Antoine Bordes, Devi Parikh, Jason Weston
18 May 2017 · 376 citations

"Why Should I Trust You?": Explaining the Predictions of Any Classifier
"Why Should I Trust You?": Explaining the Predictions of Any Classifier
Marco Tulio Ribeiro
Sameer Singh
Carlos Guestrin
FAttFaML
1.2K
16,990
0
16 Feb 2016