Explainable Active Learning (XAL): An Empirical Study of How Local Explanations Impact Annotator Experience

24 January 2020
Bhavya Ghai, Q. V. Liao, Yunfeng Zhang, Rachel K. E. Bellamy, Klaus Mueller

Papers citing "Explainable Active Learning (XAL): An Empirical Study of How Local Explanations Impact Annotator Experience" (10 of 10 papers shown):
Fine-tuning of explainable CNNs for skin lesion classification based on dermatologists' feedback towards increasing trust
  Md Abdul Kadir, Fabrizio Nunnari, Daniel Sonntag · FAtt · 03 Apr 2023
Understanding the Role of Human Intuition on Reliance in Human-AI Decision-Making with Explanations
  Valerie Chen, Q. V. Liao, Jennifer Wortman Vaughan, Gagan Bansal · 18 Jan 2023
A Human-ML Collaboration Framework for Improving Video Content Reviews
  Meghana Deodhar, Xiao Ma, Yixin Cai, Alex Koes, Alex Beutel, Jilin Chen · 18 Oct 2022
Mediators: Conversational Agents Explaining NLP Model Behavior
  Nils Feldhus, A. Ravichandran, Sebastian Möller · 13 Jun 2022
Perspectives on Incorporating Expert Feedback into Model Updates
  Valerie Chen, Umang Bhatt, Hoda Heidari, Adrian Weller, Ameet Talwalkar · 13 May 2022
Human-AI Collaboration via Conditional Delegation: A Case Study of Content Moderation
  Vivian Lai, Samuel Carton, Rajat Bhatnagar, Vera Liao, Yunfeng Zhang, Chenhao Tan · 25 Apr 2022
Towards a Science of Human-AI Decision Making: A Survey of Empirical Studies
  Vivian Lai, Chacha Chen, Q. V. Liao, Alison Smith-Renner, Chenhao Tan · 21 Dec 2021
Facilitating Knowledge Sharing from Domain Experts to Data Scientists for Building NLP Models
  Soya Park, A. Wang, B. Kawas, Q. V. Liao, David Piorkowski, Marina Danilevsky · 29 Jan 2021
Soliciting Human-in-the-Loop User Feedback for Interactive Machine Learning Reduces User Trust and Impressions of Model Accuracy
  Donald R. Honeycutt, Mahsan Nourani, Eric D. Ragan · HAI · 28 Aug 2020
Towards A Rigorous Science of Interpretable Machine Learning
  Finale Doshi-Velez, Been Kim · XAI, FaML · 28 Feb 2017