How to Support Users in Understanding Intelligent Systems? Structuring the Discussion

22 January 2020
Malin Eiband
Daniel Buschek
H. Hussmann

Papers citing "How to Support Users in Understanding Intelligent Systems? Structuring the Discussion"

11 / 11 papers shown
Charting the Sociotechnical Gap in Explainable AI: A Framework to Address the Gap in XAI
Upol Ehsan
Koustuv Saha
M. D. Choudhury
Mark O. Riedl
01 Feb 2023
Adaptive user interfaces in systems targeting chronic disease: a systematic literature review
Wen Wang
Hourieh Khalajzadeh
Anuradha Madugalla
Jennifer McIntosh
Humphrey O. Obie
17 Nov 2022
Modeling Human Behavior Part I -- Learning and Belief Approaches
Andrew Fuchs
A. Passarella
M. Conti
13 May 2022
Modeling Human Behavior Part II -- Cognitive approaches and Uncertainty
Andrew Fuchs
A. Passarella
M. Conti
13 May 2022
Designing Creative AI Partners with COFI: A Framework for Modeling Interaction in Human-AI Co-Creative Systems
Jeba Rezwana
M. Maher
15 Apr 2022
A Cognitive Framework for Delegation Between Error-Prone AI and Human Agents
Andrew Fuchs
A. Passarella
M. Conti
06 Apr 2022
GANSlider: How Users Control Generative Models for Images using Multiple Sliders with and without Feedforward Information
Hai Dang
Lukas Mecke
Daniel Buschek
02 Feb 2022
Explainability Pitfalls: Beyond Dark Patterns in Explainable AI
Upol Ehsan
Mark O. Riedl
XAI
SILM
26 Sep 2021
The Who in XAI: How AI Background Shapes Perceptions of AI Explanations
Upol Ehsan
Samir Passi
Q. V. Liao
Larry Chan
I-Hsiang Lee
Michael J. Muller
Mark O. Riedl
28 Jul 2021
Improving fairness in machine learning systems: What do industry practitioners need?
Kenneth Holstein
Jennifer Wortman Vaughan
Hal Daumé
Miroslav Dudík
Hanna M. Wallach
FaML
HAI
13 Dec 2018
Towards A Rigorous Science of Interpretable Machine Learning
Finale Doshi-Velez
Been Kim
XAI
FaML
28 Feb 2017