On Human Predictions with Explanations and Predictions of Machine Learning Models: A Case Study on Deception Detection
Vivian Lai, Chenhao Tan
19 November 2018 · arXiv:1811.07901

Papers citing "On Human Predictions with Explanations and Predictions of Machine Learning Models: A Case Study on Deception Detection"

16 / 66 papers shown
Intuitively Assessing ML Model Reliability through Example-Based Explanations and Editing Model Inputs
Harini Suresh, Kathleen M. Lewis, John Guttag, Arvind Satyanarayan
FAtt
37 · 25 · 0
17 Feb 2021

Beyond Expertise and Roles: A Framework to Characterize the Stakeholders of Interpretable Machine Learning and their Needs
Harini Suresh, Steven R. Gomez, K. Nam, Arvind Satyanarayan
34 · 126 · 0
24 Jan 2021

How can I choose an explainer? An Application-grounded Evaluation of Post-hoc Explanations
Sérgio Jesus, Catarina Belém, Vladimir Balayan, João Bento, Pedro Saleiro, P. Bizarro, João Gama
136 · 119 · 0
21 Jan 2021

Deciding Fast and Slow: The Role of Cognitive Biases in AI-assisted Decision-making
Charvi Rastogi, Yunfeng Zhang, Dennis L. Wei, Kush R. Varshney, Amit Dhurandhar, Richard J. Tomsett
HAI
32 · 108 · 0
15 Oct 2020

The Impact of Explanations on AI Competency Prediction in VQA
Kamran Alipour, Arijit Ray, Xiaoyu Lin, J. Schulze, Yi Yao, Giedrius Burachas
22 · 9 · 0
02 Jul 2020

Does the Whole Exceed its Parts? The Effect of AI Explanations on Complementary Team Performance
Gagan Bansal, Tongshuang Wu, Joyce Zhou, Raymond Fok, Besmira Nushi, Ece Kamar, Marco Tulio Ribeiro, Daniel S. Weld
33 · 578 · 0
26 Jun 2020

Does Explainable Artificial Intelligence Improve Human Decision-Making?
Y. Alufaisan, L. Marusich, J. Bakdash, Yan Zhou, Murat Kantarcioglu
XAI
22 · 93 · 0
19 Jun 2020

Misplaced Trust: Measuring the Interference of Machine Learning in Human Decision-Making
Harini Suresh, Natalie Lao, Ilaria Liccardi
8 · 49 · 0
22 May 2020

Evaluating Saliency Map Explanations for Convolutional Neural Networks: A User Study
Ahmed Alqaraawi, M. Schuessler, Philipp Weiß, Enrico Costanza, N. Bianchi-Berthouze
AAML, FAtt, XAI
22 · 197 · 0
03 Feb 2020

Deceptive AI Explanations: Creation and Detection
Johannes Schneider, Christian Meske, Michalis Vlachos
14 · 28 · 0
21 Jan 2020

"Why is 'Chicago' deceptive?" Towards Building Model-Driven Tutorials
  for Humans
"Why is 'Chicago' deceptive?" Towards Building Model-Driven Tutorials for Humans
Vivian Lai
Han Liu
Chenhao Tan
24
138
0
14 Jan 2020
Questioning the AI: Informing Design Practices for Explainable AI User Experiences
Q. V. Liao, D. Gruen, Sarah Miller
52 · 703 · 0
08 Jan 2020

Effect of Confidence and Explanation on Accuracy and Trust Calibration in AI-Assisted Decision Making
Yunfeng Zhang, Q. V. Liao, Rachel K. E. Bellamy
17 · 660 · 0
07 Jan 2020

Visual Interaction with Deep Learning Models through Collaborative Semantic Inference
Sebastian Gehrmann, Hendrik Strobelt, Robert Krüger, Hanspeter Pfister, Alexander M. Rush
HAI
21 · 57 · 0
24 Jul 2019

Learning Representations by Humans, for Humans
Sophie Hilgard, Nir Rosenfeld, M. Banaji, Jack Cao, David C. Parkes
OCL, HAI, AI4CE
28 · 29 · 0
29 May 2019

Ask Not What AI Can Do, But What AI Should Do: Towards a Framework of Task Delegability
Brian Lubars, Chenhao Tan
19 · 73 · 0
08 Feb 2019