On Human Predictions with Explanations and Predictions of Machine Learning Models: A Case Study on Deception Detection

Vivian Lai, Chenhao Tan
19 November 2018

Papers citing "On Human Predictions with Explanations and Predictions of Machine Learning Models: A Case Study on Deception Detection"

Showing 50 of 62 citing papers.
Eye Movements as Indicators of Deception: A Machine Learning Approach
Valentin Foucher, Santiago de Leon-Martinez, Robert Moro
05 May 2025

Exploring the Impact of Explainable AI and Cognitive Capabilities on Users' Decisions
Federico Maria Cau, Lucio Davide Spano
02 May 2025

The Impact and Feasibility of Self-Confidence Shaping for AI-Assisted Decision-Making
Takehiro Takayanagi, Ryuji Hashimoto, Chung-Chi Chen, Kiyoshi Izumi
21 Feb 2025

The Value of Information in Human-AI Decision-making
Ziyang Guo, Yifan Wu, Jason D. Hartline, Jessica Hullman
10 Feb 2025

Personalized Help for Optimizing Low-Skilled Users' Strategy
Feng Gu, Wichayaporn Wongkamjan, Jordan Boyd-Graber, Jonathan K. Kummerfeld, Denis Peskoff, Jonathan May
14 Nov 2024

Unexploited Information Value in Human-AI Collaboration
Ziyang Guo, Yifan Wu, Jason D. Hartline, Jessica Hullman
03 Nov 2024

Interactive Example-based Explanations to Improve Health Professionals' Onboarding with AI for Human-AI Collaborative Decision Making
Min Hun Lee, Renee Bao Xuan Ng, Silvana Xin Yi Choo, S. Thilarajah
24 Sep 2024

Misfitting With AI: How Blind People Verify and Contest AI Errors
Rahaf Alharbi, P. Lor, Jaylin Herskovitz, S. Schoenebeck, Robin Brewer
13 Aug 2024

On Behalf of the Stakeholders: Trends in NLP Model Interpretability in the Era of LLMs
Nitay Calderon, Roi Reichart
27 Jul 2024

Whether to trust: the ML leap of faith
Tory Frame, Sahraoui Dhelim, George Stothart, E. Coulthard
17 Jul 2024

Towards Human-AI Deliberation: Design and Evaluation of LLM-Empowered Deliberative AI for AI-Assisted Decision-Making
Shuai Ma, Qiaoyi Chen, Xinru Wang, Chengbo Zheng, Zhenhui Peng, Ming Yin, Xiaojuan Ma
25 Mar 2024

"Are You Really Sure?" Understanding the Effects of Human Self-Confidence Calibration in AI-Assisted Decision Making
Shuai Ma, Xinru Wang, Ying Lei, Chuhan Shi, Ming Yin, Xiaojuan Ma
14 Mar 2024

Software Doping Analysis for Human Oversight
Sebastian Biewer, Kevin Baum, Sarah Sterz, Holger Hermanns, Sven Hetmank, Markus Langer, Anne Lauber-Rönsberg, Franz Lehr
11 Aug 2023

Pink-Eggs Dataset V1: A Step Toward Invasive Species Management Using Deep Learning Embedded Solutions
Di Xu, Yang Zhao, Xiang Hao, Xin Meng
16 May 2023

In Search of Verifiability: Explanations Rarely Enable Complementary Performance in AI-Advised Decision Making
Raymond Fok, Daniel S. Weld
12 May 2023

Artifact magnification on deepfake videos increases human detection and subjective confidence
Emilie Josephs, Camilo Luciano Fosco, A. Oliva
10 Apr 2023

Towards Explainable AI Writing Assistants for Non-native English Speakers
Yewon Kim, Mina Lee, Donghwi Kim, Sung-Ju Lee
05 Apr 2023

Human-AI Collaboration: The Effect of AI Delegation on Human Task Performance and Task Satisfaction
Patrick Hemmer, Monika Westphal, Max Schemmer, S. Vetter, Michael Vossing, G. Satzger
16 Mar 2023

Learning Human-Compatible Representations for Case-Based Decision Support
Han Liu, Yizhou Tian, Chacha Chen, Shi Feng, Yuxin Chen, Chenhao Tan
06 Mar 2023

Appropriate Reliance on AI Advice: Conceptualization and the Effect of Explanations
Max Schemmer, Niklas Kühl, Carina Benz, Andrea Bartos, G. Satzger
04 Feb 2023

Understanding the Role of Human Intuition on Reliance in Human-AI Decision-Making with Explanations
Valerie Chen, Q. V. Liao, Jennifer Wortman Vaughan, Gagan Bansal
18 Jan 2023

Improving Human-AI Collaboration With Descriptions of AI Behavior
Ángel Alexander Cabrera, Adam Perer, Jason I. Hong
06 Jan 2023

A Human-ML Collaboration Framework for Improving Video Content Reviews
Meghana Deodhar, Xiao Ma, Yixin Cai, Alex Koes, Alex Beutel, Jilin Chen
18 Oct 2022

Learning When to Advise Human Decision Makers
Gali Noti, Yiling Chen
27 Sep 2022

Explanations, Fairness, and Appropriate Reliance in Human-AI Decision-Making
Jakob Schoeffer, Maria De-Arteaga, Niklas Kuehl
23 Sep 2022

Advancing Human-AI Complementarity: The Impact of User Expertise and Algorithmic Tuning on Joint Decision Making
K. Inkpen, Shreya Chappidi, Keri Mallari, Besmira Nushi, Divya Ramesh, Pietro Michelucci, Vani Mandava, Libuše Hannah Vepřek, Gabrielle Quinn
16 Aug 2022

Beware the Rationalization Trap! When Language Model Explainability Diverges from our Mental Models of Language
R. Sevastjanova, Mennatallah El-Assady
14 Jul 2022

Justifying Social-Choice Mechanism Outcome for Improving Participant Satisfaction
Sharadhi Alape Suryanarayana, David Sarne
24 May 2022

Argumentative Explanations for Pattern-Based Text Classifiers
Piyawat Lertvittayakumjorn, Francesca Toni
22 May 2022

"If it didn't happen, why would I change my decision?": How Judges Respond to Counterfactual Explanations for the Public Safety Assessment
Yaniv Yacoby, Ben Green, Christopher L. Griffin, Finale Doshi-Velez
11 May 2022

A Meta-Analysis of the Utility of Explainable Artificial Intelligence in Human-AI Decision-Making
Max Schemmer, Patrick Hemmer, Maximilian Nitsche, Niklas Kühl, Michael Vossing
10 May 2022

Human-AI Collaboration via Conditional Delegation: A Case Study of Content Moderation
Vivian Lai, Samuel Carton, Rajat Bhatnagar, Vera Liao, Yunfeng Zhang, Chenhao Tan
25 Apr 2022

Single-Turn Debate Does Not Help Humans Answer Hard Reading-Comprehension Questions
Alicia Parrish, H. Trivedi, Ethan Perez, Angelica Chen, Nikita Nangia, Jason Phang, Sam Bowman
11 Apr 2022

Robustness and Usefulness in AI Explanation Methods
Erick Galinkin
07 Mar 2022

Better Together? An Evaluation of AI-Supported Code Translation
Justin D. Weisz, Michael J. Muller, Steven I. Ross, Fernando Martinez, Stephanie Houde, Mayank Agarwal, Kartik Talamadupula, John T. Richards
15 Feb 2022

Causal effect of racial bias in data and machine learning algorithms on user persuasiveness & discriminatory decision making: An Empirical Study
Kinshuk Sengupta, Praveen Ranjan Srivastava
22 Jan 2022

Explain, Edit, and Understand: Rethinking User Study Design for Evaluating Model Explanations
Siddhant Arora, Danish Pruthi, Norman M. Sadeh, William W. Cohen, Zachary Chase Lipton, Graham Neubig
17 Dec 2021

HIVE: Evaluating the Human Interpretability of Visual Explanations
Sunnie S. Y. Kim, Nicole Meister, V. V. Ramaswamy, Ruth C. Fong, Olga Russakovsky
06 Dec 2021

Double Trouble: How to not explain a text classifier's decisions using counterfactuals synthesized by masked language models?
Thang M. Pham, Trung H. Bui, Long Mai, Anh Totti Nguyen
22 Oct 2021

Interpreting Deep Learning Models in Natural Language Processing: A Review
Xiaofei Sun, Diyi Yang, Xiaoya Li, Tianwei Zhang, Yuxian Meng, Han Qiu, Guoyin Wang, Eduard H. Hovy, Jiwei Li
20 Oct 2021

Intelligent Decision Assistance Versus Automated Decision-Making: Enhancing Knowledge Work Through Explainable Artificial Intelligence
Max Schemmer, Niklas Kühl, G. Satzger
28 Sep 2021

Decision-Focused Summarization
Chao-Chun Hsu, Chenhao Tan
14 Sep 2021

The Flaws of Policies Requiring Human Oversight of Government Algorithms
Ben Green
10 Sep 2021

The Impact of Algorithmic Risk Assessments on Human Predictions and its Analysis via Crowdsourcing Studies
Riccardo Fogliato, Alexandra Chouldechova, Zachary Chase Lipton
03 Sep 2021

Explanation-Based Human Debugging of NLP Models: A Survey
Piyawat Lertvittayakumjorn, Francesca Toni
30 Apr 2021

Increasing the Speed and Accuracy of Data Labeling Through an AI Assisted Interface
Michael Desmond, Zahra Ashktorab, Michelle Brachman, Kristina Brimijoin, E. Duesterwald, ..., Catherine Finegan-Dollak, Michael J. Muller, N. Joshi, Qian Pan, Aabhas Sharma
09 Apr 2021

To Trust or to Think: Cognitive Forcing Functions Can Reduce Overreliance on AI in AI-assisted Decision-making
Zana Buçinca, M. Malaya, Krzysztof Z. Gajos
19 Feb 2021

Intuitively Assessing ML Model Reliability through Example-Based Explanations and Editing Model Inputs
Harini Suresh, Kathleen M. Lewis, John Guttag, Arvind Satyanarayan
17 Feb 2021

Beyond Expertise and Roles: A Framework to Characterize the Stakeholders of Interpretable Machine Learning and their Needs
Harini Suresh, Steven R. Gomez, K. Nam, Arvind Satyanarayan
24 Jan 2021

How can I choose an explainer? An Application-grounded Evaluation of Post-hoc Explanations
Sérgio Jesus, Catarina Belém, Vladimir Balayan, João Bento, Pedro Saleiro, P. Bizarro, João Gama
21 Jan 2021