Formalizing Trust in Artificial Intelligence: Prerequisites, Causes and Goals of Human Trust in AI

15 October 2020
Alon Jacovi, Ana Marasović, Tim Miller, Yoav Goldberg

Papers citing "Formalizing Trust in Artificial Intelligence: Prerequisites, Causes and Goals of Human Trust in AI"

50 / 52 papers shown
• Bridging Expertise Gaps: The Role of LLMs in Human-AI Collaboration for Cybersecurity (06 May 2025). Shahroz Tariq, Ronal Singh, Mohan Baruwal Chhetri, Surya Nepal, Cécile Paris.
• Societal Alignment Frameworks Can Improve LLM Alignment (27 Feb 2025). Karolina Stańczak, Nicholas Meade, Mehar Bhatia, Hattie Zhou, Konstantin Böttinger, ..., Timothy P. Lillicrap, Ana Marasović, Sylvie Delacroix, Gillian K. Hadfield, Siva Reddy.
• Verification and Validation for Trustworthy Scientific Machine Learning (21 Feb 2025). John D. Jakeman, Lorena A. Barba, J. Martins, Thomas O'Leary-Roseberry.
• Constructing Fair Latent Space for Intersection of Fairness and Explainability (23 Dec 2024). Hyungjun Joo, Hyeonggeun Han, Sehwan Kim, Sangwoo Hong, Jungwoo Lee.
• Explainability Paths for Sustained Artistic Practice with AI (21 Jul 2024). Austin Tecks, Thomas Peschlow, Gabriel Vigliensoni.
• Whether to trust: the ML leap of faith (17 Jul 2024). Tory Frame, Sahraoui Dhelim, George Stothart, E. Coulthard.
• Designing for Complementarity: A Conceptual Framework to Go Beyond the Current Paradigm of Using XAI in Healthcare (06 Apr 2024). Elisa Rubegni, Omran Ayoub, Stefania Maria Rita Rizzo, Marco Barbero, G. Bernegger, Francesca Faraci, Francesca Mangili, Emiliano Soldini, P. Trimboli, Alessandro Facchini.
• The Duet of Representations and How Explanations Exacerbate It (13 Feb 2024). Charles Wan, Rodrigo Belo, Leid Zejnilovic, Susana Lavado.
• Uncertainty quantification for probabilistic machine learning in earth observation using conformal prediction (12 Jan 2024). Geethen Singh, Glenn Moncrieff, Zander Venter, Kerry Cawse-Nicholson, Jasper Slingsby, Tamara B. Robinson.
• Predictability and Comprehensibility in Post-Hoc XAI Methods: A User-Centered Analysis (21 Sep 2023). Anahid N. Jalali, Bernhard Haslhofer, Simone Kriglstein, Andreas Rauber.
• Redefining Qualitative Analysis in the AI Era: Utilizing ChatGPT for Efficient Thematic Analysis (19 Sep 2023). He Zhang, Chuhao Wu, Jingyi Xie, Yao Lyu, Jie Cai, John M. Carroll.
• Software Doping Analysis for Human Oversight (11 Aug 2023). Sebastian Biewer, Kevin Baum, Sarah Sterz, Holger Hermanns, Sven Hetmank, Markus Langer, Anne Lauber-Rönsberg, Franz Lehr.
• A New Perspective on Evaluation Methods for Explainable Artificial Intelligence (XAI) (26 Jul 2023). Timo Speith, Markus Langer.
• AI Transparency in the Age of LLMs: A Human-Centered Research Roadmap (02 Jun 2023). Q. V. Liao, J. Vaughan.
• The Case Against Explainability (20 May 2023). Hofit Wasserman Rozen, N. Elkin-Koren, Ran Gilad-Bachrach.
• Certification Labels for Trustworthy AI: Insights From an Empirical Mixed-Method Study (15 May 2023). Nicolas Scharowski, Michaela Benk, S. J. Kühne, Léane Wettstein, Florian Brühlmann.
• Impact Of Explainable AI On Cognitive Load: Insights From An Empirical Study (18 Apr 2023). L. Herm.
• A Systematic Literature Review of User Trust in AI-Enabled Systems: An HCI Perspective (18 Apr 2023). T. A. Bach, Amna Khan, Harry P. Hallock, Gabriel Beltrao, Sonia C. Sousa.
• Distrust in (X)AI -- Measurement Artifact or Distinct Construct? (29 Mar 2023). Nicolas Scharowski, S. Perrig.
• Trust Explanations to Do What They Say (14 Feb 2023). N Natarajan, Reuben Binns, Jun Zhao, N. Shadbolt.
• Appropriate Reliance on AI Advice: Conceptualization and the Effect of Explanations (04 Feb 2023). Max Schemmer, Niklas Kühl, Carina Benz, Andrea Bartos, G. Satzger.
• A Mental Model Based Theory of Trust (29 Jan 2023). Z. Zahedi, S. Sreedharan, Subbarao Kambhampati.
• Towards Reconciling Usability and Usefulness of Explainable AI Methodologies (13 Jan 2023). Pradyumna Tambwekar, Matthew C. Gombolay.
• The Design Principle of Blockchain: An Initiative for the SoK of SoKs (01 Jan 2023). Luyao Zhang.
• Measuring an artificial intelligence agent's trust in humans using machine incentives (27 Dec 2022). Tim Johnson, Nick Obradovich.
• CRAFT: Concept Recursive Activation FacTorization for Explainability (17 Nov 2022). Thomas Fel, Agustin Picard, Louis Bethune, Thibaut Boissin, David Vigouroux, Julien Colin, Rémi Cadène, Thomas Serre.
• AI Ethics in Smart Healthcare (02 Nov 2022). S. Pasricha.
• Honest Students from Untrusted Teachers: Learning an Interpretable Question-Answering Pipeline from a Pretrained Language Model (05 Oct 2022). Jacob Eisenstein, D. Andor, Bernd Bohnet, Michael Collins, David M. Mimno.
• Explanations, Fairness, and Appropriate Reliance in Human-AI Decision-Making (23 Sep 2022). Jakob Schoeffer, Maria De-Arteaga, Niklas Kuehl.
• Trust Calibration as a Function of the Evolution of Uncertainty in Knowledge Generation: A Survey (09 Sep 2022). J. Boley, Maoyuan Sun.
• Are we measuring trust correctly in explainability, interpretability, and transparency research? (31 Aug 2022). Tim Miller.
• "Inconsistent Performance": Understanding Concerns of Real-World Users on Smart Mobile Health Applications Through Analyzing App Reviews (23 Aug 2022). Banafsheh Mohajeri, Jinghui Cheng.
• Mediators: Conversational Agents Explaining NLP Model Behavior (13 Jun 2022). Nils Feldhus, A. Ravichandran, Sebastian Möller.
• Argumentative Explanations for Pattern-Based Text Classifiers (22 May 2022). Piyawat Lertvittayakumjorn, Francesca Toni.
• Perspectives on Incorporating Expert Feedback into Model Updates (13 May 2022). Valerie Chen, Umang Bhatt, Hoda Heidari, Adrian Weller, Ameet Talwalkar.
• "There Is Not Enough Information": On the Effects of Explanations on Perceptions of Informational Fairness and Trustworthiness in Automated Decision-Making (11 May 2022). Jakob Schoeffer, Niklas Kuehl, Yvette Machowski.
• Interactive Model Cards: A Human-Centered Approach to Model Documentation (05 May 2022). Anamaria Crisan, Margaret Drouhard, Jesse Vig, Nazneen Rajani.
• Designing for Responsible Trust in AI Systems: A Communication Perspective (29 Apr 2022). Q. V. Liao, S. Sundar.
• Towards Explainable Evaluation Metrics for Natural Language Generation (21 Mar 2022). Christoph Leiter, Piyawat Lertvittayakumjorn, M. Fomicheva, Wei-Ye Zhao, Yang Gao, Steffen Eger.
• Towards a Roadmap on Software Engineering for Responsible AI (09 Mar 2022). Qinghua Lu, Liming Zhu, Xiwei Xu, Jon Whittle, Zhenchang Xing.
• Trust in AI: Interpretability is not necessary or sufficient, while black-box interaction is necessary and sufficient (10 Feb 2022). Max W. Shen.
• Reframing Human-AI Collaboration for Generating Free-Text Explanations (16 Dec 2021). Sarah Wiegreffe, Jack Hessel, Swabha Swayamdipta, Mark O. Riedl, Yejin Choi.
• Learning to run a power network with trust (21 Oct 2021). Antoine Marot, Benjamin Donnot, Karim Chaouache, Ying-Ling Lu, Qiuhua Huang, Ramij-Raja Hossain, J. Cremer.
• Trustworthy AI: From Principles to Practices (04 Oct 2021). Bo-wen Li, Peng Qi, Bo Liu, Shuai Di, Jingen Liu, Jiquan Pei, Jinfeng Yi, Bowen Zhou.
• Causal Inference in Natural Language Processing: Estimation, Prediction, Interpretation and Beyond (02 Sep 2021). Amir Feder, Katherine A. Keith, Emaad A. Manzoor, Reid Pryzant, Dhanya Sridhar, ..., Roi Reichart, Margaret E. Roberts, Brandon M Stewart, Victor Veitch, Diyi Yang.
• Trusting RoBERTa over BERT: Insights from CheckListing the Natural Language Inference Task (15 Jul 2021). Ishan Tarunesh, Somak Aditya, Monojit Choudhury.
• Explanation-Based Human Debugging of NLP Models: A Survey (30 Apr 2021). Piyawat Lertvittayakumjorn, Francesca Toni.
• Local Interpretations for Explainable Natural Language Processing: A Survey (20 Mar 2021). Siwen Luo, Hamish Ivison, S. Han, Josiah Poon.
• Intuitively Assessing ML Model Reliability through Example-Based Explanations and Editing Model Inputs (17 Feb 2021). Harini Suresh, Kathleen M. Lewis, John Guttag, Arvind Satyanarayan.
• Underspecification Presents Challenges for Credibility in Modern Machine Learning (06 Nov 2020). Alexander D'Amour, Katherine A. Heller, D. Moldovan, Ben Adlam, B. Alipanahi, ..., Kellie Webster, Steve Yadlowsky, T. Yun, Xiaohua Zhai, D. Sculley.