ResearchTrend.AI
© 2025 ResearchTrend.AI, All rights reserved.

Intuitively Assessing ML Model Reliability through Example-Based Explanations and Editing Model Inputs

17 February 2021
Harini Suresh
Kathleen M. Lewis
John Guttag
Arvind Satyanarayan
    FAtt

Papers citing "Intuitively Assessing ML Model Reliability through Example-Based Explanations and Editing Model Inputs"

9 / 9 papers shown
What Does Evaluation of Explainable Artificial Intelligence Actually Tell Us? A Case for Compositional and Contextual Validation of XAI Building Blocks
Kacper Sokol
Julia E. Vogt
19 Mar 2024
Effective Human-AI Teams via Learned Natural Language Rules and Onboarding
Hussein Mozannar
Jimin J Lee
Dennis L. Wei
P. Sattigeri
Subhro Das
David Sontag
02 Nov 2023
Human-AI collaboration is not very collaborative yet: A taxonomy of interaction patterns in AI-assisted decision making from a systematic review
Catalina Gomez
Sue Min Cho
Shichang Ke
Chien-Ming Huang
Mathias Unberath
30 Oct 2023
Towards Human-centered Explainable AI: A Survey of User Studies for Model Explanations
Yao Rong
Tobias Leemann
Thai-trang Nguyen
Lisa Fiedler
Peizhu Qian
Vaibhav Unhelkar
Tina Seidel
Gjergji Kasneci
Enkelejda Kasneci
ELM
20 Oct 2022
Are Metrics Enough? Guidelines for Communicating and Visualizing Predictive Models to Subject Matter Experts
Ashley Suh
G. Appleby
Erik W. Anderson
Luca A. Finelli
Remco Chang
Dylan Cashman
11 May 2022
Teaching Humans When To Defer to a Classifier via Exemplars
Hussein Mozannar
Arvind Satyanarayan
David Sontag
22 Nov 2021
How can I choose an explainer? An Application-grounded Evaluation of Post-hoc Explanations
Sérgio Jesus
Catarina Belém
Vladimir Balayan
João Bento
Pedro Saleiro
P. Bizarro
João Gama
21 Jan 2021
Formalizing Trust in Artificial Intelligence: Prerequisites, Causes and Goals of Human Trust in AI
Alon Jacovi
Ana Marasović
Tim Miller
Yoav Goldberg
15 Oct 2020
Towards A Rigorous Science of Interpretable Machine Learning
Finale Doshi-Velez
Been Kim
XAI
FaML
28 Feb 2017