Explaining Classifications to Non Experts: An XAI User Study of Post Hoc Explanations for a Classifier When People Lack Expertise

19 December 2022
Courtney Ford
Mark T. Keane

Papers citing "Explaining Classifications to Non Experts: An XAI User Study of Post Hoc Explanations for a Classifier When People Lack Expertise"

8 / 8 papers shown
Features of Explainability: How users understand counterfactual and causal explanations for categorical and continuous features in XAI
Greta Warren, Mark T. Keane, R. Byrne
CML · 22 citations · 21 Apr 2022
If Only We Had Better Counterfactual Explanations: Five Key Deficits to Rectify in the Evaluation of Counterfactual XAI Techniques
Mark T. Keane, Eoin M. Kenny, Eoin Delaney, Barry Smyth
CML · 146 citations · 26 Feb 2021
The Role of Domain Expertise in User Trust and the Impact of First Impressions with Intelligent Systems
Mahsan Nourani, J. King, Eric D. Ragan
99 citations · 20 Aug 2020
An Ensemble of Simple Convolutional Neural Network Models for MNIST Digit Recognition
Sanghyeon An, Min Jun Lee, Sanglee Park, H. Yang, Jungmin So
79 citations · 12 Aug 2020
Improving the Adversarial Robustness and Interpretability of Deep Neural Networks by Regularizing their Input Gradients
A. Ross, Finale Doshi-Velez
AAML · 679 citations · 26 Nov 2017
A Unified Approach to Interpreting Model Predictions
Scott M. Lundberg, Su-In Lee
FAtt · 21,459 citations · 22 May 2017
Improving Human-Machine Cooperative Visual Search With Soft Highlighting
R. T. Kneusel, Michael C. Mozer
26 citations · 24 Dec 2016
"Why Should I Trust You?": Explaining the Predictions of Any Classifier
Marco Tulio Ribeiro, Sameer Singh, Carlos Guestrin
FAtt, FaML · 16,765 citations · 16 Feb 2016