Responsibility: An Example-based Explainable AI approach via Training Process Inspection

7 September 2022 · arXiv:2209.03433
Faraz Khadivpour, Arghasree Banerjee, Matthew J. Guzdial
Topics: XAI

Papers citing "Responsibility: An Example-based Explainable AI approach via Training Process Inspection" (3 papers shown)

• What Makes for a Good Saliency Map? Comparing Strategies for Evaluating Saliency Maps in Explainable AI (XAI)
  Felix Kares, Timo Speith, Hanwei Zhang, Markus Langer
  Topics: FAtt, XAI
  23 Apr 2025

• How explainable AI affects human performance: A systematic review of the behavioural consequences of saliency maps
  Romy Müller
  Topics: HAI
  03 Apr 2024

• Towards A Rigorous Science of Interpretable Machine Learning
  Finale Doshi-Velez, Been Kim
  Topics: XAI, FaML
  28 Feb 2017