ResearchTrend.AI

iSee: Advancing Multi-Shot Explainable AI Using Case-based Recommendations

23 August 2024
A. Wijekoon, Nirmalie Wiratunga, D. Corsar, Kyle Martin, Ikechukwu Nkisi-Orji, Chamath Palihawadana, Marta Caro-Martínez, Belén Díaz-Agudo, Derek Bridge, A. Liret
arXiv:2408.12941 (abs · PDF · HTML)

Papers citing "iSee: Advancing Multi-Shot Explainable AI Using Case-based Recommendations"

7 / 7 papers shown
XEQ Scale for Evaluating XAI Experience Quality
A. Wijekoon, Nirmalie Wiratunga, D. Corsar, Kyle Martin, Ikechukwu Nkisi-Orji, Belén Díaz-Agudo, Derek Bridge
20 Jan 2025 · 174 · 2 · 0

This changes to that : Combining causal and non-causal explanations to generate disease progression in capsule endoscopy
Anuja Vats, A. Mohammed, Marius Pedersen, Nirmalie Wiratunga
MedIm · 05 Dec 2022 · 48 · 9 · 0

Behaviour Trees for Creating Conversational Explanation Experiences
A. Wijekoon, D. Corsar, Nirmalie Wiratunga
11 Nov 2022 · 49 · 3 · 0

A Few Good Counterfactuals: Generating Interpretable, Plausible and Diverse Counterfactual Explanations
Barry Smyth, Mark T. Keane
CML · 22 Jan 2021 · 73 · 27 · 0

Captum: A unified and generic model interpretability library for PyTorch
Narine Kokhlikyan, Vivek Miglani, Miguel Martin, Edward Wang, B. Alsallakh, ..., Alexander Melnikov, Natalia Kliushkina, Carlos Araya, Siqi Yan, Orion Reblitz-Richardson
FAtt · 16 Sep 2020 · 144 · 846 · 0

One Explanation Does Not Fit All: A Toolkit and Taxonomy of AI Explainability Techniques
Vijay Arya, Rachel K. E. Bellamy, Pin-Yu Chen, Amit Dhurandhar, Michael Hind, ..., Karthikeyan Shanmugam, Moninder Singh, Kush R. Varshney, Dennis L. Wei, Yunfeng Zhang
XAI · 06 Sep 2019 · 67 · 393 · 0

Deep Learning for Case-Based Reasoning through Prototypes: A Neural Network that Explains Its Predictions
Oscar Li, Hao Liu, Chaofan Chen, Cynthia Rudin
13 Oct 2017 · 178 · 591 · 0