Keep Your Friends Close and Your Counterfactuals Closer: Improved Learning From Closest Rather Than Plausible Counterfactual Explanations in an Abstract Setting

Ulrike Kuhl, André Artelt, Barbara Hammer
11 May 2022

Papers citing "Keep Your Friends Close and Your Counterfactuals Closer: Improved Learning From Closest Rather Than Plausible Counterfactual Explanations in an Abstract Setting"

14 / 14 papers shown

Reassessing Evaluation Functions in Algorithmic Recourse: An Empirical Study from a Human-Centered Perspective
T. Tominaga, Naomi Yamashita, Takeshi Kurashima
23 May 2024

Generating Likely Counterfactuals Using Sum-Product Networks
Jiri Nemecek, Tomás Pevný, Jakub Marecek
25 Jan 2024

For Better or Worse: The Impact of Counterfactual Explanations' Directionality on User Behavior in xAI
Ulrike Kuhl, André Artelt, Barbara Hammer
13 Jun 2023

Explaining Groups of Instances Counterfactually for XAI: A Use Case, Algorithm and User Study for Group-Counterfactuals
Greta Warren, Mark T. Keane, Christophe Guéret, Eoin Delaney
16 Mar 2023

Even if Explanations: Prior Work, Desiderata & Benchmarks for Semi-Factual XAI
Saugat Aryal, Mark T. Keane
27 Jan 2023

Counterfactual Explanations for Misclassified Images: How Human and Machine Explanations Differ
Eoin Delaney, A. Pakrashi, Derek Greene, Mark T. Keane
16 Dec 2022

Towards Human-centered Explainable AI: A Survey of User Studies for Model Explanations
Yao Rong, Tobias Leemann, Thai-trang Nguyen, Lisa Fiedler, Peizhu Qian, Vaibhav Unhelkar, Tina Seidel, Gjergji Kasneci, Enkelejda Kasneci
20 Oct 2022
"Even if ..." -- Diverse Semifactual Explanations of Reject
"Even if ..." -- Diverse Semifactual Explanations of Reject
André Artelt
Barbara Hammer
33
12
0
05 Jul 2022

Let's Go to the Alien Zoo: Introducing an Experimental Framework to Study Usability of Counterfactual Explanations for Machine Learning
Ulrike Kuhl, André Artelt, Barbara Hammer
06 May 2022

A Few Good Counterfactuals: Generating Interpretable, Plausible and Diverse Counterfactual Explanations
Barry Smyth, Mark T. Keane
22 Jan 2021

GeCo: Quality Counterfactual Explanations in Real Time
Maximilian Schleich, Zixuan Geng, Yihong Zhang, D. Suciu
05 Jan 2021

Counterfactual Explanations and Algorithmic Recourses for Machine Learning: A Review
Sahil Verma, Varich Boonsanong, Minh Hoang, Keegan E. Hines, John P. Dickerson, Chirag Shah
20 Oct 2020

Issues with post-hoc counterfactual explanations: a discussion
Thibault Laugel, Marie-Jeanne Lesot, Christophe Marsala, Marcin Detyniecki
11 Jun 2019

Towards A Rigorous Science of Interpretable Machine Learning
Finale Doshi-Velez, Been Kim
28 Feb 2017