Leveraging Explanations in Interactive Machine Learning: An Overview
arXiv:2207.14526, 29 July 2022
Stefano Teso, Öznur Alkan, Wolfgang Stammer, Elizabeth M. Daly
XAI, FAtt, LRM

Papers citing "Leveraging Explanations in Interactive Machine Learning: An Overview"

Showing 50 of 125 citing papers.
If Concept Bottlenecks are the Question, are Foundation Models the Answer?
Nicola Debole, Pietro Barbiero, Francesco Giannini, Andrea Passerini, Stefano Teso, Emanuele Marconato
28 Apr 2025

Shortcuts and Identifiability in Concept-based Models from a Neuro-Symbolic Lens
Samuele Bortolotti, Emanuele Marconato, Paolo Morettin, Andrea Passerini, Stefano Teso
16 Feb 2025

Learning To Guide Human Decision Makers With Vision-Language Models
Debodeep Banerjee, Stefano Teso, Burcu Sayin, Andrea Passerini
25 Mar 2024

Concept-level Debugging of Part-Prototype Networks
A. Bontempelli, Stefano Teso, Katya Tentori, Fausto Giunchiglia, Andrea Passerini
31 May 2022

CAIPI in Practice: Towards Explainable Interactive Medical Image Classification
E. Slany, Yannik Ott, Stephan Scheele, Jan Paulus, Ute Schmid
06 Apr 2022

User Driven Model Adjustment via Boolean Rule Explanations
Elizabeth M. Daly, Massimiliano Mattetti, Öznur Alkan, Rahul Nair
AAML
28 Mar 2022

Human-Centered Concept Explanations for Neural Networks
Chih-Kuan Yeh, Been Kim, Pradeep Ravikumar
FAtt
25 Feb 2022

Right for the Right Latent Factors: Debiasing Generative Models via Disentanglement
Xiaoting Shao, Karl Stelzner, Kristian Kersting
CML, DRL
01 Feb 2022

On the Robustness of Sparse Counterfactual Explanations to Adverse Perturbations
M. Virgolin, Saverio Fracaros
CML
22 Jan 2022

FROTE: Feedback Rule-Driven Oversampling for Editing Models
Öznur Alkan, Dennis L. Wei, Massimiliano Mattetti, Rahul Nair, Elizabeth M. Daly, Diptikalyan Saha
04 Jan 2022

Interactive Disentanglement: Learning Concepts by Interacting with their Prototype Representations
Wolfgang Stammer, Marius Memmel, P. Schramowski, Kristian Kersting
04 Dec 2021

Editing a classifier by rewriting its prediction rules
Shibani Santurkar, Dimitris Tsipras, Mahalaxmi Elango, David Bau, Antonio Torralba, Aleksander Madry
KELM
02 Dec 2021

Human-Centered Explainable AI (XAI): From Algorithms to User Experiences
Q. V. Liao, R. Varshney
20 Oct 2021

Learning from Ambiguous Demonstrations with Self-Explanation Guided Reinforcement Learning
Yantian Zha, L. Guan, Subbarao Kambhampati
11 Oct 2021

Explanation as a process: user-centric construction of multi-level and multi-modal explanations
Bettina Finzel, David E. Tafler, Stephan Scheele, Ute Schmid
07 Oct 2021

A Survey on Cost Types, Interaction Schemes, and Annotator Performance Models in Selection Algorithms for Active Learning in Classification
M. Herde, Denis Huseljic, Bernhard Sick, A. Calma
23 Sep 2021

Symbols as a Lingua Franca for Bridging Human-AI Chasm for Explainable and Advisable AI Systems
Subbarao Kambhampati, S. Sreedharan, Mudit Verma, Yantian Zha, L. Guan
21 Sep 2021

Promises and Pitfalls of Black-Box Concept Learning Models
Anita Mahinpei, Justin Clark, Isaac Lage, Finale Doshi-Velez, Weiwei Pan
24 Jun 2021

Interactive Label Cleaning with Example-based Explanations
Stefano Teso, A. Bontempelli, Fausto Giunchiglia, Andrea Passerini
07 Jun 2021

Causal Abstractions of Neural Networks
Atticus Geiger, Hanson Lu, Thomas Icard, Christopher Potts
NAI, CML
06 Jun 2021

Explainable Machine Learning with Prior Knowledge: An Overview
Katharina Beckh, Sebastian Müller, Matthias Jakobs, Vanessa Toborek, Hanxiao Tan, Raphael Fischer, Pascal Welke, Sebastian Houben, Laura von Rueden
XAI
21 May 2021

Neuro-Symbolic Artificial Intelligence: Current Trends
Md Kamruzzaman Sarker, Lu Zhou, Aaron Eberhart, Pascal Hitzler
NAI
11 May 2021

Do Concept Bottleneck Models Learn as Intended?
Andrei Margeloiu, Matthew Ashman, Umang Bhatt, Yanzhi Chen, M. Jamnik, Adrian Weller
SLR
10 May 2021

This Looks Like That... Does it? Shortcomings of Latent Space Prototype Interpretability in Deep Networks
Adrian Hoffmann, Claudio Fanconi, Rahul Rade, Jonas Köhler
05 May 2021

Explanation-Based Human Debugging of NLP Models: A Survey
Piyawat Lertvittayakumjorn, Francesca Toni
LRM
30 Apr 2021

IAIA-BL: A Case-based Interpretable Deep Learning Model for Classification of Mass Lesions in Digital Mammography
A. Barnett, F. Schwartz, Chaofan Tao, Chaofan Chen, Yinhao Ren, J. Lo, Cynthia Rudin
23 Mar 2021

Interpretable Machine Learning: Fundamental Principles and 10 Grand Challenges
Cynthia Rudin, Chaofan Chen, Zhi Chen, Haiyang Huang, Lesia Semenova, Chudi Zhong
FaML, AI4CE, LRM
20 Mar 2021

Evaluating Robustness of Counterfactual Explanations
André Artelt, Valerie Vaquet, Riza Velioglu, Fabian Hinder, Johannes Brinkrolf, M. Schilling, Barbara Hammer
03 Mar 2021

Towards Causal Representation Learning
Bernhard Schölkopf, Francesco Locatello, Stefan Bauer, Nan Rosemary Ke, Nal Kalchbrenner, Anirudh Goyal, Yoshua Bengio
OOD, CML, AI4CE
22 Feb 2021

When Can Models Learn From Explanations? A Formal Framework for Understanding the Roles of Explanation Data
Peter Hase, Joey Tianyi Zhou
XAI
03 Feb 2021

GLocalX -- From Local to Global Explanations of Black Box AI Models
Mattia Setzu, Riccardo Guidotti, A. Monreale, Franco Turini, D. Pedreschi, F. Giannotti
19 Jan 2021

Polyjuice: Generating Counterfactuals for Explaining, Evaluating, and Improving Models
Tongshuang Wu, Marco Tulio Ribeiro, Jeffrey Heer, Daniel S. Weld
01 Jan 2021

FastIF: Scalable Influence Functions for Efficient Model Interpretation and Debugging
Han Guo, Nazneen Rajani, Peter Hase, Joey Tianyi Zhou, Caiming Xiong
TDI
31 Dec 2020

Learning Interpretable Concept-Based Models with Human Feedback
Isaac Lage, Finale Doshi-Velez
04 Dec 2020

Neural Prototype Trees for Interpretable Fine-grained Image Recognition
Meike Nauta, Ron van Bree, C. Seifert
03 Dec 2020

ProtoPShare: Prototype Sharing for Interpretable Image Classification and Similarity Discovery
Dawid Rymarczyk, Lukasz Struski, Jacek Tabor, Bartosz Zieliński
29 Nov 2020

Right for the Right Concept: Revising Neuro-Symbolic Concepts by Interacting with their Explanations
Wolfgang Stammer, P. Schramowski, Kristian Kersting
FAtt
25 Nov 2020

Debugging Tests for Model Explanations
Julius Adebayo, M. Muelly, Ilaria Liccardi, Been Kim
FAtt
10 Nov 2020

This Looks Like That, Because ... Explaining Prototypes for Interpretable Image Recognition
Meike Nauta, Annemarie Jutte, Jesper C. Provoost, C. Seifert
FAtt
05 Nov 2020

Learning in the Wild with Incremental Skeptical Gaussian Processes
Andrea Bontempelli, Stefano Teso, Fausto Giunchiglia, Andrea Passerini
02 Nov 2020

On Explaining Decision Trees
Yacine Izza, Alexey Ignatiev, Sasha Rubin
FAtt
21 Oct 2020

The elephant in the interpretability room: Why use attention as explanation when we have saliency methods?
Jasmijn Bastings, Katja Filippova
XAI, LRM
12 Oct 2020

FIND: Human-in-the-Loop Debugging Deep Text Classifiers
Piyawat Lertvittayakumjorn, Lucia Specia, Francesca Toni
10 Oct 2020

The Struggles of Feature-Based Explanations: Shapley Values vs. Minimal Sufficient Subsets
Oana-Maria Camburu, Eleonora Giunchiglia, Jakob N. Foerster, Thomas Lukasiewicz, Phil Blunsom
FAtt
23 Sep 2020

ALICE: Active Learning with Contrastive Natural Language Explanations
Weixin Liang, James Zou, Zhou Yu
VLM
22 Sep 2020

Machine Guides, Human Supervises: Interactive Learning with Global Explanations
Teodora Popordanoska, Mohit Kumar, Stefano Teso
21 Sep 2020

Principles and Practice of Explainable Machine Learning
Vaishak Belle, I. Papantonis
FaML
18 Sep 2020

On the Tractability of SHAP Explanations
Guy Van den Broeck, A. Lykov, Maximilian Schleich, Dan Suciu
FAtt, TDI
18 Sep 2020

Beneficial and Harmful Explanatory Machine Learning
L. Ai, Stephen Muggleton, Céline Hocquette, Mark Gromowski, Ute Schmid
09 Sep 2020

Soliciting Human-in-the-Loop User Feedback for Interactive Machine Learning Reduces User Trust and Impressions of Model Accuracy
Donald R. Honeycutt, Mahsan Nourani, Eric D. Ragan
HAI
28 Aug 2020