Leveraging Explanations in Interactive Machine Learning: An Overview

29 July 2022
Stefano Teso, Öznur Alkan, Wolfgang Stammer, Elizabeth M. Daly
XAI, FAtt, LRM
arXiv:2207.14526

Papers citing "Leveraging Explanations in Interactive Machine Learning: An Overview"

Showing 25 of 125 citing papers:
Teaching Categories to Human Learners with Visual Explanations
Oisin Mac Aodha, Shihan Su, Yuxin Chen, Pietro Perona, Yisong Yue
20 Feb 2018

A Survey Of Methods For Explaining Black Box Models
Riccardo Guidotti, A. Monreale, Salvatore Ruggieri, Franco Turini, D. Pedreschi, F. Giannotti
06 Feb 2018 (XAI)

Plan Explanations as Model Reconciliation -- An Empirical Study
Tathagata Chakraborti, S. Sreedharan, Sachin Grover, S. Kambhampati
03 Feb 2018

How do Humans Understand Explanations from Machine Learning Systems? An Evaluation of the Human-Interpretability of Explanation
Menaka Narayanan, Emily Chen, Jeffrey He, Been Kim, S. Gershman, Finale Doshi-Velez
02 Feb 2018 (FAtt, XAI)

Snorkel: Rapid Training Data Creation with Weak Supervision
Alexander Ratner, Stephen H. Bach, Henry R. Ehrenberg, Jason Alan Fries, Sen Wu, Christopher Ré
28 Nov 2017

Beyond Sparsity: Tree Regularization of Deep Models for Interpretability
Mike Wu, M. C. Hughes, S. Parbhoo, Maurizio Zazzi, Volker Roth, Finale Doshi-Velez
16 Nov 2017 (AI4CE)

The (Un)reliability of saliency methods
Pieter-Jan Kindermans, Sara Hooker, Julius Adebayo, Maximilian Alber, Kristof T. Schütt, Sven Dähne, D. Erhan, Been Kim
02 Nov 2017 (FAtt, XAI)

Counterfactual Explanations without Opening the Black Box: Automated Decisions and the GDPR
Sandra Wachter, Brent Mittelstadt, Chris Russell
01 Nov 2017 (MLAU)

Human-in-the-loop Artificial Intelligence
Fabio Massimo Zanzotto
23 Oct 2017

Methods for Interpreting and Understanding Deep Neural Networks
G. Montavon, Wojciech Samek, K. Müller
24 Jun 2017 (FaML)

Explanation in Artificial Intelligence: Insights from the Social Sciences
Tim Miller
22 Jun 2017 (XAI)

Attention Is All You Need
Ashish Vaswani, Noam M. Shazeer, Niki Parmar, Jakob Uszkoreit, Llion Jones, Aidan Gomez, Lukasz Kaiser, Illia Polosukhin
12 Jun 2017 (3DV)

A Unified Approach to Interpreting Model Predictions
Scott M. Lundberg, Su-In Lee
22 May 2017 (FAtt)

Learning Certifiably Optimal Rule Lists for Categorical Data
E. Angelino, Nicholas Larus-Stone, Daniel Alabi, Margo Seltzer, Cynthia Rudin
06 Apr 2017

Understanding Black-box Predictions via Influence Functions
Pang Wei Koh, Percy Liang
14 Mar 2017 (TDI)

Right for the Right Reasons: Training Differentiable Models by Constraining their Explanations
A. Ross, M. C. Hughes, Finale Doshi-Velez
10 Mar 2017 (FAtt)

Axiomatic Attribution for Deep Networks
Mukund Sundararajan, Ankur Taly, Qiqi Yan
04 Mar 2017 (OOD, FAtt)

Grad-CAM: Visual Explanations from Deep Networks via Gradient-based Localization
Ramprasaath R. Selvaraju, Michael Cogswell, Abhishek Das, Ramakrishna Vedantam, Devi Parikh, Dhruv Batra
07 Oct 2016 (FAtt)

"Why Should I Trust You?": Explaining the Predictions of Any Classifier
Marco Tulio Ribeiro, Sameer Singh, Carlos Guestrin
16 Feb 2016 (FAtt, FaML)

The Bayesian Case Model: A Generative Approach for Case-Based Reasoning and Prototype Classification
Been Kim, Cynthia Rudin, J. Shah
03 Mar 2015

Supersparse Linear Integer Models for Optimized Medical Scoring Systems
Berk Ustun, Cynthia Rudin
15 Feb 2015

Neural Machine Translation by Jointly Learning to Align and Translate
Dzmitry Bahdanau, Kyunghyun Cho, Yoshua Bengio
01 Sep 2014 (AIMat)

Interpreting Tree Ensembles with inTrees
Houtao Deng
23 Aug 2014

Deep Inside Convolutional Networks: Visualising Image Classification Models and Saliency Maps
Karen Simonyan, Andrea Vedaldi, Andrew Zisserman
20 Dec 2013 (FAtt)

How to Explain Individual Classification Decisions
D. Baehrens, T. Schroeter, Stefan Harmeling, M. Kawanabe, K. Hansen, K. Müller
06 Dec 2009 (FAtt)