Contrastive Explanations with Local Foil Trees
19 June 2018
J. V. D. Waa, M. Robeer, J. Diggelen, Matthieu J. S. Brinkhuis, Mark Antonius Neerincx [FAtt]

Papers citing "Contrastive Explanations with Local Foil Trees"

24 papers shown (topic tags in brackets)
• Contrastive Explanations That Anticipate Human Misconceptions Can Improve Human Decision-Making Skills (05 Oct 2024)
  Zana Buçinca, S. Swaroop, Amanda E. Paluch, Finale Doshi-Velez, Krzysztof Z. Gajos

• SurvLIME: A method for explaining machine learning survival models (18 Mar 2020)
  M. Kovalev, Lev V. Utkin, E. Kasimov

• Explanations based on the Missing: Towards Contrastive Explanations with Pertinent Negatives (21 Feb 2018) [FAtt]
  Amit Dhurandhar, Pin-Yu Chen, Ronny Luss, Chun-Chen Tu, Pai-Shun Ting, Karthikeyan Shanmugam, Payel Das

• A comparative study of fairness-enhancing interventions in machine learning (13 Feb 2018) [FaML]
  Sorelle A. Friedler, C. Scheidegger, Suresh Venkatasubramanian, Sonam Choudhary, Evan P. Hamilton, Derek Roth

• A Survey Of Methods For Explaining Black Box Models (06 Feb 2018) [XAI]
  Riccardo Guidotti, A. Monreale, Salvatore Ruggieri, Franco Turini, D. Pedreschi, F. Giannotti

• Interpretable Policies for Reinforcement Learning by Genetic Programming (12 Dec 2017) [OffRL]
  D. Hein, Steffen Udluft, Thomas Runkler

• Explainable AI: Beware of Inmates Running the Asylum Or: How I Learnt to Stop Worrying and Love the Social and Behavioural Sciences (02 Dec 2017) [AI4TS, SyDa]
  Tim Miller, Piers Howe, L. Sonenberg

• The Promise and Peril of Human Evaluation for Model Interpretability (20 Nov 2017)
  Bernease Herman

• MAGIX: Model Agnostic Globally Interpretable Explanations (22 Jun 2017) [FAtt]
  Nikaash Puri, Piyush B. Gupta, Pratiksha Agarwal, Sukriti Verma, Balaji Krishnamurthy

• Streaming Weak Submodularity: Interpreting Neural Networks on the Fly (08 Mar 2017)
  Ethan R. Elenberg, A. Dimakis, Moran Feldman, Amin Karbasi

• Axiomatic Attribution for Deep Networks (04 Mar 2017) [OOD, FAtt]
  Mukund Sundararajan, Ankur Taly, Qiqi Yan

• Towards A Rigorous Science of Interpretable Machine Learning (28 Feb 2017) [XAI, FaML]
  Finale Doshi-Velez, Been Kim

• Rationalization: A Neural Machine Translation Approach to Generating Natural Language Explanations (25 Feb 2017)
  Upol Ehsan, Brent Harrison, Larry Chan, Mark O. Riedl

• An unexpected unity among methods for interpreting model predictions (22 Nov 2016) [FAtt]
  Scott M. Lundberg, Su-In Lee

• TreeView: Peeking into Deep Neural Networks Via Feature-Space Partitioning (22 Nov 2016)
  Jayaraman J. Thiagarajan, B. Kailkhura, P. Sattigeri, Karthikeyan N. Ramamurthy

• Grad-CAM: Visual Explanations from Deep Networks via Gradient-based Localization (07 Oct 2016) [FAtt]
  Ramprasaath R. Selvaraju, Michael Cogswell, Abhishek Das, Ramakrishna Vedantam, Devi Parikh, Dhruv Batra

• Semantics derived automatically from language corpora contain human-like biases (25 Aug 2016)
  Aylin Caliskan, J. Bryson, Arvind Narayanan

• Top-down Neural Attention by Excitation Backprop (01 Aug 2016)
  Jianming Zhang, Zhe Lin, Jonathan Brandt, Xiaohui Shen, Stan Sclaroff

• Rationalizing Neural Predictions (13 Jun 2016)
  Tao Lei, Regina Barzilay, Tommi Jaakkola

• The Mythos of Model Interpretability (10 Jun 2016) [FaML]
  Zachary Chase Lipton

• The Latin American Giant Observatory: a successful collaboration in Latin America based on Cosmic Rays and computer science domains (30 May 2016)
  Hernán Asorey, R. Mayo-García, L. Núñez, M. Pascual, A. J. Rubio-Montero, M. Suárez-Durán, L. A. Torres-Niño

• Generating Visual Explanations (28 Mar 2016) [VLM, FAtt]
  Lisa Anne Hendricks, Zeynep Akata, Marcus Rohrbach, Jeff Donahue, Bernt Schiele, Trevor Darrell

• "Why Should I Trust You?": Explaining the Predictions of Any Classifier (16 Feb 2016) [FAtt, FaML]
  Marco Tulio Ribeiro, Sameer Singh, Carlos Guestrin

• Explaining NonLinear Classification Decisions with Deep Taylor Decomposition (08 Dec 2015) [FAtt]
  G. Montavon, Sebastian Lapuschkin, Alexander Binder, Wojciech Samek, Klaus-Robert Müller