ResearchTrend.AI

Training Characteristic Functions with Reinforcement Learning: XAI-methods play Connect Four
S. Wäldchen, Felix Huber, Sebastian Pokutta
arXiv:2202.11797 · 23 February 2022 · FAtt

Papers citing "Training Characteristic Functions with Reinforcement Learning: XAI-methods play Connect Four"

35 papers shown
1. "Interpretable Neural Networks with Frank-Wolfe: Sparse Relevance Maps and Relevance Orderings" by Jan Macdonald, Mathieu Besançon, Sebastian Pokutta (15 Oct 2021). Citations: 12.
2. "Quality Metrics for Transparent Machine Learning With and Without Humans In the Loop Are Not Correlated" by F. Biessmann, D. Refiano (01 Jul 2021). Citations: 10.
3. "Counterfactual Explanations Can Be Manipulated" by Dylan Slack, Sophie Hilgard, Himabindu Lakkaraju, Sameer Singh (04 Jun 2021). Citations: 137.
4. "Adaptive Warm-Start MCTS in AlphaZero-like Deep Reinforcement Learning" by Hui Wang, Mike Preuss, Aske Plaat (13 May 2021). Tags: AI4CE. Citations: 9.
5. "Sampling Permutations for Shapley Value Estimation" by Rory Mitchell, Joshua N. Cooper, E. Frank, G. Holmes (25 Apr 2021). Citations: 119.
6. "Evaluating Explanations: How much do explanations from the teacher aid students?" by Danish Pruthi, Rachit Bansal, Bhuwan Dhingra, Livio Baldini Soares, Michael Collins, Zachary Chase Lipton, Graham Neubig, William W. Cohen (01 Dec 2020). Tags: FAtt, XAI. Citations: 109.
7. "Deep Neural Network Training with Frank-Wolfe" by Sebastian Pokutta, Christoph Spiegel, Max Zimmer (14 Oct 2020). Citations: 27.
8. "Fairwashing Explanations with Off-Manifold Detergent" by Christopher J. Anders, Plamen Pasliev, Ann-Kathrin Dombrowski, K. Müller, Pan Kessel (20 Jul 2020). Tags: FAtt, FaML. Citations: 97.
9. "Shapley explainability on the data manifold" by Christopher Frye, Damien de Mijolla, T. Begley, Laurence Cowton, Megan Stanley, Ilya Feige (01 Jun 2020). Tags: FAtt, TDI. Citations: 99.
10. "When Explanations Lie: Why Many Modified BP Attributions Fail" by Leon Sixt, Maximilian Granz, Tim Landgraf (20 Dec 2019). Tags: BDL, FAtt, XAI. Citations: 132.
11. "Fooling LIME and SHAP: Adversarial Attacks on Post hoc Explanation Methods" by Dylan Slack, Sophie Hilgard, Emily Jia, Sameer Singh, Himabindu Lakkaraju (06 Nov 2019). Tags: FAtt, AAML, MLAU. Citations: 817.
12. "Explainable Artificial Intelligence (XAI): Concepts, Taxonomies, Opportunities and Challenges toward Responsible AI" by Alejandro Barredo Arrieta, Natalia Díaz Rodríguez, Javier Del Ser, Adrien Bennetot, Siham Tabik, ..., S. Gil-Lopez, Daniel Molina, Richard Benjamins, Raja Chatila, Francisco Herrera (22 Oct 2019). Tags: XAI. Citations: 6,251.
13. "The many Shapley values for model explanation" by Mukund Sundararajan, A. Najmi (22 Aug 2019). Tags: TDI, FAtt. Citations: 632.
14. "Explanations can be manipulated and geometry is to blame" by Ann-Kathrin Dombrowski, Maximilian Alber, Christopher J. Anders, M. Ackermann, K. Müller, Pan Kessel (19 Jun 2019). Tags: AAML, FAtt. Citations: 330.
15. "A Rate-Distortion Framework for Explaining Neural Network Decisions" by Jan Macdonald, S. Wäldchen, Sascha Hauch, Gitta Kutyniok (27 May 2019). Citations: 40.
16. "Towards Efficient Data Valuation Based on the Shapley Value" by R. Jia, David Dao, Wei Ping, F. Hubis, Nicholas Hynes, Nezihe Merve Gürel, Yue Liu, Ce Zhang, D. Song, C. Spanos (27 Feb 2019). Tags: TDI. Citations: 412.
17. "Unmasking Clever Hans Predictors and Assessing What Machines Really Learn" by Sebastian Lapuschkin, S. Wäldchen, Alexander Binder, G. Montavon, Wojciech Samek, K. Müller (26 Feb 2019). Citations: 1,009.
18. "Fooling Neural Network Interpretations via Adversarial Model Manipulation" by Juyeon Heo, Sunghwan Joo, Taesup Moon (06 Feb 2019). Tags: AAML, FAtt. Citations: 202.
19. "Sanity Checks for Saliency Maps" by Julius Adebayo, Justin Gilmer, M. Muelly, Ian Goodfellow, Moritz Hardt, Been Kim (08 Oct 2018). Tags: FAtt, AAML, XAI. Citations: 1,963.
20. "A Theoretical Explanation for Perplexing Behaviors of Backpropagation-based Visualizations" by Weili Nie, Yang Zhang, Ankit B. Patel (18 May 2018). Tags: FAtt. Citations: 151.
21. "A Symbolic Approach to Explaining Bayesian Network Classifiers" by Andy Shih, Arthur Choi, Adnan Darwiche (09 May 2018). Tags: FAtt. Citations: 243.
22. "What do we need to build explainable AI systems for the medical domain?" by Andreas Holzinger, Chris Biemann, C. Pattichis, D. Kell (28 Dec 2017). Citations: 689.
23. "Proximal Policy Optimization Algorithms" by John Schulman, Filip Wolski, Prafulla Dhariwal, Alec Radford, Oleg Klimov (20 Jul 2017). Tags: OffRL. Citations: 18,931.
24. "SmoothGrad: removing noise by adding noise" by D. Smilkov, Nikhil Thorat, Been Kim, F. Viégas, Martin Wattenberg (12 Jun 2017). Tags: FAtt, ODL. Citations: 2,221.
25. "A Unified Approach to Interpreting Model Predictions" by Scott M. Lundberg, Su-In Lee (22 May 2017). Tags: FAtt. Citations: 21,815.
26. "Interpretable Explanations of Black Boxes by Meaningful Perturbation" by Ruth C. Fong, Andrea Vedaldi (11 Apr 2017). Tags: FAtt, AAML. Citations: 1,517.
27. "Learning Important Features Through Propagating Activation Differences" by Avanti Shrikumar, Peyton Greenside, A. Kundaje (10 Apr 2017). Tags: FAtt. Citations: 3,865.
28. "Towards A Rigorous Science of Interpretable Machine Learning" by Finale Doshi-Velez, Been Kim (28 Feb 2017). Tags: XAI, FaML. Citations: 3,776.
29. "Deep Reinforcement Learning: An Overview" by Yuxi Li (25 Jan 2017). Tags: OffRL, VLM. Citations: 1,530.
30. "The Latin American Giant Observatory: a successful collaboration in Latin America based on Cosmic Rays and computer science domains" by Hernán Asorey, R. Mayo-García, L. Núñez, M. Pascual, A. J. Rubio-Montero, M. Suárez-Durán, L. A. Torres-Niño (30 May 2016). Citations: 5.
31. ""Why Should I Trust You?": Explaining the Predictions of Any Classifier" by Marco Tulio Ribeiro, Sameer Singh, Carlos Guestrin (16 Feb 2016). Tags: FAtt, FaML. Citations: 16,931.
32. "Evaluating the visualization of what a Deep Neural Network has learned" by Wojciech Samek, Alexander Binder, G. Montavon, Sebastian Lapuschkin, K. Müller (21 Sep 2015). Tags: XAI. Citations: 1,191.
33. "Striving for Simplicity: The All Convolutional Net" by Jost Tobias Springenberg, Alexey Dosovitskiy, Thomas Brox, Martin Riedmiller (21 Dec 2014). Tags: FAtt. Citations: 4,665.
34. "Deep Inside Convolutional Networks: Visualising Image Classification Models and Saliency Maps" by Karen Simonyan, Andrea Vedaldi, Andrew Zisserman (20 Dec 2013). Tags: FAtt. Citations: 7,289.
35. "Visualizing and Understanding Convolutional Networks" by Matthew D. Zeiler, Rob Fergus (12 Nov 2013). Tags: FAtt, SSL. Citations: 15,874.