Towards A Rigorous Science of Interpretable Machine Learning
Finale Doshi-Velez, Been Kim
28 February 2017 (arXiv:1702.08608)
Tags: XAI, FaML

Papers citing "Towards A Rigorous Science of Interpretable Machine Learning"

Showing 50 of 403 citing papers.
Artificial Intelligence for Pediatric Ophthalmology
J. Reid, Eric Eaton. 06 Apr 2019.

Summit: Scaling Deep Learning Interpretability by Visualizing Activation and Attribution Summarizations
Fred Hohman, Haekyu Park, Caleb Robinson, Duen Horng Chau. Tags: FAtt, 3DH, HAI. 04 Apr 2019.

VINE: Visualizing Statistical Interactions in Black Box Models
M. Britton. Tags: FAtt. 01 Apr 2019.

Informed Machine Learning -- A Taxonomy and Survey of Integrating Knowledge into Learning Systems
Laura von Rueden, S. Mayer, Katharina Beckh, B. Georgiev, Sven Giesselbach, ..., Rajkumar Ramamurthy, Michal Walczak, Jochen Garcke, Christian Bauckhage, Jannis Schuecker. 29 Mar 2019.

A Grounded Interaction Protocol for Explainable Artificial Intelligence
Prashan Madumal, Tim Miller, L. Sonenberg, F. Vetere. 05 Mar 2019.

Using Causal Analysis to Learn Specifications from Task Demonstrations
Daniel Angelov, Yordan V. Hristov, S. Ramamoorthy. Tags: CML. 04 Mar 2019.

Ask Not What AI Can Do, But What AI Should Do: Towards a Framework of Task Delegability
Brian Lubars, Chenhao Tan. 08 Feb 2019.

Towards Automatic Concept-based Explanations
Amirata Ghorbani, James Wexler, James Zou, Been Kim. Tags: FAtt, LRM. 07 Feb 2019.

Fairwashing: the risk of rationalization
Ulrich Aivodji, Hiromi Arai, O. Fortineau, Sébastien Gambs, Satoshi Hara, Alain Tapp. Tags: FaML. 28 Jan 2019.

Explaining Explanations to Society
Leilani H. Gilpin, Cecilia Testart, Nathaniel Fruchter, Julius Adebayo. Tags: XAI. 19 Jan 2019.

Optimization Problems for Machine Learning: A Survey
Claudio Gambella, Bissan Ghaddar, Joe Naoum-Sawaya. Tags: AI4CE. 16 Jan 2019.

A multi-task deep learning model for the classification of Age-related Macular Degeneration
Qingyu Chen, Yifan Peng, T. Keenan, S. Dharssi, Elvira Agrón, W. Wong, E. Chew, Zhiyong Lu. Tags: BDL, MedIm. 02 Dec 2018.

Image Reconstruction with Predictive Filter Flow
Shu Kong, Charless C. Fowlkes. Tags: SupR. 28 Nov 2018.

Scalable agent alignment via reward modeling: a research direction
Jan Leike, David M. Krueger, Tom Everitt, Miljan Martic, Vishal Maini, Shane Legg. 19 Nov 2018.

Secure Deep Learning Engineering: A Software Quality Assurance Perspective
L. Ma, Felix Juefei-Xu, Minhui Xue, Q. Hu, Sen Chen, Bo-wen Li, Yang Liu, Jianjun Zhao, Jianxiong Yin, Simon See. Tags: AAML. 10 Oct 2018.

What made you do this? Understanding black-box decisions with sufficient input subsets
Brandon Carter, Jonas W. Mueller, Siddhartha Jain, David K Gifford. Tags: FAtt. 09 Oct 2018.

Local Explanation Methods for Deep Neural Networks Lack Sensitivity to Parameter Values
Julius Adebayo, Justin Gilmer, Ian Goodfellow, Been Kim. Tags: FAtt, AAML. 08 Oct 2018.

Interpreting Layered Neural Networks via Hierarchical Modular Representation
C. Watanabe. 03 Oct 2018.

Extractive Adversarial Networks: High-Recall Explanations for Identifying Personal Attacks in Social Media Posts
Samuel Carton, Qiaozhu Mei, Paul Resnick. Tags: FAtt, AAML. 01 Sep 2018.

An Operation Sequence Model for Explainable Neural Machine Translation
Felix Stahlberg, Danielle Saunders, Bill Byrne. Tags: LRM, MILM. 29 Aug 2018.

Techniques for Interpretable Machine Learning
Mengnan Du, Ninghao Liu, Xia Hu. Tags: FaML. 31 Jul 2018.

Automated Data Slicing for Model Validation: A Big data - AI Integration Approach
Yeounoh Chung, Tim Kraska, N. Polyzotis, Ki Hyun Tae, Steven Euijong Whang. 16 Jul 2018.

Model Agnostic Supervised Local Explanations
Gregory Plumb, Denali Molitor, Ameet Talwalkar. Tags: FAtt, LRM, MILM. 09 Jul 2018.

xGEMs: Generating Examplars to Explain Black-Box Models
Shalmali Joshi, Oluwasanmi Koyejo, Been Kim, Joydeep Ghosh. Tags: MLAU. 22 Jun 2018.

Learning Qualitatively Diverse and Interpretable Rules for Classification
A. Ross, Weiwei Pan, Finale Doshi-Velez. 22 Jun 2018.

Interpretable to Whom? A Role-based Model for Analyzing Interpretable Machine Learning Systems
Richard J. Tomsett, Dave Braines, Daniel Harborne, Alun D. Preece, Supriyo Chakraborty. Tags: FaML. 20 Jun 2018.

Towards Robust Interpretability with Self-Explaining Neural Networks
David Alvarez-Melis, Tommi Jaakkola. Tags: MILM, XAI. 20 Jun 2018.

Contrastive Explanations with Local Foil Trees
J. V. D. Waa, M. Robeer, J. Diggelen, Matthieu J. S. Brinkhuis, Mark Antonius Neerincx. Tags: FAtt. 19 Jun 2018.

Instance-Level Explanations for Fraud Detection: A Case Study
Dennis Collaris, L. M. Vink, J. V. Wijk. 19 Jun 2018.

Learning Kolmogorov Models for Binary Random Variables
H. Ghauch, Mikael Skoglund, H. S. Ghadikolaei, Carlo Fischione, A. H. Sayed. 06 Jun 2018.

Performance Metric Elicitation from Pairwise Classifier Comparisons
G. Hiranandani, Shant Boodaghians, R. Mehta, Oluwasanmi Koyejo. 05 Jun 2018.

Explaining Explanations: An Overview of Interpretability of Machine Learning
Leilani H. Gilpin, David Bau, Ben Z. Yuan, Ayesha Bajwa, Michael A. Specter, Lalana Kagal. Tags: XAI. 31 May 2018.

Human-in-the-Loop Interpretability Prior
Isaac Lage, A. Ross, Been Kim, S. Gershman, Finale Doshi-Velez. 29 May 2018.

Local Rule-Based Explanations of Black Box Decision Systems
Riccardo Guidotti, A. Monreale, Salvatore Ruggieri, D. Pedreschi, Franco Turini, F. Giannotti. 28 May 2018.

Disentangling Controllable and Uncontrollable Factors of Variation by Interacting with the World
Yoshihide Sawada. Tags: DRL. 19 Apr 2018.

Entanglement-guided architectures of machine learning by quantum tensor network
Yuhan Liu, Xiao Zhang, M. Lewenstein, Shi-Ju Ran. 24 Mar 2018.

Explanation Methods in Deep Learning: Users, Values, Concerns and Challenges
Gabrielle Ras, Marcel van Gerven, W. Haselager. Tags: XAI. 20 Mar 2018.

Constant-Time Predictive Distributions for Gaussian Processes
Geoff Pleiss, J. Gardner, Kilian Q. Weinberger, A. Wilson. 16 Mar 2018.

Structural Agnostic Modeling: Adversarial Learning of Causal Graphs
Diviyan Kalainathan, Olivier Goudet, Isabelle M Guyon, David Lopez-Paz, Michèle Sebag. Tags: CML. 13 Mar 2018.

Deep k-Nearest Neighbors: Towards Confident, Interpretable and Robust Deep Learning
Nicolas Papernot, Patrick D. McDaniel. Tags: OOD, AAML. 13 Mar 2018.

Adversarial Malware Binaries: Evading Deep Learning for Malware Detection in Executables
Bojan Kolosnjaji, Ambra Demontis, Battista Biggio, Davide Maiorca, Giorgio Giacinto, Claudia Eckert, Fabio Roli. Tags: AAML. 12 Mar 2018.

Explaining Black-box Android Malware Detection
Marco Melis, Davide Maiorca, Battista Biggio, Giorgio Giacinto, Fabio Roli. Tags: AAML, FAtt. 09 Mar 2018.

The Challenge of Crafting Intelligible Intelligence
Daniel S. Weld, Gagan Bansal. 09 Mar 2018.

Teaching Categories to Human Learners with Visual Explanations
Oisin Mac Aodha, Shihan Su, Yuxin Chen, Pietro Perona, Yisong Yue. 20 Feb 2018.

How do Humans Understand Explanations from Machine Learning Systems? An Evaluation of the Human-Interpretability of Explanation
Menaka Narayanan, Emily Chen, Jeffrey He, Been Kim, S. Gershman, Finale Doshi-Velez. Tags: FAtt, XAI. 02 Feb 2018.

Inverse Classification for Comparison-based Interpretability in Machine Learning
Thibault Laugel, Marie-Jeanne Lesot, Christophe Marsala, X. Renard, Marcin Detyniecki. 22 Dec 2017.

Interpretability Beyond Feature Attribution: Quantitative Testing with Concept Activation Vectors (TCAV)
Been Kim, Martin Wattenberg, Justin Gilmer, Carrie J. Cai, James Wexler, F. Viégas, Rory Sayres. Tags: FAtt. 30 Nov 2017.

A Formal Framework to Characterize Interpretability of Procedures
Amit Dhurandhar, Vijay Iyengar, Ronny Luss, Karthikeyan Shanmugam. 12 Jul 2017.

SmoothGrad: removing noise by adding noise
D. Smilkov, Nikhil Thorat, Been Kim, F. Viégas, Martin Wattenberg. Tags: FAtt, ODL. 12 Jun 2017.

Contextual Explanation Networks
Maruan Al-Shedivat, Kumar Avinava Dubey, Eric P. Xing. Tags: CML. 29 May 2017.