Right for the Right Reasons: Training Differentiable Models by Constraining their Explanations

10 March 2017 · arXiv:1703.03717
A. Ross, M. C. Hughes, Finale Doshi-Velez
FAtt

Papers citing "Right for the Right Reasons: Training Differentiable Models by Constraining their Explanations"

15 / 115 papers shown

Saliency Learning: Teaching the Model Where to Pay Attention
Reza Ghaeini, Xiaoli Z. Fern, Hamed Shahbazi, Prasad Tadepalli
FAtt, XAI · 22 Feb 2019 · 30 citations

Interpretable machine learning: definitions, methods, and applications
W. James Murdoch, Chandan Singh, Karl Kumbier, R. Abbasi-Asl, Bin Yu
XAI, HAI · 14 Jan 2019 · 1,416 citations

Multimodal Explanations by Predicting Counterfactuality in Videos
Atsushi Kanehira, Kentaro Takemoto, S. Inayoshi, Tatsuya Harada
04 Dec 2018 · 35 citations

Interpretable Neuron Structuring with Graph Spectral Regularization
Alexander Tong, David van Dijk, Jay S. Stanley, Matthew Amodio, Kristina M. Yim, R. Muhle, J. Noonan, Guy Wolf, Smita Krishnaswamy
30 Sep 2018 · 6 citations

Women also Snowboard: Overcoming Bias in Captioning Models (Extended Abstract)
Lisa Anne Hendricks, Kaylee Burns, Kate Saenko, Trevor Darrell, Anna Rohrbach
02 Jul 2018 · 478 citations

Learning Qualitatively Diverse and Interpretable Rules for Classification
A. Ross, Weiwei Pan, Finale Doshi-Velez
22 Jun 2018 · 13 citations

Interpretable to Whom? A Role-based Model for Analyzing Interpretable Machine Learning Systems
Richard J. Tomsett, Dave Braines, Daniel Harborne, Alun D. Preece, Supriyo Chakraborty
FaML · 20 Jun 2018 · 164 citations

Detecting and interpreting myocardial infarction using fully convolutional neural networks
Nils Strodthoff, C. Strodthoff
18 Jun 2018 · 150 citations

Explaining Explanations: An Overview of Interpretability of Machine Learning
Leilani H. Gilpin, David Bau, Ben Z. Yuan, Ayesha Bajwa, Michael A. Specter, Lalana Kagal
XAI · 31 May 2018 · 1,840 citations

Human-in-the-Loop Interpretability Prior
Isaac Lage, A. Ross, Been Kim, S. Gershman, Finale Doshi-Velez
29 May 2018 · 120 citations

Seq2Seq-Vis: A Visual Debugging Tool for Sequence-to-Sequence Models
Hendrik Strobelt, Sebastian Gehrmann, M. Behrisch, Adam Perer, Hanspeter Pfister, Alexander M. Rush
VLM, HAI · 25 Apr 2018 · 239 citations

How do Humans Understand Explanations from Machine Learning Systems? An Evaluation of the Human-Interpretability of Explanation
Menaka Narayanan, Emily Chen, Jeffrey He, Been Kim, S. Gershman, Finale Doshi-Velez
FAtt, XAI · 02 Feb 2018 · 241 citations

Improving the Adversarial Robustness and Interpretability of Deep Neural Networks by Regularizing their Input Gradients
A. Ross, Finale Doshi-Velez
AAML · 26 Nov 2017 · 675 citations

Beyond Sparsity: Tree Regularization of Deep Models for Interpretability
Mike Wu, M. C. Hughes, S. Parbhoo, Maurizio Zazzi, Volker Roth, Finale Doshi-Velez
AI4CE · 16 Nov 2017 · 281 citations

Human Understandable Explanation Extraction for Black-box Classification Models Based on Matrix Factorization
Jaedeok Kim, Ji-Hoon Seo
FAtt · 18 Sep 2017 · 8 citations