ResearchTrend.AI
Home › Papers › 1804.01955 › Cited By
Explanations of model predictions with live and breakDown packages
M. Staniak, P. Biecek · 5 April 2018
Topic tags: FAtt

Papers citing "Explanations of model predictions with live and breakDown packages"

9 of 9 citing papers shown
  1. A Systematic Literature Review on Explainability for Machine/Deep Learning-based Software Engineering Research
     Sicong Cao, Xiaobing Sun, Ratnadira Widyasari, David Lo, Xiaoxue Wu, ..., Jiale Zhang, Bin Li, Wei Liu, Di Wu, Yixin Chen
     26 Jan 2024 · Counts: 84 / 7 / 0

  2. Explainability in Deep Reinforcement Learning
     Alexandre Heuillet, Fabien Couthouis, Natalia Díaz Rodríguez
     15 Aug 2020 · Tags: XAI · Counts: 118 / 281 / 0

  3. archivist: An R Package for Managing, Recording and Restoring Data Analysis Results
     P. Biecek, M. Kosinski
     27 Jun 2017 · Tags: KELM · Counts: 18 / 18 / 0

  4. MAGIX: Model Agnostic Globally Interpretable Explanations
     Nikaash Puri, Piyush B. Gupta, Pratiksha Agarwal, Sukriti Verma, Balaji Krishnamurthy
     22 Jun 2017 · Tags: FAtt · Counts: 91 / 41 / 0

  5. A Unified Approach to Interpreting Model Predictions
     Scott M. Lundberg, Su-In Lee
     22 May 2017 · Tags: FAtt · Counts: 688 / 21,613 / 0

  6. Nothing Else Matters: Model-Agnostic Explanations By Identifying Prediction Invariance
     Marco Tulio Ribeiro, Sameer Singh, Carlos Guestrin
     17 Nov 2016 · Tags: FAtt · Counts: 39 / 64 / 0

  7. Model-Agnostic Interpretability of Machine Learning
     Marco Tulio Ribeiro, Sameer Singh, Carlos Guestrin
     16 Jun 2016 · Tags: FAtt, FaML · Counts: 66 / 836 / 0

  8. "Why Should I Trust You?": Explaining the Predictions of Any Classifier
     Marco Tulio Ribeiro, Sameer Singh, Carlos Guestrin
     16 Feb 2016 · Tags: FAtt, FaML · Counts: 746 / 16,828 / 0

  9. Visualizing and Understanding Convolutional Networks
     Matthew D. Zeiler, Rob Fergus
     12 Nov 2013 · Tags: FAtt, SSL · Counts: 389 / 15,825 / 0