Time to Focus: A Comprehensive Benchmark Using Time Series Attribution Methods
arXiv: 2202.03759 · 8 February 2022
Dominique Mercier, Jwalin Bhatt, Andreas Dengel, Sheraz Ahmed
Topics: AI4TS
Papers citing "Time to Focus: A Comprehensive Benchmark Using Time Series Attribution Methods" (18 papers)
| Title | Authors | Topics | Citations | Date |
| --- | --- | --- | --- | --- |
| Class-Dependent Perturbation Effects in Evaluating Time Series Attributions | Gregor Baer, Isel Grau, Chao Zhang, Pieter Van Gorp | AAML | 1 | 24 Feb 2025 |
| Robust Explainability: A Tutorial on Gradient-Based Attribution Methods for Deep Neural Networks | Ian E. Nielsen, Dimah Dera, Ghulam Rasool, N. Bouaynaya, R. Ramachandran | FAtt | 82 | 23 Jul 2021 |
| How to choose an Explainability Method? Towards a Methodical Implementation of XAI in Practice | T. Vermeire, Thibault Laugel, X. Renard, David Martens, Marcin Detyniecki | - | 16 | 09 Jul 2021 |
| Explaining Time Series Predictions with Dynamic Masks | Jonathan Crabbé, M. Schaar | FAtt, AI4TS | 81 | 09 Jun 2021 |
| Sampling Permutations for Shapley Value Estimation | Rory Mitchell, Joshua N. Cooper, E. Frank, G. Holmes | - | 120 | 25 Apr 2021 |
| Impact of Legal Requirements on Explainability in Machine Learning | Adrien Bibal, Michael Lognoul, A. D. Streel, Benoit Frénay | ELM, AILaw, FaML | 9 | 10 Jul 2020 |
| Opportunities and Challenges in Explainable Artificial Intelligence (XAI): A Survey | Arun Das, P. Rad | XAI | 604 | 16 Jun 2020 |
| On the (In)fidelity and Sensitivity for Explanations | Chih-Kuan Yeh, Cheng-Yu Hsieh, A. Suggala, David I. Inouye, Pradeep Ravikumar | FAtt | 454 | 27 Jan 2019 |
| Sanity Checks for Saliency Maps | Julius Adebayo, Justin Gilmer, M. Muelly, Ian Goodfellow, Moritz Hardt, Been Kim | FAtt, AAML, XAI | 1,970 | 08 Oct 2018 |
| TSViz: Demystification of Deep Learning Models for Time-Series Analysis | Shoaib Ahmed Siddiqui, Dominique Mercier, Mohsin Munir, Andreas Dengel, Sheraz Ahmed | FAtt, AI4TS | 84 | 08 Feb 2018 |
| Visual Interpretability for Deep Learning: a Survey | Quanshi Zhang, Song-Chun Zhu | FaML, HAI | 822 | 02 Feb 2018 |
| A Unified Approach to Interpreting Model Predictions | Scott M. Lundberg, Su-In Lee | FAtt | 22,090 | 22 May 2017 |
| Axiomatic Attribution for Deep Networks | Mukund Sundararajan, Ankur Taly, Qiqi Yan | OOD, FAtt | 6,027 | 04 Mar 2017 |
| Not Just a Black Box: Learning Important Features Through Propagating Activation Differences | Avanti Shrikumar, Peyton Greenside, A. Shcherbina, A. Kundaje | FAtt | 791 | 05 May 2016 |
| "Why Should I Trust You?": Explaining the Predictions of Any Classifier | Marco Tulio Ribeiro, Sameer Singh, Carlos Guestrin | FAtt, FaML | 17,071 | 16 Feb 2016 |
| Striving for Simplicity: The All Convolutional Net | Jost Tobias Springenberg, Alexey Dosovitskiy, Thomas Brox, Martin Riedmiller | FAtt | 4,681 | 21 Dec 2014 |
| Deep Inside Convolutional Networks: Visualising Image Classification Models and Saliency Maps | Karen Simonyan, Andrea Vedaldi, Andrew Zisserman | FAtt | 7,321 | 20 Dec 2013 |
| Visualizing and Understanding Convolutional Networks | Matthew D. Zeiler, Rob Fergus | FAtt, SSL | 15,907 | 12 Nov 2013 |