Time to Focus: A Comprehensive Benchmark Using Time Series Attribution Methods

8 February 2022
Dominique Mercier, Jwalin Bhatt, Andreas Dengel, Sheraz Ahmed
AI4TS
arXiv 2202.03759: abs · PDF · HTML

Papers citing "Time to Focus: A Comprehensive Benchmark Using Time Series Attribution Methods"

18 / 18 papers shown
Class-Dependent Perturbation Effects in Evaluating Time Series Attributions
Gregor Baer, Isel Grau, Chao Zhang, Pieter Van Gorp
AAML · 100 · 1 · 0 · 24 Feb 2025

Robust Explainability: A Tutorial on Gradient-Based Attribution Methods for Deep Neural Networks
Ian E. Nielsen, Dimah Dera, Ghulam Rasool, N. Bouaynaya, R. Ramachandran
FAtt · 71 · 82 · 0 · 23 Jul 2021

How to choose an Explainability Method? Towards a Methodical Implementation of XAI in Practice
T. Vermeire, Thibault Laugel, X. Renard, David Martens, Marcin Detyniecki
35 · 16 · 0 · 09 Jul 2021

Explaining Time Series Predictions with Dynamic Masks
Jonathan Crabbé, M. Schaar
FAtt, AI4TS · 93 · 81 · 0 · 09 Jun 2021

Sampling Permutations for Shapley Value Estimation
Rory Mitchell, Joshua N. Cooper, E. Frank, G. Holmes
62 · 120 · 0 · 25 Apr 2021

Impact of Legal Requirements on Explainability in Machine Learning
Adrien Bibal, Michael Lognoul, A. D. Streel, Benoit Frénay
ELM, AILaw, FaML · 51 · 9 · 0 · 10 Jul 2020

Opportunities and Challenges in Explainable Artificial Intelligence (XAI): A Survey
Arun Das, P. Rad
XAI · 169 · 604 · 0 · 16 Jun 2020

On the (In)fidelity and Sensitivity for Explanations
Chih-Kuan Yeh, Cheng-Yu Hsieh, A. Suggala, David I. Inouye, Pradeep Ravikumar
FAtt · 94 · 455 · 0 · 27 Jan 2019

Sanity Checks for Saliency Maps
Julius Adebayo, Justin Gilmer, M. Muelly, Ian Goodfellow, Moritz Hardt, Been Kim
FAtt, AAML, XAI · 152 · 1,972 · 0 · 08 Oct 2018

TSViz: Demystification of Deep Learning Models for Time-Series Analysis
Shoaib Ahmed Siddiqui, Dominique Mercier, Mohsin Munir, Andreas Dengel, Sheraz Ahmed
FAtt, AI4TS · 100 · 84 · 0 · 08 Feb 2018

Visual Interpretability for Deep Learning: a Survey
Quanshi Zhang, Song-Chun Zhu
FaML, HAI · 146 · 822 · 0 · 02 Feb 2018

A Unified Approach to Interpreting Model Predictions
Scott M. Lundberg, Su-In Lee
FAtt · 1.1K · 22,090 · 0 · 22 May 2017

Axiomatic Attribution for Deep Networks
Mukund Sundararajan, Ankur Taly, Qiqi Yan
OOD, FAtt · 193 · 6,027 · 0 · 04 Mar 2017

Not Just a Black Box: Learning Important Features Through Propagating Activation Differences
Avanti Shrikumar, Peyton Greenside, A. Shcherbina, A. Kundaje
FAtt · 94 · 791 · 0 · 05 May 2016

"Why Should I Trust You?": Explaining the Predictions of Any Classifier
"Why Should I Trust You?": Explaining the Predictions of Any Classifier
Marco Tulio Ribeiro
Sameer Singh
Carlos Guestrin
FAttFaML
1.2K
17,071
0
16 Feb 2016
Striving for Simplicity: The All Convolutional Net
Jost Tobias Springenberg, Alexey Dosovitskiy, Thomas Brox, Martin Riedmiller
FAtt · 254 · 4,683 · 0 · 21 Dec 2014

Deep Inside Convolutional Networks: Visualising Image Classification Models and Saliency Maps
Karen Simonyan, Andrea Vedaldi, Andrew Zisserman
FAtt · 317 · 7,321 · 0 · 20 Dec 2013

Visualizing and Understanding Convolutional Networks
Matthew D. Zeiler, Rob Fergus
FAtt, SSL · 605 · 15,907 · 0 · 12 Nov 2013