TimeREISE: Time-series Randomized Evolving Input Sample Explanation
arXiv: 2202.07952 · 16 February 2022
Dominique Mercier, Andreas Dengel, Sheraz Ahmed
Tags: AI4TS
Papers citing "TimeREISE: Time-series Randomized Evolving Input Sample Explanation" (17 papers shown)
Time to Focus: A Comprehensive Benchmark Using Time Series Attribution Methods
Dominique Mercier, Jwalin Bhatt, Andreas Dengel, Sheraz Ahmed · Tags: AI4TS · 11 citations · 08 Feb 2022

Explaining Time Series Predictions with Dynamic Masks
Jonathan Crabbé, M. Schaar · Tags: FAtt, AI4TS · 81 citations · 09 Jun 2021

Impact of Legal Requirements on Explainability in Machine Learning
Adrien Bibal, Michael Lognoul, A. D. Streel, Benoit Frénay · Tags: ELM, AILaw, FaML · 9 citations · 10 Jul 2020

Opportunities and Challenges in Explainable Artificial Intelligence (XAI): A Survey
Arun Das, P. Rad · Tags: XAI · 601 citations · 16 Jun 2020

InceptionTime: Finding AlexNet for Time Series Classification
Hassan Ismail Fawaz, Benjamin Lucas, Germain Forestier, Charlotte Pelletier, Daniel F. Schmidt, J. Weber, Geoffrey I. Webb, L. Idoumghar, Pierre-Alain Muller, François Petitjean · Tags: AI4TS · 1,096 citations · 11 Sep 2019

On the (In)fidelity and Sensitivity for Explanations
Chih-Kuan Yeh, Cheng-Yu Hsieh, A. Suggala, David I. Inouye, Pradeep Ravikumar · Tags: FAtt · 451 citations · 27 Jan 2019

Sanity Checks for Saliency Maps
Julius Adebayo, Justin Gilmer, M. Muelly, Ian Goodfellow, Moritz Hardt, Been Kim · Tags: FAtt, AAML, XAI · 1,963 citations · 08 Oct 2018

On the Robustness of Interpretability Methods
David Alvarez-Melis, Tommi Jaakkola · 526 citations · 21 Jun 2018

RISE: Randomized Input Sampling for Explanation of Black-box Models
Vitali Petsiuk, Abir Das, Kate Saenko · Tags: FAtt · 1,169 citations · 19 Jun 2018

TSViz: Demystification of Deep Learning Models for Time-Series Analysis
Shoaib Ahmed Siddiqui, Dominique Mercier, Mohsin Munir, Andreas Dengel, Sheraz Ahmed · Tags: FAtt, AI4TS · 84 citations · 08 Feb 2018

Visual Interpretability for Deep Learning: a Survey
Quanshi Zhang, Song-Chun Zhu · Tags: FaML, HAI · 819 citations · 02 Feb 2018

Interpretable Explanations of Black Boxes by Meaningful Perturbation
Ruth C. Fong, Andrea Vedaldi · Tags: FAtt, AAML · 1,517 citations · 11 Apr 2017

Axiomatic Attribution for Deep Networks
Mukund Sundararajan, Ankur Taly, Qiqi Yan · Tags: OOD, FAtt · 5,968 citations · 04 Mar 2017

Inception-v4, Inception-ResNet and the Impact of Residual Connections on Learning
Christian Szegedy, Sergey Ioffe, Vincent Vanhoucke, Alexander A. Alemi · 14,223 citations · 23 Feb 2016

"Why Should I Trust You?": Explaining the Predictions of Any Classifier
Marco Tulio Ribeiro, Sameer Singh, Carlos Guestrin · Tags: FAtt, FaML · 16,931 citations · 16 Feb 2016

Striving for Simplicity: The All Convolutional Net
Jost Tobias Springenberg, Alexey Dosovitskiy, Thomas Brox, Martin Riedmiller · Tags: FAtt · 4,665 citations · 21 Dec 2014

Visualizing and Understanding Convolutional Networks
Matthew D. Zeiler, Rob Fergus · Tags: FAtt, SSL · 15,861 citations · 12 Nov 2013