ResearchTrend.AI

TSXplain: Demystification of DNN Decisions for Time-Series using Natural Language and Statistical Features
arXiv:1905.06175
15 May 2019
Mohsin Munir, Shoaib Ahmed Siddiqui, Ferdinand Küsters, Dominique Mercier, Andreas Dengel, Sheraz Ahmed
AI4TS
ArXiv (abs) · PDF · HTML

Papers citing "TSXplain: Demystification of DNN Decisions for Time-Series using Natural Language and Statistical Features"

13 / 13 papers shown
Deep Sparse Coding for Non-Intrusive Load Monitoring
Shikha Singh, A. Majumdar
58 · 126 · 0 · 11 Dec 2019

Interpretable Convolutional Neural Networks via Feedforward Design
C.-C. Jay Kuo, Min Zhang, Siyang Li, Jiali Duan, Yueru Chen
64 · 157 · 0 · 05 Oct 2018

Textual Explanations for Self-Driving Vehicles
Jinkyu Kim, Anna Rohrbach, Trevor Darrell, John F. Canny, Zeynep Akata
60 · 346 · 0 · 30 Jul 2018

Towards Robust Interpretability with Self-Explaining Neural Networks
David Alvarez-Melis, Tommi Jaakkola
MILM, XAI
130 · 948 · 0 · 20 Jun 2018

Training Classifiers with Natural Language Explanations
Braden Hancock, P. Varma, Stephanie Wang, Martin Bringmann, Percy Liang, Christopher Ré
FAtt
87 · 155 · 0 · 10 May 2018

TSViz: Demystification of Deep Learning Models for Time-Series Analysis
Shoaib Ahmed Siddiqui, Dominique Mercier, Mohsin Munir, Andreas Dengel, Sheraz Ahmed
FAtt, AI4TS
88 · 84 · 0 · 08 Feb 2018

Network Dissection: Quantifying Interpretability of Deep Visual Representations
David Bau, Bolei Zhou, A. Khosla, A. Oliva, Antonio Torralba
MILM, FAtt
158 · 1,526 · 1 · 19 Apr 2017

Generating Visual Explanations
Lisa Anne Hendricks, Zeynep Akata, Marcus Rohrbach, Jeff Donahue, Bernt Schiele, Trevor Darrell
VLM, FAtt
97 · 621 · 0 · 28 Mar 2016

Understanding Neural Networks Through Deep Visualization
J. Yosinski, Jeff Clune, Anh Totti Nguyen, Thomas J. Fuchs, Hod Lipson
FAtt, AI4CE
126 · 1,875 · 0 · 22 Jun 2015

Understanding Deep Image Representations by Inverting Them
Aravindh Mahendran, Andrea Vedaldi
FAtt
131 · 1,968 · 0 · 26 Nov 2014

OpenML: networked science in machine learning
Joaquin Vanschoren, Jan N. van Rijn, B. Bischl, Luís Torgo
FedML, AI4CE
179 · 1,328 · 0 · 29 Jul 2014

Deep Inside Convolutional Networks: Visualising Image Classification Models and Saliency Maps
Karen Simonyan, Andrea Vedaldi, Andrew Zisserman
FAtt
317 · 7,321 · 0 · 20 Dec 2013

Visualizing and Understanding Convolutional Networks
Matthew D. Zeiler, Rob Fergus
FAtt, SSL
603 · 15,907 · 0 · 12 Nov 2013
