TreeView: Peeking into Deep Neural Networks Via Feature-Space Partitioning

22 November 2016
Jayaraman J. Thiagarajan, B. Kailkhura, P. Sattigeri, Karthikeyan N. Ramamurthy

Papers citing "TreeView: Peeking into Deep Neural Networks Via Feature-Space Partitioning"

10 / 10 papers shown
Neural Basis Models for Interpretability
Filip Radenovic, Abhimanyu Dubey, D. Mahajan
FAtt · 27 May 2022

Counterfactuals and Causability in Explainable Artificial Intelligence: Theory, Algorithms, and Applications
Yu-Liang Chou, Catarina Moreira, P. Bruza, Chun Ouyang, Joaquim A. Jorge
CML · 07 Mar 2021

Why model why? Assessing the strengths and limitations of LIME
Jurgen Dieber, S. Kirrane
FAtt · 30 Nov 2020

Embedded Encoder-Decoder in Convolutional Networks Towards Explainable AI
A. Tavanaei
XAI · 19 Jun 2020

Explainable Artificial Intelligence (XAI): Concepts, Taxonomies, Opportunities and Challenges toward Responsible AI
Alejandro Barredo Arrieta, Natalia Díaz Rodríguez, Javier Del Ser, Adrien Bennetot, Siham Tabik, ..., S. Gil-Lopez, Daniel Molina, Richard Benjamins, Raja Chatila, Francisco Herrera
XAI · 22 Oct 2019

Testing and verification of neural-network-based safety-critical control software: A systematic literature review
Jin Zhang, Jingyue Li
05 Oct 2019

Generative Counterfactual Introspection for Explainable Deep Learning
Shusen Liu, B. Kailkhura, Donald Loveland, Yong Han
06 Jul 2019

Interpreting Layered Neural Networks via Hierarchical Modular Representation
C. Watanabe
03 Oct 2018

Contrastive Explanations with Local Foil Trees
J. V. D. Waa, M. Robeer, J. Diggelen, Matthieu J. S. Brinkhuis, Mark Antonius Neerincx
FAtt · 19 Jun 2018

Interpretable Deep Convolutional Neural Networks via Meta-learning
Xuan Liu, Xiaoguang Wang, Stan Matwin
FaML · 02 Feb 2018