arXiv:1908.04494 · Cited By
Regional Tree Regularization for Interpretability in Black Box Models
13 August 2019
Mike Wu, S. Parbhoo, M. C. Hughes, R. Kindle, Leo Anthony Celi, Maurizio Zazzi, Volker Roth, Finale Doshi-Velez

Papers citing "Regional Tree Regularization for Interpretability in Black Box Models" (19 papers)
Learning Interpretable Logic Rules from Deep Vision Models
Chuqin Geng, Yuhe Jiang, Ziyu Zhao, Haolin Ye, Zhaoyue Wang, X. Si (13 Mar 2025) [NAI, FAtt, VLM]
Tree-Based Leakage Inspection and Control in Concept Bottleneck Models
Angelos Ragkousis, Sonali Parbhoo (08 Oct 2024)
Enabling Regional Explainability by Automatic and Model-agnostic Rule Extraction
Yu Chen, Tianyu Cui, Alexander Capstick, Nan Fletcher-Loyd, Payam Barnaghi (25 Jun 2024)
A Design Trajectory Map of Human-AI Collaborative Reinforcement Learning Systems: Survey and Taxonomy
Zhaoxing Li (16 May 2024)
CA-Stream: Attention-based pooling for interpretable image recognition
Felipe Torres, Hanwei Zhang, R. Sicre, Stéphane Ayache, Yannis Avrithis (23 Apr 2024)
An Interpretable Power System Transient Stability Assessment Method with Expert Guiding Neural-Regression-Tree
Hanxuan Wang, Na Lu, Zixuan Wang, Jiacheng Liu, Jun Liu (03 Apr 2024)
Interpretable Reinforcement Learning for Robotics and Continuous Control
Rohan R. Paleja, Letian Chen, Yaru Niu, Andrew Silva, Zhaoxin Li, ..., K. Chang, H. E. Tseng, Yan Wang, S. Nageshrao, Matthew C. Gombolay (16 Nov 2023)
Going Beyond XAI: A Systematic Survey for Explanation-Guided Learning
Yuyang Gao, Siyi Gu, Junji Jiang, S. Hong, Dazhou Yu, Liang Zhao (07 Dec 2022)
Computing Abductive Explanations for Boosted Trees
Gilles Audemard, Jean-Marie Lagniez, Pierre Marquis, N. Szczepanski (16 Sep 2022)
A Survey of Neural Trees
Haoling Li, Mingli Song, Mengqi Xue, Haofei Zhang, Jingwen Ye, Lechao Cheng, Mingli Song (07 Sep 2022) [AI4CE]
Leveraging Explanations in Interactive Machine Learning: An Overview
Stefano Teso, Öznur Alkan, Wolfgang Stammer, Elizabeth M. Daly (29 Jul 2022) [XAI, FAtt, LRM]
NN2Rules: Extracting Rule List from Neural Networks
G. R. Lal, Varun Mithal (04 Jul 2022)
The Health Gym: Synthetic Health-Related Datasets for the Development of Reinforcement Learning Algorithms
N. Kuo, Mark Polizzotto, S. Finfer, Federico Garcia, Anders Sönnerborg, Maurizio Zazzi, Michael Böhm, Louisa R Jorm, S. Barbieri (12 Mar 2022) [OOD]
Learning Interpretable, High-Performing Policies for Autonomous Driving
Rohan R. Paleja, Yaru Niu, Andrew Silva, Chace Ritchie, Sugju Choi, Matthew C. Gombolay (04 Feb 2022)
From Anecdotal Evidence to Quantitative Evaluation Methods: A Systematic Review on Evaluating Explainable AI
Meike Nauta, Jan Trienes, Shreyasi Pathak, Elisa Nguyen, Michelle Peters, Yasmin Schmitt, Jorg Schlotterer, M. V. Keulen, C. Seifert (20 Jan 2022) [ELM, XAI]
Defense Against Explanation Manipulation
Ruixiang Tang, Ninghao Liu, Fan Yang, Na Zou, Xia Hu (08 Nov 2021) [AAML]
A Survey on Neural Network Interpretability
Yu Zhang, Peter Tiño, A. Leonardis, K. Tang (28 Dec 2020) [FaML, XAI]
On Explaining Decision Trees
Yacine Izza, Alexey Ignatiev, Sasha Rubin (21 Oct 2020) [FAtt]
Methods for Interpreting and Understanding Deep Neural Networks
G. Montavon, Wojciech Samek, K. Müller (24 Jun 2017) [FaML]