Greybox XAI: a Neural-Symbolic learning framework to produce interpretable predictions for image classification

26 September 2022
Adrien Bennetot
Gianni Franchi
Javier Del Ser
Raja Chatila
Natalia Díaz Rodríguez
    AAML

Papers citing "Greybox XAI: a Neural-Symbolic learning framework to produce interpretable predictions for image classification"

50 of 55 citing papers shown
Exploring the Trade-off between Plausibility, Change Intensity and Adversarial Power in Counterfactual Explanations using Multi-objective Optimization
Javier Del Ser
Alejandro Barredo Arrieta
Natalia Díaz Rodríguez
Francisco Herrera
Andreas Holzinger
AAML
51
4
0
20 May 2022
Surrogate Gap Minimization Improves Sharpness-Aware Training
Juntang Zhuang
Boqing Gong
Liangzhe Yuan
Huayu Chen
Hartwig Adam
Nicha Dvornek
S. Tatikonda
James Duncan
Ting Liu
68
157
0
15 Mar 2022
Vision Transformer Slimming: Multi-Dimension Searching in Continuous Optimization Space
Arnav Chavan
Zhiqiang Shen
Zhuang Liu
Zechun Liu
Kwang-Ting Cheng
Eric P. Xing
ViT
86
71
0
03 Jan 2022
How to train your ViT? Data, Augmentation, and Regularization in Vision Transformers
Andreas Steiner
Alexander Kolesnikov
Xiaohua Zhai
Ross Wightman
Jakob Uszkoreit
Lucas Beyer
ViT
107
633
0
18 Jun 2021
Delving Deep into the Generalization of Vision Transformers under Distribution Shifts
Chongzhi Zhang
Mingyuan Zhang
Shanghang Zhang
Daisheng Jin
Qiang-feng Zhou
Zhongang Cai
Haiyu Zhao
Xianglong Liu
Ziwei Liu
64
105
0
14 Jun 2021
Scaling Vision Transformers
Xiaohua Zhai
Alexander Kolesnikov
N. Houlsby
Lucas Beyer
ViT
134
1,087
0
08 Jun 2021
When Vision Transformers Outperform ResNets without Pre-training or Strong Data Augmentations
Xiangning Chen
Cho-Jui Hsieh
Boqing Gong
ViT
87
328
0
03 Jun 2021
MLP-Mixer: An all-MLP Architecture for Vision
Ilya O. Tolstikhin
N. Houlsby
Alexander Kolesnikov
Lucas Beyer
Xiaohua Zhai
...
Andreas Steiner
Daniel Keysers
Jakob Uszkoreit
Mario Lucic
Alexey Dosovitskiy
418
2,674
0
04 May 2021
EXplainable Neural-Symbolic Learning (X-NeSyL) methodology to fuse deep learning representations with expert knowledge graphs: the MonuMAI cultural heritage use case
Natalia Díaz Rodríguez
Alberto Lamas
Jules Sanchez
Gianni Franchi
Ivan Donadello
Siham Tabik
David Filliat
P. Cruz
Rosana Montes
Francisco Herrera
121
78
0
24 Apr 2021
Training data-efficient image transformers & distillation through attention
Hugo Touvron
Matthieu Cord
Matthijs Douze
Francisco Massa
Alexandre Sablayrolles
Hervé Jégou
ViT
384
6,768
0
23 Dec 2020
Optimized Loss Functions for Object detection: A Case Study on Nighttime Vehicle Detection
Shang Jiang
Haoran Qin
Binglin Zhang
Jieyu Zheng
63
10
0
11 Nov 2020
An Image is Worth 16x16 Words: Transformers for Image Recognition at Scale
Alexey Dosovitskiy
Lucas Beyer
Alexander Kolesnikov
Dirk Weissenborn
Xiaohua Zhai
...
Matthias Minderer
G. Heigold
Sylvain Gelly
Jakob Uszkoreit
N. Houlsby
ViT
654
41,103
0
22 Oct 2020
Explainable Deep Learning: A Field Guide for the Uninitiated
Gabrielle Ras
Ning Xie
Marcel van Gerven
Derek Doran
AAML
XAI
101
378
0
30 Apr 2020
Multi-Objective Counterfactual Explanations
Susanne Dandl
Christoph Molnar
Martin Binder
B. Bischl
62
258
0
23 Apr 2020
Bounding boxes for weakly supervised segmentation: Global constraints get close to full supervision
H. Kervadec
Jose Dolz
Shanshan Wang
Eric Granger
Ismail Ben Ayed
64
84
0
14 Apr 2020
Measuring the Quality of Explanations: The System Causability Scale (SCS). Comparing Human and Machine Explanations
Andreas Holzinger
André M. Carrington
Heimo Muller
LRM
XAI
ELM
66
307
0
19 Dec 2019
Analysis of Explainers of Black Box Deep Neural Networks for Computer Vision: A Survey
Vanessa Buhrmester
David Münch
Michael Arens
MLAU
FaML
XAI
AAML
96
363
0
27 Nov 2019
Fooling LIME and SHAP: Adversarial Attacks on Post hoc Explanation Methods
Dylan Slack
Sophie Hilgard
Emily Jia
Sameer Singh
Himabindu Lakkaraju
FAtt
AAML
MLAU
73
819
0
06 Nov 2019
Explainable Artificial Intelligence (XAI): Concepts, Taxonomies, Opportunities and Challenges toward Responsible AI
Alejandro Barredo Arrieta
Natalia Díaz Rodríguez
Javier Del Ser
Adrien Bennetot
Siham Tabik
...
S. Gil-Lopez
Daniel Molina
Richard Benjamins
Raja Chatila
Francisco Herrera
XAI
121
6,269
0
22 Oct 2019
Language Models as Knowledge Bases?
Fabio Petroni
Tim Rocktaschel
Patrick Lewis
A. Bakhtin
Yuxiang Wu
Alexander H. Miller
Sebastian Riedel
KELM
AI4MH
571
2,670
0
03 Sep 2019
Neural Probabilistic Logic Programming in DeepProbLog
Robin Manhaeve
Sebastijan Dumancic
Angelika Kimmig
T. Demeester
Luc de Raedt
NAI
94
556
0
18 Jul 2019
Interpretable Counterfactual Explanations Guided by Prototypes
A. V. Looveren
Janis Klaise
FAtt
72
384
0
03 Jul 2019
Learning World Graphs to Accelerate Hierarchical Reinforcement Learning
Wenling Shang
Alexander R. Trott
Stephan Zheng
Caiming Xiong
R. Socher
65
18
0
01 Jul 2019
Kandinsky Patterns
Heimo Mueller
Andreas Holzinger
29
32
0
03 Jun 2019
Model-Agnostic Counterfactual Explanations for Consequential Decisions
Amir-Hossein Karimi
Gilles Barthe
Borja Balle
Isabel Valera
91
321
0
27 May 2019
The Twin-System Approach as One Generic Solution for XAI: An Overview of ANN-CBR Twins for Explaining Deep Learning
Mark T. Keane
Eoin M. Kenny
59
13
0
20 May 2019
Explaining Machine Learning Classifiers through Diverse Counterfactual Explanations
R. Mothilal
Amit Sharma
Chenhao Tan
CML
113
1,021
0
19 May 2019
Neural-Symbolic Computing: An Effective Methodology for Principled Integration of Machine Learning and Reasoning
Artur Garcez
Marco Gori
Luís C. Lamb
Luciano Serafini
Michael Spranger
Son N. Tran
NAI
106
292
0
15 May 2019
The Neuro-Symbolic Concept Learner: Interpreting Scenes, Words, and Sentences From Natural Supervision
Jiayuan Mao
Chuang Gan
Pushmeet Kohli
J. Tenenbaum
Jiajun Wu
NAI
134
698
0
26 Apr 2019
LYRICS: a General Interface Layer to Integrate Logic Inference and Deep Learning
G. Marra
Francesco Giannini
Michelangelo Diligenti
Marco Gori
AI4CE
61
11
0
18 Mar 2019
Weakly Supervised Complementary Parts Models for Fine-Grained Image Classification from the Bottom Up
Weifeng Ge
Xiangru Lin
Yizhou Yu
88
257
0
07 Mar 2019
Measuring Compositionality in Representation Learning
Jacob Andreas
CoGe
65
149
0
19 Feb 2019
Integrating Learning and Reasoning with Deep Logic Models
G. Marra
Francesco Giannini
Michelangelo Diligenti
Marco Gori
NAI
80
57
0
14 Jan 2019
Sanity Checks for Saliency Maps
Julius Adebayo
Justin Gilmer
M. Muelly
Ian Goodfellow
Moritz Hardt
Been Kim
FAtt
AAML
XAI
134
1,967
0
08 Oct 2018
Stakeholders in Explainable AI
Alun D. Preece
Daniel Harborne
Dave Braines
Richard J. Tomsett
Supriyo Chakraborty
40
156
0
29 Sep 2018
Women also Snowboard: Overcoming Bias in Captioning Models (Extended Abstract)
Lisa Anne Hendricks
Kaylee Burns
Kate Saenko
Trevor Darrell
Anna Rohrbach
105
480
0
02 Jul 2018
On the Robustness of Interpretability Methods
David Alvarez-Melis
Tommi Jaakkola
76
526
0
21 Jun 2018
Towards Robust Interpretability with Self-Explaining Neural Networks
David Alvarez-Melis
Tommi Jaakkola
MILM
XAI
126
941
0
20 Jun 2018
Encoder-Decoder with Atrous Separable Convolution for Semantic Image Segmentation
Liang-Chieh Chen
Yukun Zhu
George Papandreou
Florian Schroff
Hartwig Adam
SSeg
439
13,143
0
07 Feb 2018
A Survey Of Methods For Explaining Black Box Models
Riccardo Guidotti
A. Monreale
Salvatore Ruggieri
Franco Turini
D. Pedreschi
F. Giannotti
XAI
126
3,961
0
06 Feb 2018
Inverse Classification for Comparison-based Interpretability in Machine Learning
Thibault Laugel
Marie-Jeanne Lesot
Christophe Marsala
X. Renard
Marcin Detyniecki
115
101
0
22 Dec 2017
Counterfactual Explanations without Opening the Black Box: Automated Decisions and the GDPR
Sandra Wachter
Brent Mittelstadt
Chris Russell
MLAU
109
2,354
0
01 Nov 2017
What Does Explainable AI Really Mean? A New Conceptualization of Perspectives
Derek Doran
Sarah Schulz
Tarek R. Besold
XAI
68
439
0
02 Oct 2017
Squeeze-and-Excitation Networks
Jie Hu
Li Shen
Samuel Albanie
Gang Sun
Enhua Wu
424
26,500
0
05 Sep 2017
Methods for Interpreting and Understanding Deep Neural Networks
G. Montavon
Wojciech Samek
K. Müller
FaML
288
2,264
0
24 Jun 2017
Explanation in Artificial Intelligence: Insights from the Social Sciences
Tim Miller
XAI
242
4,265
0
22 Jun 2017
Teaching Compositionality to CNNs
Austin Stone
Hua-Yan Wang
Michael Stark
Yi Liu
D. Phoenix
Dileep George
CoGe
50
54
0
14 Jun 2017
Logic Tensor Networks for Semantic Image Interpretation
Ivan Donadello
Luciano Serafini
Artur Garcez
90
211
0
24 May 2017
A Unified Approach to Interpreting Model Predictions
Scott M. Lundberg
Su-In Lee
FAtt
1.1K
21,939
0
22 May 2017
Grad-CAM: Visual Explanations from Deep Networks via Gradient-based Localization
Ramprasaath R. Selvaraju
Michael Cogswell
Abhishek Das
Ramakrishna Vedantam
Devi Parikh
Dhruv Batra
FAtt
303
20,023
0
07 Oct 2016