ResearchTrend.AI
Evaluating the Robustness of Interpretability Methods through Explanation Invariance and Equivariance

arXiv:2304.06715 · 13 April 2023 · Jonathan Crabbé, M. Schaar · AAML

Papers citing "Evaluating the Robustness of Interpretability Methods through Explanation Invariance and Equivariance"

10 / 10 papers shown

 1. Concept Activation Regions: A Generalized Framework For Concept-Based Explanations
    Jonathan Crabbé, M. Schaar · 22 Sep 2022
 2. Self-Interpretable Model with Transformation Equivariant Interpretation
    Yipei Wang, Xiaoqian Wang · 09 Nov 2021
 3. Geometric Deep Learning: Grids, Groups, Graphs, Geodesics, and Gauges
    M. Bronstein, Joan Bruna, Taco S. Cohen, Petar Veličković · GNN · 27 Apr 2021
 4. What Do We Want From Explainable Artificial Intelligence (XAI)? -- A Stakeholder Perspective on XAI and a Conceptual Model Guiding Interdisciplinary XAI Research
    Markus Langer, Daniel Oster, Timo Speith, Holger Hermanns, Lena Kästner, Eva Schmidt, Andreas Sesing, Kevin Baum · XAI · 15 Feb 2021
 5. E(3)-Equivariant Graph Neural Networks for Data-Efficient and Accurate Interatomic Potentials
    Simon L. Batzner, Albert Musaelian, Lixin Sun, Mario Geiger, J. Mailoa, M. Kornbluth, N. Molinari, Tess E. Smidt, Boris Kozinsky · 08 Jan 2021
 6. On Translation Invariance in CNNs: Convolutional Layers can Exploit Absolute Spatial Location
    O. Kayhan, J. C. V. Gemert · 16 Mar 2020
 7. DeepSafe: A Data-driven Approach for Checking Adversarial Robustness in Neural Networks
    D. Gopinath, Guy Katz, C. Păsăreanu, Clark W. Barrett · AAML · 02 Oct 2017
 8. Towards A Rigorous Science of Interpretable Machine Learning
    Finale Doshi-Velez, Been Kim · XAI, FaML · 28 Feb 2017
 9. Aggregated Residual Transformations for Deep Neural Networks
    Saining Xie, Ross B. Girshick, Piotr Dollár, Z. Tu, Kaiming He · 16 Nov 2016
10. SMOTE: Synthetic Minority Over-sampling Technique
    Nitesh V. Chawla, Kevin W. Bowyer, Lawrence Hall, W. Kegelmeyer · AI4TS · 09 Jun 2011