
Evaluations and Methods for Explanation through Robustness Analysis

31 May 2020 · arXiv 2006.00442
Cheng-Yu Hsieh, Chih-Kuan Yeh, Xuanqing Liu, Pradeep Ravikumar, Seungyeon Kim, Sanjiv Kumar, Cho-Jui Hsieh
Topics: XAI

Papers citing "Evaluations and Methods for Explanation through Robustness Analysis"

13 / 13 papers shown

Universal Sparse Autoencoders: Interpretable Cross-Model Concept Alignment
Harrish Thasarathan, Julian Forsyth, Thomas Fel, M. Kowal, Konstantinos G. Derpanis
06 Feb 2025

Inpainting the Gaps: A Novel Framework for Evaluating Explanation Methods in Vision Transformers
Lokesh Badisa, Sumohana S. Channappayya
17 Jun 2024

Interpreting Robustness Proofs of Deep Neural Networks
Debangshu Banerjee, Avaljot Singh, Gagandeep Singh
Topics: AAML
31 Jan 2023

CRAFT: Concept Recursive Activation FacTorization for Explainability
Thomas Fel, Agustin Picard, Louis Bethune, Thibaut Boissin, David Vigouroux, Julien Colin, Rémi Cadène, Thomas Serre
17 Nov 2022

What Makes a Good Explanation?: A Harmonized View of Properties of Explanations
Zixi Chen, Varshini Subhash, Marton Havasi, Weiwei Pan, Finale Doshi-Velez
Topics: XAI, FAtt
10 Nov 2022

On the Robustness of Explanations of Deep Neural Network Models: A Survey
Amlan Jyoti, Karthik Balaji Ganesh, Manoj Gayala, Nandita Lakshmi Tunuguntla, Sandesh Kamath, V. Balasubramanian
Topics: XAI, FAtt, AAML
09 Nov 2022

Why Deep Surgical Models Fail?: Revisiting Surgical Action Triplet Recognition through the Lens of Robustness
Ya-Hsin Cheng, Lihao Liu, Shujun Wang, Yueming Jin, Carola-Bibiane Schönlieb, Angelica I. Aviles-Rivero
18 Sep 2022

Human-Centered Concept Explanations for Neural Networks
Chih-Kuan Yeh, Been Kim, Pradeep Ravikumar
Topics: FAtt
25 Feb 2022

Don't Lie to Me! Robust and Efficient Explainability with Verified Perturbation Analysis
Thomas Fel, Mélanie Ducoffe, David Vigouroux, Rémi Cadène, Mikael Capelle, C. Nicodeme, Thomas Serre
Topics: AAML
15 Feb 2022

On Locality of Local Explanation Models
Sahra Ghalebikesabi, Lucile Ter-Minassian, Karla Diaz-Ordaz, Chris Holmes
Topics: FedML, FAtt
24 Jun 2021

The Out-of-Distribution Problem in Explainability and Search Methods for Feature Importance Explanations
Peter Hase, Harry Xie, Joey Tianyi Zhou
Topics: OODD, LRM, FAtt
01 Jun 2021

On the Sensitivity and Stability of Model Interpretations in NLP
Fan Yin, Zhouxing Shi, Cho-Jui Hsieh, Kai-Wei Chang
Topics: FAtt
18 Apr 2021

Reluplex: An Efficient SMT Solver for Verifying Deep Neural Networks
Guy Katz, Clark W. Barrett, D. Dill, Kyle D. Julian, Mykel Kochenderfer
Topics: AAML
03 Feb 2017