Are Your Explanations Reliable? Investigating the Stability of LIME in Explaining Text Classifiers by Marrying XAI and Adversarial Attack
arXiv:2305.12351 · 21 May 2023
Christopher Burger, Lingwei Chen, Thai Le
FAtt · AAML

Papers citing "Are Your Explanations Reliable? Investigating the Stability of LIME in Explaining Text Classifiers by Marrying XAI and Adversarial Attack"

7 / 7 papers shown
Q-FAKER: Query-free Hard Black-box Attack via Controlled Generation
CheolWon Na, YunSeok Choi, Jee-Hyong Lee · AAML · 18 Apr 2025

The Effect of Similarity Measures on Accurate Stability Estimates for Local Surrogate Models in Text-based Explainable AI
Christopher Burger, Charles Walter, Thai Le · AAML · 20 Jan 2025

Towards Robust and Accurate Stability Estimation of Local Surrogate Models in Text-based Explainable AI
Christopher Burger, Charles Walter, Thai Le, Lingwei Chen · AAML · 03 Jan 2025

Faithfulness and the Notion of Adversarial Sensitivity in NLP Explanations
Supriya Manna, Niladri Sett · AAML · 26 Sep 2024

An Analysis of LIME for Text Data
Dina Mardaoui, Damien Garreau · FAtt · 23 Oct 2020

Looking Deeper into Tabular LIME
Damien Garreau, U. V. Luxburg · FAtt, LMTD · 25 Aug 2020

Towards A Rigorous Science of Interpretable Machine Learning
Finale Doshi-Velez, Been Kim · XAI, FaML · 28 Feb 2017