The Out-of-Distribution Problem in Explainability and Search Methods for Feature Importance Explanations

1 June 2021
Peter Hase, Harry Xie, Joey Tianyi Zhou
OODD, LRM, FAtt

Papers citing "The Out-of-Distribution Problem in Explainability and Search Methods for Feature Importance Explanations"

14 / 64 papers shown
On the explainable properties of 1-Lipschitz Neural Networks: An Optimal Transport Perspective
M. Serrurier, Franck Mamalet, Thomas Fel, Louis Bethune, Thibaut Boissin
AAML, FAtt
32 · 4 · 0
14 Jun 2022
Mediators: Conversational Agents Explaining NLP Model Behavior
Nils Feldhus, A. Ravichandran, Sebastian Möller
43 · 16 · 0
13 Jun 2022
Order-sensitive Shapley Values for Evaluating Conceptual Soundness of NLP Models
Kaiji Lu, Anupam Datta
21 · 0 · 0
01 Jun 2022
A Sea of Words: An In-Depth Analysis of Anchors for Text Data
Gianluigi Lopardo, F. Precioso, Damien Garreau
27 · 6 · 0
27 May 2022
Necessity and Sufficiency for Explaining Text Classifiers: A Case Study in Hate Speech Detection
Esma Balkir, I. Nejadgholi, Kathleen C. Fraser, S. Kiritchenko
FAtt
41 · 27 · 0
06 May 2022
Text Transformations in Contrastive Self-Supervised Learning: A Review
Amrita Bhattacharjee, Mansooreh Karami, Huan Liu
SSL
29 · 23 · 0
22 Mar 2022
Don't Lie to Me! Robust and Efficient Explainability with Verified Perturbation Analysis
Thomas Fel, Mélanie Ducoffe, David Vigouroux, Rémi Cadène, Mikael Capelle, C. Nicodeme, Thomas Serre
AAML
26 · 41 · 0
15 Feb 2022
Quantus: An Explainable AI Toolkit for Responsible Evaluation of Neural Network Explanations and Beyond
Anna Hedström, Leander Weber, Dilyara Bareeva, Daniel G. Krakowczyk, Franz Motzkus, Wojciech Samek, Sebastian Lapuschkin, Marina M.-C. Höhne
XAI, ELM
21 · 168 · 0
14 Feb 2022
Framework for Evaluating Faithfulness of Local Explanations
S. Dasgupta, Nave Frost, Michal Moshkovitz
FAtt
116 · 61 · 0
01 Feb 2022
Rethinking Attention-Model Explainability through Faithfulness Violation Test
Y. Liu, Haoliang Li, Yangyang Guo, Chen Kong, Jing Li, Shiqi Wang
FAtt
121 · 43 · 0
28 Jan 2022
From Anecdotal Evidence to Quantitative Evaluation Methods: A Systematic Review on Evaluating Explainable AI
Meike Nauta, Jan Trienes, Shreyasi Pathak, Elisa Nguyen, Michelle Peters, Yasmin Schmitt, Jorg Schlotterer, M. V. Keulen, C. Seifert
ELM, XAI
28 · 396 · 0
20 Jan 2022
Double Trouble: How to not explain a text classifier's decisions using counterfactuals synthesized by masked language models?
Thang M. Pham, Trung H. Bui, Long Mai, Anh Totti Nguyen
21 · 7 · 0
22 Oct 2021
Have We Learned to Explain?: How Interpretability Methods Can Learn to Encode Predictions in their Interpretations
N. Jethani, Mukund Sudarshan, Yindalon Aphinyanagphongs, Rajesh Ranganath
FAtt
82 · 69 · 0
02 Mar 2021
Feature Importance Ranking for Deep Learning
Maksymilian Wojtas, Ke Chen
144 · 115 · 0
18 Oct 2020