Interpreting Black Box Predictions using Fisher Kernels
Rajiv Khanna, Been Kim, Joydeep Ghosh, Oluwasanmi Koyejo
arXiv:1810.10118 · 23 October 2018 · FAtt
Papers citing "Interpreting Black Box Predictions using Fisher Kernels" (23 papers)
Data Cleansing for GANs · Naoyuki Terashita, Hiroki Ohashi, Satoshi Hara · AAML · 01 Apr 2025
Most Influential Subset Selection: Challenges, Promises, and Beyond · Yuzheng Hu, Pingbang Hu, Han Zhao, Jiaqi W. Ma · TDI · 10 Jan 2025
Diffusion Attribution Score: Evaluating Training Data Influence in Diffusion Models · Jinxu Lin, Linwei Tao, Minjing Dong, Chang Xu · TDI · 24 Oct 2024
Data Debugging is NP-hard for Classifiers Trained with SGD · Zizheng Guo, Pengyu Chen, Yanzhang Fu, Xuelong Li · 02 Aug 2024
Intriguing Properties of Data Attribution on Diffusion Models · Xiaosen Zheng, Tianyu Pang, Chao Du, Jing Jiang, Min-Bin Lin · TDI · 01 Nov 2023
Natural Example-Based Explainability: a Survey · Antonin Poché, Lucas Hervier, M. Bakkay · XAI · 05 Sep 2023
Multi-resolution Interpretation and Diagnostics Tool for Natural Language Classifiers · P. Jalali, Nengfeng Zhou, Yufei Yu · AAML · 06 Mar 2023
Influence Functions for Sequence Tagging Models · Sarthak Jain, Varun Manjunatha, Byron C. Wallace, A. Nenkova · TDI · 25 Oct 2022
Argumentative Explanations for Pattern-Based Text Classifiers · Piyawat Lertvittayakumjorn, Francesca Toni · 22 May 2022
Human-Centered Concept Explanations for Neural Networks · Chih-Kuan Yeh, Been Kim, Pradeep Ravikumar · FAtt · 25 Feb 2022
First is Better Than Last for Language Data Influence · Chih-Kuan Yeh, Ankur Taly, Mukund Sundararajan, Frederick Liu, Pradeep Ravikumar · TDI · 24 Feb 2022
Longitudinal Distance: Towards Accountable Instance Attribution · Rosina O. Weber, Prateek Goel, S. Amiri, G. Simpson · 23 Aug 2021
Explanation-Based Human Debugging of NLP Models: A Survey · Piyawat Lertvittayakumjorn, Francesca Toni · LRM · 30 Apr 2021
Influence Estimation for Generative Adversarial Networks · Naoyuki Terashita, Hiroki Ohashi, Yuichi Nonaka, T. Kanemaru · TDI · 20 Jan 2021
Why model why? Assessing the strengths and limitations of LIME · Jurgen Dieber, S. Kirrane · FAtt · 30 Nov 2020
Generative causal explanations of black-box classifiers · Matthew R. O'Shaughnessy, Gregory H. Canal, Marissa Connor, Mark A. Davenport, Christopher Rozell · CML · 24 Jun 2020
Complaint-driven Training Data Debugging for Query 2.0 · Weiyuan Wu, Lampros Flokas, Eugene Wu, Jiannan Wang · 12 Apr 2020
RelatIF: Identifying Explanatory Training Examples via Relative Influence · Elnaz Barshan, Marc-Etienne Brunet, Gintare Karolina Dziugaite · TDI · 25 Mar 2020
On Completeness-aware Concept-Based Explanations in Deep Neural Networks · Chih-Kuan Yeh, Been Kim, Sercan Ö. Arik, Chun-Liang Li, Tomas Pfister, Pradeep Ravikumar · FAtt · 17 Oct 2019
Towards Explainable Artificial Intelligence · Wojciech Samek, K. Müller · XAI · 26 Sep 2019
LoRMIkA: Local rule-based model interpretability with k-optimal associations · Dilini Sewwandi Rajapaksha, Christoph Bergmeir, Wray L. Buntine · 11 Aug 2019
Interpretable Counterfactual Explanations Guided by Prototypes · A. V. Looveren, Janis Klaise · FAtt · 03 Jul 2019
Data Cleansing for Models Trained with SGD · Satoshi Hara, Atsushi Nitanda, Takanori Maehara · TDI · 20 Jun 2019