Understanding Black-box Predictions via Influence Functions
Pang Wei Koh, Percy Liang
14 March 2017 · arXiv:1703.04730 · TDI
Papers citing "Understanding Black-box Predictions via Influence Functions"

50 / 620 papers shown
Towards Out-Of-Distribution Generalization: A Survey
  Jiashuo Liu, Zheyan Shen, Yue He, Xingxuan Zhang, Renzhe Xu, Han Yu, Peng Cui · CML, OOD · 31 Aug 2021
Towards Self-Explainable Graph Neural Network
  Enyan Dai, Suhang Wang · 26 Aug 2021
Longitudinal Distance: Towards Accountable Instance Attribution
  Rosina O. Weber, Prateek Goel, S. Amiri, G. Simpson · 23 Aug 2021
Influence-guided Data Augmentation for Neural Tensor Completion
  Sejoon Oh, Sungchul Kim, Ryan A. Rossi, Srijan Kumar · 23 Aug 2021
Data Pricing in Machine Learning Pipelines
  Zicun Cong, Xuan Luo, J. Pei, Feida Zhu, Yong Zhang · 18 Aug 2021
Unified Regularity Measures for Sample-wise Learning and Generalization
  Chi Zhang, Xiaoning Ma, Yu Liu, Le Wang, Yuanqi Su, Yuehu Liu · 09 Aug 2021
Semi-Supervised Active Learning with Temporal Output Discrepancy
  Siyu Huang, Tianyang Wang, Haoyi Xiong, Jun Huan, Dejing Dou · UQCV · 29 Jul 2021
Explainable artificial intelligence (XAI) in deep learning-based medical image analysis
  Bas H. M. van der Velden, Hugo J. Kuijf, K. Gilhuijs, M. Viergever · XAI · 22 Jul 2021
CHEF: A Cheap and Fast Pipeline for Iteratively Cleaning Label Uncertainties (Technical Report)
  Yinjun Wu, James Weimer, S. Davidson · 19 Jul 2021
M2Lens: Visualizing and Explaining Multimodal Models for Sentiment Analysis
  Xingbo Wang, Jianben He, Zhihua Jin, Muqiao Yang, Yong Wang, Huamin Qu · 17 Jul 2021
A Survey on Data Augmentation for Text Classification
  Markus Bayer, M. Kaufhold, Christian A. Reuter · 07 Jul 2021
Survey: Leakage and Privacy at Inference Time
  Marija Jegorova, Chaitanya Kaul, Charlie Mayor, Alison Q. O'Neil, Alexander Weir, Roderick Murray-Smith, Sotirios A. Tsaftaris · PILM, MIACV · 04 Jul 2021
Demystifying statistical learning based on efficient influence functions
  Oliver Hines, O. Dukes, Karla Diaz-Ordaz, S. Vansteelandt · TDI · 01 Jul 2021
Combining Feature and Instance Attribution to Detect Artifacts
  Pouya Pezeshkpour, Sarthak Jain, Sameer Singh, Byron C. Wallace · TDI · 01 Jul 2021
The Threat of Offensive AI to Organizations
  Yisroel Mirsky, Ambra Demontis, J. Kotak, Ram Shankar, Deng Gelei, Liu Yang, Xinming Zhang, Wenke Lee, Yuval Elovici, Battista Biggio · 30 Jun 2021
Certifiable Machine Unlearning for Linear Models
  Ananth Mahadevan, M. Mathioudakis · MU · 29 Jun 2021
Towards Automated Evaluation of Explanations in Graph Neural Networks
  Bannihati Kumar Vanya, Balaji Ganesan, Aniket Saxena, Devbrat Sharma, Arvind Agarwal · XAI, GNN · 22 Jun 2021
Adversarial Examples Make Strong Poisons
  Liam H. Fowl, Micah Goldblum, Ping Yeh-Chiang, Jonas Geiping, Wojtek Czaja, Tom Goldstein · SILM · 21 Jun 2021
Accumulative Poisoning Attacks on Real-time Data
  Tianyu Pang, Xiao Yang, Yinpeng Dong, Hang Su, Jun Zhu · 18 Jun 2021
Poisoning and Backdooring Contrastive Learning
  Nicholas Carlini, Andreas Terzis · 17 Jun 2021
Certification of embedded systems based on Machine Learning: A survey
  Guillaume Vidot, Christophe Gabreau, I. Ober, Iulian Ober · 14 Jun 2021
On Sample Based Explanation Methods for NLP: Efficiency, Faithfulness, and Semantic Evaluation
  Wei Zhang, Ziming Huang, Yada Zhu, Guangnan Ye, Xiaodong Cui, Fan Zhang · 09 Jun 2021
On Memorization in Probabilistic Deep Generative Models
  G. V. D. Burg, Christopher K. I. Williams · TDI · 06 Jun 2021
Defending Against Backdoor Attacks in Natural Language Generation
  Xiaofei Sun, Xiaoya Li, Yuxian Meng, Xiang Ao, Fei Wu, Jiwei Li, Tianwei Zhang · AAML, SILM · 03 Jun 2021
HERALD: An Annotation Efficient Method to Detect User Disengagement in Social Conversations
  Weixin Liang, Kai-Hui Liang, Zhou Yu · 01 Jun 2021
Evaluating the Correctness of Explainable AI Algorithms for Classification
  Orcun Yalcin, Xiuyi Fan, Siyuan Liu · XAI, FAtt · 20 May 2021
A Review on Explainability in Multimodal Deep Neural Nets
  Gargi Joshi, Rahee Walambe, K. Kotecha · 17 May 2021
A Comprehensive Taxonomy for Explainable Artificial Intelligence: A Systematic Survey of Surveys on Methods and Concepts
  Gesina Schwalbe, Bettina Finzel · XAI · 15 May 2021
Information-theoretic Evolution of Model Agnostic Global Explanations
  Sukriti Verma, Nikaash Puri, Piyush B. Gupta, Balaji Krishnamurthy · FAtt · 14 May 2021
Counterfactual Explanations for Neural Recommenders
  Khanh Tran, Azin Ghazimatin, Rishiraj Saha Roy · AAML, CML · 11 May 2021
Leveraging Sparse Linear Layers for Debuggable Deep Networks
  Eric Wong, Shibani Santurkar, Aleksander Madry · FAtt · 11 May 2021
Poisoning the Unlabeled Dataset of Semi-Supervised Learning
  Nicholas Carlini · AAML · 04 May 2021
Explanation-Based Human Debugging of NLP Models: A Survey
  Piyawat Lertvittayakumjorn, Francesca Toni · LRM · 30 Apr 2021
Influence Based Defense Against Data Poisoning Attacks in Online Learning
  Sanjay Seetharaman, Shubham Malaviya, KV Rosni, Manish Shukla, S. Lodha · TDI, AAML · 24 Apr 2021
Improving Question Answering Model Robustness with Synthetic Adversarial Data Generation
  Max Bartolo, Tristan Thrush, Robin Jia, Sebastian Riedel, Pontus Stenetorp, Douwe Kiela · AAML · 18 Apr 2021
A Backdoor Attack against 3D Point Cloud Classifiers
  Zhen Xiang, David J. Miller, Siheng Chen, Xi Li, G. Kesidis · 3DPC, AAML · 12 Apr 2021
Rethinking the Backdoor Attacks' Triggers: A Frequency Perspective
  Yi Zeng, Won Park, Z. Morley Mao, R. Jia · AAML · 07 Apr 2021
Contrastive Explanations for Explaining Model Adaptations
  André Artelt, Fabian Hinder, Valerie Vaquet, Robert Feldhans, Barbara Hammer · 06 Apr 2021
Explaining the Road Not Taken
  Hua Shen, Ting-Hao 'Kenneth' Huang · FAtt, XAI · 27 Mar 2021
The Hammer and the Nut: Is Bilevel Optimization Really Needed to Poison Linear Classifiers?
  Antonio Emanuele Cinà, Sebastiano Vascon, Ambra Demontis, Battista Biggio, Fabio Roli, Marcello Pelillo · AAML · 23 Mar 2021
Data Cleansing for Deep Neural Networks with Storage-efficient Approximation of Influence Functions
  Kenji Suzuki, Yoshiyuki Kobayashi, T. Narihira · TDI · 22 Mar 2021
Interpretable Machine Learning: Moving From Mythos to Diagnostics
  Valerie Chen, Jeffrey Li, Joon Sik Kim, Gregory Plumb, Ameet Talwalkar · 10 Mar 2021
Forest Guided Smoothing
  I. Verdinelli, Larry A. Wasserman · 08 Mar 2021
Evaluating Robustness of Counterfactual Explanations
  André Artelt, Valerie Vaquet, Riza Velioglu, Fabian Hinder, Johannes Brinkrolf, M. Schilling, Barbara Hammer · 03 Mar 2021
Contrastive Explanations for Model Interpretability
  Alon Jacovi, Swabha Swayamdipta, Shauli Ravfogel, Yanai Elazar, Yejin Choi, Yoav Goldberg · 02 Mar 2021
Efficient Client Contribution Evaluation for Horizontal Federated Learning
  Jie Zhao, Xinghua Zhu, Jianzong Wang, Jing Xiao · FedML · 26 Feb 2021
Intuitively Assessing ML Model Reliability through Example-Based Explanations and Editing Model Inputs
  Harini Suresh, Kathleen M. Lewis, John Guttag, Arvind Satyanarayan · FAtt · 17 Feb 2021
Connecting Interpretability and Robustness in Decision Trees through Separation
  Michal Moshkovitz, Yao-Yuan Yang, Kamalika Chaudhuri · 14 Feb 2021
Defense Against Reward Poisoning Attacks in Reinforcement Learning
  Kiarash Banihashem, Adish Singla, Goran Radanović · AAML · 10 Feb 2021
Explaining Inference Queries with Bayesian Optimization
  Brandon Lockhart, Jinglin Peng, Weiyuan Wu, Jiannan Wang, Eugene Wu · 10 Feb 2021