Understanding Black-box Predictions via Influence Functions

14 March 2017
Pang Wei Koh
Percy Liang
    TDI
arXiv: 1703.04730

Papers citing "Understanding Black-box Predictions via Influence Functions"

Showing 50 of 620 citing papers
Attention-based Dynamic Subspace Learners for Medical Image Analysis
V. Sukesh Adiga
Jose Dolz
H. Lombaert
18 Jun 2022
Benchmarking Heterogeneous Treatment Effect Models through the Lens of Interpretability
Jonathan Crabbé
Alicia Curth
Ioana Bica
M. Schaar
CML
16 Jun 2022
Neural Collapse: A Review on Modelling Principles and Generalization
Vignesh Kothapalli
08 Jun 2022
Challenges in Applying Explainability Methods to Improve the Fairness of NLP Models
Esma Balkir
S. Kiritchenko
I. Nejadgholi
Kathleen C. Fraser
08 Jun 2022
A Human-Centric Take on Model Monitoring
Murtuza N. Shergadwala
Himabindu Lakkaraju
K. Kenthapadi
06 Jun 2022
Use-Case-Grounded Simulations for Explanation Evaluation
Valerie Chen
Nari Johnson
Nicholay Topin
Gregory Plumb
Ameet Talwalkar
FAtt
ELM
05 Jun 2022
Differentially Private Shapley Values for Data Evaluation
Lauren Watson
R. Andreeva
Hao Yang
Rik Sarkar
TDI
FAtt
FedML
01 Jun 2022
Attack-Agnostic Adversarial Detection
Jiaxin Cheng
Mohamed Hussein
J. Billa
Wael AbdAlmageed
AAML
01 Jun 2022
Data Banzhaf: A Robust Data Valuation Framework for Machine Learning
Jiachen T. Wang
R. Jia
FedML
TDI
30 May 2022
Decoupling Knowledge from Memorization: Retrieval-augmented Prompt Learning
Xiang Chen
Lei Li
Ningyu Zhang
Xiaozhuan Liang
Shumin Deng
Chuanqi Tan
Fei Huang
Luo Si
Huajun Chen
VLM
29 May 2022
Membership Inference Attack Using Self Influence Functions
Gilad Cohen
Raja Giryes
TDI
26 May 2022
Towards Using Data-Influence Methods to Detect Noisy Samples in Source Code Corpora
An Dau
Thang Nguyen-Duc
Hoang Thanh-Tung
Nghi D. Q. Bui
TDI
25 May 2022
VeriFi: Towards Verifiable Federated Unlearning
Xiangshan Gao
Xingjun Ma
Jingyi Wang
Youcheng Sun
Bo Li
S. Ji
Peng Cheng
Jiming Chen
MU
25 May 2022
On the Interpretability of Regularisation for Neural Networks Through Model Gradient Similarity
Vincent Szolnoky
Viktor Andersson
Balázs Kulcsár
Rebecka Jörnsten
25 May 2022
ORCA: Interpreting Prompted Language Models via Locating Supporting Data Evidence in the Ocean of Pretraining Data
Xiaochuang Han
Yulia Tsvetkov
25 May 2022
One-Pixel Shortcut: on the Learning Preference of Deep Neural Networks
Shutong Wu
Sizhe Chen
Cihang Xie
Xiaolin Huang
AAML
24 May 2022
Learning to Ignore Adversarial Attacks
Yiming Zhang
Yan Zhou
Samuel Carton
Chenhao Tan
23 May 2022
LIA: Privacy-Preserving Data Quality Evaluation in Federated Learning Using a Lazy Influence Approximation
Ljubomir Rokvic
Panayiotis Danassis
Sai Praneeth Karimireddy
Boi Faltings
TDI
23 May 2022
Argumentative Explanations for Pattern-Based Text Classifiers
Piyawat Lertvittayakumjorn
Francesca Toni
22 May 2022
Cardinality-Minimal Explanations for Monotonic Neural Networks
Ouns El Harzli
Bernardo Cuenca Grau
Ian Horrocks
FAtt
19 May 2022
Dataset Pruning: Reducing Training Data by Examining Generalization Influence
Shuo Yang
Zeke Xie
Hanyu Peng
Minjing Xu
Mingming Sun
P. Li
DD
19 May 2022
Can counterfactual explanations of AI systems' predictions skew lay users' causal intuitions about the world? If so, can we correct for that?
Marko Tešić
U. Hahn
CML
12 May 2022
TracInAD: Measuring Influence for Anomaly Detection
Hugo Thimonier
Fabrice Popineau
Arpad Rimmel
Bich-Liên Doan
Fabrice Daniel
TDI
03 May 2022
Adapting and Evaluating Influence-Estimation Methods for Gradient-Boosted Decision Trees
Jonathan Brophy
Zayd Hammoudeh
Daniel Lowd
TDI
30 Apr 2022
Doubting AI Predictions: Influence-Driven Second Opinion Recommendation
Maria De-Arteaga
Alexandra Chouldechova
Artur Dubrawski
29 Apr 2022
Indiscriminate Data Poisoning Attacks on Neural Networks
Yiwei Lu
Gautam Kamath
Yaoliang Yu
AAML
19 Apr 2022
Machine Learning Security against Data Poisoning: Are We There Yet?
Antonio Emanuele Cinà
Kathrin Grosse
Ambra Demontis
Battista Biggio
Fabio Roli
Marcello Pelillo
AAML
12 Apr 2022
The Sillwood Technologies System for the VoiceMOS Challenge 2022
Jiameng Gao
08 Apr 2022
Towards Reliable and Explainable AI Model for Solid Pulmonary Nodule Diagnosis
Chenglong Wang
Yun-Hui Liu
Feng-Liang Wang
Chengxiu Zhang
Yida Wang
Mei Yuan
Guangze Yang
08 Apr 2022
Robust and Explainable Autoencoders for Unsupervised Time Series Outlier Detection---Extended Version
Tung Kieu
B. Yang
Chenjuan Guo
Christian S. Jensen
Yan Zhao
Feiteng Huang
Kai Zheng
AI4TS
07 Apr 2022
Concept Evolution in Deep Learning Training: A Unified Interpretation Framework and Discoveries
Haekyu Park
Seongmin Lee
Benjamin Hoover
Austin P. Wright
Omar Shaikh
Rahul Duggal
Nilaksh Das
Kevin Wenliang Li
Judy Hoffman
Duen Horng Chau
30 Mar 2022
Knowledge Removal in Sampling-based Bayesian Inference
Shaopeng Fu
Fengxiang He
Dacheng Tao
BDL
MU
24 Mar 2022
Towards Explainable Evaluation Metrics for Natural Language Generation
Christoph Leiter
Piyawat Lertvittayakumjorn
M. Fomicheva
Wei Zhao
Yang Gao
Steffen Eger
AAML
ELM
21 Mar 2022
Repairing Brain-Computer Interfaces with Fault-Based Data Acquisition
Cailin Winston
Caleb Winston
Chloe N. Winston
Claris Winston
Cleah Winston
Rajesh P. N. Rao
René Just
20 Mar 2022
Energy-Latency Attacks via Sponge Poisoning
Antonio Emanuele Cinà
Ambra Demontis
Battista Biggio
Fabio Roli
Marcello Pelillo
SILM
14 Mar 2022
Label-efficient Hybrid-supervised Learning for Medical Image Segmentation
Junwen Pan
Qi Bi
Yanzhan Yang
Pengfei Zhu
Cheng Bian
10 Mar 2022
OpenTAL: Towards Open Set Temporal Action Localization
Wentao Bao
Qi Yu
Yu Kong
EDL
10 Mar 2022
Label-Free Explainability for Unsupervised Models
Jonathan Crabbé
M. Schaar
FAtt
MILM
03 Mar 2022
PUMA: Performance Unchanged Model Augmentation for Training Data Removal
Ga Wu
Masoud Hashemi
C. Srinivasa
MU
02 Mar 2022
Human-Centered Concept Explanations for Neural Networks
Chih-Kuan Yeh
Been Kim
Pradeep Ravikumar
FAtt
25 Feb 2022
First is Better Than Last for Language Data Influence
Chih-Kuan Yeh
Ankur Taly
Mukund Sundararajan
Frederick Liu
Pradeep Ravikumar
TDI
24 Feb 2022
Finding Safe Zones of policies Markov Decision Processes
Lee Cohen
Yishay Mansour
Michal Moshkovitz
23 Feb 2022
Margin-distancing for safe model explanation
Tom Yan
Chicheng Zhang
23 Feb 2022
The Shapley Value in Machine Learning
Benedek Rozemberczki
Lauren Watson
Péter Bayer
Hao-Tsung Yang
Oliver Kiss
Sebastian Nilsson
Rik Sarkar
TDI
FAtt
11 Feb 2022
Understanding Rare Spurious Correlations in Neural Networks
Yao-Yuan Yang
Chi-Ning Chou
Kamalika Chaudhuri
AAML
10 Feb 2022
A Survey on Poisoning Attacks Against Supervised Machine Learning
Wenjun Qiu
AAML
05 Feb 2022
Approximating Full Conformal Prediction at Scale via Influence Functions
Javier Abad
Umang Bhatt
Adrian Weller
Giovanni Cherubin
02 Feb 2022
Datamodels: Predicting Predictions from Training Data
Andrew Ilyas
Sung Min Park
Logan Engstrom
Guillaume Leclerc
Aleksander Madry
TDI
01 Feb 2022
Can Adversarial Training Be Manipulated By Non-Robust Features?
Lue Tao
Lei Feng
Hongxin Wei
Jinfeng Yi
Sheng-Jun Huang
Songcan Chen
AAML
31 Jan 2022
Backdoors Stuck At The Frontdoor: Multi-Agent Backdoor Attacks That Backfire
Siddhartha Datta
N. Shadbolt
AAML
28 Jan 2022