Understanding Black-box Predictions via Influence Functions

14 March 2017
Pang Wei Koh
Percy Liang
    TDI
Papers citing "Understanding Black-box Predictions via Influence Functions"

50 / 620 papers shown
Discriminative Jackknife: Quantifying Uncertainty in Deep Learning via Higher-Order Influence Functions
Ahmed Alaa
M. Schaar
UD
UQCV
BDL
TDI
29
53
0
29 Jun 2020
BERTology Meets Biology: Interpreting Attention in Protein Language Models
Jesse Vig
Ali Madani
Lav Varshney
Caiming Xiong
R. Socher
Nazneen Rajani
34
289
0
26 Jun 2020
Does the Whole Exceed its Parts? The Effect of AI Explanations on Complementary Team Performance
Gagan Bansal
Tongshuang Wu
Joyce Zhou
Raymond Fok
Besmira Nushi
Ece Kamar
Marco Tulio Ribeiro
Daniel S. Weld
42
584
0
26 Jun 2020
DeltaGrad: Rapid retraining of machine learning models
Yinjun Wu
Yan Sun
S. Davidson
MU
27
197
0
26 Jun 2020
Influence Functions in Deep Learning Are Fragile
S. Basu
Phillip E. Pope
S. Feizi
TDI
37
220
0
25 Jun 2020
Subpopulation Data Poisoning Attacks
Matthew Jagielski
Giorgio Severi
Niklas Pousette Harger
Alina Oprea
AAML
SILM
24
114
0
24 Jun 2020
Generative causal explanations of black-box classifiers
Matthew R. O’Shaughnessy
Gregory H. Canal
Marissa Connor
Mark A. Davenport
Christopher Rozell
CML
35
73
0
24 Jun 2020
Approximate Cross-Validation for Structured Models
S. Ghosh
William T. Stephenson
Tin D. Nguyen
Sameer K. Deshpande
Tamara Broderick
18
15
0
23 Jun 2020
Just How Toxic is Data Poisoning? A Unified Benchmark for Backdoor and Data Poisoning Attacks
Avi Schwarzschild
Micah Goldblum
Arjun Gupta
John P. Dickerson
Tom Goldstein
AAML
TDI
21
162
0
22 Jun 2020
Frequentist Uncertainty in Recurrent Neural Networks via Blockwise Influence Functions
Ahmed Alaa
M. Schaar
UQCV
BDL
14
22
0
20 Jun 2020
Meta Approach to Data Augmentation Optimization
Ryuichiro Hataya
Jan Zdenek
Kazuki Yoshizoe
Hideki Nakayama
32
34
0
14 Jun 2020
Dataset Condensation with Gradient Matching
Bo Zhao
Konda Reddy Mopuri
Hakan Bilen
DD
41
479
0
10 Jun 2020
Coresets via Bilevel Optimization for Continual Learning and Streaming
Zalan Borsos
Mojmír Mutný
Andreas Krause
CLL
38
227
0
06 Jun 2020
Interpretable Time-series Classification on Few-shot Samples
Wensi Tang
Lu Liu
Guodong Long
AI4TS
8
20
0
03 Jun 2020
Explaining Black Box Predictions and Unveiling Data Artifacts through Influence Functions
Xiaochuang Han
Byron C. Wallace
Yulia Tsvetkov
MILM
FAtt
AAML
TDI
28
165
0
14 May 2020
Ensembled sparse-input hierarchical networks for high-dimensional datasets
Jean Feng
N. Simon
19
4
0
11 May 2020
Towards Frequency-Based Explanation for Robust CNN
Zifan Wang
Yilin Yang
Ankit Shrivastava
Varun Rawal
Zihao Ding
AAML
FAtt
21
47
0
06 May 2020
Explainable Deep Learning: A Field Guide for the Uninitiated
Gabrielle Ras
Ning Xie
Marcel van Gerven
Derek Doran
AAML
XAI
49
371
0
30 Apr 2020
Time Series Forecasting With Deep Learning: A Survey
Bryan Lim
S. Zohren
AI4TS
AI4CE
59
1,192
0
28 Apr 2020
Generative Data Augmentation for Commonsense Reasoning
Yiben Yang
Chaitanya Malaviya
Jared Fernandez
Swabha Swayamdipta
Ronan Le Bras
Ji-ping Wang
Chandra Bhagavatula
Yejin Choi
Doug Downey
LRM
30
91
0
24 Apr 2020
Adversarial Attacks and Defenses: An Interpretation Perspective
Ninghao Liu
Mengnan Du
Ruocheng Guo
Huan Liu
Xia Hu
AAML
31
8
0
23 Apr 2020
Complaint-driven Training Data Debugging for Query 2.0
Weiyuan Wu
Lampros Flokas
Eugene Wu
Jiannan Wang
32
43
0
12 Apr 2020
RelatIF: Identifying Explanatory Training Examples via Relative Influence
Elnaz Barshan
Marc-Etienne Brunet
Gintare Karolina Dziugaite
TDI
47
30
0
25 Mar 2020
Towards Probabilistic Verification of Machine Unlearning
David M. Sommer
Liwei Song
Sameer Wagh
Prateek Mittal
AAML
13
71
0
09 Mar 2020
Causal Interpretability for Machine Learning -- Problems, Methods and Evaluation
Raha Moraffah
Mansooreh Karami
Ruocheng Guo
A. Raglin
Huan Liu
CML
ELM
XAI
32
213
0
09 Mar 2020
Forgetting Outside the Box: Scrubbing Deep Networks of Information Accessible from Input-Output Observations
Aditya Golatkar
Alessandro Achille
Stefano Soatto
MU
OOD
33
189
0
05 Mar 2020
Regularisation Can Mitigate Poisoning Attacks: A Novel Analysis Based on Multiobjective Bilevel Optimisation
Javier Carnerero-Cano
Luis Muñoz-González
P. Spencer
Emil C. Lupu
AAML
36
11
0
28 Feb 2020
Approximate Data Deletion from Machine Learning Models
Zachary Izzo
Mary Anne Smart
Kamalika Chaudhuri
James Zou
MU
22
251
0
24 Feb 2020
Data Heterogeneity Differential Privacy: From Theory to Algorithm
Yilin Kang
Jian Li
Yong Liu
Weiping Wang
33
1
0
20 Feb 2020
Unifying Graph Convolutional Neural Networks and Label Propagation
Hongwei Wang
J. Leskovec
GNN
30
166
0
17 Feb 2020
Convex Density Constraints for Computing Plausible Counterfactual Explanations
André Artelt
Barbara Hammer
19
47
0
12 Feb 2020
Decisions, Counterfactual Explanations and Strategic Behavior
Stratis Tsirtsis
Manuel Gomez Rodriguez
27
58
0
11 Feb 2020
Interpretable Off-Policy Evaluation in Reinforcement Learning by Highlighting Influential Transitions
Omer Gottesman
Joseph D. Futoma
Yao Liu
Sonali Parbhoo
Leo Anthony Celi
Emma Brunskill
Finale Doshi-Velez
OffRL
147
56
0
10 Feb 2020
Machine Unlearning: Linear Filtration for Logit-based Classifiers
Thomas Baumhauer
Pascal Schöttle
Matthias Zeppelzauer
MU
114
130
0
07 Feb 2020
GraphLIME: Local Interpretable Model Explanations for Graph Neural Networks
Q. Huang
M. Yamada
Yuan Tian
Dinesh Singh
Dawei Yin
Yi-Ju Chang
FAtt
37
346
0
17 Jan 2020
Keeping Community in the Loop: Understanding Wikipedia Stakeholder Values for Machine Learning-Based Systems
C. E. Smith
Bowen Yu
Anjali Srivastava
Aaron L Halfaker
Loren G. Terveen
Haiyi Zhu
KELM
21
69
0
14 Jan 2020
Multi-Source Domain Adaptation for Text Classification via DistanceNet-Bandits
Han Guo
Ramakanth Pasunuru
Joey Tianyi Zhou
30
114
0
13 Jan 2020
Learning to Multi-Task Learn for Better Neural Machine Translation
Poorya Zaremoodi
Gholamreza Haffari
29
3
0
10 Jan 2020
On Interpretability of Artificial Neural Networks: A Survey
Fenglei Fan
Jinjun Xiong
Mengzhou Li
Ge Wang
AAML
AI4CE
43
301
0
08 Jan 2020
Questioning the AI: Informing Design Practices for Explainable AI User Experiences
Q. V. Liao
D. Gruen
Sarah Miller
52
702
0
08 Jan 2020
Explainability Fact Sheets: A Framework for Systematic Assessment of Explainable Approaches
Kacper Sokol
Peter A. Flach
XAI
19
299
0
11 Dec 2019
Machine Unlearning
Lucas Bourtoule
Varun Chandrasekaran
Christopher A. Choquette-Choo
Hengrui Jia
Adelin Travers
Baiwu Zhang
David Lie
Nicolas Papernot
MU
65
818
0
09 Dec 2019
Label-Consistent Backdoor Attacks
Alexander Turner
Dimitris Tsipras
Aleksander Madry
AAML
11
383
0
05 Dec 2019
Less Is Better: Unweighted Data Subsampling via Influence Function
Zifeng Wang
Hong Zhu
Zhenhua Dong
Xiuqiang He
Shao-Lun Huang
TDI
34
51
0
03 Dec 2019
Automated Dependence Plots
David I. Inouye
Liu Leqi
Joon Sik Kim
Bryon Aragam
Pradeep Ravikumar
12
1
0
02 Dec 2019
Revealing Perceptible Backdoors, without the Training Set, via the Maximum Achievable Misclassification Fraction Statistic
Zhen Xiang
David J. Miller
Hang Wang
G. Kesidis
AAML
34
9
0
18 Nov 2019
Multi-modal Deep Guided Filtering for Comprehensible Medical Image Processing
Bernhard Stimpel
Christopher Syben
Franziska Schirrmacher
P. Hoelter
Arnd Dörfler
Andreas Maier
MedIm
24
23
0
18 Nov 2019
An explanation method for Siamese neural networks
Lev V. Utkin
M. Kovalev
E. Kasimov
27
14
0
18 Nov 2019
REFIT: A Unified Watermark Removal Framework For Deep Learning Systems With Limited Data
Xinyun Chen
Wenxiao Wang
Chris Bender
Yiming Ding
R. Jia
Bo Li
D. Song
AAML
27
107
0
17 Nov 2019
On the computation of counterfactual explanations -- A survey
André Artelt
Barbara Hammer
LRM
30
50
0
15 Nov 2019