ResearchTrend.AI
Papers / 1703.04730 / Cited By
Understanding Black-box Predictions via Influence Functions
14 March 2017
Pang Wei Koh, Percy Liang
TDI
Papers citing "Understanding Black-box Predictions via Influence Functions"
50 / 620 papers shown

Certified Data Removal from Machine Learning Models
  Chuan Guo, Tom Goldstein, Awni Y. Hannun, Laurens van der Maaten
  MU · 52 / 420 / 0 · 08 Nov 2019

Optimizing Millions of Hyperparameters by Implicit Differentiation
  Jonathan Lorraine, Paul Vicol, David Duvenaud
  DD · 45 / 404 / 0 · 06 Nov 2019

On Second-Order Group Influence Functions for Black-Box Predictions
  S. Basu, Xuchen You, S. Feizi
  TDI · 27 / 68 / 0 · 01 Nov 2019

A Unified Framework for Data Poisoning Attack to Graph-based Semi-supervised Learning
  Xuanqing Liu, Si Si, Xiaojin Zhu, Yang Li, Cho-Jui Hsieh
  AAML · 35 / 78 / 0 · 30 Oct 2019

Distribution Density, Tails, and Outliers in Machine Learning: Metrics and Applications
  Nicholas Carlini, Ulfar Erlingsson, Nicolas Papernot
  OOD, OODD · 26 / 62 / 0 · 29 Oct 2019

CXPlain: Causal Explanations for Model Interpretation under Uncertainty
  Patrick Schwab, W. Karlen
  FAtt, CML · 40 / 206 / 0 · 27 Oct 2019

Explainable Artificial Intelligence (XAI): Concepts, Taxonomies, Opportunities and Challenges toward Responsible AI
  Alejandro Barredo Arrieta, Natalia Díaz Rodríguez, Javier Del Ser, Adrien Bennetot, Siham Tabik, ..., S. Gil-Lopez, Daniel Molina, Richard Benjamins, Raja Chatila, Francisco Herrera
  XAI · 41 / 6,125 / 0 · 22 Oct 2019

On Completeness-aware Concept-Based Explanations in Deep Neural Networks
  Chih-Kuan Yeh, Been Kim, Sercan O. Arik, Chun-Liang Li, Tomas Pfister, Pradeep Ravikumar
  FAtt · 122 / 297 / 0 · 17 Oct 2019

The Local Elasticity of Neural Networks
  Hangfeng He, Weijie J. Su
  45 / 44 / 0 · 15 Oct 2019

Can I Trust the Explainer? Verifying Post-hoc Explanatory Methods
  Oana-Maria Camburu, Eleonora Giunchiglia, Jakob N. Foerster, Thomas Lukasiewicz, Phil Blunsom
  FAtt, AAML · 34 / 60 / 0 · 04 Oct 2019

Hidden Trigger Backdoor Attacks
  Aniruddha Saha, Akshayvarun Subramanya, Hamed Pirsiavash
  36 / 613 / 0 · 30 Sep 2019

Towards Explainable Artificial Intelligence
  Wojciech Samek, K. Müller
  XAI · 32 / 437 / 0 · 26 Sep 2019

Data Valuation using Reinforcement Learning
  Jinsung Yoon, Sercan O. Arik, Tomas Pfister
  TDI · 36 / 174 / 0 · 25 Sep 2019

On Model Stability as a Function of Random Seed
  Pranava Madhyastha, Dhruv Batra
  45 / 62 / 0 · 23 Sep 2019

Measure Contribution of Participants in Federated Learning
  Guan Wang, Charlie Xiaoqian Dang, Ziye Zhou
  FedML · 47 / 195 / 0 · 17 Sep 2019

Rewarding High-Quality Data via Influence Functions
  A. Richardson, Aris Filos-Ratsikas, Boi Faltings
  FedML, TDI · 35 / 40 / 0 · 30 Aug 2019

Regional Tree Regularization for Interpretability in Black Box Models
  Mike Wu, S. Parbhoo, M. C. Hughes, R. Kindle, Leo Anthony Celi, Maurizio Zazzi, Volker Roth, Finale Doshi-Velez
  23 / 37 / 0 · 13 Aug 2019

LoRMIkA: Local rule-based model interpretability with k-optimal associations
  Dilini Sewwandi Rajapaksha, Christoph Bergmeir, Wray Buntine
  35 / 31 / 0 · 11 Aug 2019

Evaluating Explanation Without Ground Truth in Interpretable Machine Learning
  Fan Yang, Mengnan Du, Xia Hu
  XAI, ELM · 32 / 67 / 0 · 16 Jul 2019

Interpretable Counterfactual Explanations Guided by Prototypes
  A. V. Looveren, Janis Klaise
  FAtt · 29 / 380 / 0 · 03 Jul 2019

Quantitative Verification of Neural Networks And its Security Applications
  Teodora Baluta, Shiqi Shen, Shweta Shinde, Kuldeep S. Meel, P. Saxena
  AAML · 24 / 104 / 0 · 25 Jun 2019

Data Cleansing for Models Trained with SGD
  Satoshi Hara, Atsushi Nitanda, Takanori Maehara
  TDI · 34 / 68 / 0 · 20 Jun 2019

Incorporating Priors with Feature Attribution on Text Classification
  Frederick Liu, Besim Avci
  FAtt, FaML · 36 / 120 / 0 · 19 Jun 2019

Poisoning Attacks with Generative Adversarial Nets
  Luis Muñoz-González, Bjarne Pfitzner, Matteo Russo, Javier Carnerero-Cano, Emil C. Lupu
  AAML · 21 / 63 / 0 · 18 Jun 2019

Membership Privacy for Machine Learning Models Through Knowledge Transfer
  Virat Shejwalkar, Amir Houmansadr
  22 / 10 / 0 · 15 Jun 2019

Interpretable Neural Network Decoupling
  Yuchao Li, Rongrong Ji, Shaohui Lin, Baochang Zhang, Chenqian Yan, Yongjian Wu, Feiyue Huang, Ling Shao
  37 / 2 / 0 · 04 Jun 2019

Do Human Rationales Improve Machine Explanations?
  Julia Strout, Ye Zhang, Raymond J. Mooney
  19 / 57 / 0 · 31 May 2019

A backdoor attack against LSTM-based text classification systems
  Jiazhu Dai, Chuanshuai Chen
  SILM · 8 / 320 / 0 · 29 May 2019

Discovering Conditionally Salient Features with Statistical Guarantees
  Jaime Roquero Gimenez, James Zou
  CML · 16 / 12 / 0 · 29 May 2019

Privacy Risks of Securing Machine Learning Models against Adversarial Examples
  Liwei Song, Reza Shokri, Prateek Mittal
  SILM, MIACV, AAML · 6 / 235 / 0 · 24 May 2019

Learning to Confuse: Generating Training Time Adversarial Data with Auto-Encoder
  Ji Feng, Qi-Zhi Cai, Zhi-Hua Zhou
  AAML · 19 / 104 / 0 · 22 May 2019

The Audio Auditor: User-Level Membership Inference in Internet of Things Voice Services
  Yuantian Miao, Minhui Xue, Chao Chen, Lei Pan, Jinchao Zhang, Benjamin Zi Hao Zhao, Dali Kaafar, Yang Xiang
  21 / 34 / 0 · 17 May 2019

Synthetic-Neuroscore: Using A Neuro-AI Interface for Evaluating Generative Adversarial Networks
  Zhengwei Wang, Qi She, Alan F. Smeaton, T. Ward, Graham Healy
  EGVM · 21 / 11 / 0 · 10 May 2019

Interpretability with Accurate Small Models
  Abhishek Ghose, Balaraman Ravindran
  23 / 1 / 0 · 04 May 2019

Data Cleaning for Accurate, Fair, and Robust Models: A Big Data - AI Integration Approach
  Ki Hyun Tae, Yuji Roh, Young H. Oh, Hyunsub Kim, Steven Euijong Whang
  19 / 71 / 0 · 22 Apr 2019

Regression Concept Vectors for Bidirectional Explanations in Histopathology
  Mara Graziani, Vincent Andrearczyk, Henning Muller
  47 / 78 / 0 · 09 Apr 2019

Data Shapley: Equitable Valuation of Data for Machine Learning
  Amirata Ghorbani, James Zou
  TDI, FedML · 42 / 756 / 0 · 05 Apr 2019

Interpreting Neural Networks Using Flip Points
  Roozbeh Yousefzadeh, D. O’Leary
  AAML, FAtt · 22 / 17 / 0 · 21 Mar 2019

GNNExplainer: Generating Explanations for Graph Neural Networks
  Rex Ying, Dylan Bourgeois, Jiaxuan You, Marinka Zitnik, J. Leskovec
  LLMAG · 37 / 1,291 / 0 · 10 Mar 2019

Towards Efficient Data Valuation Based on the Shapley Value
  R. Jia, David Dao, Wei Ping, F. Hubis, Nicholas Hynes, Nezihe Merve Gürel, Bo Li, Ce Zhang, D. Song, C. Spanos
  TDI · 22 / 400 / 0 · 27 Feb 2019

Quantifying contribution and propagation of error from computational steps, algorithms and hyperparameter choices in image classification pipelines
  Aritra Chowdhury, M. Magdon-Ismail, B. Yener
  35 / 0 / 0 · 21 Feb 2019

Towards Automatic Concept-based Explanations
  Amirata Ghorbani, James Wexler, James Zou, Been Kim
  FAtt, LRM · 38 / 19 / 0 · 07 Feb 2019

Repairing without Retraining: Avoiding Disparate Impact with Counterfactual Distributions
  Hao Wang, Berk Ustun, Flavio du Pin Calmon
  FaML · 36 / 83 / 0 · 29 Jan 2019

On the (In)fidelity and Sensitivity for Explanations
  Chih-Kuan Yeh, Cheng-Yu Hsieh, A. Suggala, David I. Inouye, Pradeep Ravikumar
  FAtt · 39 / 449 / 0 · 27 Jan 2019

Quantifying Interpretability and Trust in Machine Learning Systems
  Philipp Schmidt, F. Biessmann
  16 / 112 / 0 · 20 Jan 2019

Towards Aggregating Weighted Feature Attributions
  Umang Bhatt, Pradeep Ravikumar, José M. F. Moura
  FAtt, TDI · 14 / 13 / 0 · 20 Jan 2019

Visual Entailment: A Novel Task for Fine-Grained Image Understanding
  Ning Xie, Farley Lai, Derek Doran, Asim Kadav
  CoGe · 56 / 322 / 0 · 20 Jan 2019

Explaining Explanations to Society
  Leilani H. Gilpin, Cecilia Testart, Nathaniel Fruchter, Julius Adebayo
  XAI · 24 / 34 / 0 · 19 Jan 2019

Class-Balanced Loss Based on Effective Number of Samples
  Huayu Chen, Menglin Jia, Nayeon Lee, Yang Song, Serge J. Belongie
  91 / 2,233 / 0 · 16 Jan 2019

Interpretable machine learning: definitions, methods, and applications
  W. James Murdoch, Chandan Singh, Karl Kumbier, R. Abbasi-Asl, Bin-Xia Yu
  XAI, HAI · 52 / 1,423 / 0 · 14 Jan 2019