ResearchTrend.AI

Understanding Black-box Predictions via Influence Functions
Pang Wei Koh, Percy Liang
arXiv:1703.04730 · 14 March 2017
Topics: TDI
Papers citing "Understanding Black-box Predictions via Influence Functions"

Showing 20 of 620 citing papers.
IcoRating: A Deep-Learning System for Scam ICO Identification
Authors: Shuqing Bian, Zhenpeng Deng, F. Li, Will Monroe, Peng Shi, ..., Sikuang Wang, William Yang Wang, Arianna Yuan, Tianwei Zhang, Jiwei Li
Published: 08 Mar 2018

Interpreting Neural Network Judgments via Minimal, Stable, and Symbolic Corrections
Authors: Xin Zhang, Armando Solar-Lezama, Rishabh Singh
Topics: FAtt
Published: 21 Feb 2018

Explaining First Impressions: Modeling, Recognizing, and Explaining Apparent Personality from Videos
Authors: Hugo Jair Escalante, Heysem Kaya, A. A. Salah, Sergio Escalera, Yağmur Güçlütürk, ..., Furkan Gürpinar, Achmadnoer Sukma Wicaksana, Cynthia C. S. Liem, Marcel van Gerven, R. Lier
Published: 02 Feb 2018

Visual Interpretability for Deep Learning: a Survey
Authors: Quanshi Zhang, Song-Chun Zhu
Topics: FaML, HAI
Published: 02 Feb 2018

Less is More: Culling the Training Set to Improve Robustness of Deep Neural Networks
Authors: Yongshuai Liu, Jiyu Chen, Hao Chen
Topics: AAML
Published: 09 Jan 2018

Targeted Backdoor Attacks on Deep Learning Systems Using Data Poisoning
Authors: Xinyun Chen, Chang-rui Liu, Bo Li, Kimberly Lu, D. Song
Topics: AAML, SILM
Published: 15 Dec 2017

Wild Patterns: Ten Years After the Rise of Adversarial Machine Learning
Authors: Battista Biggio, Fabio Roli
Topics: AAML
Published: 08 Dec 2017

Interpretability Beyond Feature Attribution: Quantitative Testing with Concept Activation Vectors (TCAV)
Authors: Been Kim, Martin Wattenberg, Justin Gilmer, Carrie J. Cai, James Wexler, F. Viégas, Rory Sayres
Topics: FAtt
Published: 30 Nov 2017

Contextual Outlier Interpretation
Authors: Ninghao Liu, DongHwa Shin, Xia Hu
Published: 28 Nov 2017

MARGIN: Uncovering Deep Neural Networks using Graph Signal Analysis
Authors: Rushil Anirudh, Jayaraman J. Thiagarajan, R. Sridhar, T. Bremer
Topics: FAtt, AAML
Published: 15 Nov 2017

Interpretation of Neural Networks is Fragile
Authors: Amirata Ghorbani, Abubakar Abid, James Zou
Topics: FAtt, AAML
Published: 29 Oct 2017

EAD: Elastic-Net Attacks to Deep Neural Networks via Adversarial Examples
Authors: Pin-Yu Chen, Yash Sharma, Huan Zhang, Jinfeng Yi, Cho-Jui Hsieh
Topics: AAML
Published: 13 Sep 2017

Towards Poisoning of Deep Learning Algorithms with Back-gradient Optimization
Authors: Luis Muñoz-González, Battista Biggio, Ambra Demontis, Andrea Paudice, Vasin Wongrassamee, Emil C. Lupu, Fabio Roli
Topics: AAML
Published: 29 Aug 2017

Efficient Data Representation by Selecting Prototypes with Importance Weights
Authors: Karthik S. Gurumoorthy, Amit Dhurandhar, Guillermo Cecchi, Charu Aggarwal
Published: 05 Jul 2017

Interpretability via Model Extraction
Authors: Osbert Bastani, Carolyn Kim, Hamsa Bastani
Topics: FAtt
Published: 29 Jun 2017

MAGIX: Model Agnostic Globally Interpretable Explanations
Authors: Nikaash Puri, Piyush B. Gupta, Pratiksha Agarwal, Sukriti Verma, Balaji Krishnamurthy
Topics: FAtt
Published: 22 Jun 2017

Contextual Explanation Networks
Authors: Maruan Al-Shedivat, Kumar Avinava Dubey, Eric Xing
Topics: CML
Published: 29 May 2017

Interpreting Blackbox Models via Model Extraction
Authors: Osbert Bastani, Carolyn Kim, Hamsa Bastani
Topics: FAtt
Published: 23 May 2017

Deep Reinforcement Learning: An Overview
Authors: Yuxi Li
Topics: OffRL, VLM
Published: 25 Jan 2017

Making the V in VQA Matter: Elevating the Role of Image Understanding in Visual Question Answering
Authors: Yash Goyal, Tejas Khot, D. Summers-Stay, Dhruv Batra, Devi Parikh
Topics: CoGe
Published: 02 Dec 2016