Who's responsible? Jointly quantifying the contribution of the learning algorithm and training data
9 October 2019
G. Yona, Amirata Ghorbani, James Zou
TDI

Papers citing "Who's responsible? Jointly quantifying the contribution of the learning algorithm and training data"

14 papers shown

Characterising Bias in Compressed Models
Sara Hooker, Nyalleng Moorosi, Gregory Clark, Samy Bengio, Emily L. Denton
06 Oct 2020

PULSE: Self-Supervised Photo Upsampling via Latent Space Exploration of Generative Models
Sachit Menon, Alexandru Damian, Shijia Hu, Nikhil Ravi, Cynthia Rudin
OOD, DiffM
08 Mar 2020

A Distributional Framework for Data Valuation
Amirata Ghorbani, Michael P. Kim, James Zou
TDI
27 Feb 2020

What Do Compressed Deep Neural Networks Forget?
Sara Hooker, Aaron Courville, Gregory Clark, Yann N. Dauphin, Andrea Frome
13 Nov 2019

Differential Privacy Has Disparate Impact on Model Accuracy
Eugene Bagdasaryan, Vitaly Shmatikov
28 May 2019

Data Shapley: Equitable Valuation of Data for Machine Learning
Amirata Ghorbani, James Zou
TDI, FedML
05 Apr 2019

Towards Efficient Data Valuation Based on the Shapley Value
R. Jia, David Dao, Wei Ping, F. Hubis, Nicholas Hynes, Nezihe Merve Gürel, Yue Liu, Ce Zhang, Basel Alomair, C. Spanos
TDI
27 Feb 2019

L-Shapley and C-Shapley: Efficient Model Interpretation for Structured Data
Jianbo Chen, Le Song, Martin J. Wainwright, Michael I. Jordan
FAtt, TDI
08 Aug 2018

Multiaccuracy: Black-Box Post-Processing for Fairness in Classification
Michael P. Kim, Amirata Ghorbani, James Zou
MLAU
31 May 2018

Consistent Individualized Feature Attribution for Tree Ensembles
Scott M. Lundberg, G. Erion, Su-In Lee
FAtt, TDI
12 Feb 2018

Interpretation of Neural Networks is Fragile
Amirata Ghorbani, Abubakar Abid, James Zou
FAtt, AAML
29 Oct 2017

A Unified Approach to Interpreting Model Predictions
Scott M. Lundberg, Su-In Lee
FAtt
22 May 2017

Inception-v4, Inception-ResNet and the Impact of Residual Connections on Learning
Christian Szegedy, Sergey Ioffe, Vincent Vanhoucke, Alexander A. Alemi
23 Feb 2016

Deep Learning Face Attributes in the Wild
Ziwei Liu, Ping Luo, Xiaogang Wang, Xiaoou Tang
CVBM
28 Nov 2014