ResearchTrend.AI
© 2025 ResearchTrend.AI, All rights reserved.

Distilling Model Failures as Directions in Latent Space

29 June 2022
Saachi Jain, Hannah Lawrence, Ankur Moitra, Aleksander Madry
arXiv (abs) · PDF · HTML · GitHub (47★)

Papers citing "Distilling Model Failures as Directions in Latent Space"

22 of 72 papers shown
Multiaccuracy: Black-Box Post-Processing for Fairness in Classification
Michael P. Kim, Amirata Ghorbani, James Zou · MLAU · 248 / 345 / 0 · 31 May 2018
Datasheets for Datasets
Timnit Gebru, Jamie Morgenstern, Briana Vecchione, Jennifer Wortman Vaughan, Hanna M. Wallach, Hal Daumé, Kate Crawford · 292 / 2,201 / 0 · 23 Mar 2018
Explanations based on the Missing: Towards Contrastive Explanations with Pertinent Negatives
Amit Dhurandhar, Pin-Yu Chen, Ronny Luss, Chun-Chen Tu, Pai-Shun Ting, Karthikeyan Shanmugam, Payel Das · FAtt · 129 / 592 / 0 · 21 Feb 2018
Interpretability Beyond Feature Attribution: Quantitative Testing with Concept Activation Vectors (TCAV)
Been Kim, Martin Wattenberg, Justin Gilmer, Carrie J. Cai, James Wexler, F. Viégas, Rory Sayres · FAtt · 242 / 1,849 / 0 · 30 Nov 2017
CheXNet: Radiologist-Level Pneumonia Detection on Chest X-Rays with Deep Learning
Pranav Rajpurkar, Jeremy Irvin, Kaylie Zhu, Brandon Yang, Hershel Mehta, ..., Aarti Bagul, C. Langlotz, K. Shpanskaya, M. Lungren, A. Ng · LM&MA · 99 / 2,712 / 0 · 14 Nov 2017
Towards Deep Learning Models Resistant to Adversarial Attacks
Aleksander Madry, Aleksandar Makelov, Ludwig Schmidt, Dimitris Tsipras, Adrian Vladu · SILM, OOD · 319 / 12,151 / 0 · 19 Jun 2017
Real Time Image Saliency for Black Box Classifiers
P. Dabkowski, Y. Gal · 72 / 593 / 0 · 22 May 2017
Network Dissection: Quantifying Interpretability of Deep Visual Representations
David Bau, Bolei Zhou, A. Khosla, A. Oliva, Antonio Torralba · MILM, FAtt · 158 / 1,526 / 1 · 19 Apr 2017
Interpretable Explanations of Black Boxes by Meaningful Perturbation
Ruth C. Fong, Andrea Vedaldi · FAtt, AAML · 83 / 1,526 / 0 · 11 Apr 2017
Counterfactual Fairness
Matt J. Kusner, Joshua R. Loftus, Chris Russell, Ricardo M. A. Silva · FaML · 228 / 1,588 / 0 · 20 Mar 2017
Understanding Black-box Predictions via Influence Functions
Pang Wei Koh, Percy Liang · TDI · 219 / 2,910 / 0 · 14 Mar 2017
Axiomatic Attribution for Deep Networks
Mukund Sundararajan, Ankur Taly, Qiqi Yan · OOD, FAtt · 193 / 6,027 / 0 · 04 Mar 2017
Visualizing Deep Neural Network Decisions: Prediction Difference Analysis
L. Zintgraf, Taco S. Cohen, T. Adel, Max Welling · FAtt · 147 / 709 / 0 · 15 Feb 2017
Fairness Beyond Disparate Treatment & Disparate Impact: Learning Classification without Disparate Mistreatment
Muhammad Bilal Zafar, Isabel Valera, Manuel Gomez Rodriguez, Krishna P. Gummadi · FaML · 208 / 1,214 / 0 · 26 Oct 2016
Equality of Opportunity in Supervised Learning
Moritz Hardt, Eric Price, Nathan Srebro · FaML · 236 / 4,341 / 0 · 07 Oct 2016
"Why Should I Trust You?": Explaining the Predictions of Any Classifier
Marco Tulio Ribeiro, Sameer Singh, Carlos Guestrin · FAtt, FaML · 1.2K / 17,071 / 0 · 16 Feb 2016
Rethinking the Inception Architecture for Computer Vision
Christian Szegedy, Vincent Vanhoucke, Sergey Ioffe, Jonathon Shlens, Z. Wojna · 3DV, BDL · 886 / 27,427 / 0 · 02 Dec 2015
Explaining and Harnessing Adversarial Examples
Ian Goodfellow, Jonathon Shlens, Christian Szegedy · AAML, GAN · 282 / 19,129 / 0 · 20 Dec 2014
Certifying and removing disparate impact
Michael Feldman, Sorelle A. Friedler, John Moeller, C. Scheidegger, Suresh Venkatasubramanian · FaML · 208 / 1,996 / 0 · 11 Dec 2014
Deep Learning Face Attributes in the Wild
Ziwei Liu, Ping Luo, Xiaogang Wang, Xiaoou Tang · CVBM · 253 / 8,429 / 0 · 28 Nov 2014
Intriguing properties of neural networks
Christian Szegedy, Wojciech Zaremba, Ilya Sutskever, Joan Bruna, D. Erhan, Ian Goodfellow, Rob Fergus · AAML · 289 / 14,968 / 1 · 21 Dec 2013
Deep Inside Convolutional Networks: Visualising Image Classification Models and Saliency Maps
Karen Simonyan, Andrea Vedaldi, Andrew Zisserman · FAtt · 317 / 7,321 / 0 · 20 Dec 2013