Distilling Model Failures as Directions in Latent Space
Saachi Jain, Hannah Lawrence, Ankur Moitra, Aleksander Madry
arXiv 2206.14754 · 29 June 2022
Links: arXiv (abs) · PDF · HTML · GitHub (47★)

Papers citing "Distilling Model Failures as Directions in Latent Space"

50 / 72 papers shown

Fine-Grained Bias Exploration and Mitigation for Group-Robust Classification
Miaoyun Zhao, Qiang Zhang, C. Li
11 May 2025 · metrics: 97 / 0 / 0

Severing Spurious Correlations with Data Pruning
Varun Mulchandani, Jung-Eun Kim
24 Mar 2025 · metrics: 451 / 1 / 0

Interpreting CLIP with Hierarchical Sparse Autoencoders
Vladimir Zaigrajew, Hubert Baniecki, P. Biecek
27 Feb 2025 · metrics: 254 / 1 / 0

Controlled Training Data Generation with Diffusion Models
Teresa Yeo, Andrei Atanov, Harold Benoit, Aleksandr Alekseev, Ruchira Ray, Pooya Esmaeil Akhoondi, Amir Zamir
22 Mar 2024 · metrics: 97 / 6 / 0

Photorealistic Text-to-Image Diffusion Models with Deep Language Understanding
Chitwan Saharia, William Chan, Saurabh Saxena, Lala Li, Jay Whang, ..., Raphael Gontijo-Lopes, Tim Salimans, Jonathan Ho, David J Fleet, Mohammad Norouzi
23 May 2022 · VLM · metrics: 466 / 6,077 / 0

When does dough become a bagel? Analyzing the remaining mistakes on ImageNet
Vijay Vasudevan, Benjamin Caine, Raphael Gontijo-Lopes, Sara Fridovich-Keil, Rebecca Roelofs
09 May 2022 · VLM, UQCV · metrics: 76 / 59 / 0

Learning to Split for Automatic Bias Detection
Yujia Bao, Regina Barzilay
28 Apr 2022 · metrics: 60 / 21 / 0

Hierarchical Text-Conditional Image Generation with CLIP Latents
Aditya A. Ramesh, Prafulla Dhariwal, Alex Nichol, Casey Chu, Mark Chen
13 Apr 2022 · VLM, DiffM · metrics: 413 / 6,916 / 0

Domino: Discovering Systematic Errors with Cross-Modal Embeddings
Sabri Eyuboglu, M. Varma, Khaled Kamal Saab, Jean-Benoit Delbrouck, Christopher Lee-Messer, Jared A. Dunnmon, James Zou, Christopher Ré
24 Mar 2022 · metrics: 89 / 148 / 0

Datamodels: Predicting Predictions from Training Data
Andrew Ilyas, Sung Min Park, Logan Engstrom, Guillaume Leclerc, Aleksander Madry
01 Feb 2022 · TDI · metrics: 133 / 142 / 0

High-Resolution Image Synthesis with Latent Diffusion Models
Robin Rombach, A. Blattmann, Dominik Lorenz, Patrick Esser, Bjorn Ommer
20 Dec 2021 · 3DV · metrics: 502 / 15,788 / 0

Just Train Twice: Improving Group Robustness without Training Group Information
Emmy Liu, Behzad Haghgoo, Annie S. Chen, Aditi Raghunathan, Pang Wei Koh, Shiori Sagawa, Percy Liang, Chelsea Finn
19 Jul 2021 · OOD · metrics: 107 / 563 / 0

Meaningfully Debugging Model Mistakes using Conceptual Counterfactual Explanations
Abubakar Abid, Mert Yuksekgonul, James Zou
24 Jun 2021 · CML · metrics: 87 / 64 / 0

Leveraging Sparse Linear Layers for Debuggable Deep Networks
Eric Wong, Shibani Santurkar, Aleksander Madry
11 May 2021 · FAtt · metrics: 60 / 92 / 0

Discover the Unknown Biased Attribute of an Image Classifier
Zhiheng Li, Chenliang Xu
29 Apr 2021 · metrics: 73 / 50 / 0

Pervasive Label Errors in Test Sets Destabilize Machine Learning Benchmarks
Curtis G. Northcutt, Anish Athalye, Jonas W. Mueller
26 Mar 2021 · metrics: 92 / 537 / 0

Learning Transferable Visual Models From Natural Language Supervision
Alec Radford, Jong Wook Kim, Chris Hallacy, Aditya A. Ramesh, Gabriel Goh, ..., Amanda Askell, Pamela Mishkin, Jack Clark, Gretchen Krueger, Ilya Sutskever
26 Feb 2021 · CLIP, VLM · metrics: 999 / 29,926 / 0

WILDS: A Benchmark of in-the-Wild Distribution Shifts
Pang Wei Koh, Shiori Sagawa, Henrik Marklund, Sang Michael Xie, Marvin Zhang, ..., A. Kundaje, Emma Pierson, Sergey Levine, Chelsea Finn, Percy Liang
14 Dec 2020 · OOD · metrics: 235 / 1,449 / 0

Understanding Failures of Deep Networks via Robust Feature Extraction
Sahil Singla, Besmira Nushi, S. Shah, Ece Kamar, Eric Horvitz
03 Dec 2020 · FAtt · metrics: 73 / 84 / 0

Learning from others' mistakes: Avoiding dataset biases without modeling them
Victor Sanh, Thomas Wolf, Yonatan Belinkov, Alexander M. Rush
02 Dec 2020 · metrics: 74 / 116 / 0

No Subclass Left Behind: Fine-Grained Robustness in Coarse-Grained Classification Problems
N. Sohoni, Jared A. Dunnmon, Geoffrey Angus, Albert Gu, Christopher Ré
25 Nov 2020 · metrics: 87 / 252 / 0

Towards Debiasing NLU Models from Unknown Biases
Prasetya Ajie Utama, N. Moosavi, Iryna Gurevych
25 Sep 2020 · metrics: 87 / 155 / 0

Deep Learning Applied to Chest X-Rays: Exploiting and Preventing Shortcuts
Sarah Jabbour, David Fouhey, Ella Kazerooni, Michael Sjoding, Jenna Wiens
21 Sep 2020 · OOD · metrics: 52 / 49 / 0

BREEDS: Benchmarks for Subpopulation Shift
Shibani Santurkar, Dimitris Tsipras, Aleksander Madry
11 Aug 2020 · OOD · metrics: 63 / 175 / 0

Robustness to Spurious Correlations via Human Annotations
Megha Srivastava, Tatsunori Hashimoto, Percy Liang
13 Jul 2020 · CML, OOD · metrics: 51 / 90 / 0

Learning from Failure: Training Debiased Classifier from Biased Classifier
J. Nam, Hyuntak Cha, SungSoo Ahn, Jaeho Lee, Jinwoo Shin
06 Jul 2020 · metrics: 84 / 150 / 0

The Many Faces of Robustness: A Critical Analysis of Out-of-Distribution Generalization
Dan Hendrycks, Steven Basart, Norman Mu, Saurav Kadavath, Frank Wang, ..., Samyak Parajuli, Mike Guo, Basel Alomair, Jacob Steinhardt, Justin Gilmer
29 Jun 2020 · OOD · metrics: 363 / 1,757 / 0

Noise or Signal: The Role of Image Backgrounds in Object Recognition
Kai Y. Xiao, Logan Engstrom, Andrew Ilyas, Aleksander Madry
17 Jun 2020 · metrics: 148 / 387 / 0

Are we done with ImageNet?
Lucas Beyer, Olivier J. Hénaff, Alexander Kolesnikov, Xiaohua Zhai, Aaron van den Oord
12 Jun 2020 · VLM · metrics: 134 / 407 / 0

From ImageNet to Image Classification: Contextualizing Progress on Benchmarks
Dimitris Tsipras, Shibani Santurkar, Logan Engstrom, Andrew Ilyas, Aleksander Madry
22 May 2020 · metrics: 77 / 135 / 0

Shortcut Learning in Deep Neural Networks
Robert Geirhos, J. Jacobsen, Claudio Michaelis, R. Zemel, Wieland Brendel, Matthias Bethge, Felix Wichmann
16 Apr 2020 · metrics: 221 / 2,061 / 0

Towards Fairer Datasets: Filtering and Balancing the Distribution of the People Subtree in the ImageNet Hierarchy
Kaiyu Yang, Klint Qinami, Li Fei-Fei, Jia Deng, Olga Russakovsky
16 Dec 2019 · metrics: 126 / 323 / 0

Distributionally Robust Neural Networks for Group Shifts: On the Importance of Regularization for Worst-Case Generalization
Shiori Sagawa, Pang Wei Koh, Tatsunori B. Hashimoto, Percy Liang
20 Nov 2019 · OOD · metrics: 108 / 1,249 / 0

On Completeness-aware Concept-Based Explanations in Deep Neural Networks
Chih-Kuan Yeh, Been Kim, Sercan O. Arik, Chun-Liang Li, Tomas Pfister, Pradeep Ravikumar
17 Oct 2019 · FAtt · metrics: 289 / 307 / 0

Don't Take the Easy Way Out: Ensemble Based Methods for Avoiding Known Dataset Biases
Christopher Clark, Mark Yatskar, Luke Zettlemoyer
09 Sep 2019 · OOD · metrics: 88 / 467 / 0

Unlearn Dataset Bias in Natural Language Inference by Fitting the Residual
He He, Sheng Zha, Haohan Wang
28 Aug 2019 · metrics: 70 / 199 / 0

Fairness in Deep Learning: A Computational Perspective
Mengnan Du, Fan Yang, Na Zou, Helen Zhou
23 Aug 2019 · FaML, FedML · metrics: 51 / 234 / 0

A Survey on Bias and Fairness in Machine Learning
Ninareh Mehrabi, Fred Morstatter, N. Saxena, Kristina Lerman, Aram Galstyan
23 Aug 2019 · SyDa, FaML · metrics: 571 / 4,391 / 0

Don't Take the Premise for Granted: Mitigating Artifacts in Natural Language Inference
Yonatan Belinkov, Adam Poliak, Stuart M. Shieber, Benjamin Van Durme, Alexander M. Rush
09 Jul 2019 · metrics: 63 / 95 / 0

Invariant Risk Minimization
Martín Arjovsky, Léon Bottou, Ishaan Gulrajani, David Lopez-Paz
05 Jul 2019 · OOD · metrics: 198 / 2,246 / 0

Does Learning Require Memorization? A Short Tale about a Long Tail
Vitaly Feldman
12 Jun 2019 · TDI · metrics: 142 / 502 / 0

Counterfactual Visual Explanations
Yash Goyal, Ziyan Wu, Jan Ernst, Dhruv Batra, Devi Parikh, Stefan Lee
16 Apr 2019 · CML · metrics: 80 / 512 / 0

Data Shapley: Equitable Valuation of Data for Machine Learning
Amirata Ghorbani, James Zou
05 Apr 2019 · TDI, FedML · metrics: 85 / 791 / 0

ImageNet-trained CNNs are biased towards texture; increasing shape bias improves accuracy and robustness
Robert Geirhos, Patricia Rubisch, Claudio Michaelis, Matthias Bethge, Felix Wichmann, Wieland Brendel
29 Nov 2018 · metrics: 139 / 2,674 / 0

Failing Loudly: An Empirical Study of Methods for Detecting Dataset Shift
Stephan Rabanser, Stephan Günnemann, Zachary Chase Lipton
29 Oct 2018 · metrics: 68 / 371 / 0

Sanity Checks for Saliency Maps
Julius Adebayo, Justin Gilmer, M. Muelly, Ian Goodfellow, Moritz Hardt, Been Kim
08 Oct 2018 · FAtt, AAML, XAI · metrics: 152 / 1,970 / 0

Grounding Visual Explanations
Lisa Anne Hendricks, Ronghang Hu, Trevor Darrell, Zeynep Akata
25 Jul 2018 · FAtt · metrics: 59 / 230 / 0

Explaining Image Classifiers by Counterfactual Generation
C. Chang, Elliot Creager, Anna Goldenberg, David Duvenaud
20 Jul 2018 · VLM · metrics: 78 / 265 / 0

Benchmarking Neural Network Robustness to Common Corruptions and Surface Variations
Dan Hendrycks, Thomas G. Dietterich
04 Jul 2018 · OOD · metrics: 84 / 202 / 0

Fairness Without Demographics in Repeated Loss Minimization
Tatsunori B. Hashimoto, Megha Srivastava, Hongseok Namkoong, Percy Liang
20 Jun 2018 · FaML · metrics: 117 / 585 / 0