ResearchTrend.AI

Fine-Tuning can Distort Pretrained Features and Underperform Out-of-Distribution
arXiv:2202.10054 · 21 February 2022
Ananya Kumar, Aditi Raghunathan, Robbie Jones, Tengyu Ma, Percy Liang
OODD

Papers citing "Fine-Tuning can Distort Pretrained Features and Underperform Out-of-Distribution"

42 / 492 papers shown
DAFT: Distilling Adversarially Fine-tuned Models for Better OOD Generalization
Anshul Nasery, Sravanti Addepalli, Praneeth Netrapalli, Prateek Jain
OOD · FedML · 88 · 1 · 0 · 19 Aug 2022

Patching open-vocabulary models by interpolating weights
Gabriel Ilharco, Mitchell Wortsman, S. Gadre, Shuran Song, Hannaneh Hajishirzi, Simon Kornblith, Ali Farhadi, Ludwig Schmidt
VLM · KELM · 139 · 176 · 0 · 10 Aug 2022

On Transfer of Adversarial Robustness from Pretraining to Downstream Tasks
Laura Fee Nern, Harsh Raj, Maurice Georgi, Yash Sharma
AAML · 97 · 4 · 0 · 07 Aug 2022

An Image is Worth One Word: Personalizing Text-to-Image Generation using Textual Inversion
Rinon Gal, Yuval Alaluf, Yuval Atzmon, Or Patashnik, Amit H. Bermano, Gal Chechik, Daniel Cohen-Or
176 · 1,903 · 0 · 02 Aug 2022

Exploring the Design of Adaptation Protocols for Improved Generalization and Machine Learning Safety
Puja Trivedi, Danai Koutra, Jayaraman J. Thiagarajan
AAML · 57 · 0 · 0 · 26 Jul 2022
Discrete Key-Value Bottleneck
Frederik Trauble, Anirudh Goyal, Nasim Rahaman, Michael C. Mozer, Kenji Kawaguchi, Yoshua Bengio, Bernhard Schölkopf
CLL · 92 · 23 · 0 · 22 Jul 2022

Two-Stage Fine-Tuning: A Novel Strategy for Learning Class-Imbalanced Data
Taha ValizadehAslani, Yiwen Shi, Jing Wang, Ping Ren, Yi Zhang, Meng Hu, Lianggong Zhao, Hualou Liang
68 · 7 · 0 · 22 Jul 2022

Assaying Out-Of-Distribution Generalization in Transfer Learning
F. Wenzel, Andrea Dittadi, Peter V. Gehler, Carl-Johann Simon-Gabriel, Max Horn, ..., Chris Russell, Thomas Brox, Bernt Schiele, Bernhard Schölkopf, Francesco Locatello
OOD · OODD · AAML · 143 · 76 · 0 · 19 Jul 2022

Calibrated ensembles can mitigate accuracy tradeoffs under distribution shift
Ananya Kumar, Tengyu Ma, Percy Liang, Aditi Raghunathan
UQCV · OODD · OOD · 104 · 39 · 0 · 18 Jul 2022
Contrastive Adapters for Foundation Model Group Robustness
Michael Zhang, Christopher Ré
VLM · 54 · 64 · 0 · 14 Jul 2022

A Data-Based Perspective on Transfer Learning
Saachi Jain, Hadi Salman, Alaa Khaddaj, Eric Wong, Sung Min Park, Aleksander Madry
84 · 39 · 0 · 12 Jul 2022

Test-Time Adaptation via Self-Training with Nearest Neighbor Information
M-U Jang, Sae-Young Chung, Hye Won Chung
OOD · TTA · 99 · 63 · 0 · 08 Jul 2022

Motley: Benchmarking Heterogeneity and Personalization in Federated Learning
Shan-shan Wu, Tian Li, Zachary B. Charles, Yu Xiao, Ziyu Liu, Zheng Xu, Virginia Smith
FedML · 101 · 45 · 0 · 18 Jun 2022

Prefix Conditioning Unifies Language and Label Supervision
Kuniaki Saito, Kihyuk Sohn, Xinming Zhang, Chun-Liang Li, Chen-Yu Lee, Kate Saenko, Tomas Pfister
VLM · CLIP · 97 · 16 · 0 · 02 Jun 2022
Positive Unlabeled Contrastive Learning
Anish Acharya, Sujay Sanghavi, Li Jing, Bhargav Bhushanam, Dhruv Choudhary, Michael G. Rabbat, Inderjit Dhillon
SSL · 65 · 11 · 0 · 01 Jun 2022

Understanding new tasks through the lens of training data via exponential tilting
Subha Maity, Mikhail Yurochkin, Moulinath Banerjee, Yuekai Sun
104 · 10 · 0 · 26 May 2022

An Empirical Study on Distribution Shift Robustness From the Perspective of Pre-Training and Data Augmentation
Ziquan Liu, Yi Tian Xu, Yuanhong Xu, Qi Qian, Hao Li, Rong Jin, Xiangyang Ji, Antoni B. Chan
OOD · 89 · 16 · 0 · 25 May 2022

Multimodal Knowledge Alignment with Reinforcement Learning
Youngjae Yu, Jiwan Chung, Heeseung Yun, Jack Hessel, Jinho Park, ..., Prithviraj Ammanabrolu, Rowan Zellers, Ronan Le Bras, Gunhee Kim, Yejin Choi
VLM · 160 · 37 · 0 · 25 May 2022
ER-Test: Evaluating Explanation Regularization Methods for Language Models
Brihi Joshi, Aaron Chan, Ziyi Liu, Shaoliang Nie, Maziar Sanjabi, Hamed Firooz, Xiang Ren
AAML · 93 · 6 · 0 · 25 May 2022

Linear Connectivity Reveals Generalization Strategies
Jeevesh Juneja, Rachit Bansal, Kyunghyun Cho, João Sedoc, Naomi Saphra
331 · 48 · 0 · 24 May 2022

Evaluating the Impact of Model Scale for Compositional Generalization in Semantic Parsing
Linlu Qiu, Peter Shaw, Panupong Pasupat, Tianze Shi, Jonathan Herzig, Emily Pitler, Fei Sha, Kristina Toutanova
AI4CE · LRM · 158 · 54 · 0 · 24 May 2022

Representation Projection Invariance Mitigates Representation Collapse
Anastasia Razdaibiedina, A. Khetan, Zohar Karnin, Daniel Khashabi, Vishaal Kapoor, V. Madan
98 · 5 · 0 · 23 May 2022

Test-Time Robust Personalization for Federated Learning
Liang Jiang, Tao R. Lin
FedML · OOD · TTA · 157 · 48 · 0 · 22 May 2022
Diverse Weight Averaging for Out-of-Distribution Generalization
Alexandre Ramé, Matthieu Kirchmeyer, Thibaud Rahier, A. Rakotomamonjy, Patrick Gallinari, Matthieu Cord
OOD · 258 · 138 · 0 · 19 May 2022

How to Fine-tune Models with Few Samples: Update, Data Augmentation, and Test-time Augmentation
Yujin Kim, Jaehoon Oh, Sungnyun Kim, Se-Young Yun
78 · 6 · 0 · 13 May 2022

Alleviating Representational Shift for Continual Fine-tuning
Shibo Jie, Zhi-Hong Deng, Ziheng Li
CLL · 89 · 11 · 0 · 22 Apr 2022

Empirical Evaluation and Theoretical Analysis for Representation Learning: A Survey
Kento Nozawa, Issei Sato
AI4TS · 139 · 5 · 0 · 18 Apr 2022

Last Layer Re-Training is Sufficient for Robustness to Spurious Correlations
Polina Kirichenko, Pavel Izmailov, A. Wilson
OOD · 119 · 339 · 0 · 06 Apr 2022

Beyond Separability: Analyzing the Linear Transferability of Contrastive Representations to Related Subpopulations
Jeff Z. HaoChen, Colin Wei, Ananya Kumar, Tengyu Ma
77 · 39 · 0 · 06 Apr 2022
"This is my unicorn, Fluffy": Personalizing frozen vision-language representations
Niv Cohen, Rinon Gal, E. Meirom, Gal Chechik, Yuval Atzmon
VLM · MLLM · 122 · 88 · 0 · 04 Apr 2022

Connect, Not Collapse: Explaining Contrastive Learning for Unsupervised Domain Adaptation
Kendrick Shen, Robbie Jones, Ananya Kumar, Sang Michael Xie, Jeff Z. HaoChen, Tengyu Ma, Percy Liang
SSL · 119 · 89 · 0 · 01 Apr 2022

CADG: A Model Based on Cross Attention for Domain Generalization
Chengqiu Dai, Yingqiao Lin, Fan Li, Xiyao Li, Don Xie
OOD · 57 · 3 · 0 · 31 Mar 2022

Domain Generalization by Mutual-Information Regularization with Pre-trained Models
Junbum Cha, Kyungjae Lee, Sungrae Park, Sanghyuk Chun
OOD · 118 · 138 · 0 · 21 Mar 2022

Model soups: averaging weights of multiple fine-tuned models improves accuracy without increasing inference time
Mitchell Wortsman, Gabriel Ilharco, S. Gadre, Rebecca Roelofs, Raphael Gontijo-Lopes, ..., Hongseok Namkoong, Ali Farhadi, Y. Carmon, Simon Kornblith, Ludwig Schmidt
MoMe · 199 · 1,013 · 1 · 10 Mar 2022
Geodesic Multi-Modal Mixup for Robust Fine-Tuning
Changdae Oh, Junhyuk So, Hoyoon Byun, Yongtaek Lim, Minchul Shin, Jong-June Jeon, Kyungwoo Song
139 · 30 · 0 · 08 Mar 2022

Delving Deeper into Cross-lingual Visual Question Answering
Chen Cecilia Liu, Jonas Pfeiffer, Anna Korhonen, Ivan Vulić, Iryna Gurevych
105 · 8 · 0 · 15 Feb 2022

Provable Domain Generalization via Invariant-Feature Subspace Recovery
Haoxiang Wang, Haozhe Si, Yue Liu, Han Zhao
OOD · 137 · 35 · 0 · 30 Jan 2022

General Greedy De-bias Learning
Xinzhe Han, Shuhui Wang, Chi Su, Qingming Huang, Qi Tian
109 · 9 · 0 · 20 Dec 2021

Twitter-COMMs: Detecting Climate, COVID, and Military Multimodal Misinformation
Giscard Biamby, Grace Luo, Trevor Darrell, Anna Rohrbach
68 · 26 · 0 · 16 Dec 2021

How Well Do Sparse Imagenet Models Transfer?
Eugenia Iofinova, Alexandra Peste, Mark Kurtz, Dan Alistarh
125 · 41 · 0 · 26 Nov 2021
Domain Prompt Learning for Efficiently Adapting CLIP to Unseen Domains
Xinyu Zhang, S. Gu, Yutaka Matsuo, Yusuke Iwasawa
VLM · 104 · 40 · 0 · 25 Nov 2021

Why Do Pretrained Language Models Help in Downstream Tasks? An Analysis of Head and Prompt Tuning
Colin Wei, Sang Michael Xie, Tengyu Ma
148 · 100 · 0 · 17 Jun 2021