Trainable Projected Gradient Method for Robust Fine-tuning
19 March 2023
Junjiao Tian, Xiaoliang Dai, Chih-Yao Ma, Zecheng He, Yen-Cheng Liu, Z. Kira

Papers citing "Trainable Projected Gradient Method for Robust Fine-tuning"

6 / 6 papers shown

Robust Learning of Diverse Code Edits
Tushar Aggarwal, Swayam Singh, Abhijeet Awasthi, Aditya Kanade, Nagarajan Natarajan
SyDa
05 Mar 2025

SAFT: Towards Out-of-Distribution Generalization in Fine-Tuning
Bac Nguyen, Stefan Uhlich, Fabien Cardinaux, Lukas Mauch, Marzieh Edraki, Aaron Courville
OODD, CLL, VLM
03 Jul 2024

LEVI: Generalizable Fine-tuning via Layer-wise Ensemble of Different Views
Yuji Roh, Qingyun Liu, Huan Gui, Zhe Yuan, Yujin Tang, ..., Liang Liu, Shuchao Bi, Lichan Hong, Ed H. Chi, Zhe Zhao
07 Feb 2024

Universal Prompt Tuning for Graph Neural Networks
Taoran Fang, Yunchao Zhang, Yang Yang, Chunping Wang, Lei Chen
30 Sep 2022

Masked Autoencoders Are Scalable Vision Learners
Kaiming He, Xinlei Chen, Saining Xie, Yanghao Li, Piotr Dollár, Ross B. Girshick
ViT, TPM
11 Nov 2021

In-N-Out: Pre-Training and Self-Training using Auxiliary Information for Out-of-Distribution Robustness
Sang Michael Xie, Ananya Kumar, Robbie Jones, Fereshte Khani, Tengyu Ma, Percy Liang
OOD
08 Dec 2020