Better Fine-Tuning by Reducing Representational Collapse (arXiv 2008.03156)
6 August 2020
Armen Aghajanyan, Akshat Shrivastava, Anchit Gupta, Naman Goyal, Luke Zettlemoyer, S. Gupta
[AAML]
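For context while scanning the citation list below: the paper proposes fine-tuning with a noise-based regularizer (commonly referred to as R3F) that perturbs the input embeddings with small parametric noise and penalizes the symmetric KL divergence between the model's predictions on the clean and perturbed inputs, reducing representational collapse without the trust-region step used by earlier adversarial fine-tuning methods. The following is a minimal sketch of that idea in plain PyTorch; it is not code from the paper or from this page, and the toy model, noise bound eps, and weight lam are illustrative assumptions.

```python
# Minimal sketch of an R3F-style noise regularizer, written against plain
# PyTorch with a toy classifier. The model, eps, and lam values below are
# illustrative assumptions, not values taken from the paper or this page.
import torch
import torch.nn as nn
import torch.nn.functional as F

class ToyClassifier(nn.Module):
    """Embedding -> mean pooling -> linear head; stands in for a pretrained encoder."""
    def __init__(self, vocab_size=1000, dim=64, num_labels=2):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, dim)
        self.head = nn.Linear(dim, num_labels)

    def logits_from_embeddings(self, emb):
        return self.head(emb.mean(dim=1))  # (batch, num_labels)

def r3f_loss(model, input_ids, labels, eps=1e-5, lam=1.0):
    emb = model.embed(input_ids)                       # clean input embeddings
    noise = torch.empty_like(emb).uniform_(-eps, eps)  # parametric noise (uniform variant)

    logits_clean = model.logits_from_embeddings(emb)
    logits_noisy = model.logits_from_embeddings(emb + noise)

    task_loss = F.cross_entropy(logits_clean, labels)

    # Symmetric KL between the clean and noisy predictive distributions.
    log_p = F.log_softmax(logits_clean, dim=-1)
    log_q = F.log_softmax(logits_noisy, dim=-1)
    sym_kl = (F.kl_div(log_q, log_p, log_target=True, reduction="batchmean")
              + F.kl_div(log_p, log_q, log_target=True, reduction="batchmean"))

    return task_loss + lam * sym_kl

if __name__ == "__main__":
    model = ToyClassifier()
    input_ids = torch.randint(0, 1000, (4, 16))
    labels = torch.randint(0, 2, (4,))
    loss = r3f_loss(model, input_ids, labels)
    loss.backward()  # gradients flow through both the clean and noisy branches
    print(float(loss))
```

The paper also describes an R4F variant that additionally constrains the classification head (e.g., via spectral normalization); that detail is omitted from this sketch.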
Papers citing "Better Fine-Tuning by Reducing Representational Collapse" (50 of 60 papers shown)
IM-BERT: Enhancing Robustness of BERT through the Implicit Euler Method. Mihyeon Kim, Juhyoung Park, Youngbin Kim. 11 May 2025.
Fine-Tuning without Performance Degradation. Han Wang, Adam White, Martha White. 01 May 2025. [OnRL]
The Best Instruction-Tuning Data are Those That Fit. Dylan Zhang, Qirun Dai, Hao Peng. 06 Feb 2025. [ALM]
Cross-lingual Transfer of Reward Models in Multilingual Alignment. Jiwoo Hong, Noah Lee, Rodrigo Martínez-Castaño, César Rodríguez, James Thorne. 23 Oct 2024.
Reasoning in Large Language Models: A Geometric Perspective. Romain Cosentino, Sarath Shekkizhar. 02 Jul 2024. [LRM]
Generalization Measures for Zero-Shot Cross-Lingual Transfer. Saksham Bassi, Duygu Ataman, Kyunghyun Cho. 24 Apr 2024.
Token-Efficient Leverage Learning in Large Language Models. Yuanhao Zeng, Min Wang, Yihang Wang, Yingxia Shao. 01 Apr 2024.
From Robustness to Improved Generalization and Calibration in Pre-trained Language Models. Josip Jukić, Jan Snajder. 31 Mar 2024.
Learn or Recall? Revisiting Incremental Learning with Pre-trained Language Models. Junhao Zheng, Shengjie Qiu, Qianli Ma. 13 Dec 2023.
Dynamic Corrective Self-Distillation for Better Fine-Tuning of Pretrained Models. Ibtihel Amara, Vinija Jain, Aman Chadha. 12 Dec 2023.
Weigh Your Own Words: Improving Hate Speech Counter Narrative Generation via Attention Regularization. Helena Bonaldi, Giuseppe Attanasio, Debora Nozza, Marco Guerini. 05 Sep 2023.
XSTest: A Test Suite for Identifying Exaggerated Safety Behaviours in Large Language Models. Paul Röttger, Hannah Rose Kirk, Bertie Vidgen, Giuseppe Attanasio, Federico Bianchi, Dirk Hovy. 02 Aug 2023. [ALM, ELM, AILaw]
Class-Incremental Learning based on Label Generation. Yijia Shao, Yiduo Guo, Dongyan Zhao, Bin Liu. 22 Jun 2023. [CLL]
Entailment as Robust Self-Learner. Jiaxin Ge, Hongyin Luo, Yoon Kim, James R. Glass. 26 May 2023.
DiTTO: A Feature Representation Imitation Approach for Improving Cross-Lingual Transfer. Shanu Kumar, Abbaraju Soujanya, Sandipan Dandapat, Sunayana Sitaram, Monojit Choudhury. 04 Mar 2023. [VLM]
How to prepare your task head for finetuning. Yi Ren, Shangmin Guo, Wonho Bae, Danica J. Sutherland. 11 Feb 2023.
Revisiting Intermediate Layer Distillation for Compressing Language Models: An Overfitting Perspective. Jongwoo Ko, Seungjoon Park, Minchan Jeong, S. Hong, Euijai Ahn, Duhyeuk Chang, Se-Young Yun. 03 Feb 2023.
Curriculum-Guided Abstractive Summarization. Sajad Sotudeh, Hanieh Deilamsalehy, Franck Dernoncourt, Nazli Goharian. 02 Feb 2023.
A Stability Analysis of Fine-Tuning a Pre-Trained Model. Z. Fu, Anthony Man-Cho So, Nigel Collier. 24 Jan 2023.
On-the-fly Denoising for Data Augmentation in Natural Language Understanding. Tianqing Fang, Wenxuan Zhou, Fangyu Liu, Hongming Zhang, Yangqiu Song, Muhao Chen. 20 Dec 2022.
BARTSmiles: Generative Masked Language Models for Molecular Representations. Gayane Chilingaryan, Hovhannes Tamoyan, Ani Tevosyan, N. Babayan, L. Khondkaryan, Karen Hambardzumyan, Zaven Navoyan, Hrant Khachatrian, Armen Aghajanyan. 29 Nov 2022. [SSL]
Precisely the Point: Adversarial Augmentations for Faithful and Informative Text Generation. Wenhao Wu, Wei Li, Jiachen Liu, Xinyan Xiao, Sujian Li, Yajuan Lyu. 22 Oct 2022.
Surgical Fine-Tuning Improves Adaptation to Distribution Shifts. Yoonho Lee, Annie S. Chen, Fahim Tajwar, Ananya Kumar, Huaxiu Yao, Percy Liang, Chelsea Finn. 20 Oct 2022. [OOD]
Machine Generated Text: A Comprehensive Survey of Threat Models and Detection Methods. Evan Crothers, Nathalie Japkowicz, H. Viktor. 13 Oct 2022. [DeLMO]
COLO: A Contrastive Learning based Re-ranking Framework for One-Stage Summarization. Chen An, Ming Zhong, Zhiyong Wu, Qinen Zhu, Xuanjing Huang, Xipeng Qiu. 29 Sep 2022.
Adapting Pretrained Text-to-Text Models for Long Text Sequences. Wenhan Xiong, Anchit Gupta, Shubham Toshniwal, Yashar Mehdad, Wen-tau Yih. 21 Sep 2022. [RALM, VLM]
Socially Enhanced Situation Awareness from Microblogs using Artificial Intelligence: A Survey. Rabindra Lamsal, Aaron Harwood, M. Read. 13 Sep 2022.
Forming Trees with Treeformers. Nilay Patel, Jeffrey Flanigan. 14 Jul 2022. [AI4CE]
Zero-shot Cross-lingual Transfer is Under-specified Optimization. Shijie Wu, Benjamin Van Durme, Mark Dredze. 12 Jul 2022.
Learning Non-Autoregressive Models from Search for Unsupervised Sentence Summarization. Puyuan Liu, Chenyang Huang, Lili Mou. 28 May 2022.
Memorization Without Overfitting: Analyzing the Training Dynamics of Large Language Models. Kushal Tirumala, Aram H. Markosyan, Luke Zettlemoyer, Armen Aghajanyan. 22 May 2022. [TDI]
UL2: Unifying Language Learning Paradigms. Yi Tay, Mostafa Dehghani, Vinh Q. Tran, Xavier Garcia, Jason W. Wei, ..., Tal Schuster, H. Zheng, Denny Zhou, N. Houlsby, Donald Metzler. 10 May 2022. [AI4CE]
Few-shot Mining of Naturally Occurring Inputs and Outputs. Mandar Joshi, Terra Blevins, M. Lewis, Daniel S. Weld, Luke Zettlemoyer. 09 May 2022.
Exploring the Role of Task Transferability in Large-Scale Multi-Task Learning. Vishakh Padmakumar, Leonard Lausen, Miguel Ballesteros, Sheng Zha, He He, George Karypis. 23 Apr 2022.
Incremental Few-Shot Learning via Implanting and Compressing. Yiting Li, H. Zhu, Xijia Feng, Zilong Cheng, Jun Ma, Cheng Xiang, P. Vadakkepat, T. Lee. 19 Mar 2022. [CLL, VLM]
SummaReranker: A Multi-Task Mixture-of-Experts Re-ranking Framework for Abstractive Summarization. Mathieu Ravaut, Shafiq R. Joty, Nancy F. Chen. 13 Mar 2022. [MoE]
NoisyTune: A Little Noise Can Help You Finetune Pretrained Language Models Better. Chuhan Wu, Fangzhao Wu, Tao Qi, Yongfeng Huang, Xing Xie. 24 Feb 2022.
When Do Flat Minima Optimizers Work? Jean Kaddour, Linqing Liu, Ricardo M. A. Silva, Matt J. Kusner. 01 Feb 2022. [ODL]
CM3: A Causal Masked Multimodal Model of the Internet. Armen Aghajanyan, Po-Yao (Bernie) Huang, Candace Ross, Vladimir Karpukhin, Hu Xu, ..., Dmytro Okhonko, Mandar Joshi, Gargi Ghosh, M. Lewis, Luke Zettlemoyer. 19 Jan 2022.
Sharpness-Aware Minimization with Dynamic Reweighting. Wenxuan Zhou, Fangyu Liu, Huan Zhang, Muhao Chen. 16 Dec 2021. [AAML]
From Dense to Sparse: Contrastive Pruning for Better Pre-trained Language Model Compression. Runxin Xu, Fuli Luo, Chengyu Wang, Baobao Chang, Jun Huang, Songfang Huang, Fei Huang. 14 Dec 2021. [VLM]
Sharpness-Aware Minimization Improves Language Model Generalization. Dara Bahri, H. Mobahi, Yi Tay. 16 Oct 2021.
Teaching Models new APIs: Domain-Agnostic Simulators for Task Oriented Dialogue. Moya Chen, Paul A. Crook, Stephen Roller. 13 Oct 2021. [ALM]
Taming Sparsely Activated Transformer with Stochastic Experts. Simiao Zuo, Xiaodong Liu, Jian Jiao, Young Jin Kim, Hany Hassan, Ruofei Zhang, T. Zhao, Jianfeng Gao. 08 Oct 2021. [MoE]
Robust fine-tuning of zero-shot models. Mitchell Wortsman, Gabriel Ilharco, Jong Wook Kim, Mike Li, Simon Kornblith, ..., Raphael Gontijo-Lopes, Hannaneh Hajishirzi, Ali Farhadi, Hongseok Namkoong, Ludwig Schmidt. 04 Sep 2021. [VLM]
How Does Adversarial Fine-Tuning Benefit BERT? J. Ebrahimi, Hao Yang, Wei Zhang. 31 Aug 2021. [AAML]
Scheduled Sampling Based on Decoding Steps for Neural Machine Translation. Yijin Liu, Fandong Meng, Yufeng Chen, Jinan Xu, Jie Zhou. 30 Aug 2021.
Noise Stability Regularization for Improving BERT Fine-tuning. Hang Hua, Xingjian Li, Dejing Dou, Chengzhong Xu, Jiebo Luo. 10 Jul 2021.
Reinforcement Learning for Abstractive Question Summarization with Question-aware Semantic Rewards. S. Yadav, D. Gupta, Asma Ben Abacha, Dina Demner-Fushman. 01 Jul 2021. [OffRL]
R-Drop: Regularized Dropout for Neural Networks. Xiaobo Liang, Lijun Wu, Juntao Li, Yue Wang, Qi Meng, Tao Qin, Wei Chen, M. Zhang, Tie-Yan Liu. 28 Jun 2021.