ResearchTrend.AI

Improving Sharpness-Aware Minimization with Fisher Mask for Better Generalization on Language Models

11 October 2022
Qihuang Zhong, Liang Ding, Li Shen, Peng Mi, Juhua Liu, Bo Du, Dacheng Tao
AAML
ArXiv (abs) | PDF | HTML

Papers citing "Improving Sharpness-Aware Minimization with Fisher Mask for Better Generalization on Language Models"

9 / 9 papers shown
Flatness-Aware Minimization for Domain Generalization
Xingxuan Zhang, Renzhe Xu, Han Yu, Yancheng Dong, Pengfei Tian, Peng Cui
20 Jul 2023
The Crucial Role of Normalization in Sharpness-Aware Minimization
Yan Dai, Kwangjun Ahn, S. Sra
24 May 2023
Towards the Flatter Landscape and Better Generalization in Federated Learning under Client-level Differential Privacy
Yi Shi, Kang Wei, Li Shen, Yingqi Liu, Xueqian Wang, Bo Yuan, Dacheng Tao
FedML
01 May 2023
AdaSAM: Boosting Sharpness-Aware Minimization with Adaptive Learning Rate and Momentum for Training Deep Neural Networks
Hao Sun, Li Shen, Qihuang Zhong, Liang Ding, Shi-Yong Chen, Jingwei Sun, Jing Li, Guangzhong Sun, Dacheng Tao
01 Mar 2023
Can ChatGPT Understand Too? A Comparative Study on ChatGPT and Fine-tuned BERT
Qihuang Zhong, Liang Ding, Juhua Liu, Bo Du, Dacheng Tao
AI4MH
19 Feb 2023
Improving the Model Consistency of Decentralized Federated Learning
Yi Shi, Li Shen, Kang Wei, Yan Sun, Bo Yuan, Xueqian Wang, Dacheng Tao
FedML
08 Feb 2023
Toward Efficient Language Model Pretraining and Downstream Adaptation via Self-Evolution: A Case Study on SuperGLUE
Qihuang Zhong, Liang Ding, Yibing Zhan, Yu Qiao, Yonggang Wen, ..., Yixin Chen, Xinbo Gao, Steven C. H. Hoi, Xiaoou Tang, Dacheng Tao
VLM, ELM
04 Dec 2022
PANDA: Prompt Transfer Meets Knowledge Distillation for Efficient Model Adaptation
Qihuang Zhong, Liang Ding, Juhua Liu, Bo Du, Dacheng Tao
VLM, CLL
22 Aug 2022
E2S2: Encoding-Enhanced Sequence-to-Sequence Pretraining for Language Understanding and Generation
Qihuang Zhong, Liang Ding, Juhua Liu, Bo Du, Dacheng Tao
30 May 2022