Improving Neural Network Training in Low Dimensional Random Bases
Frithjof Gressmann, Zach Eaton-Rosen, Carlo Luschi
arXiv:2011.04720, 9 November 2020

Papers citing "Improving Neural Network Training in Low Dimensional Random Bases"

18 papers shown

Subspace Langevin Monte Carlo
Tyler Maunu, Jiayi Yao
18 Dec 2024

Sketched Adaptive Federated Deep Learning: A Sharp Convergence Analysis
Zhijie Chen, Qiaobo Li, A. Banerjee
11 Nov 2024 (FedML)

PMSS: Pretrained Matrices Skeleton Selection for LLM Fine-tuning
Qibin Wang, Xiaolin Hu, Weikai Xu, Wei Liu, Jian Luan, Bin Wang
25 Sep 2024

Learning Scalable Model Soup on a Single GPU: An Efficient Subspace Training Strategy
Tao Li, Weisen Jiang, Fanghui Liu, X. Huang, James T. Kwok
04 Jul 2024 (MoMe)

Interpretability of Language Models via Task Spaces
Lucas Weber, Jaap Jumelet, Elia Bruni, Dieuwke Hupkes
10 Jun 2024

Does SGD really happen in tiny subspaces?
Minhak Song, Kwangjun Ahn, Chulhee Yun
25 May 2024

Towards Green AI: Current status and future research
Christian Clemm, Lutz Stobbe, Kishan Wimalawarne, Jan Druschke
01 May 2024

Training-time Neuron Alignment through Permutation Subspace for Improving Linear Mode Connectivity and Model Fusion
Zexi Li, Zhiqi Li, Jie Lin, Tao Shen, Tao Lin, Chao Wu
02 Feb 2024

Identifying Policy Gradient Subspaces
Jan Schneider-Barnes, Pierre Schumacher, Simon Guist, Tianyu Cui, Daniel Haeufle, Bernhard Scholkopf, Le Chen
12 Jan 2024

Enhancing Neural Training via a Correlated Dynamics Model
Jonathan Brokman, Roy Betser, Rotem Turjeman, Tom Berkov, I. Cohen, Guy Gilboa
20 Dec 2023

Deep Model Fusion: A Survey
Weishi Li, Yong Peng, Miao Zhang, Liang Ding, Han Hu, Li Shen
27 Sep 2023 (FedML, MoMe)

Fine-tuning Happens in Tiny Subspaces: Exploring Intrinsic Task-specific Subspaces of Pre-trained Language Models
Zhong Zhang, Bang Liu, Junming Shao
27 May 2023

PGrad: Learning Principal Gradients For Domain Generalization
Zhe Wang, J. E. Grigsby, Yanjun Qi
02 May 2023 (OOD)

Robust Federated Learning against both Data Heterogeneity and Poisoning Attack via Aggregation Optimization
Yueqi Xie, Weizhong Zhang, Renjie Pi, Fangzhao Wu, Qifeng Chen, Xing Xie, Sunghun Kim
10 Nov 2022 (FedML)

Trainable Weight Averaging: Accelerating Training and Improving Generalization
Tao Li, Zhehao Huang, Yingwen Wu, Zhengbao He, Qinghua Tao, X. Huang, Chih-Jen Lin
26 May 2022 (MoMe)

Kernel Modulation: A Parameter-Efficient Method for Training Convolutional Neural Networks
Yuhuang Hu, Shih-Chii Liu
29 Mar 2022

Subspace Adversarial Training
Tao Li, Yingwen Wu, Sizhe Chen, Kun Fang, Xiaolin Huang
24 Nov 2021 (AAML, OOD)

Low Dimensional Landscape Hypothesis is True: DNNs can be Trained in Tiny Subspaces
Tao Li, Lei Tan, Qinghua Tao, Yipeng Liu, Xiaolin Huang
20 Mar 2021