ResearchTrend.AI
© 2025 ResearchTrend.AI, All rights reserved.

Egeria: Efficient DNN Training with Knowledge-Guided Layer Freezing

17 January 2022
Yiding Wang
D. Sun
Kai Chen
Fan Lai
Mosharaf Chowdhury
arXiv:2201.06227 (PDF / HTML)

Papers citing "Egeria: Efficient DNN Training with Knowledge-Guided Layer Freezing"

11 papers shown
  1. Budgeted Online Continual Learning by Adaptive Layer Freezing and Frequency-based Sampling
     Minhyuk Seo, Hyunseo Koh, Jonghyun Choi. 19 Oct 2024.
  2. Breaking the Memory Wall for Heterogeneous Federated Learning via Progressive Training
     Yebo Wu, Li Li, Chengzhong Xu. (FedML) 20 Apr 2024.
  3. Kimad: Adaptive Gradient Compression with Bandwidth Awareness
     Jihao Xin, Ivan Ilin, Shunkang Zhang, Marco Canini, Peter Richtárik. 13 Dec 2023.
  4. Towards Efficient Fine-tuning of Pre-trained Code Models: An Experimental Study and Beyond
     Ensheng Shi, Yanlin Wang, Hongyu Zhang, Lun Du, Shi Han, Dongmei Zhang, Hongbin Sun. 11 Apr 2023.
  5. Comparison between layer-to-layer network training and conventional network training using Deep Convolutional Neural Networks
     Kiran Kumar Ashish Bhyravabhottla, WonSook Lee. 27 Mar 2023.
  6. TACC: A Full-stack Cloud Computing Infrastructure for Machine Learning Tasks
     Kaiqiang Xu, Xinchen Wan, Hao Wang, Zhenghang Ren, Xudong Liao, D. Sun, Chaoliang Zeng, Kai Chen. 04 Oct 2021.
  7. Distilling Linguistic Context for Language Model Compression
     Geondo Park, Gyeongman Kim, Eunho Yang. 17 Sep 2021.
  8. Skip-Convolutions for Efficient Video Processing
     A. Habibian, Davide Abati, Taco S. Cohen, B. Bejnordi. 23 Apr 2021.
  9. Carbon Emissions and Large Neural Network Training
     David A. Patterson, Joseph E. Gonzalez, Quoc V. Le, Chen Liang, Lluís-Miquel Munguía, D. Rothchild, David R. So, Maud Texier, J. Dean. (AI4CE) 21 Apr 2021.
  10. Large scale distributed neural network training through online distillation
      Rohan Anil, Gabriel Pereyra, Alexandre Passos, Róbert Ormándi, George E. Dahl, Geoffrey E. Hinton. (FedML) 09 Apr 2018.
  11. Neural Architecture Search with Reinforcement Learning
      Barret Zoph, Quoc V. Le. 05 Nov 2016.