arXiv:2211.16703
An Efficient Split Fine-tuning Framework for Edge and Cloud Collaborative Learning

30 November 2022

Shaoshuai Shi, Qing Yang, Yang Xiang, Shuhan Qi, Xinyu Wang

Papers citing "An Efficient Split Fine-tuning Framework for Edge and Cloud Collaborative Learning" (9 of 9 papers shown)

  1. Nebula-I: A General Framework for Collaboratively Training Deep Learning Models on Low-Bandwidth Cloud Clusters
     Yang Xiang, Zhihua Wu, Weibao Gong, Siyu Ding, Xianjie Mo, ..., Yue Yu, Ge Li, Yu Sun, Yanjun Ma, Dianhai Yu
     19 May 2022

  2. PaLM: Scaling Language Modeling with Pathways
     Aakanksha Chowdhery, Sharan Narang, Jacob Devlin, Maarten Bosma, Gaurav Mishra, ..., Kathy Meier-Hellstern, Douglas Eck, J. Dean, Slav Petrov, Noah Fiedel
     05 Apr 2022

  3. Structured Pruning Learns Compact and Accurate Models
     Mengzhou Xia, Zexuan Zhong, Danqi Chen
     01 Apr 2022

  4. Efficient Large-Scale Language Model Training on GPU Clusters Using Megatron-LM
     Deepak Narayanan, Mohammad Shoeybi, Jared Casper, P. LeGresley, M. Patwary, ..., Prethvi Kashinkunti, J. Bernauer, Bryan Catanzaro, Amar Phanishayee, Matei A. Zaharia
     09 Apr 2021

  5. SplitEasy: A Practical Approach for Training ML Models on Mobile Devices
     Kamalesh Palanisamy, Vivek Khimani, Moin Hussain Moti, Dimitris Chatzopoulos
     09 Nov 2020

  6. Compressing Deep Neural Networks by Matrix Product Operators
     Ze-Feng Gao, Song Cheng, Rong-Qiang He, Z. Xie, Hui-Hai Zhao, Zhong-Yi Lu, Tao Xiang
     11 Apr 2019

  7. Split Learning for Health: Distributed Deep Learning without Sharing Raw Patient Data
     Praneeth Vepakomma, O. Gupta, Tristan Swedish, Ramesh Raskar
     03 Dec 2018

  8. GLUE: A Multi-Task Benchmark and Analysis Platform for Natural Language Understanding
     Alex Jinpeng Wang, Amanpreet Singh, Julian Michael, Felix Hill, Omer Levy, Samuel R. Bowman
     20 Apr 2018

  9. SQuAD: 100,000+ Questions for Machine Comprehension of Text
     Pranav Rajpurkar, Jian Zhang, Konstantin Lopyrev, Percy Liang
     16 Jun 2016