
arXiv:2010.08899
Training Recommender Systems at Scale: Communication-Efficient Model and Data Parallelism

18 October 2020
Vipul Gupta, Dhruv Choudhary, P. T. P. Tang, Xiaohan Wei, Xing Wang, Yuzhen Huang, A. Kejariwal, Kannan Ramchandran, Michael W. Mahoney

Papers citing "Training Recommender Systems at Scale: Communication-Efficient Model and Data Parallelism"

7 / 7 papers shown
Integrating LLMs with ITS: Recent Advances, Potentials, Challenges, and Future Directions
Doaa Mahmud, Hadeel Hajmohamed, Shamma Almentheri, Shamma Alqaydi, Lameya Aldhaheri, R. A. Khalil, Nasir Saeed
AI4TS · 08 Jan 2025

GraphScale: A Framework to Enable Machine Learning over Billion-node Graphs
Vipul Gupta, Xin Chen, Ruoyun Huang, Fanlong Meng, Jianjun Chen, Yujun Yan
GNN · 22 Jul 2024

Merlin HugeCTR: GPU-accelerated Recommender System Training and Inference
Zehuan Wang, Yingcan Wei, Minseok Lee, Matthias Langer, F. Yu, ..., Daniel G. Abel, Xu Guo, Jianbing Dong, Ji Shi, Kunlun Li
GNN, LRM · 17 Oct 2022

BagPipe: Accelerating Deep Recommendation Model Training
Saurabh Agarwal, Chengpo Yan, Ziyi Zhang, Shivaram Venkataraman
24 Feb 2022

A Machine Learning Framework for Distributed Functional Compression over Wireless Channels in IoT
Yashas Malur Saidutta, Afshin Abdi, Faramarz Fekri
AI4CE · 24 Jan 2022

Training Large-Scale News Recommenders with Pretrained Language Models in the Loop
Shitao Xiao, Zheng Liu, Yingxia Shao, Tao Di, Xing Xie
VLM, AIFin · 18 Feb 2021

On Large-Batch Training for Deep Learning: Generalization Gap and Sharp Minima
N. Keskar, Dheevatsa Mudigere, J. Nocedal, M. Smelyanskiy, P. T. P. Tang
ODL · 15 Sep 2016