ResearchTrend.AI

arXiv:2201.05500 · Cited By
Communication-Efficient TeraByte-Scale Model Training Framework for Online Advertising
5 January 2022
Weijie Zhao, Xuewu Jiao, Mingqing Hu, Xiaoyun Li, Xinming Zhang, Ping Li
3DV

Papers citing "Communication-Efficient TeraByte-Scale Model Training Framework for Online Advertising"

8 / 8 papers shown
Disaggregated Multi-Tower: Topology-aware Modeling Technique for Efficient Large-Scale Recommendation
Liang Luo, Buyun Zhang, Michael Tsang, Yinbin Ma, Ching-Hsiang Chu, ..., Guna Lakshminarayanan, Ellie Wen, Jongsoo Park, Dheevatsa Mudigere, Maxim Naumov
01 Mar 2024 · 43 · 4 · 0
Analysis of Error Feedback in Federated Non-Convex Optimization with Biased Compression
Xiaoyun Li, Ping Li
FedML
25 Nov 2022 · 34 · 4 · 0
Package for Fast ABC-Boost
Ping Li, Weijie Zhao
18 Jul 2022 · 25 · 6 · 0
On Convergence of FedProx: Local Dissimilarity Invariant Bounds, Non-smoothness and Beyond
Xiao-Tong Yuan, P. Li
FedML
10 Jun 2022 · 18 · 58 · 0
On Distributed Adaptive Optimization with Gradient Compression
Xiaoyun Li, Belhal Karimi, Ping Li
11 May 2022 · 15 · 25 · 0
On the Convergence of Decentralized Adaptive Gradient Methods
Xiangyi Chen, Belhal Karimi, Weijie Zhao, Ping Li
07 Sep 2021 · 21 · 21 · 0
Distributed Hierarchical GPU Parameter Server for Massive Scale Deep Learning Ads Systems
Weijie Zhao, Deping Xie, Ronglai Jia, Yulei Qian, Rui Ding, Mingming Sun, P. Li
MoE
12 Mar 2020 · 59 · 150 · 0
Parameter Hub: a Rack-Scale Parameter Server for Distributed Deep Neural Network Training
Liang Luo, Jacob Nelson, Luis Ceze, Amar Phanishayee, Arvind Krishnamurthy
21 May 2018 · 67 · 120 · 0