Distributed Newton Can Communicate Less and Resist Byzantine Workers

15 June 2020
Avishek Ghosh, R. Maity, A. Mazumdar
    FedML

Papers citing "Distributed Newton Can Communicate Less and Resist Byzantine Workers"

8 / 8 papers shown

1. Communication Compression for Byzantine Robust Learning: New Efficient Algorithms and Improved Rates
   Ahmad Rammal, Kaja Gruntkowska, Nikita Fedin, Eduard A. Gorbunov, Peter Richtárik
   15 Oct 2023

2. Fundamental Limits of Distributed Optimization over Multiple Access Channel
   Shubham K. Jha
   05 Oct 2023

3. FedREP: A Byzantine-Robust, Communication-Efficient and Privacy-Preserving Framework for Federated Learning
   Yi-Rui Yang, Kun Wang, Wulu Li
   FedML
   09 Mar 2023

4. Robust Distributed Learning Against Both Distributional Shifts and Byzantine Attacks
   Guanqiang Zhou, Ping Xu, Yue Wang, Zhi Tian
   OOD, FedML
   29 Oct 2022

5. Distributed Newton-Type Methods with Communication Compression and Bernoulli Aggregation
   Rustem Islamov, Xun Qian, Slavomír Hanzely, M. Safaryan, Peter Richtárik
   07 Jun 2022

6. Collaborative Linear Bandits with Adversarial Agents: Near-Optimal Regret Bounds
   A. Mitra, Arman Adibi, George J. Pappas, Hamed Hassani
   06 Jun 2022

7. Over-the-Air Federated Learning via Second-Order Optimization
   Peng Yang, Yuning Jiang, Ting Wang, Yong Zhou, Yuanming Shi, Colin N. Jones
   29 Mar 2022

8. Fundamental limits of over-the-air optimization: Are analog schemes optimal?
   Shubham K. Jha, Prathamesh Mayekar, Himanshu Tyagi
   11 Sep 2021