ResearchTrend.AI

FedZeN: Towards superlinear zeroth-order federated learning via incremental Hessian estimation

arXiv:2309.17174 · 29 September 2023
A. Maritan, S. Dey, Luca Schenato
FedML

Papers citing "FedZeN: Towards superlinear zeroth-order federated learning via incremental Hessian estimation"

10 papers
Does Federated Learning Really Need Backpropagation?
Hao Feng, Tianyu Pang, Chao Du, Wei Chen, Shuicheng Yan, Min Lin
FedML · 28 Jan 2023
Stochastic Zeroth Order Gradient and Hessian Estimators: Variance Reduction and Refined Bias Bounds
Yasong Feng, Tianyu Wang
29 May 2022
Desirable Companion for Vertical Federated Learning: New Zeroth-Order Gradient Based Algorithm
Qingsong Zhang, Bin Gu, Zhiyuan Dang, Cheng Deng, Heng-Chiao Huang
FedML · 19 Mar 2022
SHED: A Newton-type algorithm for federated learning based on incremental Hessian eigenvector sharing
Nicolò Dal Fabbro, S. Dey, M. Rossi, Luca Schenato
FedML · 11 Feb 2022
Communication-Efficient Stochastic Zeroth-Order Optimization for Federated Learning
Wenzhi Fang, Ziyi Yu, Yuning Jiang, Yuanming Shi, Colin N. Jones, Yong Zhou
FedML · 24 Jan 2022
FedNL: Making Newton-Type Methods Applicable to Federated Learning
M. Safaryan, Rustem Islamov, Xun Qian, Peter Richtárik
FedML · 05 Jun 2021
A Primer on Zeroth-Order Optimization in Signal Processing and Machine Learning
Sijia Liu, Pin-Yu Chen, B. Kailkhura, Gaoyuan Zhang, A. Hero III, P. Varshney
11 Jun 2020
Inverting Gradients -- How easy is it to break privacy in federated learning?
Jonas Geiping, Hartmut Bauermeister, Hannah Dröge, Michael Moeller
FedML · 31 Mar 2020
Exact and Inexact Subsampled Newton Methods for Optimization
Raghu Bollapragada, R. Byrd, J. Nocedal
27 Sep 2016
Communication-Efficient Learning of Deep Networks from Decentralized Data
H. B. McMahan, Eider Moore, Daniel Ramage, S. Hampson, Blaise Agüera y Arcas
FedML · 17 Feb 2016