ResearchTrend.AI

On the Convergence of Shallow Neural Network Training with Randomly Masked Neurons

Fangshuo Liao
Anastasios Kyrillidis
5 December 2021 · arXiv:2112.02668

Papers citing "On the Convergence of Shallow Neural Network Training with Randomly Masked Neurons"

13 / 13 papers shown
Everything, Everywhere, All at Once: Is Mechanistic Interpretability Identifiable?
Maxime Méloux
Silviu Maniu
François Portet
Maxime Peyrard
28 Feb 2025
FedP3: Federated Personalized and Privacy-friendly Network Pruning under Model Heterogeneity
Kai Yi
Nidham Gazagnadou
Peter Richtárik
Lingjuan Lyu
15 Apr 2024
Federated Learning Over Images: Vertical Decompositions and Pre-Trained Backbones Are Difficult to Beat
Erdong Hu
Yu-Shuen Tang
Anastasios Kyrillidis
C. Jermaine
06 Sep 2023
Towards a Better Theoretical Understanding of Independent Subnetwork Training
Egor Shulgin
Peter Richtárik
28 Jun 2023
MIRACLE: Multi-task Learning based Interpretable Regulation of Autoimmune Diseases through Common Latent Epigenetics
Pengcheng Xu
Jinpu Cai
Yulin Gao
Ziqi Rong
24 Jun 2023
Xtreme Margin: A Tunable Loss Function for Binary Classification Problems
Rayan Wali
31 Oct 2022
LOFT: Finding Lottery Tickets through Filter-wise Training
Qihan Wang
Chen Dun
Fangshuo Liao
C. Jermaine
Anastasios Kyrillidis
28 Oct 2022
Efficient and Light-Weight Federated Learning via Asynchronous Distributed Dropout
Chen Dun
Mirian Hipolito Garcia
C. Jermaine
Dimitrios Dimitriadis
Anastasios Kyrillidis
28 Oct 2022
On the Neural Tangent Kernel Analysis of Randomly Pruned Neural Networks
Hongru Yang
Zhangyang Wang
27 Mar 2022
Masked Training of Neural Networks with Partial Gradients
Amirkeivan Mohtashami
Martin Jaggi
Sebastian U. Stich
16 Jun 2021
GIST: Distributed Training for Large-Scale Graph Convolutional Networks
Cameron R. Wolfe
Jingkang Yang
Arindam Chowdhury
Chen Dun
Artun Bayer
Santiago Segarra
Anastasios Kyrillidis
20 Feb 2021
On the Proof of Global Convergence of Gradient Descent for Deep ReLU Networks with Linear Widths
Quynh N. Nguyen
24 Jan 2021
Dropout as a Bayesian Approximation: Representing Model Uncertainty in Deep Learning
Y. Gal
Zoubin Ghahramani
06 Jun 2015