ResearchTrend.AI
Statistical Guarantees for Regularized Neural Networks
arXiv:2006.00294, 30 May 2020
Mahsa Taheri, Fang Xie, Johannes Lederer

Papers citing "Statistical Guarantees for Regularized Neural Networks"

12 / 12 papers shown

  • Regularization can make diffusion models more efficient. Mahsa Taheri, Johannes Lederer. 13 Feb 2025.
  • Fast Training of Sinusoidal Neural Fields via Scaling Initialization. Taesun Yeom, Sangyoon Lee, Jaeho Lee. 07 Oct 2024.
  • Better Representations via Adversarial Training in Pre-Training: A Theoretical Perspective. Yue Xing, Xiaofeng Lin, Qifan Song, Yi Tian Xu, Belinda Zeng, Guang Cheng. [SSL] 26 Jan 2024.
  • Statistical learning by sparse deep neural networks. Felix Abramovich. [BDL] 15 Nov 2023.
  • Statistical guarantees for sparse deep learning. Johannes Lederer. 11 Dec 2022.
  • Statistical Guarantees for Approximate Stationary Points of Simple Neural Networks. Mahsa Taheri, Fang Xie, Johannes Lederer. 09 May 2022.
  • A PAC-Bayes oracle inequality for sparse neural networks. Maximilian F. Steffen, Mathias Trabs. [UQCV] 26 Apr 2022.
  • Non-Asymptotic Guarantees for Robust Statistical Learning under Infinite Variance Assumption. Lihu Xu, Fang Yao, Qiuran Yao, Huiming Zhang. 10 Jan 2022.
  • Function approximation by deep neural networks with parameters $\{0, \pm\frac{1}{2}, \pm 1, 2\}$. A. Beknazaryan. 15 Mar 2021.
  • Layer Sparsity in Neural Networks. Mohamed Hebiri, Johannes Lederer. 28 Jun 2020.
  • SegNet: A Deep Convolutional Encoder-Decoder Architecture for Image Segmentation. Vijay Badrinarayanan, Alex Kendall, R. Cipolla. [SSeg] 02 Nov 2015.
  • Norm-Based Capacity Control in Neural Networks. Behnam Neyshabur, Ryota Tomioka, Nathan Srebro. 27 Feb 2015.