Compression based bound for non-compressed network: unified generalization error analysis of large compressible deep neural network
arXiv:1909.11274

Taiji Suzuki, Hiroshi Abe, Tomoaki Nishimura (25 September 2019)

Papers citing "Compression based bound for non-compressed network: unified generalization error analysis of large compressible deep neural network"

14 / 14 papers shown
How DNNs break the Curse of Dimensionality: Compositionality and Symmetry Learning
Arthur Jacot, Seok Hoan Choi, Yuxiao Wen (08 Jul 2024)
Generalization Guarantees via Algorithm-dependent Rademacher Complexity
Sarah Sachs, T. Erven, Liam Hodgkinson, Rajiv Khanna, Umut Şimşekli (04 Jul 2023)
Proximity to Losslessly Compressible Parameters
Matthew Farrugia-Roberts (05 Jun 2023)
Koopman-based generalization bound: New aspect for full-rank weights
Yuka Hashimoto, Sho Sonoda, Isao Ishikawa, Atsushi Nitanda, Taiji Suzuki (12 Feb 2023)
Generalization Bounds with Data-dependent Fractal Dimensions
Benjamin Dupuis, George Deligiannidis, Umut Şimşekli (06 Feb 2023)
Neural Networks Efficiently Learn Low-Dimensional Representations with SGD
Alireza Mousavi-Hosseini, Sejun Park, M. Girotti, Ioannis Mitliagkas, Murat A. Erdogdu (29 Sep 2022)
Deep neural networks with dependent weights: Gaussian Process mixture limit, heavy tails, sparsity and compressibility
Hoileong Lee, Fadhel Ayed, Paul Jung, Juho Lee, Hongseok Yang, François Caron (17 May 2022)
The Combinatorial Brain Surgeon: Pruning Weights That Cancel One Another in Neural Networks
Xin Yu, Thiago Serra, Srikumar Ramalingam, Shandian Zhe (09 Mar 2022)
Rate-Distortion Theoretic Generalization Bounds for Stochastic Learning Algorithms
Romain Chor, A. Gohari, Gaël Richard, Umut Şimşekli (04 Mar 2022)
Intrinsic Dimension, Persistent Homology and Generalization in Neural Networks
Tolga Birdal, Aaron Lou, Leonidas J. Guibas, Umut Şimşekli (25 Nov 2021)
Fractal Structure and Generalization Properties of Stochastic Optimization Algorithms
A. Camuto, George Deligiannidis, Murat A. Erdogdu, Mert Gurbuzbalaban, Umut Şimşekli, Lingjiong Zhu (09 Jun 2021)
Generalization bounds via distillation
Daniel J. Hsu, Ziwei Ji, Matus Telgarsky, Lan Wang (12 Apr 2021)
Decomposable-Net: Scalable Low-Rank Compression for Neural Networks
A. Yaguchi, Taiji Suzuki, Shuhei Nitta, Y. Sakata, A. Tanizawa (29 Oct 2019)
Norm-Based Capacity Control in Neural Networks
Behnam Neyshabur, Ryota Tomioka, Nathan Srebro (27 Feb 2015)