Nearly-tight VC-dimension and pseudodimension bounds for piecewise linear neural networks

8 March 2017
Peter L. Bartlett, Nick Harvey, Christopher Liaw, Abbas Mehrabian

Papers citing "Nearly-tight VC-dimension and pseudodimension bounds for piecewise linear neural networks"

50 / 125 papers shown

VC dimensions of group convolutional neural networks
P. Petersen, A. Sepliarskaia · VLM · 19 Dec 2022

Nonlinear Advantage: Trained Networks Might Not Be As Complex as You Think
Christian H. X. Ali Mehmeti-Göpel, Jan Disselhoff · 30 Nov 2022

Limitations on approximation by deep and shallow neural networks
G. Petrova, P. Wojtaszczyk · 30 Nov 2022

Instance-Dependent Generalization Bounds via Optimal Transport
Songyan Hou, Parnian Kassraie, Anastasis Kratsios, Andreas Krause, Jonas Rothfuss · 02 Nov 2022

Is Out-of-Distribution Detection Learnable?
Zhen Fang, Yixuan Li, Jie Lu, Jiahua Dong, Bo Han, Feng Liu · OODD · 26 Oct 2022

The Curious Case of Benign Memorization
Sotiris Anagnostidis, Gregor Bachmann, Lorenzo Noci, Thomas Hofmann · AAML · 25 Oct 2022

Designing Universal Causal Deep Learning Models: The Case of Infinite-Dimensional Dynamical Systems from Stochastic Analysis
Luca Galimberti, Anastasis Kratsios, Giulia Livieri · OOD · 24 Oct 2022

Why neural networks find simple solutions: the many regularizers of geometric complexity
Benoit Dherin, Michael Munn, M. Rosca, David Barrett · 27 Sep 2022

Improving Self-Supervised Learning by Characterizing Idealized Representations
Yann Dubois, Tatsunori Hashimoto, Stefano Ermon, Percy Liang · SSL · 13 Sep 2022

On the generalization of learning algorithms that do not converge
N. Chandramoorthy, Andreas Loukas, Khashayar Gatmiry, Stefanie Jegelka · MLT · 16 Aug 2022

Large Language Models and the Reverse Turing Test
T. Sejnowski · ELM · 28 Jul 2022

Deep Sufficient Representation Learning via Mutual Information
Siming Zheng, Yuanyuan Lin, Jian Huang · SSL, DRL · 21 Jul 2022

Benefits of Additive Noise in Composing Classes with Bounded Capacity
A. F. Pour, H. Ashtiani · 14 Jun 2022

A general approximation lower bound in $L^p$ norm, with applications to feed-forward neural networks
El Mehdi Achour, Armand Foucault, Sébastien Gerchinovitz, François Malgouyres · 09 Jun 2022

Why Robust Generalization in Deep Learning is Difficult: Perspective of Expressive Power
Binghui Li, Jikai Jin, Han Zhong, John E. Hopcroft, Liwei Wang · OOD · 27 May 2022

Learning ReLU networks to high uniform accuracy is intractable
Julius Berner, Philipp Grohs, F. Voigtlaender · 26 May 2022

Analysis of convolutional neural network image classifiers in a rotationally symmetric model
Michael Kohler, Benjamin Kohler · 11 May 2022

How do noise tails impact on deep ReLU networks?
Jianqing Fan, Yihong Gu, Wen-Xin Zhou · ODL · 20 Mar 2022

Simultaneous Learning of the Inputs and Parameters in Neural Collaborative Filtering
Ramin Raziperchikolaei, Young-joo Chung · 14 Mar 2022

Estimating a regression function in exponential families by model selection
Juntong Chen · 13 Mar 2022

Generalization Through The Lens Of Leave-One-Out Error
Gregor Bachmann, Thomas Hofmann, Aurelien Lucchi · 07 Mar 2022

Designing Universal Causal Deep Learning Models: The Geometric (Hyper)Transformer
Beatrice Acciaio, Anastasis Kratsios, G. Pammer · OOD · 31 Jan 2022

Deep Nonparametric Estimation of Operators between Infinite Dimensional Spaces
Hao Liu, Haizhao Yang, Minshuo Chen, T. Zhao, Wenjing Liao · 01 Jan 2022

Neural networks with linear threshold activations: structure and algorithms
Sammy Khalife, Hongyu Cheng, A. Basu · 15 Nov 2021

On the Equivalence between Neural Network and Support Vector Machine
Yilan Chen, Wei Huang, Lam M. Nguyen, Tsui-Wei Weng · AAML · 11 Nov 2021

Improved Regularization and Robustness for Fine-tuning in Neural Networks
Dongyue Li, Hongyang R. Zhang · NoLa · 08 Nov 2021

Improving Generalization Bounds for VC Classes Using the Hypergeometric Tail Inversion
Jean-Samuel Leboeuf, F. Leblanc, M. Marchand · 29 Oct 2021

Provable Lifelong Learning of Representations
Xinyuan Cao, Weiyang Liu, Santosh Vempala · CLL · 27 Oct 2021

A Deep Generative Approach to Conditional Sampling
Xingyu Zhou, Yuling Jiao, Jin Liu, Jian Huang · 19 Oct 2021

VC dimension of partially quantized neural networks in the overparametrized regime
Yutong Wang, Clayton D. Scott · 06 Oct 2021

Learning the hypotheses space from data through a U-curve algorithm
Diego Marcondes, Adilson Simonis, Junior Barrera · 08 Sep 2021

Robust Nonparametric Regression with Deep Neural Networks
Guohao Shen, Yuling Jiao, Yuanyuan Lin, Jian Huang · OOD · 21 Jul 2021

Learning from scarce information: using synthetic data to classify Roman fine ware pottery
Santos J. Núñez Jareño, Daniël P. van Helden, Evgeny M. Mirkes, I. Tyukin, Penelope Allison · 03 Jul 2021

Neural Network Layer Algebra: A Framework to Measure Capacity and Compression in Deep Learning
Alberto Badías, A. Banerjee · 02 Jul 2021

Deep Generative Learning via Schrödinger Bridge
Gefei Wang, Yuling Jiao, Qiang Xu, Yang Wang, Can Yang · DiffM, OT · 19 Jun 2021

What can linearized neural networks actually say about generalization?
Guillermo Ortiz-Jiménez, Seyed-Mohsen Moosavi-Dezfooli, P. Frossard · 12 Jun 2021

Quantifying and Improving Transferability in Domain Generalization
Guojun Zhang, Han Zhao, Yaoliang Yu, Pascal Poupart · 07 Jun 2021

Sharp bounds for the number of regions of maxout networks and vertices of Minkowski sums
Guido Montúfar, Yue Ren, Leon Zhang · 16 Apr 2021

Deep Nonparametric Regression on Approximate Manifolds: Non-Asymptotic Error Bounds with Polynomial Prefactors
Yuling Jiao, Guohao Shen, Yuanyuan Lin, Jian Huang · 14 Apr 2021

Generalization bounds via distillation
Daniel J. Hsu, Ziwei Ji, Matus Telgarsky, Lan Wang · FedML · 12 Apr 2021

Proof of the Theory-to-Practice Gap in Deep Learning via Sampling Complexity bounds for Neural Network Approximation Spaces
Philipp Grohs, F. Voigtlaender · 06 Apr 2021

Fast Jacobian-Vector Product for Deep Networks
Randall Balestriero, Richard Baraniuk · 01 Apr 2021

Quantitative approximation results for complex-valued neural networks
A. Caragea, D. Lee, J. Maly, G. Pfander, F. Voigtlaender · 25 Feb 2021

Tight Bounds on the Smallest Eigenvalue of the Neural Tangent Kernel for Deep ReLU Networks
Quynh N. Nguyen, Marco Mondelli, Guido Montúfar · 21 Dec 2020

Computational Separation Between Convolutional and Fully-Connected Networks
Eran Malach, Shai Shalev-Shwartz · 03 Oct 2020

The Kolmogorov-Arnold representation theorem revisited
Johannes Schmidt-Hieber · 31 Jul 2020

The Interpolation Phase Transition in Neural Networks: Memorization and Generalization under Lazy Training
Andrea Montanari, Yiqiao Zhong · 25 Jul 2020

Approximation in shift-invariant spaces with deep ReLU neural networks
Yunfei Yang, Zhen Li, Yang Wang · 25 May 2020

Learning the gravitational force law and other analytic functions
Atish Agarwala, Abhimanyu Das, Rina Panigrahy, Qiuyi Zhang · MLT · 15 May 2020

On Deep Instrumental Variables Estimate
Ruiqi Liu, Zuofeng Shang, Guang Cheng · 30 Apr 2020