Deep Neural Networks as the Semi-classical Limit of Topological Quantum Neural Networks: The problem of generalisation
arXiv:2210.13741 · 25 October 2022
A. Marcianò, De-Wei Chen, Filippo Fabrocini, C. Fields, M. Lulli, Emanuele Zappala
Papers citing "Deep Neural Networks as the Semi-classical Limit of Topological Quantum Neural Networks: The problem of generalisation" (21 / 21 papers shown):
1. Deep learning: a statistical viewpoint. Peter L. Bartlett, Andrea Montanari, Alexander Rakhlin. 16 Mar 2021.
2. Understanding Generalization in Deep Learning via Tensor Methods. Jingling Li, Yanchao Sun, Jiahao Su, Taiji Suzuki, Furong Huang. 14 Jan 2020.
3. Efficient Learning for Deep Quantum Neural Networks. Kerstin Beer, Dmytro Bondarenko, Terry Farrelly, T. Osborne, Robert Salzmann, Ramona Wolf. 27 Feb 2019.
4. Reconciling modern machine learning practice and the bias-variance trade-off. M. Belkin, Daniel J. Hsu, Siyuan Ma, Soumik Mandal. 28 Dec 2018.
5. The Lottery Ticket Hypothesis: Finding Sparse, Trainable Neural Networks. Jonathan Frankle, Michael Carbin. 09 Mar 2018.
6. To understand deep learning we need to understand kernel learning. M. Belkin, Siyuan Ma, Soumik Mandal. 05 Feb 2018.
7. Theory of Deep Learning III: explaining the non-overfitting puzzle. T. Poggio, Kenji Kawaguchi, Q. Liao, Brando Miranda, Lorenzo Rosasco, Xavier Boix, Jack Hidary, H. Mhaskar. 30 Dec 2017.
8. Deep Neural Network Capacity. Aosen Wang, Huan Zhou, Wenyao Xu, Xin Chen. 16 Aug 2017.
9. A PAC-Bayesian Approach to Spectrally-Normalized Margin Bounds for Neural Networks. Behnam Neyshabur, Srinadh Bhojanapalli, Nathan Srebro. 29 Jul 2017.
10. Towards Understanding Generalization of Deep Learning: Perspective of Loss Landscapes. Lei Wu, Zhanxing Zhu, E. Weinan. 30 Jun 2017.
11. Exploring Generalization in Deep Learning. Behnam Neyshabur, Srinadh Bhojanapalli, David A. McAllester, Nathan Srebro. 27 Jun 2017.
12. A Closer Look at Memorization in Deep Networks. Devansh Arpit, Stanislaw Jastrzebski, Nicolas Ballas, David M. Krueger, Emmanuel Bengio, ..., Tegan Maharaj, Asja Fischer, Aaron Courville, Yoshua Bengio, Simon Lacoste-Julien. 16 Jun 2017.
13. Train longer, generalize better: closing the generalization gap in large batch training of neural networks. Elad Hoffer, Itay Hubara, Daniel Soudry. 24 May 2017.
14. Computing Nonvacuous Generalization Bounds for Deep (Stochastic) Neural Networks with Many More Parameters than Training Data. Gintare Karolina Dziugaite, Daniel M. Roy. 31 Mar 2017.
15. Sharp Minima Can Generalize For Deep Nets. Laurent Dinh, Razvan Pascanu, Samy Bengio, Yoshua Bengio. 15 Mar 2017.
16. Opening the Black Box of Deep Neural Networks via Information. Ravid Shwartz-Ziv, Naftali Tishby. 02 Mar 2017.
17. Generalization and Equilibrium in Generative Adversarial Nets (GANs). Sanjeev Arora, Rong Ge, Yingyu Liang, Tengyu Ma, Yi Zhang. 02 Mar 2017.
18. Understanding deep learning requires rethinking generalization. Chiyuan Zhang, Samy Bengio, Moritz Hardt, Benjamin Recht, Oriol Vinyals. 10 Nov 2016.
19. On Large-Batch Training for Deep Learning: Generalization Gap and Sharp Minima. N. Keskar, Dheevatsa Mudigere, J. Nocedal, M. Smelyanskiy, P. T. P. Tang. 15 Sep 2016.
20. Why does deep and cheap learning work so well? Henry W. Lin, Max Tegmark, David Rolnick. 29 Aug 2016.
21. In Search of the Real Inductive Bias: On the Role of Implicit Regularization in Deep Learning. Behnam Neyshabur, Ryota Tomioka, Nathan Srebro. 20 Dec 2014.