
Unreasonable Effectiveness of Learning Neural Networks: From Accessible States and Robust Ensembles to Basic Algorithmic Schemes (arXiv:1605.06444)
20 May 2016
Carlo Baldassi, C. Borgs, J. Chayes, Alessandro Ingrosso, C. Lucibello, Luca Saglietti, R. Zecchina

Papers citing "Unreasonable Effectiveness of Learning Neural Networks: From Accessible States and Robust Ensembles to Basic Algorithmic Schemes"

24 papers shown

Entropy-Guided Sampling of Flat Modes in Discrete Spaces
Pinaki Mohanty, Riddhiman Bhattacharya, Ruqi Zhang (05 May 2025)

High-dimensional manifold of solutions in neural networks: insights from statistical physics
Enrico M. Malatesta (20 Feb 2025)

Bilinear Sequence Regression: A Model for Learning from Long Sequences of High-dimensional Tokens
Vittorio Erba, Emanuele Troiani, Luca Biggio, Antoine Maillard, Lenka Zdeborová (24 Oct 2024)

Fundamental Limits of Deep Learning-Based Binary Classifiers Trained with Hinge Loss
T. Getu, Georges Kaddoum, M. Bennis (13 Sep 2023)

Quantifying Relevance in Learning and Inference
M. Marsili, Y. Roudi (01 Feb 2022)

Binary perceptron: efficient algorithms can find solutions in a rare well-connected cluster
Emmanuel Abbe, Shuangping Li, Allan Sly (04 Nov 2021) [MQ]

Learning through atypical "phase transitions" in overparameterized neural networks
Carlo Baldassi, Clarissa Lauditi, Enrico M. Malatesta, R. Pacelli, Gabriele Perugini, R. Zecchina (01 Oct 2021)

Entropic alternatives to initialization
Daniele Musso (16 Jul 2021)

Unveiling the structure of wide flat minima in neural networks
Carlo Baldassi, Clarissa Lauditi, Enrico M. Malatesta, Gabriele Perugini, R. Zecchina (02 Jul 2021)

Some Remarks on Replicated Simulated Annealing
Vincent Gripon, Matthias Löwe, Franck Vermet (30 Sep 2020)

Entropic gradient descent algorithms and wide flat minima
Fabrizio Pittorino, C. Lucibello, Christoph Feinauer, Gabriele Perugini, Carlo Baldassi, Elizaveta Demyanenko, R. Zecchina (14 Jun 2020) [ODL, MLT]

How to iron out rough landscapes and get optimal performances: Averaged Gradient Descent and its application to tensor PCA
Giulio Biroli, C. Cammarota, F. Ricci-Tersenghi (29 May 2019)

Shaping the learning landscape in neural networks around wide flat minima
Carlo Baldassi, Fabrizio Pittorino, R. Zecchina (20 May 2019) [MLT]

Implicit Self-Regularization in Deep Neural Networks: Evidence from Random Matrix Theory and Implications for Learning
Charles H. Martin, Michael W. Mahoney (02 Oct 2018) [AI4CE]

Optimization of neural networks via finite-value quantum fluctuations
Masayuki Ohzeki, Shuntaro Okada, Masayoshi Terabe, S. Taguchi (01 Jul 2018)

Non-Vacuous Generalization Bounds at the ImageNet Scale: A PAC-Bayesian Compression Approach
Wenda Zhou, Victor Veitch, Morgane Austern, Ryan P. Adams, Peter Orbanz (16 Apr 2018)

Comparing Dynamics: Deep Neural Networks versus Glassy Systems
Marco Baity-Jesi, Levent Sagun, Mario Geiger, S. Spigler, Gerard Ben Arous, C. Cammarota, Yann LeCun, M. Wyart, Giulio Biroli (19 Mar 2018) [AI4CE]

An Optimal Control Approach to Deep Learning and Applications to Discrete-Weight Neural Networks
Qianxiao Li, Shuji Hao (04 Mar 2018)

Entropy-SGD optimizes the prior of a PAC-Bayes bound: Generalization properties of Entropy-SGD and data-dependent priors
Gintare Karolina Dziugaite, Daniel M. Roy (26 Dec 2017) [MLT]

A trans-disciplinary review of deep learning research for water resources scientists
Chaopeng Shen (06 Dec 2017) [AI4CE]

Optimal Errors and Phase Transitions in High-Dimensional Generalized Linear Models
Jean Barbier, Florent Krzakala, N. Macris, Léo Miolane, Lenka Zdeborová (10 Aug 2017)

Deep Relaxation: partial differential equations for optimizing deep neural networks
Pratik Chaudhari, Adam M. Oberman, Stanley Osher, Stefano Soatto, G. Carlier (17 Apr 2017)

Computing Nonvacuous Generalization Bounds for Deep (Stochastic) Neural Networks with Many More Parameters than Training Data
Gintare Karolina Dziugaite, Daniel M. Roy (31 Mar 2017)

On the energy landscape of deep networks
Pratik Chaudhari, Stefano Soatto (20 Nov 2015) [ODL]