Subdominant Dense Clusters Allow for Simple Learning and High Computational Performance in Neural Networks with Discrete Synapses

Carlo Baldassi, Alessandro Ingrosso, Carlo Lucibello, Luca Saglietti, R. Zecchina
18 September 2015 · arXiv:1509.05753

Papers citing "Subdominant Dense Clusters Allow for Simple Learning and High Computational Performance in Neural Networks with Discrete Synapses"

50 / 59 papers shown
  • High-dimensional manifold of solutions in neural networks: insights from statistical physics
    Enrico M. Malatesta · 20 Feb 2025
  • How high dimensional neural dynamics are confined in phase space
    Shishe Wang, Haiping Huang · AI4CE · 25 Oct 2024
  • Bilinear Sequence Regression: A Model for Learning from Long Sequences of High-dimensional Tokens
    Vittorio Erba, Emanuele Troiani, Luca Biggio, Antoine Maillard, Lenka Zdeborová · 24 Oct 2024
  • Exact full-RSB SAT/UNSAT transition in infinitely wide two-layer neural networks
    B. Annesi, Enrico M. Malatesta, Francesco Zamponi · 09 Oct 2024
  • Flat Posterior Does Matter For Bayesian Model Averaging
    Sungjun Lim, Jeyoon Yeom, Sooyon Kim, Hoyoon Byun, Jinho Kang, Yohan Jung, Jiyoung Jung, Kyungwoo Song · BDL, AAML · 21 Jun 2024
  • BOLD: Boolean Logic Deep Learning
    Van Minh Nguyen, Cristian Ocampo, Aymen Askri, Louis Leconte, Ba-Hien Tran · AI4CE · 25 May 2024
  • Strong convexity-guided hyper-parameter optimization for flatter losses
    Rahul Yedida, Snehanshu Saha · 07 Feb 2024
  • Boolean Variation and Boolean Logic BackPropagation
    Van Minh Nguyen · 13 Nov 2023
  • Entropy-MCMC: Sampling from Flat Basins with Ease
    Bolian Li, Ruqi Zhang · 09 Oct 2023
  • Learning Capacity: A Measure of the Effective Dimensionality of a Model
    Daiwei Chen, Wei-Di Chang, Pratik Chaudhari · 27 May 2023
  • Typical and atypical solutions in non-convex neural networks with discrete and continuous weights
    Carlo Baldassi, Enrico M. Malatesta, Gabriele Perugini, R. Zecchina · MQ · 26 Apr 2023
  • Bayes Complexity of Learners vs Overfitting
    Grzegorz Gluch, R. Urbanke · UQCV, BDL · 13 Mar 2023
  • Deep Networks on Toroids: Removing Symmetries Reveals the Structure of Flat Regions in the Landscape Geometry
    Fabrizio Pittorino, Antonio Ferraro, Gabriele Perugini, Christoph Feinauer, Carlo Baldassi, R. Zecchina · 07 Feb 2022
  • Low-Pass Filtering SGD for Recovering Flat Optima in the Deep Learning Optimization Landscape
    Devansh Bisla, Jing Wang, A. Choromańska · 20 Jan 2022
  • Equivalence between algorithmic instability and transition to replica symmetry breaking in perceptron learning systems
    Yang Zhao, Junbin Qiu, Mingshan Xie, Haiping Huang · 26 Nov 2021
  • Binary perceptron: efficient algorithms can find solutions in a rare well-connected cluster
    Emmanuel Abbe, Shuangping Li, Allan Sly · MQ · 04 Nov 2021
  • Deep learning via message passing algorithms based on belief propagation
    Carlo Lucibello, Fabrizio Pittorino, Gabriele Perugini, R. Zecchina · 27 Oct 2021
  • Learning through atypical "phase transitions" in overparameterized neural networks
    Carlo Baldassi, Clarissa Lauditi, Enrico M. Malatesta, R. Pacelli, Gabriele Perugini, R. Zecchina · 01 Oct 2021
  • Perturbated Gradients Updating within Unit Space for Deep Learning
    Ching-Hsun Tseng, Liu Cheng, Shin-Jye Lee, Xiaojun Zeng · 01 Oct 2021
  • Entropic alternatives to initialization
    Daniele Musso · 16 Jul 2021
  • PAC Bayesian Performance Guarantees for Deep (Stochastic) Networks in Medical Imaging
    Anthony Sicilia, Xingchen Zhao, Anastasia Sosnovskikh, Seong Jae Hwang · BDL, UQCV · 12 Apr 2021
  • Proof of the Contiguity Conjecture and Lognormal Limit for the Symmetric Perceptron
    Emmanuel Abbe, Shuangping Li, Allan Sly · 25 Feb 2021
  • SALR: Sharpness-aware Learning Rate Scheduler for Improved Generalization
    Xubo Yue, Maher Nouiehed, Raed Al Kontar · ODL · 10 Nov 2020
  • SGB: Stochastic Gradient Bound Method for Optimizing Partition Functions
    Junchang Wang, A. Choromańska · 03 Nov 2020
  • Partial local entropy and anisotropy in deep weight spaces
    Daniele Musso · 17 Jul 2020
  • Data-driven effective model shows a liquid-like deep learning
    Wenxuan Zou, Haiping Huang · 16 Jul 2020
  • Entropic gradient descent algorithms and wide flat minima
    Fabrizio Pittorino, Carlo Lucibello, Christoph Feinauer, Gabriele Perugini, Carlo Baldassi, Elizaveta Demyanenko, R. Zecchina · ODL, MLT · 14 Jun 2020
  • Large deviations for the perceptron model and consequences for active learning
    Hugo Cui, Luca Saglietti, Lenka Zdeborová · 09 Dec 2019
  • Clustering of solutions in the symmetric binary perceptron
    Carlo Baldassi, R. D. Vecchia, Carlo Lucibello, R. Zecchina · 15 Nov 2019
  • Mean-field inference methods for neural networks
    Marylou Gabrié · AI4CE · 03 Nov 2019
  • Properties of the geometry of solutions and capacity of multi-layer neural networks with Rectified Linear Units activations
    Carlo Baldassi, Enrico M. Malatesta, R. Zecchina · MLT · 17 Jul 2019
  • Shaping the learning landscape in neural networks around wide flat minima
    Carlo Baldassi, Fabrizio Pittorino, R. Zecchina · MLT · 20 May 2019
  • MUSCO: Multi-Stage Compression of neural networks
    Julia Gusak, Maksym Kholiavchenko, E. Ponomarev, L. Markeeva, Ivan Oseledets, A. Cichocki · 24 Mar 2019
  • Active online learning in the binary perceptron problem
    Haijun Zhou · 21 Feb 2019
  • Variational Characterizations of Local Entropy and Heat Regularization in Deep Learning
    Nicolas García Trillos, Zachary T. Kaplan, D. Sanz-Alonso · ODL · 29 Jan 2019
  • Optimization of neural networks via finite-value quantum fluctuations
    Masayuki Ohzeki, Shuntaro Okada, Masayoshi Terabe, S. Taguchi · 01 Jul 2018
  • EasyConvPooling: Random Pooling with Easy Convolution for Accelerating Training and Testing
    Jianzhong Sheng, Chuanbo Chen, Chenchen Fu, Chun Jason Xue · 05 Jun 2018
  • Non-Vacuous Generalization Bounds at the ImageNet Scale: A PAC-Bayesian Compression Approach
    Wenda Zhou, Victor Veitch, Morgane Austern, Ryan P. Adams, Peter Orbanz · 16 Apr 2018
  • Iterative Low-Rank Approximation for CNN Compression
    Maksym Kholiavchenko · 23 Mar 2018
  • An Optimal Control Approach to Deep Learning and Applications to Discrete-Weight Neural Networks
    Qianxiao Li, Shuji Hao · 04 Mar 2018
  • Entropy-SGD optimizes the prior of a PAC-Bayes bound: Generalization properties of Entropy-SGD and data-dependent priors
    Gintare Karolina Dziugaite, Daniel M. Roy · MLT · 26 Dec 2017
  • Convergent Block Coordinate Descent for Training Tikhonov Regularized Deep Neural Networks
    Ziming Zhang, M. Brand · 20 Nov 2017
  • Stochastic gradient descent performs variational inference, converges to limit cycles for deep networks
    Pratik Chaudhari, Stefano Soatto · MLT · 30 Oct 2017
  • On the role of synaptic stochasticity in training low-precision neural networks
    Carlo Baldassi, Federica Gerace, H. Kappen, Carlo Lucibello, Luca Saglietti, Enzo Tartaglione, R. Zecchina · 26 Oct 2017
  • Stochastic Backward Euler: An Implicit Gradient Descent Algorithm for $k$-means Clustering
    Penghang Yin, Minh Pham, Adam M. Oberman, Stanley Osher · FedML · 21 Oct 2017
  • Towards Understanding Generalization of Deep Learning: Perspective of Loss Landscapes
    Lei Wu, Zhanxing Zhu, E. Weinan · ODL · 30 Jun 2017
  • Efficiency of quantum versus classical annealing in non-convex learning problems
    Carlo Baldassi, R. Zecchina · 26 Jun 2017
  • Training Quantized Nets: A Deeper Understanding
    Hao Li, Soham De, Zheng Xu, Christoph Studer, H. Samet, Tom Goldstein · MQ · 07 Jun 2017
  • Deep Relaxation: partial differential equations for optimizing deep neural networks
    Pratik Chaudhari, Adam M. Oberman, Stanley Osher, Stefano Soatto, G. Carlier · 17 Apr 2017
  • Computing Nonvacuous Generalization Bounds for Deep (Stochastic) Neural Networks with Many More Parameters than Training Data
    Gintare Karolina Dziugaite, Daniel M. Roy · 31 Mar 2017