Theoretical Issues in Deep Networks: Approximation, Optimization and Generalization (arXiv:1908.09375)

T. Poggio, Andrzej Banburski, Q. Liao
25 August 2019
Community: ODL

Papers citing "Theoretical Issues in Deep Networks: Approximation, Optimization and Generalization"

50 of 72 citing papers shown:
  • Enhancing Physics-Informed Neural Networks with a Hybrid Parallel Kolmogorov-Arnold and MLP Architecture
    Zuyu Xu, Bin Lv (30 Mar 2025)
  • A Genetic Algorithm-Based Approach for Automated Optimization of Kolmogorov-Arnold Networks in Classification Tasks
    Quan Long, Bin Wang, Bing Xue, Mengjie Zhang (29 Jan 2025)
  • Efficiency Bottlenecks of Convolutional Kolmogorov-Arnold Networks: A Comprehensive Scrutiny with ImageNet, AlexNet, LeNet and Tabular Classification
    Ashim Dahal, Saydul Akbar Murad, Nick Rahimi (27 Jan 2025)
  • Dissecting a Small Artificial Neural Network
    Xiguang Yang, Krish Arora, Michael Bachmann (03 Jan 2025)
  • Gradient Boosting Trees and Large Language Models for Tabular Data Few-Shot Learning
    Carlos Huertas (06 Nov 2024) [LMTD]
  • A resource-efficient model for deep kernel learning
    Luisa D'Amore (13 Oct 2024)
  • KAN: Kolmogorov-Arnold Networks
    Ziming Liu, Yixuan Wang, Sachin Vaidya, Fabian Ruehle, James Halverson, Marin Soljacic, Thomas Y. Hou, Max Tegmark (30 Apr 2024)
  • Training all-mechanical neural networks for task learning through in situ backpropagation
    Shuaifeng Li, Xiaoming Mao (23 Apr 2024) [AI4CE]
  • Generative Subspace Adversarial Active Learning for Outlier Detection in Multiple Views of High-dimensional Data
    Jose Cribeiro-Ramallo, Vadim Arzamasov, Federico Matteucci, Denis Wambold, Klemens Böhm (20 Apr 2024)
  • A Unified Kernel for Neural Network Learning
    Shao-Qun Zhang, Zong-Yi Chen, Yong-Ming Tian, Xun Lu (26 Mar 2024)
  • Beyond Single-Model Views for Deep Learning: Optimization versus Generalizability of Stochastic Optimization Algorithms
    Toki Tahmid Inan, Mingrui Liu, Amarda Shehu (01 Mar 2024)
  • Surfing the modeling of PoS taggers in low-resource scenarios
    M. Ferro, V. Darriba, F. J. Ribadas, J. G. Gil (04 Feb 2024)
  • Understanding and Leveraging the Learning Phases of Neural Networks
    Johannes Schneider, Mohit Prabhushankar (11 Dec 2023) [AI4CE]
  • Efficient Neural Networks for Tiny Machine Learning: A Comprehensive Review
    M. Lê, Pierre Wolinski, Julyan Arbel (20 Nov 2023)
  • Fundamental Limits of Deep Learning-Based Binary Classifiers Trained with Hinge Loss
    T. Getu, Georges Kaddoum, M. Bennis (13 Sep 2023)
  • Neural Hilbert Ladders: Multi-Layer Neural Networks in Function Space
    Zhengdao Chen (03 Jul 2023)
  • Why do CNNs excel at feature extraction? A mathematical explanation
    V. Nandakumar, Arush Tagade, Tongliang Liu (03 Jul 2023) [FAtt]
  • Homological Neural Networks: A Sparse Architecture for Multivariate Complexity
    Yuanrong Wang, Antonio Briola, T. Aste (27 Jun 2023)
  • Evaluating Machine Learning Models with NERO: Non-Equivariance Revealed on Orbits
    Zhuokai Zhao, Takumi Matsuzawa, W. Irvine, Michael Maire, G. Kindlmann (31 May 2023)
  • Learning Capacity: A Measure of the Effective Dimensionality of a Model
    Daiwei Chen, Wei-Di Chang, Pratik Chaudhari (27 May 2023)
  • Performance Limits of a Deep Learning-Enabled Text Semantic Communication under Interference
    T. Getu, Walid Saad, Georges Kaddoum, M. Bennis (15 Feb 2023)
  • Implicit regularization in Heavy-ball momentum accelerated stochastic gradient descent
    Avrajit Ghosh, He Lyu, Xitong Zhang, Rongrong Wang (02 Feb 2023)
  • Deep networks for system identification: a Survey
    G. Pillonetto, Aleksandr Aravkin, Daniel Gedon, L. Ljung, Antônio H. Ribeiro, Thomas B. Schön (30 Jan 2023) [OOD]
  • Norm-based Generalization Bounds for Compositionally Sparse Neural Networks
    Tomer Galanti, Mengjia Xu, Liane Galanti, T. Poggio (28 Jan 2023)
  • Deep Learning Meets Sparse Regularization: A Signal Processing Perspective
    Rahul Parhi, Robert D. Nowak (23 Jan 2023)
  • Exploring the Approximation Capabilities of Multiplicative Neural Networks for Smooth Functions
    Ido Ben-Shaul, Tomer Galanti, S. Dekel (11 Jan 2023)
  • A Dynamics Theory of Implicit Regularization in Deep Low-Rank Matrix Factorization
    Jian-Peng Cao, Chao Qian, Yihui Huang, Dicheng Chen, Yuncheng Gao, Jiyang Dong, D. Guo, X. Qu (29 Dec 2022)
  • Quantum Policy Gradient Algorithm with Optimized Action Decoding
    Nico Meyer, Daniel D. Scherer, Axel Plinge, Christopher Mutschler, M. Hartmann (13 Dec 2022)
  • Super-model ecosystem: A domain-adaptation perspective
    Fengxiang He, Dacheng Tao (30 Aug 2022) [DiffM]
  • What Can Be Learnt With Wide Convolutional Neural Networks?
    Francesco Cagnetta, Alessandro Favero, M. Wyart (01 Aug 2022) [MLT]
  • Biologically Plausible Training of Deep Neural Networks Using a Top-down Credit Assignment Network
    Jian-Hui Chen, Cheng-Lin Liu, Zuoren Wang (01 Aug 2022)
  • Blind Estimation of a Doubly Selective OFDM Channel: A Deep Learning Algorithm and Theory
    T. Getu, N. Golmie, D. Griffith (30 May 2022)
  • A Falsificationist Account of Artificial Neural Networks
    O. Buchholz, Eric Raidl (03 May 2022) [AI4CE]
  • On the influence of over-parameterization in manifold based surrogates and deep neural operators
    Katiana Kontolati, S. Goswami, Michael D. Shields, George Karniadakis (09 Mar 2022)
  • Explicit Regularization via Regularizer Mirror Descent
    Navid Azizan, Sahin Lale, B. Hassibi (22 Feb 2022)
  • The learning phases in NN: From Fitting the Majority to Fitting a Few
    Johannes Schneider (16 Feb 2022)
  • Neural Capacitance: A New Perspective of Neural Network Selection via Edge Dynamics
    Chunheng Jiang, Tejaswini Pedapati, Pin-Yu Chen, Yizhou Sun, Jianxi Gao (11 Jan 2022)
  • Federated Optimization of Smooth Loss Functions
    Ali Jadbabaie, A. Makur, Devavrat Shah (06 Jan 2022) [FedML]
  • On the Role of Neural Collapse in Transfer Learning
    Tomer Galanti, András György, Marcus Hutter (30 Dec 2021) [SSL]
  • Towards the One Learning Algorithm Hypothesis: A System-theoretic Approach
    Christos N. Mavridis, John S. Baras (04 Dec 2021)
  • Error Bounds for a Matrix-Vector Product Approximation with Deep ReLU Neural Networks
    T. Getu (25 Nov 2021)
  • Conditionally Gaussian PAC-Bayes
    Eugenio Clerico, George Deligiannidis, Arnaud Doucet (22 Oct 2021)
  • The Tensor Brain: A Unified Theory of Perception, Memory and Semantic Decoding
    Volker Tresp, Sahand Sharifzadeh, Hang Li, Dario Konopatzki, Yunpu Ma (27 Sep 2021)
  • Task Guided Compositional Representation Learning for ZDA
    Shuang Liu, Mete Ozay (13 Sep 2021) [OOD]
  • Towards Understanding Theoretical Advantages of Complex-Reaction Networks
    Shao-Qun Zhang, Gaoxin Wei, Zhi-Hua Zhou (15 Aug 2021)
  • Distribution of Classification Margins: Are All Data Equal?
    Andrzej Banburski, Fernanda De La Torre, Nishka Pant, Ishana Shastri, T. Poggio (21 Jul 2021)
  • Convergence rates for shallow neural networks learned by gradient descent
    Alina Braun, Michael Kohler, S. Langer, Harro Walk (20 Jul 2021)
  • Estimation of a regression function on a manifold by fully connected deep neural networks
    Michael Kohler, S. Langer, U. Reif (20 Jul 2021)
  • Theory of Deep Convolutional Neural Networks III: Approximating Radial Functions
    Tong Mao, Zhongjie Shi, Ding-Xuan Zhou (02 Jul 2021)
  • Information Bottleneck: Exact Analysis of (Quantized) Neural Networks
    S. Lorenzen, Christian Igel, M. Nielsen (24 Jun 2021) [MQ]