1510.03528
Cited By
$\ell_1$-regularized Neural Networks are Improperly Learnable in Polynomial Time
13 October 2015
Yuchen Zhang
J. Lee
Michael I. Jordan
Papers citing
"$\ell_1$-regularized Neural Networks are Improperly Learnable in Polynomial Time"
50 / 67 papers shown
Learning Neural Networks with Distribution Shift: Efficiently Certifiable Guarantees
Gautam Chandrasekaran
Adam R. Klivans
Lin Lin Lee
Konstantinos Stavropoulos
OOD
40
0
0
22 Feb 2025
A New Random Reshuffling Method for Nonsmooth Nonconvex Finite-sum Optimization
Junwen Qiu
Xiao Li
Andre Milzarek
39
2
0
02 Dec 2023
On the Convergence and Sample Complexity Analysis of Deep Q-Networks with $\varepsilon$-Greedy Exploration
Shuai Zhang
Hongkang Li
Meng Wang
Miao Liu
Pin-Yu Chen
Songtao Lu
Sijia Liu
K. Murugesan
Subhajit Chaudhury
40
19
0
24 Oct 2023
Efficiently Learning One-Hidden-Layer ReLU Networks via Schur Polynomials
Ilias Diakonikolas
D. Kane
32
4
0
24 Jul 2023
A faster and simpler algorithm for learning shallow networks
Sitan Chen
Shyam Narayanan
41
7
0
24 Jul 2023
Most Neural Networks Are Almost Learnable
Amit Daniely
Nathan Srebro
Gal Vardi
26
0
0
25 May 2023
Learning Narrow One-Hidden-Layer ReLU Networks
Sitan Chen
Zehao Dou
Surbhi Goel
Adam R. Klivans
Raghu Meka
MLT
24
13
0
20 Apr 2023
Computational Complexity of Learning Neural Networks: Smoothness and Degeneracy
Amit Daniely
Nathan Srebro
Gal Vardi
33
4
0
15 Feb 2023
Is Stochastic Gradient Descent Near Optimal?
Yifan Zhu
Hong Jun Jeon
Benjamin Van Roy
25
2
0
18 Sep 2022
Statistical Guarantees for Approximate Stationary Points of Simple Neural Networks
Mahsa Taheri
Fang Xie
Johannes Lederer
29
0
0
09 May 2022
How does unlabeled data improve generalization in self-training? A one-hidden-layer theoretical analysis
Shuai Zhang
Ming Wang
Sijia Liu
Pin-Yu Chen
Jinjun Xiong
SSL
MLT
41
22
0
21 Jan 2022
A Survey on Interpretable Reinforcement Learning
Claire Glanois
Paul Weng
Matthieu Zimmer
Dong Li
Tianpei Yang
Jianye Hao
Wulong Liu
OffRL
23
95
0
24 Dec 2021
Learning by Active Forgetting for Neural Networks
Jian Peng
Xian Sun
Min Deng
Chao Tao
Bo Tang
...
Guohua Wu
Qing Zhu
Yu Liu
Tao R. Lin
Haifeng Li
CLL
KELM
AI4CE
26
3
0
21 Nov 2021
A spectral-based analysis of the separation between two-layer neural networks and linear methods
Lei Wu
Jihao Long
18
8
0
10 Aug 2021
Small Covers for Near-Zero Sets of Polynomials and Learning Latent Variable Models
Ilias Diakonikolas
D. Kane
21
32
0
14 Dec 2020
Learning Graph Neural Networks with Approximate Gradient Descent
Qunwei Li
Shaofeng Zou
Leon Wenliang Zhong
GNN
32
1
0
07 Dec 2020
Learning Deep ReLU Networks Is Fixed-Parameter Tractable
Sitan Chen
Adam R. Klivans
Raghu Meka
22
36
0
28 Sep 2020
From Boltzmann Machines to Neural Networks and Back Again
Surbhi Goel
Adam R. Klivans
Frederic Koehler
19
5
0
25 Jul 2020
Graph Neural Networks Including Sparse Interpretability
Chris Lin
Gerald J. Sun
K. Bulusu
J. Dry
Marylens Hernandez
11
7
0
30 Jun 2020
Algorithms and SQ Lower Bounds for PAC Learning One-Hidden-Layer ReLU Networks
Ilias Diakonikolas
D. Kane
Vasilis Kontonis
Nikos Zarifis
14
65
0
22 Jun 2020
Statistical Guarantees for Regularized Neural Networks
Mahsa Taheri
Fang Xie
Johannes Lederer
52
38
0
30 May 2020
Harmonic Decompositions of Convolutional Networks
M. Scetbon
Zaïd Harchaoui
25
7
0
28 Mar 2020
A Spectral Analysis of Dot-product Kernels
M. Scetbon
Zaïd Harchaoui
182
2
0
28 Feb 2020
Generalised Lipschitz Regularisation Equals Distributional Robustness
Zac Cranko
Zhan Shi
Xinhua Zhang
Richard Nock
Simon Kornblith
OOD
20
20
0
11 Feb 2020
Knowledge Representing: Efficient, Sparse Representation of Prior Knowledge for Knowledge Distillation
Junjie Liu
Dongchao Wen
Hongxing Gao
Wei Tao
Tse-Wei Chen
Kinya Osa
Masami Kato
22
21
0
13 Nov 2019
Renyi Differentially Private ADMM for Non-Smooth Regularized Optimization
Chen Chen
Jaewoo Lee
17
3
0
18 Sep 2019
Optimizing for Interpretability in Deep Neural Networks with Tree Regularization
Mike Wu
S. Parbhoo
M. C. Hughes
Volker Roth
Finale Doshi-Velez
AI4CE
20
27
0
14 Aug 2019
Stochastic In-Face Frank-Wolfe Methods for Non-Convex Optimization and Sparse Neural Network Training
Paul Grigas
Alfonso Lobos
Nathan Vermeersch
19
5
0
09 Jun 2019
Learning Representations of Graph Data -- A Survey
Mital Kinderkhedia
GNN
17
12
0
07 Jun 2019
What Can ResNet Learn Efficiently, Going Beyond Kernels?
Zeyuan Allen-Zhu
Yuanzhi Li
24
183
0
24 May 2019
On the Power and Limitations of Random Features for Understanding Neural Networks
Gilad Yehudai
Ohad Shamir
MLT
26
181
0
01 Apr 2019
Tensor Dropout for Robust Learning
Arinbjorn Kolbeinsson
Jean Kossaifi
Yannis Panagakis
Adrian Bulat
Anima Anandkumar
I. Tzoulaki
Paul Matthews
OOD
28
2
0
27 Feb 2019
Learning and Generalization in Overparameterized Neural Networks, Going Beyond Two Layers
Zeyuan Allen-Zhu
Yuanzhi Li
Yingyu Liang
MLT
14
765
0
12 Nov 2018
Learning Two Layer Rectified Neural Networks in Polynomial Time
Ainesh Bakshi
Rajesh Jayaram
David P. Woodruff
NoLa
15
69
0
05 Nov 2018
On the Convergence Rate of Training Recurrent Neural Networks
Zeyuan Allen-Zhu
Yuanzhi Li
Zhao Song
23
191
0
29 Oct 2018
Rademacher Complexity for Adversarially Robust Generalization
Dong Yin
Kannan Ramchandran
Peter L. Bartlett
AAML
24
257
0
29 Oct 2018
Learning Two-layer Neural Networks with Symmetric Inputs
Rong Ge
Rohith Kuditipudi
Zhize Li
Xiang Wang
OOD
MLT
36
57
0
16 Oct 2018
An ETF view of Dropout regularization
Dor Bank
Raja Giryes
8
4
0
14 Oct 2018
Principled Deep Neural Network Training through Linear Programming
D. Bienstock
Gonzalo Muñoz
Sebastian Pokutta
35
24
0
07 Oct 2018
A Kernel Perspective for Regularizing Deep Neural Networks
A. Bietti
Grégoire Mialon
Dexiong Chen
Julien Mairal
11
15
0
30 Sep 2018
Learning Restricted Boltzmann Machines via Influence Maximization
Guy Bresler
Frederic Koehler
Ankur Moitra
Elchanan Mossel
AI4CE
20
29
0
25 May 2018
A Mean Field View of the Landscape of Two-Layers Neural Networks
Song Mei
Andrea Montanari
Phan-Minh Nguyen
MLT
43
850
0
18 Apr 2018
A comparison of deep networks with ReLU activation function and linear spline-type methods
Konstantin Eckle
Johannes Schmidt-Hieber
17
322
0
06 Apr 2018
A Provably Correct Algorithm for Deep Learning that Actually Works
Eran Malach
Shai Shalev-Shwartz
MLT
18
30
0
26 Mar 2018
On the Connection Between Learning Two-Layers Neural Networks and Tensor Decomposition
Marco Mondelli
Andrea Montanari
MLT
CML
15
58
0
20 Feb 2018
Gradient descent with identity initialization efficiently learns positive definite linear transformations by deep residual networks
Peter L. Bartlett
D. Helmbold
Philip M. Long
36
116
0
16 Feb 2018
Beyond Sparsity: Tree Regularization of Deep Models for Interpretability
Mike Wu
M. C. Hughes
S. Parbhoo
Maurizio Zazzi
Volker Roth
Finale Doshi-Velez
AI4CE
28
281
0
16 Nov 2017
Learning One-hidden-layer Neural Networks with Landscape Design
Rong Ge
J. Lee
Tengyu Ma
MLT
29
260
0
01 Nov 2017
Porcupine Neural Networks: (Almost) All Local Optima are Global
S. Feizi
Hamid Javadi
Jesse M. Zhang
David Tse
20
36
0
05 Oct 2017
When is a Convolutional Filter Easy To Learn?
S. Du
J. Lee
Yuandong Tian
MLT
15
130
0
18 Sep 2017