Learning One Convolutional Layer with Overlapping Patches

7 February 2018
Surbhi Goel, Adam R. Klivans, Raghu Meka
MLT

Papers citing "Learning One Convolutional Layer with Overlapping Patches"

21 papers shown

A faster and simpler algorithm for learning shallow networks
Sitan Chen, Shyam Narayanan
24 Jul 2023

Over-Parameterization Exponentially Slows Down Gradient Descent for Learning a Single Neuron
Weihang Xu, S. Du
20 Feb 2023

Training Fully Connected Neural Networks is $\exists\mathbb{R}$-Complete
Daniel Bertschinger, Christoph Hertrich, Paul Jungeblut, Tillmann Miltzow, Simon Weber
OffRL
04 Apr 2022

Hardness of Noise-Free Learning for Two-Hidden-Layer Neural Networks
Sitan Chen, Aravind Gollakota, Adam R. Klivans, Raghu Meka
10 Feb 2022

Parameter identifiability of a deep feedforward ReLU neural network
Joachim Bona-Pellissier, François Bachoc, François Malgouyres
24 Dec 2021

Neural networks with linear threshold activations: structure and algorithms
Sammy Khalife, Hongyu Cheng, A. Basu
15 Nov 2021

Efficiently Learning Any One Hidden Layer ReLU Network From Queries
Sitan Chen, Adam R. Klivans, Raghu Meka
MLAU, MLT
08 Nov 2021

A Local Convergence Theory for Mildly Over-Parameterized Two-Layer Neural Network
Mo Zhou, Rong Ge, Chi Jin
04 Feb 2021

From Local Pseudorandom Generators to Hardness of Learning
Amit Daniely, Gal Vardi
20 Jan 2021

Learning Graph Neural Networks with Approximate Gradient Descent
Qunwei Li, Shaofeng Zou, Leon Wenliang Zhong
GNN
07 Dec 2020

Learning Deep ReLU Networks Is Fixed-Parameter Tractable
Sitan Chen, Adam R. Klivans, Raghu Meka
28 Sep 2020

Nonparametric Learning of Two-Layer ReLU Residual Units
Zhunxuan Wang, Linyun He, Chunchuan Lyu, Shay B. Cohen
MLT, OffRL
17 Aug 2020

From Boltzmann Machines to Neural Networks and Back Again
Surbhi Goel, Adam R. Klivans, Frederic Koehler
25 Jul 2020

Approximation Schemes for ReLU Regression
Ilias Diakonikolas, Surbhi Goel, Sushrut Karmalkar, Adam R. Klivans, Mahdi Soltanolkotabi
26 May 2020

Learning Distributions Generated by One-Layer ReLU Networks
Shanshan Wu, A. Dimakis, Sujay Sanghavi
04 Sep 2019

A Selective Overview of Deep Learning
Jianqing Fan, Cong Ma, Yiqiao Zhong
BDL, VLM
10 Apr 2019

Convex Relaxations of Convolutional Neural Nets
Burak Bartan, Mert Pilanci
31 Dec 2018

On the Convergence Rate of Training Recurrent Neural Networks
Zeyuan Allen-Zhu, Yuanzhi Li, Zhao Song
29 Oct 2018

Learning Two-layer Neural Networks with Symmetric Inputs
Rong Ge, Rohith Kuditipudi, Zhize Li, Xiang Wang
OOD, MLT
16 Oct 2018

How Many Samples are Needed to Estimate a Convolutional or Recurrent Neural Network?
S. Du, Yining Wang, Xiyu Zhai, Sivaraman Balakrishnan, Ruslan Salakhutdinov, Aarti Singh
SSL
21 May 2018

Improved Learning of One-hidden-layer Convolutional Neural Networks with Overlaps
S. Du, Surbhi Goel
MLT
20 May 2018