ResearchTrend.AI
Computational Separation Between Convolutional and Fully-Connected Networks

3 October 2020
Eran Malach, Shai Shalev-Shwartz
ArXiv · PDF · HTML

Papers citing "Computational Separation Between Convolutional and Fully-Connected Networks"

20 / 20 papers shown
A Provably Effective Method for Pruning Experts in Fine-tuned Sparse Mixture-of-Experts
Mohammed Nowaz Rabbani Chowdhury, Meng Wang, K. E. Maghraoui, Naigang Wang, Pin-Yu Chen, Christopher Carothers
MoE · 26 May 2024

Role of Locality and Weight Sharing in Image-Based Tasks: A Sample Complexity Separation between CNNs, LCNs, and FCNs
Aakash Lahoti, Stefani Karp, Ezra Winston, Aarti Singh, Yuanzhi Li
23 Mar 2024

Kernels, Data & Physics
Francesco Cagnetta, Deborah Oliveira, Mahalakshmi Sabanayagam, Nikolaos Tsilivis, Julia Kempe
05 Jul 2023

Provable Advantage of Curriculum Learning on Parity Targets with Mixed Inputs
Emmanuel Abbe, Elisabetta Cornacchia, Aryo Lotfi
29 Jun 2023

Patch-level Routing in Mixture-of-Experts is Provably Sample-efficient for Convolutional Neural Networks
Mohammed Nowaz Rabbani Chowdhury, Shuai Zhang, M. Wang, Sijia Liu, Pin-Yu Chen
MoE · 07 Jun 2023

A Mathematical Model for Curriculum Learning for Parities
Elisabetta Cornacchia, Elchanan Mossel
31 Jan 2023

A Kernel Perspective of Skip Connections in Convolutional Networks
Daniel Barzilai, Amnon Geifman, Meirav Galun, Ronen Basri
27 Nov 2022

On the non-universality of deep learning: quantifying the cost of symmetry
Emmanuel Abbe, Enric Boix-Adserà
FedML · MLT · 05 Aug 2022

What Can Be Learnt With Wide Convolutional Neural Networks?
Francesco Cagnetta, Alessandro Favero, M. Wyart
MLT · 01 Aug 2022

An initial alignment between neural network and target is needed for gradient descent to learn
Emmanuel Abbe, Elisabetta Cornacchia, Jan Hązła, Christopher Marquis
25 Feb 2022

Eigenspace Restructuring: a Principle of Space and Frequency in Neural Networks
Lechao Xiao
10 Dec 2021

Learning with convolution and pooling operations in kernel methods
Theodor Misiakiewicz, Song Mei
MLT · 16 Nov 2021

On the Power of Differentiable Learning versus PAC and SQ Learning
Emmanuel Abbe, Pritish Kamath, Eran Malach, Colin Sandon, Nathan Srebro
MLT · 09 Aug 2021

Locality defeats the curse of dimensionality in convolutional teacher-student scenarios
Alessandro Favero, Francesco Cagnetta, M. Wyart
16 Jun 2021

On the Sample Complexity of Learning under Invariance and Geometric Stability
A. Bietti, Luca Venturi, Joan Bruna
14 Jun 2021

Video Super-Resolution Transformer
Jie Cao, Yawei Li, K. Zhang, Luc Van Gool
ViT · 12 Jun 2021

Approximation and Learning with Deep Convolutional Models: a Kernel Perspective
A. Bietti
19 Feb 2021

Towards Learning Convolutions from Scratch
Behnam Neyshabur
SSL · 27 Jul 2020

On Translation Invariance in CNNs: Convolutional Layers can Exploit Absolute Spatial Location
O. Kayhan, J. C. V. Gemert
16 Mar 2020

Convolution by Evolution: Differentiable Pattern Producing Networks
Chrisantha Fernando, Dylan Banarse, Malcolm Reynolds, F. Besse, David Pfau, Max Jaderberg, Marc Lanctot, Daan Wierstra
08 Jun 2016