Approximation and Learning with Deep Convolutional Models: a Kernel Perspective

A. Bietti · 19 February 2021

Papers citing "Approximation and Learning with Deep Convolutional Models: a Kernel Perspective"

12 papers shown
U-Nets as Belief Propagation: Efficient Classification, Denoising, and Diffusion in Generative Hierarchical Models
Song Mei
3DV, AI4CE, DiffM · 29 Apr 2024

Theoretical Analysis of Inductive Biases in Deep Convolutional Networks
Zihao Wang, Lei Wu
15 May 2023

The SSL Interplay: Augmentations, Inductive Bias, and Generalization
Vivien A. Cabannes, B. Kiani, Randall Balestriero, Yann LeCun, A. Bietti
SSL · 06 Feb 2023

Strong inductive biases provably prevent harmless interpolation
Michael Aerni, Marco Milanta, Konstantin Donhauser, Fanny Yang
18 Jan 2023

On the Shift Invariance of Max Pooling Feature Maps in Convolutional Neural Networks
Hubert Leterme, K. Polisano, V. Perrier, Alahari Karteek
FAtt · 19 Sep 2022

On the Spectral Bias of Convolutional Neural Tangent and Gaussian Process Kernels
Amnon Geifman, Meirav Galun, David Jacobs, Ronen Basri
17 Mar 2022

Tight Convergence Rate Bounds for Optimization Under Power Law Spectral Conditions
Maksim Velikanov, Dmitry Yarotsky
02 Feb 2022

Learning with convolution and pooling operations in kernel methods
Theodor Misiakiewicz, Song Mei
MLT · 16 Nov 2021

A Johnson–Lindenstrauss Framework for Randomly Initialized CNNs
Ido Nachum, Jan Hązła, Michael C. Gastpar, Anatoly Khina
03 Nov 2021

Dataset Distillation with Infinitely Wide Convolutional Networks
Timothy Nguyen, Roman Novak, Lechao Xiao, Jaehoon Lee
DD · 27 Jul 2021

Learning with invariances in random features and kernel models
Song Mei, Theodor Misiakiewicz, Andrea Montanari
OOD · 25 Feb 2021

Dynamical Isometry and a Mean Field Theory of CNNs: How to Train 10,000-Layer Vanilla Convolutional Neural Networks
Lechao Xiao, Yasaman Bahri, Jascha Narain Sohl-Dickstein, S. Schoenholz, Jeffrey Pennington
14 Jun 2018