Kernel-Based Smoothness Analysis of Residual Networks
Tom Tirer, Joan Bruna, Raja Giryes
arXiv:2009.10008, 21 September 2020 (v2, latest)

Papers citing "Kernel-Based Smoothness Analysis of Residual Networks" (32 papers):

 1. Deep Equals Shallow for ReLU Networks in Kernel Regimes · A. Bietti, Francis R. Bach · 30 Sep 2020
 2. Deep Neural Tangent Kernel and Laplace Kernel Have the Same RKHS · Lin Chen, Sheng Xu · 22 Sep 2020
 3. On the Similarity between the Laplace and Neural Tangent Kernels · Amnon Geifman, A. Yadav, Yoni Kasten, Meirav Galun, David Jacobs, Ronen Basri · 03 Jul 2020
 4. Tensor Programs II: Neural Tangent Kernel for Any Architecture · Greg Yang · 25 Jun 2020
 5. Infinite attention: NNGP and NTK for deep attention networks · Jiri Hron, Yasaman Bahri, Jascha Narain Sohl-Dickstein, Roman Novak · 18 Jun 2020
 6. The Recurrent Neural Tangent Kernel · Sina Alemohammad, Zichao Wang, Randall Balestriero, Richard Baraniuk · 18 Jun 2020
 7. A function space analysis of finite neural networks with insights from sampling theory · Raja Giryes · 15 Apr 2020
 8. Why Do Deep Residual Networks Generalize Better than Deep Feedforward Networks? -- A Neural Tangent Kernel Perspective · Kaixuan Huang, Yuqing Wang, Molei Tao, T. Zhao · 14 Feb 2020
 9. On Random Kernels of Residual Architectures · Etai Littwin, Tomer Galanti, Lior Wolf · 28 Jan 2020
10. Dynamical System Inspired Adaptive Time Stepping Controller for Residual Network Families · Yibo Yang, Jianlong Wu, Hongyang Li, Xia Li, Tiancheng Shen, Zhouchen Lin · 23 Nov 2019
11. Tensor Programs I: Wide Feedforward or Recurrent Neural Networks of Any Architecture are Gaussian Processes · Greg Yang · 28 Oct 2019
12. Finite Depth and Width Corrections to the Neural Tangent Kernel · Boris Hanin, Mihai Nica · 13 Sep 2019
13. Gradient Dynamics of Shallow Univariate ReLU Networks · Francis Williams, Matthew Trager, Claudio Silva, Daniele Panozzo, Denis Zorin, Joan Bruna · 18 Jun 2019
14. The Convergence Rate of Neural Networks for Learned Functions of Different Frequencies · Ronen Basri, David Jacobs, Yoni Kasten, S. Kritchman · 02 Jun 2019
15. On the Inductive Bias of Neural Tangent Kernels · A. Bietti, Julien Mairal · 29 May 2019
16. EfficientNet: Rethinking Model Scaling for Convolutional Neural Networks · Mingxing Tan, Quoc V. Le · 28 May 2019
17. On Exact Computation with an Infinitely Wide Neural Net · Sanjeev Arora, S. Du, Wei Hu, Zhiyuan Li, Ruslan Salakhutdinov, Ruosong Wang · 26 Apr 2019
18. Towards Robust ResNet: A Small Step but A Giant Leap · Jingfeng Zhang, Bo Han, L. Wynter, K. H. Low, Mohan Kankanhalli · 28 Feb 2019
19. Wide Neural Networks of Any Depth Evolve as Linear Models Under Gradient Descent · Jaehoon Lee, Lechao Xiao, S. Schoenholz, Yasaman Bahri, Roman Novak, Jascha Narain Sohl-Dickstein, Jeffrey Pennington · 18 Feb 2019
20. Scaling Limits of Wide Neural Networks with Weight Sharing: Gaussian Process Behavior, Gradient Independence, and Neural Tangent Kernel Derivation · Greg Yang · 13 Feb 2019
21. Fine-Grained Analysis of Optimization and Generalization for Overparameterized Two-Layer Neural Networks · Sanjeev Arora, S. Du, Wei Hu, Zhiyuan Li, Ruosong Wang · 24 Jan 2019
22. On Lazy Training in Differentiable Programming · Lénaïc Chizat, Edouard Oyallon, Francis R. Bach · 19 Dec 2018
23. Gradient Descent Finds Global Minima of Deep Neural Networks · S. Du, Jason D. Lee, Haochuan Li, Liwei Wang, Masayoshi Tomizuka · 09 Nov 2018
24. Neural Tangent Kernel: Convergence and Generalization in Neural Networks · Arthur Jacot, Franck Gabriel, Clément Hongler · 20 Jun 2018
25. Visualizing the Loss Landscape of Neural Nets · Hao Li, Zheng Xu, Gavin Taylor, Christoph Studer, Tom Goldstein · 28 Dec 2017
26. Deep Neural Networks as Gaussian Processes · Jaehoon Lee, Yasaman Bahri, Roman Novak, S. Schoenholz, Jeffrey Pennington, Jascha Narain Sohl-Dickstein · 01 Nov 2017
27. Parsimonious Online Learning with Kernels via Sparse Projections in Function Space · Alec Koppel, Garrett A. Warnell, Ethan Stump, Alejandro Ribeiro · 13 Dec 2016
28. Densely Connected Convolutional Networks · Gao Huang, Zhuang Liu, Laurens van der Maaten, Kilian Q. Weinberger · 25 Aug 2016
29. Deep Residual Learning for Image Recognition · Kaiming He, Xiangyu Zhang, Shaoqing Ren, Jian Sun · 10 Dec 2015
30. Optimal Rates for Random Fourier Features · Bharath K. Sriperumbudur, Z. Szabó · 06 Jun 2015
31. Adam: A Method for Stochastic Optimization · Diederik P. Kingma, Jimmy Ba · 22 Dec 2014
32. Very Deep Convolutional Networks for Large-Scale Image Recognition · Karen Simonyan, Andrew Zisserman · 04 Sep 2014