Diving into the shallows: a computational perspective on large-scale shallow learning

arXiv:1703.10622

30 March 2017
Siyuan Ma, M. Belkin

Papers citing "Diving into the shallows: a computational perspective on large-scale shallow learning"

13 / 13 papers shown
Many Perception Tasks are Highly Redundant Functions of their Input Data
Rahul Ramesh, Anthony Bisulco, Ronald W. DiTullio, Linran Wei, Vijay Balasubramanian, Kostas Daniilidis, Pratik Chaudhari
18 Jul 2024

Faster Linear Systems and Matrix Norm Approximation via Multi-level Sketched Preconditioning
Michał Dereziński, Christopher Musco, Jiaming Yang
09 May 2024

Changing the Kernel During Training Leads to Double Descent in Kernel Regression
Oskar Allerbo
03 Nov 2023

A Simple Algorithm For Scaling Up Kernel Methods
Tengyu Xu, Bryan T. Kelly, Semyon Malamud
26 Jan 2023

RFFNet: Large-Scale Interpretable Kernel Methods via Random Fourier Features
Mateus P. Otto, Rafael Izbicki
11 Nov 2022

Benign, Tempered, or Catastrophic: A Taxonomy of Overfitting
Neil Rohit Mallinar, James B. Simon, Amirhesam Abedsoltan, Parthe Pandit, M. Belkin, Preetum Nakkiran
14 Jul 2022

Learning Theory Can (Sometimes) Explain Generalisation in Graph Neural Networks
P. Esser, L. C. Vankadara, D. Ghoshdastidar
07 Dec 2021

Simple, Fast, and Flexible Framework for Matrix Completion with Infinite Width Neural Networks
Adityanarayanan Radhakrishnan, George Stefanakis, M. Belkin, Caroline Uhler
31 Jul 2021

Towards Understanding the Spectral Bias of Deep Learning
Yuan Cao, Zhiying Fang, Yue Wu, Ding-Xuan Zhou, Quanquan Gu
03 Dec 2019

On the Spectral Bias of Neural Networks
Nasim Rahaman, A. Baratin, Devansh Arpit, Felix Dräxler, Min-Bin Lin, Fred Hamprecht, Yoshua Bengio, Aaron Courville
22 Jun 2018

Natural Gradients in Practice: Non-Conjugate Variational Inference in Gaussian Process Models
Hugh Salimbeni, Stefanos Eleftheriadis, J. Hensman
24 Mar 2018

Approximation beats concentration? An approximation view on inference with smooth radial kernels
M. Belkin
10 Jan 2018

The Marginal Value of Adaptive Gradient Methods in Machine Learning
Ashia C. Wilson, Rebecca Roelofs, Mitchell Stern, Nathan Srebro, Benjamin Recht
23 May 2017