Training Neural Networks as Learning Data-adaptive Kernels: Provable Representation and Approximation Benefits

Xialiang Dou, Tengyuan Liang
21 January 2019 · arXiv:1901.07114 · MLT

Papers citing "Training Neural Networks as Learning Data-adaptive Kernels: Provable Representation and Approximation Benefits"

6 / 6 papers shown:

  • Nonparametric Teaching for Graph Property Learners. Chen Zhang, Weixin Bu, Zhaochun Ren, Ziyue Liu, Yik-Chung Wu, Ngai Wong. 20 May 2025.
  • SGD with memory: fundamental properties and stochastic acceleration. Dmitry Yarotsky, Maksim Velikanov. 05 Oct 2024.
  • Surprises in High-Dimensional Ridgeless Least Squares Interpolation. Trevor Hastie, Andrea Montanari, Saharon Rosset, Robert Tibshirani. 19 Mar 2019.
  • Reconciling modern machine learning practice and the bias-variance trade-off. M. Belkin, Daniel J. Hsu, Siyuan Ma, Soumik Mandal. 28 Dec 2018.
  • Gradient Descent Quantizes ReLU Network Features (MLT). Hartmut Maennel, Olivier Bousquet, Sylvain Gelly. 22 Mar 2018.
  • The Power of Interpolation: Understanding the Effectiveness of SGD in Modern Over-parametrized Learning. Siyuan Ma, Raef Bassily, M. Belkin. 18 Dec 2017.