On the minimax optimality and superiority of deep neural network learning over sparse parameter spaces

22 May 2019
Satoshi Hayakawa, Taiji Suzuki

Papers citing "On the minimax optimality and superiority of deep neural network learning over sparse parameter spaces"

5 / 5 papers shown

Quantum Ridgelet Transform: Winning Lottery Ticket of Neural Networks with Quantum Computation
H. Yamasaki, Sathyawageeswar Subramanian, Satoshi Hayakawa, Sho Sonoda
MLT · 27 Jan 2023

On the inability of Gaussian process regression to optimally learn compositional functions
M. Giordano, Kolyan Ray, Johannes Schmidt-Hieber
16 May 2022

Drift estimation for a multi-dimensional diffusion process using deep neural networks
Akihiro Oga, Yuta Koike
DiffM · 26 Dec 2021

Near-Minimax Optimal Estimation With Shallow ReLU Neural Networks
Rahul Parhi, Robert D. Nowak
18 Sep 2021

Rejoinder: On nearly assumption-free tests of nominal confidence interval coverage for causal parameters estimated by machine learning
Lin Liu, Rajarshi Mukherjee, J. M. Robins
CML · 07 Aug 2020