On the minimax optimality and superiority of deep neural network learning over sparse parameter spaces
Satoshi Hayakawa, Taiji Suzuki
arXiv:1905.09195, 22 May 2019
Papers citing "On the minimax optimality and superiority of deep neural network learning over sparse parameter spaces" (5 papers):

- Quantum Ridgelet Transform: Winning Lottery Ticket of Neural Networks with Quantum Computation. H. Yamasaki, Sathyawageeswar Subramanian, Satoshi Hayakawa, Sho Sonoda. 27 Jan 2023.
- On the inability of Gaussian process regression to optimally learn compositional functions. M. Giordano, Kolyan Ray, Johannes Schmidt-Hieber. 16 May 2022.
- Drift estimation for a multi-dimensional diffusion process using deep neural networks. Akihiro Oga, Yuta Koike. 26 Dec 2021.
- Near-Minimax Optimal Estimation With Shallow ReLU Neural Networks. Rahul Parhi, Robert D. Nowak. 18 Sep 2021.
- Rejoinder: On nearly assumption-free tests of nominal confidence interval coverage for causal parameters estimated by machine learning. Lin Liu, Rajarshi Mukherjee, J. M. Robins. 07 Aug 2020.