
Intrinsic dimensionality and generalization properties of the $\mathcal{R}$-norm inductive bias

Navid Ardeshir, Daniel J. Hsu, Clayton Sanford
10 June 2022. arXiv:2206.05317. [CML, AI4CE]

Papers citing "Intrinsic dimensionality and generalization properties of the $\mathcal{R}$-norm inductive bias"

10 / 10 papers shown
1. The Effects of Multi-Task Learning on ReLU Neural Network Functions. Julia B. Nakhleh, Joseph Shenouda, Robert D. Nowak. 29 Oct 2024.
2. Emergence in non-neural models: grokking modular arithmetic via average gradient outer product. Neil Rohit Mallinar, Daniel Beaglehole, Libin Zhu, Adityanarayanan Radhakrishnan, Parthe Pandit, Misha Belkin. 29 Jul 2024.
3. ReLU Neural Networks with Linear Layers are Biased Towards Single- and Multi-Index Models. Suzanna Parkinson, Greg Ongie, Rebecca Willett. 24 May 2023.
4. Penalising the biases in norm regularisation enforces sparsity. Etienne Boursier, Nicolas Flammarion. 02 Mar 2023.
5. Learning Single-Index Models with Shallow Neural Networks. A. Bietti, Joan Bruna, Clayton Sanford, M. Song. 27 Oct 2022.
6. Neural Networks Efficiently Learn Low-Dimensional Representations with SGD. Alireza Mousavi-Hosseini, Sejun Park, M. Girotti, Ioannis Mitliagkas, Murat A. Erdogdu. 29 Sep 2022. [MLT]
7. Ridgeless Interpolation with Shallow ReLU Networks in $1D$ is Nearest Neighbor Curvature Extrapolation and Provably Generalizes on Lipschitz Functions. Boris Hanin. 27 Sep 2021. [MLT]
8. Near-Minimax Optimal Estimation With Shallow ReLU Neural Networks. Rahul Parhi, Robert D. Nowak. 18 Sep 2021.
9. Benefits of depth in neural networks. Matus Telgarsky. 14 Feb 2016.
10. Dropout as a Bayesian Approximation: Representing Model Uncertainty in Deep Learning. Y. Gal, Zoubin Ghahramani. 06 Jun 2015. [UQCV, BDL]