
Rethinking Parameter Counting in Deep Models: Effective Dimensionality Revisited
arXiv:2003.02139 · 4 March 2020
Wesley J. Maddox, Gregory W. Benton, Andrew Gordon Wilson

Papers citing "Rethinking Parameter Counting in Deep Models: Effective Dimensionality Revisited" (16 papers)
Just How Flexible are Neural Networks in Practice?
  Ravid Shwartz-Ziv, Micah Goldblum, Arpit Bansal, C. Bayan Bruss, Yann LeCun, Andrew Gordon Wilson
  17 Jun 2024
Gradients of Functions of Large Matrices
  Nicholas Krämer, Pablo Moreno-Muñoz, Hrittik Roy, Søren Hauberg
  27 May 2024
Leveraging Active Subspaces to Capture Epistemic Model Uncertainty in Deep Generative Models for Molecular Design
  A. N. M. N. Abeer, Sanket R. Jantre, Nathan M. Urban, Byung-Jun Yoon
  30 Apr 2024
Learning Active Subspaces for Effective and Scalable Uncertainty Quantification in Deep Neural Networks
  Sanket R. Jantre, Nathan M. Urban, Xiaoning Qian, Byung-Jun Yoon
  06 Sep 2023 · Topics: BDL, UQCV
Bayesian Neural Networks for Geothermal Resource Assessment: Prediction with Uncertainty
  Stephen R. Brown, W. Rodi, Marco Seracini, Chengxi Gu, Michael Fehler, J. Faulds, Connor M. Smith, S. Treitel
  30 Sep 2022
Why neural networks find simple solutions: the many regularizers of geometric complexity
  Benoit Dherin, Michael Munn, M. Rosca, David Barrett
  27 Sep 2022
Adapting the Linearised Laplace Model Evidence for Modern Deep Learning
  Javier Antorán, David Janz, J. Allingham, Erik A. Daxberger, Riccardo Barbano, Eric T. Nalisnick, José Miguel Hernández-Lobato
  17 Jun 2022 · Topics: UQCV, BDL
Adversarial robustness of sparse local Lipschitz predictors
  Ramchandran Muthukumar, Jeremias Sulam
  26 Feb 2022 · Topics: AAML
Bayesian Model Selection, the Marginal Likelihood, and Generalization
  Sanae Lotfi, Pavel Izmailov, Gregory W. Benton, Micah Goldblum, Andrew Gordon Wilson
  23 Feb 2022 · Topics: UQCV, BDL
Rotationally Equivariant Super-Resolution of Velocity Fields in Two-Dimensional Fluids Using Convolutional Neural Networks
  Y. Yasuda, R. Onishi
  22 Feb 2022
Laplace Redux -- Effortless Bayesian Deep Learning
  Erik A. Daxberger, Agustinus Kristiadi, Alexander Immer, Runa Eschenhagen, Matthias Bauer, Philipp Hennig
  28 Jun 2021 · Topics: BDL, UQCV
Loss Surface Simplexes for Mode Connecting Volumes and Fast Ensembling
  Gregory W. Benton, Wesley J. Maddox, Sanae Lotfi, Andrew Gordon Wilson
  25 Feb 2021 · Topics: UQCV
Simplifying Hamiltonian and Lagrangian Neural Networks via Explicit Constraints
  Marc Finzi, Ke Alexander Wang, Andrew Gordon Wilson
  26 Oct 2020 · Topics: AI4CE
Deep Learning is Singular, and That's Good
  Daniel Murfet, Susan Wei, Biwei Huang, Hui Li, Jesse Gell-Redman, T. Quella
  22 Oct 2020 · Topics: UQCV
GShard: Scaling Giant Models with Conditional Computation and Automatic Sharding
  Dmitry Lepikhin, HyoukJoong Lee, Yuanzhong Xu, Dehao Chen, Orhan Firat, Yanping Huang, M. Krikun, Noam M. Shazeer, Z. Chen
  30 Jun 2020 · Topics: MoE
On Large-Batch Training for Deep Learning: Generalization Gap and Sharp Minima
  N. Keskar, Dheevatsa Mudigere, J. Nocedal, M. Smelyanskiy, P. T. P. Tang
  15 Sep 2016 · Topics: ODL