Random Weight Factorization Improves the Training of Continuous Neural Representations

Sizhuang He, Hanwen Wang, Jacob H. Seidman, P. Perdikaris
arXiv:2210.01274 · 3 October 2022

Papers citing "Random Weight Factorization Improves the Training of Continuous Neural Representations" (4 papers):

  • Deep Weight Factorization: Sparse Learning Through the Lens of Artificial Symmetries
    Chris Kolb, T. Weber, Bernd Bischl, David Rügamer
    04 Feb 2025

  • Variational Autoencoding Neural Operators
    Jacob H. Seidman, Georgios Kissas, George J. Pappas, P. Perdikaris
    Communities: DRL, AI4CE
    20 Feb 2023

  • Improved architectures and training algorithms for deep operator networks
    Sizhuang He, Hanwen Wang, P. Perdikaris
    Communities: AI4CE
    04 Oct 2021

  • On the eigenvector bias of Fourier feature networks: From regression to solving multi-scale PDEs with physics-informed neural networks
    Sizhuang He, Hanwen Wang, P. Perdikaris
    18 Dec 2020