HiPreNets: High-Precision Neural Networks through Progressive Training

18 June 2025
Ethan Mulle, Wei Kang, Qi Gong
Main: 23 pages · 13 figures · Bibliography: 3 pages · 10 tables
Abstract

Deep neural networks are powerful tools for solving nonlinear problems in science and engineering, but training highly accurate models becomes challenging as problem complexity increases. Non-convex optimization and the many hyperparameters to tune make performance improvement difficult, and traditional approaches often prioritize minimizing mean squared error (MSE) while overlooking the $L^{\infty}$ error, which is the critical focus in many applications. To address these challenges, we present a progressive framework for training and tuning high-precision neural networks (HiPreNets). Our approach refines a previously explored staged training technique in which additional networks sequentially learn the prediction residuals of an existing fully connected neural network, leading to improved overall accuracy. We discuss how to take advantage of the structure of the residuals to guide the choice of loss function, the number of parameters to use, and ways to introduce adaptive data sampling techniques. We validate our framework's effectiveness on several benchmark problems.
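
As a rough illustration of the staged residual-learning idea described in the abstract, the following is a minimal sketch in PyTorch: a base network is fit to the data, and each subsequent network is fit to the residual left by the sum of the networks trained so far. The target function, network sizes, optimizer settings, loss choice, and number of stages here are illustrative assumptions only, not the paper's HiPreNets configuration (which also adapts the loss function and data sampling at each stage).

# Sketch of progressive residual training (illustrative assumptions, not the
# authors' exact HiPreNets setup).
import torch
import torch.nn as nn

def make_mlp(width=64, depth=3):
    # Small fully connected network mapping R^1 -> R^1.
    layers, d_in = [], 1
    for _ in range(depth):
        layers += [nn.Linear(d_in, width), nn.Tanh()]
        d_in = width
    layers += [nn.Linear(d_in, 1)]
    return nn.Sequential(*layers)

def fit(net, x, y, steps=2000, lr=1e-3):
    # Standard MSE training; the paper discusses tailoring the loss per stage.
    opt = torch.optim.Adam(net.parameters(), lr=lr)
    for _ in range(steps):
        opt.zero_grad()
        loss = nn.functional.mse_loss(net(x), y)
        loss.backward()
        opt.step()
    return net

# Illustrative target function and training data.
x = torch.linspace(-1.0, 1.0, 512).unsqueeze(1)
y = torch.sin(4.0 * torch.pi * x)

# Stage 1: fit a base network to the data.
nets = [fit(make_mlp(), x, y)]

# Stages 2..K: each new network learns the residual of the current ensemble.
for _ in range(2):
    with torch.no_grad():
        pred = sum(net(x) for net in nets)
    residual = y - pred
    nets.append(fit(make_mlp(), x, residual))

# Final prediction is the sum of all stage networks.
with torch.no_grad():
    final_pred = sum(net(x) for net in nets)
    print("max abs error:", (final_pred - y).abs().max().item())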

View on arXiv: https://arxiv.org/abs/2506.15064
@article{mulle2025_2506.15064,
  title={HiPreNets: High-Precision Neural Networks through Progressive Training},
  author={Ethan Mulle and Wei Kang and Qi Gong},
  journal={arXiv preprint arXiv:2506.15064},
  year={2025}
}