Make Optimization Once and for All with Fine-grained Guidance

14 March 2025
Mingjia Shi
Ruihan Lin
Xuxi Chen
Yuhao Zhou
Zezhen Ding
Pingzhi Li
Tong Wang
Kai Wang
Zhangyang Wang
Jiheng Zhang
Tianlong Chen
Abstract

Learning to Optimize (L2O) improves optimization efficiency by integrating neural networks into the optimization process. Existing L2O paradigms achieve strong results, e.g., by refitting an optimizer or by generating unseen solutions iteratively or directly. However, conventional L2O methods require intricate designs and rely on specific optimization processes, which limits their scalability and generalization. Our analysis motivates a general framework for learned optimization, called Diff-L2O, which focuses on augmenting sampled solutions from a wider view rather than relying only on the local updates of a real optimization process. We also derive the corresponding generalization bound, showing that the sample diversity of Diff-L2O leads to better performance. The bound applies readily to other settings, covering sample diversity, mean-variance trade-offs, and different tasks. Diff-L2O's strong compatibility is verified empirically: it needs only minute-level training time, compared with the hour-level training of other methods.
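The abstract's central idea, supervising a learned optimizer with widely sampled solutions rather than with the local updates of a recorded optimization trajectory, can be illustrated with a toy sketch. The code below is a minimal illustration under assumed toy quadratic tasks and a linear one-shot model; all names (make_task, grad, the point-plus-gradient feature choice) are hypothetical and do not reflect the paper's actual Diff-L2O implementation.

import numpy as np

rng = np.random.default_rng(0)

def make_task():
    """One toy task: f(x) = ||x - x_star||^2 with a random optimum."""
    return rng.uniform(-3.0, 3.0, size=2)

def grad(x, x_star):
    """Gradient of the toy quadratic objective at x."""
    return 2.0 * (x - x_star)

# Build training data by sampling candidate solutions widely around each
# optimum (the "wider view"), instead of recording optimizer trajectories.
X_feat, Y = [], []
for _ in range(200):                      # 200 random tasks
    x_star = make_task()
    for _ in range(32):                   # 32 diverse samples per task
        x = x_star + rng.normal(scale=2.0, size=2)
        X_feat.append(np.concatenate([x, grad(x, x_star)]))
        Y.append(x_star)
X_feat, Y = np.asarray(X_feat), np.asarray(Y)

# Fit a linear "learned optimizer" by least squares: a one-shot map from
# (point, gradient) features to the predicted optimum. A real method
# would use a neural network conditioned on the objective.
W, *_ = np.linalg.lstsq(X_feat, Y, rcond=None)

# Apply it to an unseen task: a single forward pass replaces many
# iterative gradient steps.
x_star_new = make_task()
x0 = np.zeros(2)
pred = np.concatenate([x0, grad(x0, x_star_new)]) @ W
print("true optimum:", x_star_new, " one-shot prediction:", pred)

For these quadratics the exact rule x_star = x - grad/2 is linear, so the least-squares fit recovers it almost perfectly; the point of the sketch is only that diverse sampled (solution, optimum) pairs suffice as supervision, with no optimizer trajectory ever recorded.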

View on arXiv: https://arxiv.org/abs/2503.11462
@article{shi2025_2503.11462,
  title={Make Optimization Once and for All with Fine-grained Guidance},
  author={Mingjia Shi and Ruihan Lin and Xuxi Chen and Yuhao Zhou and Zezhen Ding and Pingzhi Li and Tong Wang and Kai Wang and Zhangyang Wang and Jiheng Zhang and Tianlong Chen},
  journal={arXiv preprint arXiv:2503.11462},
  year={2025}
}