Towards Robust Learning to Optimize with Theoretical Guarantees

17 June 2025
Qingyu Song
Wei Lin
Juncheng Wang
Hong Xu
Main: 8 pages · 19 figures · 2 tables · Bibliography: 1 page · Appendix: 55 pages
Abstract

Learning to optimize (L2O) is an emerging technique for solving mathematical optimization problems with learning-based methods. Despite great success in many real-world scenarios such as wireless communications, computer networks, and electronic design, existing L2O works lack a theoretical demonstration of their performance and robustness in out-of-distribution (OOD) scenarios. We address this gap by providing comprehensive proofs. First, we prove a sufficient condition for a robust L2O model with homogeneous convergence rates over all in-distribution (InD) instances. Assuming an L2O model achieves robustness for an InD scenario, and based on our proposed methodology of aligning OOD problems to InD problems, we further show that the L2O model's convergence rate in OOD scenarios deteriorates according to an equation of the L2O model's input features. Moreover, we propose an L2O model with a concise gradient-only feature construction and a novel gradient-based history modeling method. Numerical simulations demonstrate that our proposed model outperforms the state-of-the-art baseline in both InD and OOD scenarios and achieves up to a 10× convergence speedup. The code for our method can be found at this https URL.
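To make the idea of a gradient-only L2O model concrete, here is a minimal PyTorch sketch of a learned per-coordinate optimizer whose inputs are the current gradient and a running gradient history, meta-trained by unrolling the update loop. It is an illustration only: the network shape, the exponential-moving-average history, and the toy quadratic meta-training task are assumptions for this sketch, not the architecture or training procedure described in the paper.

import torch
import torch.nn as nn

class GradientOnlyOptimizer(nn.Module):
    """Maps per-coordinate gradient features to an update step."""
    def __init__(self, hidden: int = 16):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(2, hidden), nn.ReLU(),
            nn.Linear(hidden, 1),
        )

    def forward(self, grad: torch.Tensor, hist: torch.Tensor) -> torch.Tensor:
        # Per-coordinate features: current gradient and its running history.
        feats = torch.stack([grad, hist], dim=-1)   # shape (n, 2)
        return self.net(feats).squeeze(-1)          # shape (n,)

def rollout(l2o: GradientOnlyOptimizer, objective, x0: torch.Tensor,
            steps: int = 20, beta: float = 0.9) -> torch.Tensor:
    """Unrolled optimization; the final loss is the meta-training signal."""
    x = x0.clone().requires_grad_(True)
    hist = torch.zeros_like(x)
    for _ in range(steps):
        loss = objective(x)
        (grad,) = torch.autograd.grad(loss, x, create_graph=True)
        hist = beta * hist + (1.0 - beta) * grad    # EMA as a simple stand-in for history modeling
        x = x - l2o(grad, hist)                     # learned coordinate-wise update
    return objective(x)

if __name__ == "__main__":
    torch.manual_seed(0)
    l2o = GradientOnlyOptimizer()
    meta_opt = torch.optim.Adam(l2o.parameters(), lr=1e-3)
    for _ in range(200):                            # meta-train on random quadratics (toy task)
        A = torch.randn(5, 5)
        objective = lambda z: 0.5 * (A @ z).pow(2).sum()
        final_loss = rollout(l2o, objective, torch.randn(5))
        meta_opt.zero_grad()
        final_loss.backward()
        meta_opt.step()

Feeding the learned optimizer only gradient-derived features, rather than problem-specific inputs, is what the abstract calls a gradient-only feature construction; the EMA above merely stands in for the paper's gradient-based history modeling.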

@article{song2025_2506.14263,
  title={Towards Robust Learning to Optimize with Theoretical Guarantees},
  author={Qingyu Song and Wei Lin and Juncheng Wang and Hong Xu},
  journal={arXiv preprint arXiv:2506.14263},
  year={2025}
}