Complexity Scaling Laws for Neural Models using Combinatorial Optimization

15 June 2025
Lowell Weissman
Michael Krumdick
A. Lynn Abbott
arXiv (abs) · PDF · HTML
Main: 9 pages · Appendix: 31 pages · Bibliography: 5 pages · 30 figures · 9 tables
Abstract

Recent work on neural scaling laws demonstrates that model performance scales predictably with compute budget, model size, and dataset size. In this work, we develop scaling laws based on problem complexity. We analyze two fundamental complexity measures: solution space size and representation space size. Using the Traveling Salesman Problem (TSP) as a case study, we show that combinatorial optimization promotes smooth cost trends, and therefore meaningful scaling laws can be obtained even in the absence of an interpretable loss. We then show that suboptimality grows predictably for fixed-size models when scaling the number of TSP nodes or spatial dimensions, independent of whether the model was trained with reinforcement learning or supervised fine-tuning on a static dataset. We conclude with an analogy to problem complexity scaling in local search, showing that a much simpler gradient descent of the cost landscape produces similar trends.
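
As an illustration of the kind of complexity scaling fit the abstract describes, the sketch below fits a power-law curve to suboptimality measured at several TSP instance sizes. This is a minimal sketch, not the paper's method: the data points, the power-law-plus-offset functional form, and the parameter names are illustrative assumptions.

# Minimal sketch (assumptions, not results from the paper): fit a
# complexity scaling law of the form s(n) = a * n^b + c, where n is the
# number of TSP nodes and s(n) is the suboptimality of a fixed-size model.
import numpy as np
from scipy.optimize import curve_fit

def power_law(n, a, b, c):
    # Suboptimality as a function of the number of TSP nodes n.
    return a * np.power(n, b) + c

# Hypothetical measurements: mean suboptimality (tour length relative to a
# strong baseline, minus 1) for a fixed-size model at several node counts.
nodes = np.array([20, 50, 100, 200, 500])
subopt = np.array([0.004, 0.015, 0.040, 0.090, 0.220])

params, _ = curve_fit(power_law, nodes, subopt, p0=[1e-4, 1.0, 0.0], maxfev=10000)
a, b, c = params
print(f"fitted scaling law: suboptimality ~ {a:.2e} * n^{b:.2f} + {c:.3f}")

# Extrapolate to a larger instance to show how such a fit would be used to
# predict degradation as problem complexity grows.
print(f"predicted suboptimality at n=1000: {power_law(1000, *params):.3f}")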

@article{weissman2025_2506.12932,
  title={Complexity Scaling Laws for Neural Models using Combinatorial Optimization},
  author={Lowell Weissman and Michael Krumdick and A. Lynn Abbott},
  journal={arXiv preprint arXiv:2506.12932},
  year={2025}
}