
Sequential Policy Gradient for Adaptive Hyperparameter Optimization

Zheng Li
Jerry Cheng
Huanying Helen Gu
Main: 10 pages, 4 figures, 4 tables; Bibliography: 4 pages
Abstract

Reinforcement learning is essential for neural architecture search and hyperparameter optimization, but conventional approaches impede widespread use due to prohibitive time and computational costs. Inspired by the DeepSeek-V3 multi-token prediction architecture, we propose Sequential Policy Gradient modeling (SPG), a novel trajectory-generation paradigm for lightweight online hyperparameter optimization. In contrast to conventional policy gradient methods, SPG extends the base model with temporary modules, enabling it to generate state-action (padded) trajectories in a single forward pass. Our experiments demonstrate that models gain performance when retrained with SPG on their original datasets, and that they also outperform standard transfer fine-tuning. We evaluate SPG on five datasets spanning computer vision (ImageNet, COCO), natural language processing (GLUE, SQuAD), and audio (SUPERB) to assess its industrial applicability. The proposed method demonstrates consistent improvements across widely adopted models, achieving performance gains of +0.2% to +7% with significantly lower computational costs. Fully reproducible code and pre-trained models: this https URL.
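To make the abstract's central idea concrete, the following is a minimal Python/PyTorch sketch of one way a base model could be extended with temporary heads so that a single forward pass yields a padded state-action trajectory driving a REINFORCE-style update over hyperparameter choices. All names here (SPGWrapper, spg_update, reward_fn, the per-step linear heads) are illustrative assumptions for exposition, not the authors' released implementation.

import torch
import torch.nn as nn


class SPGWrapper(nn.Module):
    """Base model plus temporary per-step heads (assumed design, not the paper's code)."""

    def __init__(self, base: nn.Module, feat_dim: int, n_actions: int, horizon: int):
        super().__init__()
        self.base = base
        # Temporary modules: one lightweight head per trajectory step,
        # discarded once hyperparameter search is finished.
        self.heads = nn.ModuleList(
            nn.Linear(feat_dim, n_actions) for _ in range(horizon)
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        feats = self.base(x)  # shared state features from the base model
        # A single forward pass emits action logits for every (padded) trajectory step.
        return torch.stack([head(feats) for head in self.heads], dim=1)  # (B, T, A)


def spg_update(wrapper: SPGWrapper, optimizer: torch.optim.Optimizer,
               x: torch.Tensor, reward_fn) -> float:
    """One policy-gradient step over the trajectory produced in a single pass."""
    logits = wrapper(x)                                   # (B, T, A)
    dist = torch.distributions.Categorical(logits=logits)
    actions = dist.sample()                               # hyperparameter choices, (B, T)
    rewards = reward_fn(actions)                          # e.g. validation-metric gain, (B,)
    loss = -(dist.log_prob(actions) * rewards.unsqueeze(-1)).mean()
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()

The point of the sketch is only the structural claim made in the abstract: the trajectory is produced by one forward pass through the extended model rather than by repeated environment rollouts, which is where the claimed reduction in time and computational cost would come from.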

@article{li2025_2506.15051,
  title={Sequential Policy Gradient for Adaptive Hyperparameter Optimization},
  author={Zheng Li and Jerry Cheng and Huanying Helen Gu},
  journal={arXiv preprint arXiv:2506.15051},
  year={2025}
}