From General to Targeted Rewards: Surpassing GPT-4 in Open-Ended Long-Context Generation

Current research on long-context Large Language Models (LLMs) focuses primarily on long-context understanding, while Open-ended Long Text Generation (Open-LTG) remains insufficiently explored. Training a long-context generation model requires gold-standard reference data, which typically do not exist for informative Open-LTG tasks; as a result, previous methods rely only on general assessments as reward signals, which limits accuracy. To bridge this gap, we introduce ProxyReward, a reinforcement learning (RL) based framework comprising a dataset and a reward-signal computation method. First, the ProxyReward dataset is generated automatically from simple prompts, obviating extensive labeled data or significant manual effort. Second, the ProxyReward signal provides a targeted evaluation of information comprehensiveness and accuracy for a specific question. Experimental results show that ProxyReward surpasses even GPT-4-Turbo: it improves the performance of widely used open-source models by 20% on the Open-LTG task and also outperforms the LLM-as-a-Judge approach. Our work presents effective methods for enhancing the ability of LLMs to address complex open-ended questions posed by humans.
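The abstract only states that the ProxyReward signal scores information comprehensiveness and accuracy for a specific question; the exact formulation is not given here. The following is a minimal, hypothetical Python sketch of such a targeted reward: the key-point checklist, the claim-verification stub, and the weighting scheme are illustrative assumptions, not the authors' implementation.

```python
# Hypothetical sketch of a "targeted" proxy reward combining comprehensiveness
# (coverage of question-specific key points) and accuracy (fraction of verified
# claims). All details below are assumptions for illustration only.

from dataclasses import dataclass


@dataclass
class ProxyRewardConfig:
    coverage_weight: float = 0.5   # weight for comprehensiveness (assumed)
    accuracy_weight: float = 0.5   # weight for factual accuracy (assumed)


def coverage_score(generation: str, key_points: list[str]) -> float:
    """Fraction of question-specific key points mentioned in the generation."""
    if not key_points:
        return 0.0
    text = generation.lower()
    hits = sum(1 for kp in key_points if kp.lower() in text)
    return hits / len(key_points)


def accuracy_score(claim_labels: list[bool]) -> float:
    """Fraction of extracted claims judged correct; judgments would come from
    a separate verifier (e.g. an LLM checker), stubbed here as booleans."""
    if not claim_labels:
        return 0.0
    return sum(claim_labels) / len(claim_labels)


def proxy_reward(generation: str,
                 key_points: list[str],
                 claim_labels: list[bool],
                 cfg: ProxyRewardConfig = ProxyRewardConfig()) -> float:
    """Scalar reward in [0, 1] usable as an RL training signal."""
    cov = coverage_score(generation, key_points)
    acc = accuracy_score(claim_labels)
    return cfg.coverage_weight * cov + cfg.accuracy_weight * acc


if __name__ == "__main__":
    # Toy example: a question whose answer should cover three key points.
    gen = "The treaty was signed in 1648 and ended the Thirty Years' War."
    points = ["1648", "thirty years' war", "peace of westphalia"]
    labels = [True, True]  # verifier judged both extracted claims correct
    print(proxy_reward(gen, points, labels))  # ~0.833 with equal weights
```

A scalar of this form could be plugged into a standard policy-optimization loop in place of a general quality score; the point of the sketch is only to show what "targeted" means: the reward is conditioned on the specific question's expected content rather than on generic fluency or length judgments.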
@article{guo2025_2506.16024,
  title   = {From General to Targeted Rewards: Surpassing GPT-4 in Open-Ended Long-Context Generation},
  author  = {Zhihan Guo and Jiele Wu and Wenqian Cui and Yifei Zhang and Minda Hu and Yufei Wang and Irwin King},
  journal = {arXiv preprint arXiv:2506.16024},
  year    = {2025}
}