Utility-inspired Reward Transformations Improve Reinforcement Learning Training of Language Models

8 January 2025
Roberto-Rafael Maura-Rivero, Chirag Nagpal, Roma Patel, Francesco Visin
Abstract

Current methods that train large language models (LLMs) with reinforcement learning feedback often resort to averaging the outputs of multiple reward functions during training. This overlooks crucial aspects of individual reward dimensions and inter-reward dependencies, which can lead to sub-optimal generations. In this work, we show how linear aggregation of rewards exhibits vulnerabilities that can lead to undesired properties of the generated text. We then propose a transformation of reward functions inspired by the economic theory of utility functions (specifically the Inada conditions) that enhances sensitivity to low reward values while diminishing sensitivity to already high values. We compare our approach to existing baseline methods that linearly aggregate rewards and show that the Inada-inspired reward feedback is superior to traditional weighted averaging. We quantitatively and qualitatively analyse the differences between the methods, and find that models trained with Inada transformations score as more helpful while being less harmful.
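
The abstract does not state the exact transformation, so the sketch below only illustrates the general idea with a standard concave power utility, one of the textbook functions satisfying the Inada conditions (marginal utility tends to infinity at zero reward and to zero at high reward). Everything in it is an assumption for illustration, not the paper's implementation: the names inada_style_transform and aggregate_rewards are hypothetical, the exponent alpha=0.5 is arbitrary, and per-dimension rewards are assumed to be rescaled to be non-negative.

import numpy as np

def inada_style_transform(r, alpha=0.5, eps=1e-8):
    """Illustrative concave transform u(r) = r**alpha with 0 < alpha < 1.

    Not the paper's transformation (the abstract does not specify one); it is
    simply a utility function satisfying Inada-style conditions: u'(r) grows
    without bound as r -> 0 and vanishes as r -> infinity, so the aggregate is
    far more sensitive to improving low reward dimensions than high ones.
    """
    r = np.clip(r, 0.0, None)  # assumption: rewards rescaled to be non-negative
    return np.power(r + eps, alpha)

def aggregate_rewards(rewards, weights=None, transform=inada_style_transform):
    """Weighted average of transformed per-dimension rewards (hypothetical helper)."""
    r = np.asarray(rewards, dtype=float)
    w = np.full(r.size, 1.0 / r.size) if weights is None else np.asarray(weights, dtype=float)
    return float(np.dot(w, transform(r)))

# With a plain linear average, raising either dimension by 0.1 moves the
# aggregate by exactly the same amount; with the concave transform, fixing the
# weak dimension (e.g. a low harmlessness score) pays off far more than
# polishing the already strong one.
base = [0.05, 0.90]
print(aggregate_rewards([0.15, 0.90]) - aggregate_rewards(base))  # ~0.082, large gain
print(aggregate_rewards([0.05, 1.00]) - aggregate_rewards(base))  # ~0.026, small gain
print(np.mean([0.15, 0.90]) - np.mean(base))                      # ~0.05 either way
print(np.mean([0.05, 1.00]) - np.mean(base))                      # ~0.05 either way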

@article{maura-rivero2025_2501.06248,
  title={Utility-inspired Reward Transformations Improve Reinforcement Learning Training of Language Models},
  author={Roberto-Rafael Maura-Rivero and Chirag Nagpal and Roma Patel and Francesco Visin},
  journal={arXiv preprint arXiv:2501.06248},
  year={2025}
}