Bridging the Domain Gap in Equation Distillation with Reinforcement Feedback

21 May 2025
Wangyang Ying, Haoyue Bai, Nanxu Gong, Xinyuan Wang, Sixun Dong, Haifeng Chen, Yanjie Fu
Abstract

The data-to-equation (Data2Eqn) task aims to discover interpretable mathematical equations that map observed values to labels, offering physical insights and broad applicability across academic and industrial domains. Genetic programming and traditional deep-learning approaches suffer from search inefficiency and poor generalization on small, task-specific datasets. Foundation models have shown promise in this area, but existing approaches suffer from two issues: 1) they are pretrained on general-purpose data distributions, making them less effective for domain-specific tasks; and 2) their training objectives focus on token-level alignment and overlook mathematical semantics, which can lead to inaccurate equations. To address these issues, we aim to enhance the domain adaptability of foundation models for Data2Eqn tasks. In this work, we propose a reinforcement learning-based finetuning framework that directly optimizes the generation policy of a pretrained model through reward signals derived from downstream numerical fitness. Our method allows the model to adapt to specific and complex data distributions and to generate mathematically meaningful equations. Extensive experiments demonstrate that our approach improves both the accuracy and robustness of equation generation under complex distributions.
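The abstract's core idea is to replace token-level supervision with a reward computed from how well a generated equation fits the downstream data. The sketch below illustrates that loop under stated assumptions, and is not the authors' implementation: the tiny GRU policy, the five-token prefix vocabulary, and the fitness_reward function are all hypothetical stand-ins for the pretrained Data2Eqn foundation model and its reward design, and a REINFORCE-style update stands in for whichever policy-gradient method the paper uses. Only the overall pattern (sample an equation, score its numerical fitness on task data, update the generation policy) reflects the framework described above.

# Minimal sketch of reward-guided finetuning for data-to-equation generation.
# All names and hyperparameters here are illustrative assumptions.
import torch
import torch.nn as nn

# Toy prefix-notation vocabulary: binary ops, a unary op, variable, constant.
VOCAB = ["add", "mul", "sin", "x", "c"]
BOS = len(VOCAB)  # begin-of-sequence token

class TinyEqnPolicy(nn.Module):
    """Stand-in for a pretrained Data2Eqn foundation model."""
    def __init__(self, hidden=64):
        super().__init__()
        self.embed = nn.Embedding(len(VOCAB) + 1, hidden)
        self.rnn = nn.GRU(hidden, hidden, batch_first=True)
        self.head = nn.Linear(hidden, len(VOCAB))

    def sample(self, max_len=7):
        """Sample one token sequence and return it with its log-probability."""
        tok, h, log_prob, seq = torch.tensor([[BOS]]), None, 0.0, []
        for _ in range(max_len):
            out, h = self.rnn(self.embed(tok), h)
            dist = torch.distributions.Categorical(logits=self.head(out[:, -1]))
            a = dist.sample()
            log_prob = log_prob + dist.log_prob(a)
            seq.append(a.item())
            tok = a.unsqueeze(0)
        return seq, log_prob

def eval_prefix(seq, x):
    """Evaluate a prefix-notation expression on x; returns None if malformed."""
    def rec(i):
        if i >= len(seq):
            return None, i
        s = VOCAB[seq[i]]
        if s == "x":
            return x, i + 1
        if s == "c":
            return torch.ones_like(x), i + 1
        if s == "sin":
            a, j = rec(i + 1)
            return (None, j) if a is None else (torch.sin(a), j)
        a, j = rec(i + 1)
        if a is None:
            return None, j
        b, k = rec(j)
        if b is None:
            return None, k
        return (a + b if s == "add" else a * b), k
    y, _ = rec(0)
    return y

def fitness_reward(seq, x, y_true):
    """Reward derived from numerical fitness of the decoded equation."""
    y_pred = eval_prefix(seq, x)
    if y_pred is None:
        return -1.0  # malformed equation: penalize
    mse = torch.mean((y_pred - y_true) ** 2).item()
    return 1.0 / (1.0 + mse)  # bounded, higher is better

# Hypothetical downstream task data: y = sin(x) + x.
x = torch.linspace(-2, 2, 64)
y_true = torch.sin(x) + x

policy = TinyEqnPolicy()
opt = torch.optim.Adam(policy.parameters(), lr=1e-3)
baseline = 0.0
for step in range(200):
    seq, log_prob = policy.sample()
    r = fitness_reward(seq, x, y_true)
    baseline = 0.9 * baseline + 0.1 * r        # moving-average reward baseline
    loss = -(r - baseline) * log_prob.sum()    # REINFORCE policy gradient
    opt.zero_grad()
    loss.backward()
    opt.step()

In this pattern the reward is computed only from how well the sampled equation fits the task data, so the policy adapts to the specific data distribution rather than to token-level targets, which is the adaptation mechanism the abstract describes.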

@article{ying2025_2505.15572,
  title={Bridging the Domain Gap in Equation Distillation with Reinforcement Feedback},
  author={Wangyang Ying and Haoyue Bai and Nanxu Gong and Xinyuan Wang and Sixun Dong and Haifeng Chen and Yanjie Fu},
  journal={arXiv preprint arXiv:2505.15572},
  year={2025}
}