Two-Stage Feature Generation with Transformer and Reinforcement Learning

28 May 2025
Wanfu Gao
Zengyao Man
Zebin He
Yuhao Tang
Jun Gao
Kunpeng Liu
Main text: 7 pages, 4 figures, 4 tables; bibliography: 2 pages
Abstract

Feature generation is a critical step in machine learning, aiming to enhance model performance by capturing complex relationships within the data and generating meaningful new features. Traditional feature generation methods rely heavily on domain expertise and manual intervention, making the process labor-intensive and difficult to adapt to new scenarios. Although automated feature generation techniques address these issues to some extent, they often suffer from feature redundancy, inefficient exploration of the feature space, and limited adaptability to diverse datasets and tasks. To address these problems, we propose a Two-Stage Feature Generation (TSFG) framework, which integrates a Transformer-based encoder-decoder architecture with Proximal Policy Optimization (PPO). The encoder-decoder model in TSFG leverages the Transformer's self-attention mechanism to efficiently represent and transform features, capturing complex dependencies within the data. PPO further enhances TSFG by dynamically adjusting the feature generation strategy based on task-specific feedback, optimizing the process for improved performance and adaptability. As a result, TSFG generates high-quality feature sets that significantly improve the predictive performance of machine learning models. Experimental results demonstrate that TSFG outperforms existing state-of-the-art methods in terms of feature quality and adaptability.
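The feedback loop the abstract describes — a policy proposes feature transformations and is updated from task-specific reward — can be illustrated with a toy sketch. This is not the authors' implementation: the operator set, the reward (absolute correlation of the generated feature with the target), and the plain REINFORCE-style update used as a simplified stand-in for PPO are all illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy dataset: the target depends on the *product* of the raw features,
# so a generated product feature should earn the highest reward.
X = rng.normal(size=(200, 2))
y = X[:, 0] * X[:, 1] + 0.1 * rng.normal(size=200)

# Hypothetical candidate feature-generation operators.
ops = {
    "square_0": lambda X: X[:, 0] ** 2,
    "square_1": lambda X: X[:, 1] ** 2,
    "product":  lambda X: X[:, 0] * X[:, 1],
    "sum":      lambda X: X[:, 0] + X[:, 1],
}
names = list(ops)

def reward(feat):
    # Task feedback: |correlation| of the candidate feature with the target.
    return abs(np.corrcoef(feat, y)[0, 1])

# Softmax policy over operators, trained with a REINFORCE-style update
# (a simplified stand-in for the clipped PPO objective).
logits = np.zeros(len(names))
lr, baseline = 0.5, 0.0
for _ in range(300):
    p = np.exp(logits - logits.max())
    p /= p.sum()
    a = rng.choice(len(names), p=p)          # sample a transformation
    r = reward(ops[names[a]](X))             # evaluate it on the task
    baseline = 0.9 * baseline + 0.1 * r      # moving-average baseline
    grad = -p                                 # grad of log-softmax
    grad[a] += 1.0
    logits += lr * (r - baseline) * grad     # policy-gradient step

best = names[int(np.argmax(logits))]
```

After training, the policy concentrates on the operator whose output best explains the target (here the product feature). The real framework replaces this toy policy with a Transformer encoder-decoder over feature sequences and the REINFORCE step with PPO's clipped surrogate objective.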

@article{gao2025_2505.21978,
  title={Two-Stage Feature Generation with Transformer and Reinforcement Learning},
  author={Wanfu Gao and Zengyao Man and Zebin He and Yuhao Tang and Jun Gao and Kunpeng Liu},
  journal={arXiv preprint arXiv:2505.21978},
  year={2025}
}