ResearchTrend.AI
Agentic Feature Augmentation: Unifying Selection and Generation with Teaming, Planning, and Memories

21 May 2025
Nanxu Gong
Sixun Dong
Haoyue Bai
Xinyuan Wang
Wangyang Ying
Yanjie Fu
Abstract

As a widely used and practical tool, feature engineering transforms raw data into discriminative features to advance AI model performance. However, existing methods usually apply feature selection and generation separately, failing to strike a balance between reducing redundancy and adding meaningful dimensions. To fill this gap, we propose an agentic feature augmentation concept, where the unification of feature generation and selection is modeled as agentic teaming and planning. Specifically, we develop a Multi-Agent System with Long and Short-Term Memory (MAGS), comprising a selector agent to eliminate redundant features, a generator agent to produce informative new dimensions, and a router agent that strategically coordinates their actions. We leverage in-context learning with short-term memory for immediate feedback refinement and long-term memory for globally optimal guidance. Additionally, we employ offline Proximal Policy Optimization (PPO) reinforcement fine-tuning to train the router agent for effective decision-making to navigate a vast discrete feature space. Extensive experiments demonstrate that this unified agentic framework consistently achieves superior task performance by intelligently orchestrating feature selection and generation.
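The abstract describes a loop in which a router agent alternates between a selector (pruning redundant features) and a generator (adding new dimensions), guided by short-term feedback and a long-term record of the best feature set. A minimal sketch of that loop, under loudly stated assumptions: the scoring heuristic, the greedy router policy (standing in for the PPO-trained router), and all function names here are illustrative inventions, not the authors' implementation.

```python
# Hypothetical sketch of the MAGS loop from the abstract. The scoring
# heuristic, agents, and router policy are illustrative assumptions only.
import random

def evaluate(features):
    """Proxy for downstream-task performance: unique features help,
    redundant (duplicate) ones hurt."""
    unique = set(features)
    return len(unique) - 0.5 * (len(features) - len(unique))

def select_step(features):
    """Selector agent: drop one redundant (duplicated) feature, if any."""
    out, seen, removed = [], set(), False
    for f in features:
        if f in seen and not removed:
            removed = True  # eliminate the first redundancy encountered
            continue
        seen.add(f)
        out.append(f)
    return out

def generate_step(features, rng):
    """Generator agent: add a new crossed feature from two existing ones."""
    a, b = rng.sample(features, 2)
    return features + [f"{a}*{b}"]

def router(short_term):
    """Router agent (greedy stand-in for the PPO-trained policy): if the
    last generation step hurt the score, switch to selection."""
    if short_term and short_term[-1] == ("generate", True):
        return "select"
    return "generate"

def mags(features, steps=10, seed=0):
    rng = random.Random(seed)
    short_term = []  # recent (action, hurt_score) feedback, bounded
    long_term = (evaluate(features), list(features))  # best set seen so far
    for _ in range(steps):
        action = router(short_term)
        new = select_step(features) if action == "select" else generate_step(features, rng)
        hurt = evaluate(new) < evaluate(features)
        short_term = (short_term + [(action, hurt)])[-3:]  # short-term memory
        features = new
        if evaluate(features) > long_term[0]:
            long_term = (evaluate(features), list(features))  # long-term memory
    return long_term[1]

best = mags(["x1", "x2", "x3"])
```

In this toy version the long-term memory simply caches the best-scoring feature set, while the short-term memory holds the last few action outcomes so the router can react to immediate feedback, mirroring the roles the abstract assigns to the two memories.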

@article{gong2025_2505.15076,
  title={Agentic Feature Augmentation: Unifying Selection and Generation with Teaming, Planning, and Memories},
  author={Nanxu Gong and Sixun Dong and Haoyue Bai and Xinyuan Wang and Wangyang Ying and Yanjie Fu},
  journal={arXiv preprint arXiv:2505.15076},
  year={2025}
}