Agentic Knowledgeable Self-awareness

4 April 2025
Shuofei Qiao
Zhisong Qiu
Baochang Ren
Xiaobin Wang
Xiangyuan Ru
Ningyu Zhang
Xiang Chen
Yong Jiang
Pengjun Xie
Fei Huang
Huajun Chen
Abstract

Large Language Models (LLMs) have achieved considerable performance across various agentic planning tasks. However, traditional agent planning approaches adopt a "flood irrigation" methodology that indiscriminately injects gold trajectories, external feedback, and domain knowledge into agent models. This practice overlooks the fundamental human cognitive principle of situational self-awareness during decision-making: the ability to dynamically assess situational demands and strategically employ resources. To address this gap, we propose agentic knowledgeable self-awareness, a novel paradigm enabling LLM-based agents to autonomously regulate knowledge utilization. Specifically, we propose KnowSelf, a data-centric approach that equips agents with human-like knowledgeable self-awareness. Concretely, we devise a heuristic situation judgement criterion to mark special tokens on the agent's self-explored trajectories for collecting training data. Through a two-stage training process, the agent model can switch between different situations by generating specific special tokens, achieving optimal planning effects with minimal costs. Our experiments demonstrate that KnowSelf can outperform various strong baselines on different tasks and models with minimal use of external knowledge. Code is available at this https URL.
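
The two-stage training pipeline is only summarized above, but the inference-time behaviour the abstract describes, an agent that emits special tokens to request reflection or external knowledge only when the situation demands it, can be sketched in a few lines. The following Python is a minimal hypothetical illustration, not the paper's implementation: the token strings, the plan_step function, and the retrieve_knowledge and reflect helpers are all assumptions made for this example.

    from typing import Callable

    # Hypothetical special tokens the trained agent might emit to signal
    # which resource the current planning step needs (assumed names).
    KNOWLEDGE_TOKEN = "<knowledge>"   # request external domain knowledge
    REFLECTION_TOKEN = "<reflect>"    # request a self-reflection pass

    def plan_step(
        generate: Callable[[str], str],            # the agent model
        retrieve_knowledge: Callable[[str], str],  # external knowledge source
        reflect: Callable[[str], str],             # self-reflection routine
        context: str,
    ) -> str:
        """Produce one action; augment the context only when the agent
        itself asks for it by emitting a special token."""
        draft = generate(context)
        if draft.startswith(KNOWLEDGE_TOKEN):
            # Agent judged the situation knowledge-demanding:
            # inject retrieved knowledge, then regenerate the action.
            context += "\nKnowledge: " + retrieve_knowledge(context)
            return generate(context)
        if draft.startswith(REFLECTION_TOKEN):
            # Agent judged its draft uncertain: reflect, then regenerate.
            context += "\nReflection: " + reflect(context)
            return generate(context)
        # Default case: the agent acts directly, at minimal cost.
        return draft

    # Toy usage with stub functions standing in for the model and tools:
    action = plan_step(
        generate=lambda ctx: "unlock door" if "Knowledge:" in ctx else KNOWLEDGE_TOKEN,
        retrieve_knowledge=lambda ctx: "the key is under the mat",
        reflect=lambda ctx: "reconsider the previous step",
        context="The door is locked.",
    )
    print(action)  # -> "unlock door": knowledge injected once, then acted on

Because the agent itself decides when to emit a token, retrieval and reflection costs are paid only in the situations that need them, which is how the "optimal planning effects with minimal costs" described in the abstract would be realized.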

@article{qiao2025_2504.03553,
  title={Agentic Knowledgeable Self-awareness},
  author={Shuofei Qiao and Zhisong Qiu and Baochang Ren and Xiaobin Wang and Xiangyuan Ru and Ningyu Zhang and Xiang Chen and Yong Jiang and Pengjun Xie and Fei Huang and Huajun Chen},
  journal={arXiv preprint arXiv:2504.03553},
  year={2025}
}