AriGraph: Learning Knowledge Graph World Models with Episodic Memory for LLM Agents

5 July 2024
Petr Anokhin
Nikita Semenov
Artyom Sorokin
Dmitry Evseev
Andrey Kravchenko
Mikhail Burtsev
Evgeny Burnaev
    LLMAG
    RALM
    KELM
arXiv / PDF / HTML
Abstract

Advancements in the capabilities of Large Language Models (LLMs) have created a promising foundation for developing autonomous agents. With the right tools, these agents could learn to solve tasks in new environments by accumulating and updating their knowledge. Current LLM-based agents process past experiences using the full history of observations, summarization, or retrieval augmentation. However, these unstructured memory representations do not facilitate the reasoning and planning essential for complex decision-making. In our study, we introduce AriGraph, a novel method wherein the agent constructs and updates a memory graph that integrates semantic and episodic memories while exploring the environment. We demonstrate that our Ariadne LLM agent, consisting of the proposed memory architecture augmented with planning and decision-making, effectively handles complex tasks within interactive text game environments that are difficult even for human players. Results show that our approach markedly outperforms other established memory methods and strong RL baselines in a range of problems of varying complexity. Additionally, AriGraph demonstrates competitive performance compared to dedicated knowledge graph-based methods in static multi-hop question-answering.
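
To make the abstract's core idea concrete, below is a minimal Python sketch of a memory graph that joins semantic memory (relation triplets) with episodic memory (raw observations linked to the triplets extracted from them). The class name MemoryGraph, its update/retrieve methods, the triplet-overwrite rule, and the two-hop retrieval are all illustrative assumptions, not the authors' actual implementation.

from dataclasses import dataclass, field

# (subject, relation, object), e.g. ("knife", "is_in", "kitchen")
Triplet = tuple[str, str, str]

@dataclass
class MemoryGraph:
    semantic: set[Triplet] = field(default_factory=set)   # semantic memory
    episodes: list[str] = field(default_factory=list)     # episodic memory
    # episodic edges: triplet -> indices of observations that produced it
    links: dict[Triplet, list[int]] = field(default_factory=dict)

    def update(self, observation: str, extracted: list[Triplet]) -> None:
        """Store a new observation, link it to its extracted triplets,
        and drop stale triplets that share (subject, relation) but
        conflict with the new information (an assumed update rule)."""
        step = len(self.episodes)
        self.episodes.append(observation)
        for t in extracted:
            stale = {s for s in self.semantic if s[:2] == t[:2] and s != t}
            self.semantic -= stale
            for s in stale:
                self.links.pop(s, None)
            self.semantic.add(t)
            self.links.setdefault(t, []).append(step)

    def retrieve(self, entities: set[str], hops: int = 2) -> list[Triplet]:
        """Collect triplets reachable from the query entities within
        `hops` steps of graph traversal."""
        visited = set(entities)
        found: list[Triplet] = []
        for _ in range(hops):
            new_nodes: set[str] = set()
            for t in self.semantic:
                if (t[0] in visited or t[2] in visited) and t not in found:
                    found.append(t)
                    new_nodes.update({t[0], t[2]})
            if new_nodes <= visited:
                break
            visited |= new_nodes
        return found

A short usage example with hypothetical text-game observations:

g = MemoryGraph()
g.update("You are in the kitchen. A knife lies on the table.",
         [("knife", "is_in", "kitchen"), ("player", "is_in", "kitchen")])
g.update("You take the knife.",
         [("knife", "is_in", "inventory")])
print(g.retrieve({"knife"}))
# [('knife', 'is_in', 'inventory')] -- the stale kitchen triplet was replaced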

View on arXiv
@article{anokhin2025_2407.04363,
  title={AriGraph: Learning Knowledge Graph World Models with Episodic Memory for LLM Agents},
  author={Petr Anokhin and Nikita Semenov and Artyom Sorokin and Dmitry Evseev and Andrey Kravchenko and Mikhail Burtsev and Evgeny Burnaev},
  journal={arXiv preprint arXiv:2407.04363},
  year={2025}
}