ResearchTrend.AI
SySLLM: Generating Synthesized Policy Summaries for Reinforcement Learning Agents Using Large Language Models

13 March 2025
Sahar Admoni
Omer Ben-Porat
Ofra Amir
Abstract

Policies generated by Reinforcement Learning (RL) algorithms can be difficult to describe to users, as they result from the interplay between complex reward structures and neural network-based representations. This combination often leads to unpredictable behaviors, making policies challenging to analyze and posing significant obstacles to fostering human trust in real-world applications. Global policy summarization methods aim to describe agent behavior through demonstrations of actions in a subset of world states. However, users can only watch a limited number of demonstrations, which restricts their understanding of policies. Moreover, such methods rely heavily on user interpretation, as they do not synthesize observations into coherent patterns. In this work, we present SySLLM (Synthesized Summary using LLMs), a novel method that employs synthesis summarization, leveraging large language models' (LLMs) extensive world knowledge and pattern-recognition abilities to generate textual summaries of policies. An expert evaluation demonstrates that the proposed approach produces summaries that capture the main insights generated by experts without introducing significant hallucinations. Additionally, a user study shows that SySLLM summaries are preferred over demonstration-based policy summaries and match or surpass their performance in objective agent-identification tasks.
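To make the idea of synthesis summarization concrete, the sketch below shows one plausible way such a pipeline might serialize recorded agent trajectories into a prompt asking an LLM to synthesize a textual policy summary. This is a minimal illustration, not the paper's actual implementation: the `Step` record, the prompt wording, and the `build_summary_prompt` helper are all hypothetical names chosen for this example.

```python
from dataclasses import dataclass

@dataclass
class Step:
    """One (state, action, reward) observation from an agent rollout.
    Hypothetical structure; the paper's actual trajectory format may differ."""
    state: str
    action: str
    reward: float

def build_summary_prompt(trajectories):
    """Serialize trajectories into a single text prompt that asks an LLM
    to synthesize the agent's behavior into a coherent summary."""
    lines = [
        "You are given trajectories of a reinforcement learning agent.",
        "Synthesize a concise textual summary of the agent's policy,",
        "describing recurring behavior patterns rather than single steps.",
        "",
    ]
    for i, traj in enumerate(trajectories, start=1):
        lines.append(f"Trajectory {i}:")
        for step in traj:
            lines.append(
                f"  state={step.state} action={step.action} reward={step.reward}"
            )
        lines.append("")
    lines.append("Summary of the agent's overall policy:")
    return "\n".join(lines)
```

The resulting string would then be sent to an LLM, whose free-text response serves as the synthesized policy summary shown to users in place of (or alongside) raw demonstrations.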

@article{admoni2025_2503.10509,
  title={SySLLM: Generating Synthesized Policy Summaries for Reinforcement Learning Agents Using Large Language Models},
  author={Sahar Admoni and Omer Ben-Porat and Ofra Amir},
  journal={arXiv preprint arXiv:2503.10509},
  year={2025}
}