Architectural Precedents for General Agents using Large Language Models

11 May 2025
Robert E. Wray
James R. Kirk
John E. Laird
LLMAG, AI4TS, AI4CE
Abstract

One goal of AI (and AGI) is to identify and understand the specific mechanisms and representations sufficient for general intelligence. Often, this work manifests as research focused on architectures, and many cognitive architectures have been explored in AI/AGI. However, different research groups, and even different research traditions, have somewhat independently identified similar, common patterns of processes and representations, or cognitive design patterns, that are manifest in existing architectures. Today, AI systems exploiting large language models (LLMs) offer a relatively new combination of mechanism and representation for exploring the possibilities of general intelligence. In this paper, we summarize a few recurring cognitive design patterns that have appeared in various pre-transformer AI architectures. We then explore how these patterns are evident in systems using LLMs, especially for reasoning and interactive ("agentic") use cases. By examining and applying these recurring patterns, we can predict gaps and deficiencies in today's agentic LLM systems and identify likely subjects of future research toward general intelligence using LLMs and other generative foundation models.

View on arXiv
@article{wray2025_2505.07087,
  title={Architectural Precedents for General Agents using Large Language Models},
  author={Robert E. Wray and James R. Kirk and John E. Laird},
  journal={arXiv preprint arXiv:2505.07087},
  year={2025}
}