
Project Riley: Multimodal Multi-Agent LLM Collaboration with Emotional Reasoning and Voting

26 May 2025
Ana Rita Ortigoso
Gabriel Vieira
Daniel Fuentes
Luís Frazão
Nuno Costa
António Pereira
Abstract

This paper presents Project Riley, a novel multimodal and multi-model conversational AI architecture oriented towards the simulation of reasoning influenced by emotional states. Drawing inspiration from Pixar's Inside Out, the system comprises five distinct emotional agents - Joy, Sadness, Fear, Anger, and Disgust - that engage in structured multi-round dialogues to generate, criticise, and iteratively refine responses. A final reasoning mechanism synthesises the contributions of these agents into a coherent output that either reflects the dominant emotion or integrates multiple perspectives. The architecture incorporates both textual and visual large language models (LLMs), alongside advanced reasoning and self-refinement processes. A functional prototype was deployed locally in an offline environment, optimised for emotional expressiveness and computational efficiency. From this initial prototype, a second prototype, Armando, was developed for use in emergency contexts, delivering emotionally calibrated and factually accurate information through the integration of Retrieval-Augmented Generation (RAG) and cumulative context tracking. The Project Riley prototype was evaluated through user testing, in which participants interacted with the chatbot and completed a structured questionnaire assessing three dimensions: Emotional Appropriateness, Clarity and Utility, and Naturalness and Human-likeness. The results indicate strong performance in structured scenarios, particularly with respect to emotional alignment and communicative clarity.
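The abstract describes a control flow in which five emotion-conditioned agents draft and refine answers over several rounds, after which a vote determines whether the dominant emotion's draft is returned or multiple perspectives are synthesised. The paper does not publish code here, so the following is a purely illustrative sketch of that round-and-vote loop: the agent names come from the abstract, but `agent_respond`, `agent_vote`, and the self-voting rule are hypothetical placeholders standing in for the actual LLM calls and scoring mechanism.

```python
from collections import Counter

# The five emotional agents named in the abstract.
EMOTIONS = ["Joy", "Sadness", "Fear", "Anger", "Disgust"]

def agent_respond(emotion: str, prompt: str, round_no: int) -> str:
    # Placeholder for an emotion-conditioned LLM call; here we just
    # tag the draft so the control flow is visible.
    return f"[{emotion} draft r{round_no}] response to: {prompt}"

def agent_vote(emotion: str, drafts: dict) -> str:
    # Placeholder voting rule: each agent votes for its own draft.
    # A real system would score all drafts with the underlying model.
    return emotion

def riley_round_loop(prompt: str, rounds: int = 3):
    drafts = {}
    for r in range(1, rounds + 1):
        # Each emotional agent produces (or iteratively refines) its draft.
        for emotion in EMOTIONS:
            drafts[emotion] = agent_respond(emotion, prompt, r)
    # Voting: the winning emotion's draft is returned; a near-tie would
    # instead trigger synthesis across multiple perspectives.
    votes = Counter(agent_vote(e, drafts) for e in EMOTIONS)
    dominant, _count = votes.most_common(1)[0]
    return dominant, drafts[dominant]

dominant, answer = riley_round_loop("How do I stay calm in an emergency?")
print(dominant, "->", answer)
```

With the toy self-voting rule every agent gets one vote, so the sketch simply illustrates the generate/refine/vote pipeline shape, not the paper's actual tie-breaking or synthesis behaviour.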

@article{ortigoso2025_2505.20521,
  title={Project Riley: Multimodal Multi-Agent LLM Collaboration with Emotional Reasoning and Voting},
  author={Ana Rita Ortigoso and Gabriel Vieira and Daniel Fuentes and Luís Frazão and Nuno Costa and António Pereira},
  journal={arXiv preprint arXiv:2505.20521},
  year={2025}
}