Using multi-agent architecture to mitigate the risk of LLM hallucinations

2 July 2025
Abd Elrahman Amer, Magdi Amer
Community: LLMAG
Links: arXiv (abs) · PDF · HTML
Main text: 14 pages, 8 figures
Abstract

Improving customer service quality and response time is critical to maintaining customer loyalty and increasing a company's market share. While adopting emerging technologies such as Large Language Models (LLMs) is becoming a necessity to achieve these goals, the risk of hallucination remains a major challenge. In this paper, we present a multi-agent system that handles customer requests sent via SMS. The system integrates LLM-based agents with fuzzy logic to mitigate hallucination risks.
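The abstract does not detail the agent roles or the fuzzy rules, so the following Python sketch only illustrates the general pattern it describes: one LLM-based agent drafts a reply to an SMS request, a second agent scores how well the draft is supported, and a fuzzy membership function decides whether to auto-send the reply or escalate to a human. All names, membership functions, and thresholds here are hypothetical assumptions, not the authors' implementation.

from dataclasses import dataclass

# Illustrative sketch only: agent roles, membership function, and threshold
# are assumptions; the paper's actual architecture is not reproduced here.

@dataclass
class AgentReply:
    text: str
    confidence: float  # 0..1, support score assigned by the verifier agent


def drafting_agent(sms_request: str) -> str:
    """Stand-in for an LLM agent that drafts a reply to a customer SMS."""
    return f"Thank you for contacting us about: {sms_request}"


def verifier_agent(sms_request: str, draft: str) -> AgentReply:
    """Stand-in for a second LLM agent that checks the draft against the
    request and returns a confidence score for how well it is supported."""
    confidence = 0.8 if sms_request.lower() in draft.lower() else 0.3
    return AgentReply(text=draft, confidence=confidence)


def fuzzy_support(confidence: float) -> float:
    """Triangular membership function for the fuzzy set 'well supported'."""
    if confidence <= 0.4:
        return 0.0
    if confidence >= 0.9:
        return 1.0
    return (confidence - 0.4) / 0.5


def handle_sms(sms_request: str, send_threshold: float = 0.6) -> str:
    draft = drafting_agent(sms_request)
    checked = verifier_agent(sms_request, draft)
    # Fuzzy rule: auto-send only replies whose support is high enough;
    # otherwise escalate to a human agent instead of risking a hallucination.
    if fuzzy_support(checked.confidence) >= send_threshold:
        return checked.text
    return "Your request has been forwarded to a human agent."


if __name__ == "__main__":
    print(handle_sms("order status"))

In this sketch the escalation path is the hallucination mitigation: a reply is only sent automatically when the verifier's fuzzy support for it clears the threshold.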

@article{amer2025_2507.01446,
  title={Using multi-agent architecture to mitigate the risk of LLM hallucinations},
  author={Abd Elrahman Amer and Magdi Amer},
  journal={arXiv preprint arXiv:2507.01446},
  year={2025}
}