LLMGuard: Guarding Against Unsafe LLM Behavior

27 February 2024
Shubh Goyal, Medha Hira, Shubham Mishra, Sukriti Goyal, Arnav Goel, Niharika Dadu, Kirushikesh DB, Sameep Mehta, Nishtha Madaan
arXiv:2403.00826
Abstract

Although the rise of Large Language Models (LLMs) in enterprise settings brings new opportunities and capabilities, it also introduces challenges, such as the risk of generating inappropriate, biased, or misleading content that violates regulations and raises legal concerns. To mitigate this, we present "LLMGuard", a tool that monitors user interactions with an LLM application and flags content that falls under specific undesirable behaviours or restricted conversation topics. To do this robustly, LLMGuard employs an ensemble of detectors.
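The abstract's central design choice is the ensemble of detectors: each detector independently screens a user interaction for one class of unsafe behaviour, and the interaction is flagged if any detector fires. A minimal sketch of that pattern is below, assuming a simple any-detector-fires policy; all names here (GuardEnsemble, BannedTopicDetector, Flag) are illustrative assumptions, not the paper's actual API, and the keyword matcher merely stands in for the trained classifiers a real deployment would use.

```python
from dataclasses import dataclass
from typing import List, Optional

@dataclass
class Flag:
    """A single detector's verdict on a piece of text."""
    detector: str
    reason: str

class Detector:
    """Base interface (hypothetical): return a Flag if the text violates this detector's policy."""
    name = "base"

    def check(self, text: str) -> Optional[Flag]:
        raise NotImplementedError

class BannedTopicDetector(Detector):
    """Toy keyword-based topic detector; a real system would use a trained classifier."""
    name = "banned_topic"

    def __init__(self, topics: List[str]):
        self.topics = [t.lower() for t in topics]

    def check(self, text: str) -> Optional[Flag]:
        lowered = text.lower()
        for topic in self.topics:
            if topic in lowered:
                return Flag(self.name, f"matches restricted topic '{topic}'")
        return None

class GuardEnsemble:
    """Runs every detector over the text; any single hit flags the interaction."""

    def __init__(self, detectors: List[Detector]):
        self.detectors = detectors

    def screen(self, text: str) -> List[Flag]:
        return [f for f in (d.check(text) for d in self.detectors) if f is not None]

if __name__ == "__main__":
    guard = GuardEnsemble([BannedTopicDetector(["insider trading"])])
    prompt = "Explain how to hide insider trading from regulators."
    flags = guard.screen(prompt)
    if flags:
        print("Flagged:", [(f.detector, f.reason) for f in flags])
    else:
        print("Clean: forward the prompt to the LLM.")
```

Keeping every detector behind a common check interface is what makes the ensemble extensible: adding a new safety policy is just another Detector subclass, with no changes to the screening loop.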
