ResearchTrend.AI
Content Moderation by LLM: From Accuracy to Legitimacy

Tao Huang
5 September 2024
Community: AILaw

Papers citing "Content Moderation by LLM: From Accuracy to Legitimacy"

3 of 3 citing papers shown:
FLAME: Flexible LLM-Assisted Moderation Engine
Ivan Bakulin, Ilia Kopanichuk, Iaroslav Bespalov, Nikita Radchenko, V. Shaposhnikov, Dmitry V. Dylov, Ivan Oseledets
13 Feb 2025
Supporting Human Raters with the Detection of Harmful Content using Large Language Models
Kurt Thomas, Patrick Gage Kelley, David Tao, Sarah Meiklejohn, Owen Vallis, Shunwen Tan, Blaz Bratanic, Felipe Tiengo Ferreira, Vijay Eranti, Elie Bursztein
18 Jun 2024
Stance Detection on Social Media with Fine-Tuned Large Language Models
Ilker Gül, R. Lebret, Karl Aberer
18 Apr 2024