Is LLMs Hallucination Usable? LLM-based Negative Reasoning for Fake News Detection

12 March 2025
Chaowei Zhang
Zongling Feng
Zewei Zhang
Jipeng Qiang
Guandong Xu
Yun Li
Communities: LRM
ArXiv | PDF | HTML

Papers citing "Is LLMs Hallucination Usable? LLM-based Negative Reasoning for Fake News Detection"

No citing papers found.