Towards Mitigating Hallucinations in Large Vision-Language Models by Refining Textual Embeddings (arXiv:2511.05017)

7 November 2025
Aakriti Agrawal
Gouthaman KV
R. Aralikatti
Gauri Jagatap
Jiaxin Yuan
Vijay Kamarshi
Andrea Fanelli
Furong Huang
Tags: VLM
Links: arXiv (abs) · PDF · HTML · HuggingFace (7 upvotes)

Papers citing "Towards Mitigating Hallucinations in Large Vision-Language Models by Refining Textual Embeddings"

No citing papers found.