ResearchTrend.AI
Interpreting and Editing Vision-Language Representations to Mitigate Hallucinations

3 October 2024
Nick Jiang, Anish Kachinthaya, Suzie Petryk, Yossi Gandelsman
Topic: VLM
arXiv:2410.02762 (abs · PDF · HTML)

Papers citing "Interpreting and Editing Vision-Language Representations to Mitigate Hallucinations"

5 / 55 papers shown
1. Understanding the Role of Individual Units in a Deep Neural Network
   David Bau, Jun-Yan Zhu, Hendrik Strobelt, Àgata Lapedriza, Bolei Zhou, Antonio Torralba · GAN · 10 Sep 2020

2. A Disentangling Invertible Interpretation Network for Explaining Latent Representations
   Patrick Esser, Robin Rombach, Björn Ommer · 27 Apr 2020

3. Object Hallucination in Image Captioning
   Anna Rohrbach, Lisa Anne Hendricks, Kaylee Burns, Trevor Darrell, Kate Saenko · 06 Sep 2018

4. RISE: Randomized Input Sampling for Explanation of Black-box Models
   Vitali Petsiuk, Abir Das, Kate Saenko · FAtt · 19 Jun 2018

5. Microsoft COCO: Common Objects in Context
   Tsung-Yi Lin, Michael Maire, Serge J. Belongie, Lubomir Bourdev, Ross B. Girshick, James Hays, Pietro Perona, Deva Ramanan, C. L. Zitnick, Piotr Dollár · ObjD · 01 May 2014