Quantifying the visual concreteness of words and topics in multimodal datasets
18 April 2018
Jack Hessel, David M. Mimno, Lillian Lee
arXiv:1804.06786

Papers citing "Quantifying the visual concreteness of words and topics in multimodal datasets" (6 of 6 shown)

Multi-Modal Framing Analysis of News
Arnav Arora, Srishti Yadav, Maria Antoniak, Serge J. Belongie, Isabelle Augenstein
26 Mar 2025

Mitigating Open-Vocabulary Caption Hallucinations
Assaf Ben-Kish, Moran Yanuka, Morris Alper, Raja Giryes, Hadar Averbuch-Elor
MLLM, VLM
06 Dec 2023

Billion-Scale Pretraining with Vision Transformers for Multi-Task Visual Representations
Josh Beal, Hao Wu, Dong Huk Park, Andrew Zhai, Dmitry Kislyuk
ViT
12 Aug 2021

Visual Pivoting for (Unsupervised) Entity Alignment
Fangyu Liu, Muhao Chen, Dan Roth, Nigel Collier
OCL
28 Sep 2020

Multimodal Grounding for Language Processing
Lisa Beinborn, Teresa Botschen, Iryna Gurevych
17 Jun 2018

A Multi-View Embedding Space for Modeling Internet Images, Tags, and their Semantics
Yunchao Gong, Qifa Ke, Michael Isard, Svetlana Lazebnik
3DV
18 Dec 2012