Visual Word2Vec (vis-w2v): Learning Visually Grounded Word Embeddings Using Abstract Scenes
arXiv:1511.07067 · 22 November 2015
Satwik Kottur, Ramakrishna Vedantam, José M. F. Moura, Devi Parikh
Tags: VLM

Papers citing "Visual Word2Vec (vis-w2v): Learning Visually Grounded Word Embeddings Using Abstract Scenes" (10 / 10 papers shown)
1. GOAL: Global-local Object Alignment Learning
   Hyungyu Choi, Young Kyun Jang, Chanho Eom · VLM · 22 Mar 2025
2. Kiki or Bouba? Sound Symbolism in Vision-and-Language Models
   Morris Alper, Hadar Averbuch-Elor · 25 Oct 2023
3. COOT: Cooperative Hierarchical Transformer for Video-Text Representation Learning
   Simon Ging, Mohammadreza Zolfaghari, Hamed Pirsiavash, Thomas Brox · ViT, CLIP · 01 Nov 2020
4. Personalizing Fast-Forward Videos Based on Visual and Textual Features from Social Network
   W. Ramos, M. Silva, Edson Roteia Araujo Junior, Alan C. Neves, Erickson R. Nascimento · 29 Dec 2019
5. MULE: Multimodal Universal Language Embedding
   Donghyun Kim, Kuniaki Saito, Kate Saenko, Stan Sclaroff, Bryan A. Plummer · VLM · 08 Sep 2019
6. Word2vec to behavior: morphology facilitates the grounding of language in machines
   David Matthews, Sam Kriegman, C. Cappelle, Josh Bongard · LM&Ro · 03 Aug 2019
7. Wasserstein Barycenter Model Ensembling
   Pierre L. Dognin, Igor Melnyk, Youssef Mroueh, Jerret Ross, Cicero Nogueira dos Santos, Tom Sercu · 13 Feb 2019
8. Learning Robust Visual-Semantic Embeddings
   Yao-Hung Hubert Tsai, Liang-Kang Huang, Ruslan Salakhutdinov · SSL, AI4TS · 17 Mar 2017
9. Sound-Word2Vec: Learning Word Representations Grounded in Sounds
   Ashwin K. Vijayakumar, Ramakrishna Vedantam, Devi Parikh · 06 Mar 2017
10. Multilingual Visual Sentiment Concept Matching
    Nikolaos Pappas, Miriam Redi, Mercan Topkara, Brendan Jou, Hongyi Liu, Tao Chen, Shih-Fu Chang · CVBM · 07 Jun 2016