arXiv:2005.01678
What is Learned in Visually Grounded Neural Syntax Acquisition
4 May 2020
Noriyuki Kojima, Hadar Averbuch-Elor, Alexander M. Rush, Yoav Artzi
Papers citing "What is Learned in Visually Grounded Neural Syntax Acquisition" (7 papers)

Kiki or Bouba? Sound Symbolism in Vision-and-Language Models
Morris Alper, Hadar Averbuch-Elor
25 Oct 2023

A Joint Study of Phrase Grounding and Task Performance in Vision and Language Models
Noriyuki Kojima, Hadar Averbuch-Elor, Yoav Artzi
06 Sep 2023

Re-evaluating the Need for Multimodal Signals in Unsupervised Grammar Induction
Boyi Li, Rodolfo Corona, K. Mangalam, Catherine Chen, Daniel Flaherty, Serge Belongie, Kilian Q. Weinberger, Jitendra Malik, Trevor Darrell, Dan Klein
20 Dec 2022

Dependency Induction Through the Lens of Visual Perception
Ruisi Su, Shruti Rijhwani, Hao Zhu, Junxian He, Xinyu Wang, Yonatan Bisk, Graham Neubig
20 Sep 2021

KANDINSKYPatterns -- An experimental exploration environment for Pattern Analysis and Machine Intelligence
Andreas Holzinger, Anna Saranti, Heimo Mueller
28 Feb 2021

Visually Grounded Compound PCFGs
Yanpeng Zhao, Ivan Titov
25 Sep 2020

Multimodal Compact Bilinear Pooling for Visual Question Answering and Visual Grounding
Akira Fukui, Dong Huk Park, Daylen Yang, Anna Rohrbach, Trevor Darrell, Marcus Rohrbach
06 Jun 2016