Brain encoding models based on multimodal transformers can transfer across language and vision
Jerry Tang, Meng Du, Vy A. Vo, Vasudev Lal, Alexander G. Huth
20 May 2023 · arXiv:2305.12248
Papers citing "Brain encoding models based on multimodal transformers can transfer across language and vision" (9 of 9 papers shown)

Do Large Language Models know who did what to whom?
Joseph M. Denning, Xiaohan, Bryor Snefjella, Idan A. Blank
23 Apr 2025

The Wisdom of a Crowd of Brains: A Universal Brain Encoder
Roman Beliy, Navve Wasserman, Amit Zalcher, Michal Irani
18 Jun 2024

Multimodal foundation models are better simulators of the human brain
Haoyu Lu, Qiongyi Zhou, Nanyi Fei, Zhiwu Lu, Mingyu Ding, ..., Changde Du, Xin Zhao, Haoran Sun, Huiguang He, J. Wen
Tags: AI4CE
17 Aug 2022

ViLT: Vision-and-Language Transformer Without Convolution or Region Supervision
Wonjae Kim, Bokyung Son, Ildoo Kim
Tags: VLM, CLIP
05 Feb 2021

Decoupling the Role of Data, Attention, and Losses in Multimodal Transformers
Lisa Anne Hendricks, John F. J. Mellor, R. Schneider, Jean-Baptiste Alayrac, Aida Nematzadeh
31 Jan 2021

LXMERT: Learning Cross-Modality Encoder Representations from Transformers
Hao Hao Tan, Joey Tianyi Zhou
Tags: VLM, MLLM
20 Aug 2019

ViLBERT: Pretraining Task-Agnostic Visiolinguistic Representations for Vision-and-Language Tasks
Jiasen Lu, Dhruv Batra, Devi Parikh, Stefan Lee
Tags: SSL, VLM
06 Aug 2019

RoBERTa: A Robustly Optimized BERT Pretraining Approach
Yinhan Liu, Myle Ott, Naman Goyal, Jingfei Du, Mandar Joshi, Danqi Chen, Omer Levy, M. Lewis, Luke Zettlemoyer, Veselin Stoyanov
Tags: AIMat
26 Jul 2019

Interpreting and improving natural-language processing (in machines) with natural language-processing (in the brain)
Mariya Toneva, Leila Wehbe
Tags: MILM, AI4CE
28 May 2019