ResearchTrend.AI
© 2025 ResearchTrend.AI, All rights reserved.

LXMERT: Learning Cross-Modality Encoder Representations from Transformers (arXiv:1908.07490)

20 August 2019
Hao Hao Tan
Joey Tianyi Zhou
    VLM
    MLLM

Papers citing "LXMERT: Learning Cross-Modality Encoder Representations from Transformers"

50 / 1,507 papers shown
FSMR: A Feature Swapping Multi-modal Reasoning Approach with Joint Textual and Visual Clues
Shuang Li
Jiahua Wang
Lijie Wen
LRM
31
0
0
29 Mar 2024
Semantic Map-based Generation of Navigation Instructions
Chengzu Li
Chao Zhang
Simone Teufel
R. Doddipatla
Svetlana Stoyanchev
34
2
0
28 Mar 2024
Text Data-Centric Image Captioning with Interactive Prompts
Yiyu Wang
Hao Luo
Jungang Xu
Yingfei Sun
Fan Wang
VLM
38
0
0
28 Mar 2024
Generative Multi-modal Models are Good Class-Incremental Learners
Xusheng Cao
Haori Lu
Linlan Huang
Xialei Liu
Ming-Ming Cheng
CLL
46
10
0
27 Mar 2024
LLMs in HCI Data Work: Bridging the Gap Between Information Retrieval and Responsible Research Practices
Neda Taghizadeh Serajeh
Iman Mohammadi
V. Fuccella
Mattia De Rosa
19
1
0
27 Mar 2024
m3P: Towards Multimodal Multilingual Translation with Multimodal Prompt
Jian Yang
Hongcheng Guo
Yuwei Yin
Jiaqi Bai
Bing Wang
Jiaheng Liu
Xinnian Liang
Linzheng Chai
Liqun Yang
Zhoujun Li
40
9
0
26 Mar 2024
UrbanVLP: Multi-Granularity Vision-Language Pretraining for Urban Socioeconomic Indicator Prediction
Xixuan Hao
Wei Chen
Yibo Yan
Siru Zhong
Kun Wang
Qingsong Wen
Yuxuan Liang
VLM
79
1
0
25 Mar 2024
Opportunities and challenges in the application of large artificial intelligence models in radiology
Liangrui Pan
Zhenyu Zhao
Ying Lu
Kewei Tang
Liyong Fu
Qingchun Liang
Shaoliang Peng
LM&MA
MedIm
AI4CE
45
5
0
24 Mar 2024
Temporal-Spatial Object Relations Modeling for Vision-and-Language Navigation
Bowen Huang
Yanwei Zheng
Chuanlin Lan
Xinpeng Zhao
Yifei Zou
Dongxiao Yu
36
0
0
23 Mar 2024
Not All Attention is Needed: Parameter and Computation Efficient Transfer Learning for Multi-modal Large Language Models
Qiong Wu
Weihao Ye
Yiyi Zhou
Xiaoshuai Sun
Rongrong Ji
MoE
49
1
0
22 Mar 2024
Volumetric Environment Representation for Vision-Language Navigation
Rui Liu
Wenguan Wang
Yi Yang
34
25
0
21 Mar 2024
Grounding Spatial Relations in Text-Only Language Models
Gorka Azkune
Ander Salaberria
Eneko Agirre
42
0
0
20 Mar 2024
Improved Baselines for Data-efficient Perceptual Augmentation of LLMs
Théophane Vallaeys
Mustafa Shukor
Matthieu Cord
Jakob Verbeek
56
12
0
20 Mar 2024
As Firm As Their Foundations: Can open-sourced foundation models be used to create adversarial examples for downstream tasks?
Anjun Hu
Jindong Gu
Francesco Pinto
Konstantinos Kamnitsas
Philip Torr
AAML
SILM
37
5
0
19 Mar 2024
CLIP-VIS: Adapting CLIP for Open-Vocabulary Video Instance Segmentation
Wenqi Zhu
Jiale Cao
Jin Xie
Shuangming Yang
Yanwei Pang
VLM
CLIP
39
2
0
19 Mar 2024
Boosting Transferability in Vision-Language Attacks via Diversification along the Intersection Region of Adversarial Trajectory
Sensen Gao
Xiaojun Jia
Xuhong Ren
Ivor Tsang
Qing-Wu Guo
AAML
38
14
0
19 Mar 2024
Eye-gaze Guided Multi-modal Alignment for Medical Representation Learning
Chong Ma
Hanqi Jiang
Wenting Chen
Yiwei Li
Zihao Wu
...
Dajiang Zhu
Tuo Zhang
Dinggang Shen
Tianming Liu
Xiang Li
23
0
0
19 Mar 2024
Modality-Agnostic fMRI Decoding of Vision and Language
Mitja Nikolaus
Milad Mozafari
Nicholas Asher
Leila Reddy
Rufin VanRullen
35
3
0
18 Mar 2024
QEAN: Quaternion-Enhanced Attention Network for Visual Dance Generation
Zhizhen Zhou
Yejing Huo
Guoheng Huang
An Zeng
Xuhang Chen
Lian Huang
Zinuo Li
37
7
0
18 Mar 2024
Hierarchical Spatial Proximity Reasoning for Vision-and-Language Navigation
Ming Xu
Zilong Xie
35
2
0
18 Mar 2024
Mixture-of-Prompt-Experts for Multi-modal Semantic Understanding
Zichen Wu
Hsiu-Yuan Huang
Fanyi Qu
Yunfang Wu
VLM
MoE
24
3
0
17 Mar 2024
Mind the Error! Detection and Localization of Instruction Errors in Vision-and-Language Navigation
Francesco Taioli
Stefano Rosa
A. Castellini
Lorenzo Natale
Alessio Del Bue
Alessandro Farinelli
Marco Cristani
Yiming Wang
41
5
0
15 Mar 2024
Multiscale Matching Driven by Cross-Modal Similarity Consistency for Audio-Text Retrieval
Qian Wang
Jia-Chen Gu
Zhen-Hua Ling
35
2
0
15 Mar 2024
Knowledge Condensation and Reasoning for Knowledge-based VQA
Dongze Hao
Jian Jia
Longteng Guo
Qunbo Wang
Te Yang
...
Yanhua Cheng
Bo Wang
Quan Chen
Han Li
Jing Liu
44
0
0
15 Mar 2024
GET: Unlocking the Multi-modal Potential of CLIP for Generalized Category Discovery
Enguang Wang
Zhimao Peng
Zhengyuan Xie
Fei Yang
Xialei Liu
Ming-Ming Cheng
62
3
0
15 Mar 2024
PosSAM: Panoptic Open-vocabulary Segment Anything
VS Vibashan
Shubhankar Borse
Hyojin Park
Debasmit Das
Vishal M. Patel
Munawar Hayat
Fatih Porikli
VLM
MLLM
43
6
0
14 Mar 2024
Efficient Prompt Tuning of Large Vision-Language Model for Fine-Grained Ship Classification
Long Lan
Fengxiang Wang
Shuyan Li
Xiangtao Zheng
Zengmao Wang
Xinwang Liu
VLM
31
7
0
13 Mar 2024
Towards Deviation-Robust Agent Navigation via Perturbation-Aware Contrastive Learning
Bingqian Lin
Yanxin Long
Yi Zhu
Fengda Zhu
Xiaodan Liang
QiXiang Ye
Liang Lin
34
5
0
09 Mar 2024
Causality-based Cross-Modal Representation Learning for Vision-and-Language Navigation
Liuyi Wang
Zongtao He
Ronghao Dang
Huiyi Chen
Chengju Liu
Qi Chen
41
1
0
06 Mar 2024
Enhancing Conceptual Understanding in Multimodal Contrastive Learning through Hard Negative Samples
Philipp J. Rösch
Norbert Oswald
Michaela Geierhos
Jindrich Libovický
42
3
0
05 Mar 2024
Modeling Collaborator: Enabling Subjective Vision Classification With Minimal Human Effort via LLM Tool-Use
Imad Eddine Toubal
Aditya Avinash
N. Alldrin
Jan Dlabal
Wenlei Zhou
...
Chun-Ta Lu
Howard Zhou
Ranjay Krishna
Ariel Fuxman
Tom Duerig
VLM
75
7
0
05 Mar 2024
Adversarial Testing for Visual Grounding via Image-Aware Property Reduction
Zhiyuan Chang
Mingyang Li
Junjie Wang
Cheng Li
Boyu Wu
Fanjiang Xu
Qing Wang
AAML
36
0
0
02 Mar 2024
Semantics-enhanced Cross-modal Masked Image Modeling for Vision-Language Pre-training
Haowei Liu
Yaya Shi
Haiyang Xu
Chunfen Yuan
Qinghao Ye
...
Mingshi Yan
Ji Zhang
Fei Huang
Bing Li
Weiming Hu
VLM
35
0
0
01 Mar 2024
Automatic Creative Selection with Cross-Modal Matching
Alex Kim
Jia Huang
Rob Monarch
Jerry Kwac
Anikesh Kamath
P. Khurd
Kailash Thiyagarajan
Goodman Gu
VLM
32
0
0
28 Feb 2024
Hierarchical Multimodal Pre-training for Visually Rich Webpage Understanding
Hongshen Xu
Lu Chen
Zihan Zhao
Da Ma
Ruisheng Cao
Zichen Zhu
Kai Yu
37
2
0
28 Feb 2024
Acquiring Linguistic Knowledge from Multimodal Input
Theodor Amariucai
Alexander Scott Warstadt
CLL
31
2
0
27 Feb 2024
Vision Transformers with Natural Language Semantics
Young-Kyung Kim
Matías Di Martino
Guillermo Sapiro
ViT
23
5
0
27 Feb 2024
Demonstrating and Reducing Shortcuts in Vision-Language Representation Learning
Maurits J. R. Bleeker
Mariya Hendriksen
Andrew Yates
Maarten de Rijke
VLM
40
3
0
27 Feb 2024
Bridging the Gap between 2D and 3D Visual Question Answering: A Fusion Approach for 3D VQA
Wentao Mo
Yang Liu
24
6
0
24 Feb 2024
NaVid: Video-based VLM Plans the Next Step for Vision-and-Language Navigation
Jiazhao Zhang
Kunyu Wang
Rongtao Xu
Gengze Zhou
Yicong Hong
Xiaomeng Fang
Qi Wu
Zhizheng Zhang
Wang He
LM&Ro
40
45
0
24 Feb 2024
A Comprehensive Survey of Convolutions in Deep Learning: Applications, Challenges, and Future Trends
Abolfazl Younesi
Mohsen Ansari
Mohammadamin Fazli
A. Ejlali
Muhammad Shafique
Joerg Henkel
3DV
50
44
0
23 Feb 2024
CFIR: Fast and Effective Long-Text To Image Retrieval for Large Corpora
Zijun Long
Xuri Ge
R. McCreadie
Joemon M. Jose
32
5
0
23 Feb 2024
Multimodal Transformer With a Low-Computational-Cost Guarantee
Sungjin Park
Edward Choi
49
1
0
23 Feb 2024
SIMPLOT: Enhancing Chart Question Answering by Distilling Essentials
Wonjoong Kim
S. Park
Yeonjun In
Seokwon Han
Chanyoung Park
LRM
ReLM
32
3
0
22 Feb 2024
DeiSAM: Segment Anything with Deictic Prompting
Hikaru Shindo
Manuel Brack
Gopika Sudhakaran
Devendra Singh Dhami
P. Schramowski
Kristian Kersting
VLM
46
2
0
21 Feb 2024
WinoViz: Probing Visual Properties of Objects Under Different States
Woojeong Jin
Tejas Srinivasan
Jesse Thomason
Xiang Ren
33
1
0
21 Feb 2024
MORE-3S: Multimodal-based Offline Reinforcement Learning with Shared Semantic Spaces
Tianyu Zheng
Ge Zhang
Xingwei Qu
Ming Kuang
Stephen W. Huang
Zhaofeng He
OffRL
53
1
0
20 Feb 2024
II-MMR: Identifying and Improving Multi-modal Multi-hop Reasoning in Visual Question Answering
Jihyung Kil
Farideh Tavazoee
Dongyeop Kang
Joo-Kyung Kim
LRM
31
2
0
16 Feb 2024
Can Text-to-image Model Assist Multi-modal Learning for Visual Recognition with Visual Modality Missing?
Tiantian Feng
Daniel Yang
Digbalay Bose
Shrikanth Narayanan
37
4
0
14 Feb 2024
PIN: Positional Insert Unlocks Object Localisation Abilities in VLMs
Michael Dorkenwald
Nimrod Barazani
Cees G. M. Snoek
Yuki M. Asano
VLM
MLLM
27
12
0
13 Feb 2024