MulT: An End-to-End Multitask Learning Transformer
arXiv 2205.08303 · 17 May 2022
Deblina Bhattacharjee, Tong Zhang, Sabine Süsstrunk, Mathieu Salzmann
Tags: ViT
Links: ArXiv | PDF | HTML

Papers citing "MulT: An End-to-End Multitask Learning Transformer"
13 / 13 papers shown

SGW-based Multi-Task Learning in Vision Tasks
Ruiyuan Zhang, Yuyao Chen, Yuchi Huo, Jiaxiang Liu, Dianbing Xi, Jie Liu, Chao Wu
03 Oct 2024

AutoTask: Task Aware Multi-Faceted Single Model for Multi-Task Ads Relevance
Shouchang Guo, Sonam Damani, Keng-hao Chang
09 Jul 2024

4M: Massively Multimodal Masked Modeling
David Mizrahi, Roman Bachmann, Oğuzhan Fatih Kar, Teresa Yeo, Mingfei Gao, Afshin Dehghan, Amir Zamir
Tags: MLLM
11 Dec 2023

PolyMaX: General Dense Prediction with Mask Transformer
Xuan S. Yang, Liangzhe Yuan, Kimberly Wilber, Astuti Sharma, Xiuye Gu, ..., Stephanie Debats, Huisheng Wang, Hartwig Adam, Mikhail Sirotenko, Liang-Chieh Chen
09 Nov 2023

Virtual Accessory Try-On via Keypoint Hallucination
Junhong Gou, Bo Zhang, Li Niu, Jianfu Zhang, Jianlou Si, Chen Qian, Liqing Zhang
26 Oct 2023

Multi-Similarity Contrastive Learning
Emily Mu, John Guttag, Maggie Makar
Tags: SSL
06 Jul 2023

InvPT++: Inverted Pyramid Multi-Task Transformer for Visual Scene Understanding
Hanrong Ye, Dan Xu
Tags: ViT
08 Jun 2023

MTLSegFormer: Multi-task Learning with Transformers for Semantic Segmentation in Precision Agriculture
D. Gonçalves, J. M. Junior, Pedro Zamboni, H. Pistori, Jonathan Li, Keiller Nogueira, W. Gonçalves
04 May 2023

Semantic Human Parsing via Scalable Semantic Transfer over Multiple Label Domains
Jie-jin Yang, Chaoqun Wang, Zhen Li, Junle Wang, Ruimao Zhang
09 Apr 2023

DeMT: Deformable Mixer Transformer for Multi-Task Learning of Dense Prediction
Yang Yang, Yibo Yang, L. Zhang
Tags: ViT
09 Jan 2023

PromptonomyViT: Multi-Task Prompt Learning Improves Video Transformers using Synthetic Scene Data
Roei Herzig, Ofir Abramovich, Elad Ben-Avraham, Assaf Arbelle, Leonid Karlinsky, Ariel Shamir, Trevor Darrell, Amir Globerson
08 Dec 2022

Are Transformers More Robust Than CNNs?
Yutong Bai, Jieru Mei, Alan Yuille, Cihang Xie
Tags: ViT, AAML
10 Nov 2021

A Decomposable Attention Model for Natural Language Inference
Ankur P. Parikh, Oscar Täckström, Dipanjan Das, Jakob Uszkoreit
06 Jun 2016