Visual Parser: Representing Part-whole Hierarchies with Transformers

13 July 2021
Shuyang Sun, Xiaoyu Yue, S. Bai, Philip Torr
arXiv: 2107.05790 · GitHub (121★)

Papers citing "Visual Parser: Representing Part-whole Hierarchies with Transformers"

13 papers:
  1. Representing Part-Whole Hierarchies in Foundation Models by Learning Localizability, Composability, and Decomposability from Anatomy via Self-Supervision. M. Taher, Michael B. Gotway, Jianming Liang. MedIm, 24 Apr 2024.
  2. Window Normalization: Enhancing Point Cloud Understanding by Unifying Inconsistent Point Densities. Qi Wang, Shengge Shi, Jiahui Li, Wuming Jiang, Xiangde Zhang. 05 Dec 2022.
  3. Distribution Aware Metrics for Conditional Natural Language Generation. David M. Chan, Yiming Ni, David A. Ross, Sudheendra Vijayanarasimhan, Austin Myers, John F. Canny. 15 Sep 2022.
  4. Vision Transformers: From Semantic Segmentation to Dense Prediction. Li Zhang, Jiachen Lu, Sixiao Zheng, Xinxuan Zhao, Xiatian Zhu, Yanwei Fu, Tao Xiang, Jianfeng Feng, Philip H. S. Torr. ViT, 19 Jul 2022.
  5. Improving Semantic Segmentation in Transformers using Hierarchical Inter-Level Attention. Gary Leung, Jun Gao, Fangyin Wei, Sanja Fidler. 05 Jul 2022.
  6. Softmax-free Linear Transformers. Jiachen Lu, Junge Zhang, Xiatian Zhu, Jianfeng Feng, Tao Xiang, Li Zhang. ViT, 05 Jul 2022.
  7. MixFormer: Mixing Features across Windows and Dimensions. Qiang Chen, Qiman Wu, Jian Wang, Qinghao Hu, T. Hu, Errui Ding, Jian Cheng, Jingdong Wang. MDE, ViT, 06 Apr 2022.
  8. ObjectFormer for Image Manipulation Detection and Localization. Junke Wang, Zuxuan Wu, Jingjing Chen, Xintong Han, Abhinav Shrivastava, Ser-Nam Lim, Yu-Gang Jiang. 28 Mar 2022.
  9. Stratified Transformer for 3D Point Cloud Segmentation. Xin Lai, Jianhui Liu, Li Jiang, Liwei Wang, Hengshuang Zhao, Shu Liu, Xiaojuan Qi, Jiaya Jia. 3DPC, ViT, 28 Mar 2022.
  10. Vision Transformer with Deformable Attention. Zhuofan Xia, Xuran Pan, S. Song, Li Erran Li, Gao Huang. ViT, 03 Jan 2022.
  11. Joint Global and Local Hierarchical Priors for Learned Image Compression. Jun-Hyuk Kim, Byeongho Heo, Jong-Seok Lee. 08 Dec 2021.
  12. A Survey on Visual Transformer. Kai Han, Yunhe Wang, Hanting Chen, Xinghao Chen, Jianyuan Guo, ..., Chunjing Xu, Yixing Xu, Zhaohui Yang, Yiman Zhang, Dacheng Tao. ViT, 23 Dec 2020.
  13. On the Relationship between Self-Attention and Convolutional Layers. Jean-Baptiste Cordonnier, Andreas Loukas, Martin Jaggi. 08 Nov 2019.