ResearchTrend.AI

Pre-Trained Image Processing Transformer
arXiv:2012.00364 · 1 December 2020
Hanting Chen, Yunhe Wang, Tianyu Guo, Chang Xu, Yiping Deng, Zhenhua Liu, Siwei Ma, Chunjing Xu, Chao Xu, Wen Gao
Topics: VLM, ViT

Papers citing "Pre-Trained Image Processing Transformer"

Showing 50 of 290 citing papers.
• Heuristic-free Optimization of Force-Controlled Robot Search Strategies in Stochastic Environments · Bastian Alt, Darko Katic, Rainer Jäkel, Michael Beetz · 6 citations · 15 Jul 2022
• Learning Parallax Transformer Network for Stereo Image JPEG Artifacts Removal · Xuhao Jiang, Weimin Tan, Ri Cheng, Shili Zhou, Bo Yan · [ViT] · 6 citations · 15 Jul 2022
• I-ViT: Integer-only Quantization for Efficient Vision Transformer Inference · Zhikai Li, Qingyi Gu · [MQ] · 95 citations · 04 Jul 2022
• Polarized Color Image Denoising using Pocoformer · Zhuoxiao Li, Hai-bo Jiang, Yinqiang Zheng · 3 citations · 01 Jul 2022
• Faster Diffusion Cardiac MRI with Deep Learning-based breath hold reduction · Michael Tanzer, Pedro F. Ferreira, Andrew D. Scott, Z. Khalique, Maria Dwornik, D. Pennell, Guang Yang, Daniel Rueckert, S. Nielles-Vallespin · [MedIm] · 3 citations · 21 Jun 2022
• EATFormer: Improving Vision Transformer Inspired by Evolutionary Algorithm · Jiangning Zhang, Xiangtai Li, Yabiao Wang, Chengjie Wang, Yibo Yang, Yong Liu, Dacheng Tao · [ViT] · 32 citations · 19 Jun 2022
• Multimodal Learning with Transformers: A Survey · P. Xu, Xiatian Zhu, David A. Clifton · [ViT] · 528 citations · 13 Jun 2022
• Toward Real-world Single Image Deraining: A New Benchmark and Beyond · Wei Li, Qiming Zhang, Jing Zhang, Zhen Huang, Xinmei Tian, Dacheng Tao · 21 citations · 11 Jun 2022
• Recurrent Video Restoration Transformer with Guided Deformable Attention · Christos Sakaridis, Yuchen Fan, Xiaoyu Xiang, Rakesh Ranjan, Eddy Ilg, Simon Green, Jingyun Liang, Kaicheng Zhang, Radu Timofte, Luc Van Gool · 152 citations · 05 Jun 2022
• Vision GNN: An Image is Worth Graph of Nodes · Kai Han, Yunhe Wang, Jianyuan Guo, Yehui Tang, Enhua Wu · [GNN, 3DH] · 352 citations · 01 Jun 2022
• Degradation-Aware Unfolding Half-Shuffle Transformer for Spectral Compressive Imaging · Yuanhao Cai, Jing Lin, Haoqian Wang, Xin Yuan, Henghui Ding, Yulun Zhang, Radu Timofte, Luc Van Gool · 116 citations · 20 May 2022
• MSTRIQ: No Reference Image Quality Assessment Based on Swin Transformer with Multi-Stage Fusion · Jing Wang, Haotian Fa, X. Hou, Yitian Xu, Tao Li, X. Lu, Lean Fu · 21 citations · 20 May 2022
• MulT: An End-to-End Multitask Learning Transformer · Deblina Bhattacharjee, Tong Zhang, Sabine Süsstrunk, Mathieu Salzmann · [ViT] · 63 citations · 17 May 2022
• Dense residual Transformer for image denoising · Chao Yao, Shuo Jin, Meiqin Liu, Xiaojuan Ban · [ViT] · 29 citations · 14 May 2022
• Activating More Pixels in Image Super-Resolution Transformer · Xiangyu Chen, Xintao Wang, Jiantao Zhou, Yu Qiao, Chao Dong · [ViT] · 602 citations · 09 May 2022
• Coarse-to-Fine Video Denoising with Dual-Stage Spatial-Channel Transformer · Wu Yun, Mengshi Qi, Chuanming Wang, Huiyuan Fu, Huadong Ma · [ViT] · 6 citations · 30 Apr 2022
• Where in the World is this Image? Transformer-based Geo-localization in the Wild · Shraman Pramanick, E. Nowara, Joshua Gleason, Carlos D. Castillo, Rama Chellappa · [ViT] · 30 citations · 29 Apr 2022
• One Model to Synthesize Them All: Multi-contrast Multi-scale Transformer for Missing Data Imputation · Jiang Liu, Srivathsa Pasumarthi, B. Duffy, Enhao Gong, Keshav Datta, Greg Zaharchuk · [ViT, MedIm] · 55 citations · 28 Apr 2022
• Lightweight Bimodal Network for Single-Image Super-Resolution via Symmetric CNN and Recursive Transformer · Guangwei Gao, Zihan Wang, Juncheng Li, Wenjie Li, Yi Yu, T. Zeng · [SupR] · 93 citations · 28 Apr 2022
• DearKD: Data-Efficient Early Knowledge Distillation for Vision Transformers · Xianing Chen, Qiong Cao, Yujie Zhong, Jing Zhang, Shenghua Gao, Dacheng Tao · [ViT] · 76 citations · 27 Apr 2022
• A Multi-Head Convolutional Neural Network With Multi-path Attention improves Image Denoising · Jiahong Zhang, Meijun Qu, Ye Wang, Lihong Cao · 6 citations · 27 Apr 2022
• Neural Maximum A Posteriori Estimation on Unpaired Data for Motion Deblurring · Youjian Zhang, Chaoyue Wang, Dacheng Tao · 4 citations · 26 Apr 2022
• Fast and Memory-Efficient Network Towards Efficient Image Super-Resolution · Zongcai Du, Ding Liu, Jie Liu, Jie Tang, Gangshan Wu, Lean Fu · [SupR] · 54 citations · 18 Apr 2022
• MST++: Multi-stage Spectral-wise Transformer for Efficient Spectral Reconstruction · Yuanhao Cai, Jing Lin, Zudi Lin, Haoqian Wang, Yulun Zhang, Hanspeter Pfister, Radu Timofte, Luc Van Gool · 171 citations · 17 Apr 2022
• Simple Baselines for Image Restoration · Liangyu Chen, Xiaojie Chu, Xinming Zhang, Jian Sun · 835 citations · 10 Apr 2022
• Stripformer: Strip Transformer for Fast Image Deblurring · Fu-Jen Tsai, Yan-Tsung Peng, Yen-Yu Lin, Chung-Chi Tsai, Chia-Wen Lin · [ViT] · 171 citations · 10 Apr 2022
• Multi-Task Distributed Learning using Vision Transformer with Random Patch Permutation · Sangjoon Park, Jong Chul Ye · [FedML, MedIm] · 19 citations · 07 Apr 2022
• Improving Vision Transformers by Revisiting High-frequency Components · Jiawang Bai, Liuliang Yuan, Shutao Xia, Shuicheng Yan, Zhifeng Li, Wei Liu · [ViT] · 90 citations · 03 Apr 2022
• Rethinking Portrait Matting with Privacy Preserving · Sihan Ma, Jizhizi Li, Jing Zhang, He-jun Zhang, Dacheng Tao · 23 citations · 31 Mar 2022
• InstaFormer: Instance-Aware Image-to-Image Translation with Transformer · Soohyun Kim, Jongbeom Baek, Jihye Park, Gyeongnyeon Kim, Seung Wook Kim · [ViT] · 47 citations · 30 Mar 2022
• Fine-tuning Image Transformers using Learnable Memory · Mark Sandler, A. Zhmoginov, Max Vladymyrov, Andrew Jackson · [ViT] · 47 citations · 29 Mar 2022
• Brain-inspired Multilayer Perceptron with Spiking Neurons · Wenshuo Li, Hanting Chen, Jianyuan Guo, Ziyang Zhang, Yunhe Wang · 35 citations · 28 Mar 2022
• RSTT: Real-time Spatial Temporal Transformer for Space-Time Video Super-Resolution · Z. Geng, Luming Liang, Tianyu Ding, Ilya Zharkov · 69 citations · 27 Mar 2022
• Give Me Your Attention: Dot-Product Attention Considered Harmful for Adversarial Patch Robustness · Giulio Lovisotto, Nicole Finnie, Mauricio Muñoz, Chaithanya Kumar Mummadi, J. H. Metzen · [AAML, ViT] · 32 citations · 25 Mar 2022
• Meta-attention for ViT-backed Continual Learning · Mengqi Xue, Haofei Zhang, Mingli Song · [CLL] · 42 citations · 22 Mar 2022
• HIPA: Hierarchical Patch Transformer for Single Image Super Resolution · Qing Cai, Yiming Qian, Jinxing Li, Junjie Lv, Yee-Hong Yang, Feng Wu, Dafan Zhang · 28 citations · 19 Mar 2022
• WegFormer: Transformers for Weakly Supervised Semantic Segmentation · Chunmeng Liu, Enze Xie, Wenjia Wang, Wenhai Wang, Guangya Li, Ping Luo · [ViT] · 6 citations · 16 Mar 2022
• HUMUS-Net: Hybrid unrolled multi-scale network architecture for accelerated MRI reconstruction · Zalan Fabian, Berk Tinaz, Mahdi Soltanolkotabi · 50 citations · 15 Mar 2022
• InvPT: Inverted Pyramid Multi-task Transformer for Dense Scene Understanding · Hanrong Ye, Dan Xu · [ViT] · 84 citations · 15 Mar 2022
• Deep Transformers Thirst for Comprehensive-Frequency Data · R. Xia, Chao Xue, Boyu Deng, Fang Wang, Jingchao Wang · [ViT] · 0 citations · 14 Mar 2022
• Scaling Up Your Kernels to 31x31: Revisiting Large Kernel Design in CNNs · Xiaohan Ding, Xinming Zhang, Yi Zhou, Jungong Han, Guiguang Ding, Jian Sun · [VLM] · 528 citations · 13 Mar 2022
• Efficient Long-Range Attention Network for Image Super-resolution · Xindong Zhang, Huiyu Zeng, Shi Guo, Lei Zhang · [ViT] · 277 citations · 13 Mar 2022
• The Principle of Diversity: Training Stronger Vision Transformers Calls for Reducing All Levels of Redundancy · Tianlong Chen, Zhenyu (Allen) Zhang, Yu Cheng, Ahmed Hassan Awadallah, Zhangyang Wang · [ViT] · 37 citations · 12 Mar 2022
• No Free Lunch Theorem for Security and Utility in Federated Learning · Xiaojin Zhang, Hanlin Gu, Lixin Fan, Kai Chen, Qiang Yang · [FedML] · 64 citations · 11 Mar 2022
• Anti-Oversmoothing in Deep Vision Transformers via the Fourier Domain Analysis: From Theory to Practice · Peihao Wang, Wenqing Zheng, Tianlong Chen, Zhangyang Wang · [ViT] · 127 citations · 09 Mar 2022
• Adaptive Cross-Layer Attention for Image Restoration · Yancheng Wang, N. Xu, Yingzhen Yang · 3 citations · 04 Mar 2022
• Patch Similarity Aware Data-Free Quantization for Vision Transformers · Zhikai Li, Liping Ma, Mengjuan Chen, Junrui Xiao, Qingyi Gu · [MQ, ViT] · 44 citations · 04 Mar 2022
• Spatio-temporal Vision Transformer for Super-resolution Microscopy · Charles N Christensen, M. Lu, Edward N. Ward, Pietro Lio, C. Kaminski · 8 citations · 28 Feb 2022
• CTformer: Convolution-free Token2Token Dilated Vision Transformer for Low-dose CT Denoising · Dayang Wang, Fenglei Fan, Zhan Wu, R. Liu, Fei Wang, Hengyong Yu · [ViT, MedIm] · 122 citations · 28 Feb 2022
• Real-World Blind Super-Resolution via Feature Matching with Implicit High-Resolution Priors · Chaofeng Chen, Xinyu Shi, Yipeng Qin, Xiaoming Li, Xiaoguang Han, Taojiannan Yang, Shihui Guo · 113 citations · 26 Feb 2022