ResearchTrend.AI

TED-net: Convolution-free T2T Vision Transformer-based Encoder-decoder Dilation network for Low-dose CT Denoising (arXiv:2106.04650)

8 June 2021
Dayang Wang, Zhan Wu, Hengyong Yu
Tags: ViT, MedIm

Papers citing "TED-net: Convolution-free T2T Vision Transformer-based Encoder-decoder Dilation network for Low-dose CT Denoising"

19 papers shown

 1. Swin-Unet: Unet-like Pure Transformer for Medical Image Segmentation
    Hu Cao, Yueyue Wang, Jieneng Chen, Dongsheng Jiang, Xiaopeng Zhang, Qi Tian, Manning Wang
    Tags: ViT, MedIm · 2,879 citations · 12 May 2021

 2. HOTR: End-to-End Human-Object Interaction Detection with Transformers
    Bumsoo Kim, Junhyun Lee, Jaewoo Kang, Eun-Sol Kim, Hyunwoo J. Kim
    Tags: ViT · 255 citations · 28 Apr 2021

 3. CvT: Introducing Convolutions to Vision Transformers
    Haiping Wu, Bin Xiao, Noel Codella, Mengchen Liu, Xiyang Dai, Lu Yuan, Lei Zhang
    Tags: ViT · 1,901 citations · 29 Mar 2021

 4. Swin Transformer: Hierarchical Vision Transformer using Shifted Windows
    Ze Liu, Yutong Lin, Yue Cao, Han Hu, Yixuan Wei, Zheng Zhang, Stephen Lin, B. Guo
    Tags: ViT · 21,281 citations · 25 Mar 2021

 5. Transformer in Transformer
    Kai Han, An Xiao, Enhua Wu, Jianyuan Guo, Chunjing Xu, Yunhe Wang
    Tags: ViT · 1,556 citations · 27 Feb 2021

 6. Tokens-to-Token ViT: Training Vision Transformers from Scratch on ImageNet
    Li-xin Yuan, Yunpeng Chen, Tao Wang, Weihao Yu, Yujun Shi, Zihang Jiang, Francis E. H. Tay, Jiashi Feng, Shuicheng Yan
    Tags: ViT · 1,931 citations · 28 Jan 2021

 7. Training data-efficient image transformers & distillation through attention
    Hugo Touvron, Matthieu Cord, Matthijs Douze, Francisco Massa, Alexandre Sablayrolles, Hervé Jégou
    Tags: ViT · 6,731 citations · 23 Dec 2020

 8. Pre-Trained Image Processing Transformer
    Hanting Chen, Yunhe Wang, Tianyu Guo, Chang Xu, Yiping Deng, Zhenhua Liu, Siwei Ma, Chunjing Xu, Chao Xu, Wen Gao
    Tags: VLM, ViT · 1,671 citations · 01 Dec 2020

 9. An Image is Worth 16x16 Words: Transformers for Image Recognition at Scale
    Alexey Dosovitskiy, Lucas Beyer, Alexander Kolesnikov, Dirk Weissenborn, Xiaohua Zhai, ..., Matthias Minderer, G. Heigold, Sylvain Gelly, Jakob Uszkoreit, N. Houlsby
    Tags: ViT · 40,739 citations · 22 Oct 2020

10. Rethinking Attention with Performers
    K. Choromanski, Valerii Likhosherstov, David Dohan, Xingyou Song, Andreea Gane, ..., Afroz Mohiuddin, Lukasz Kaiser, David Belanger, Lucy J. Colwell, Adrian Weller
    1,570 citations · 30 Sep 2020

11. Learning Texture Transformer Network for Image Super-Resolution
    Fuzhi Yang, Huan Yang, Jianlong Fu, Hongtao Lu, B. Guo
    Tags: SupR, ViT · 720 citations · 07 Jun 2020

12. Language Models are Few-Shot Learners
    Tom B. Brown, Benjamin Mann, Nick Ryder, Melanie Subbiah, Jared Kaplan, ..., Christopher Berner, Sam McCandlish, Alec Radford, Ilya Sutskever, Dario Amodei
    Tags: BDL · 41,736 citations · 28 May 2020

13. DeeBERT: Dynamic Early Exiting for Accelerating BERT Inference
    Ji Xin, Raphael Tang, Jaejun Lee, Yaoliang Yu, Jimmy J. Lin
    372 citations · 27 Apr 2020

14. DialoGPT: Large-Scale Generative Pre-training for Conversational Response Generation
    Yizhe Zhang, Siqi Sun, Michel Galley, Yen-Chun Chen, Chris Brockett, Xiang Gao, Jianfeng Gao, Jingjing Liu, W. Dolan
    Tags: VLM · 1,519 citations · 01 Nov 2019

15. Quadratic Autoencoder (Q-AE) for Low-dose CT Denoising
    Fenglei Fan, Hongming Shan, Mannudeep K. Kalra, Ramandeep Singh, Guhan Qian, M. Getzin, Yueyang Teng, Juergen Hahn, Ge Wang
    110 citations · 17 Jan 2019

16. Can Deep Learning Outperform Modern Commercial CT Image Reconstruction Methods?
    Deepak Mishra, Atul Padole, AP Prathosh, Aravind Jayendran, R. Khera, Varun Srivastava, S. Chaudhury, Ge Wang
    Tags: OOD, 3DV · 325 citations · 08 Nov 2018

17. Low Dose CT Image Denoising Using a Generative Adversarial Network with Wasserstein Distance and Perceptual Loss
    Qingsong Yang, Pingkun Yan, Yanbo Zhang, Hengyong Yu, Yongyi Shi, X. Mou, Mannudeep K. Kalra, Ge Wang
    Tags: GAN, MedIm · 1,197 citations · 03 Aug 2017

18. Attention Is All You Need
    Ashish Vaswani, Noam M. Shazeer, Niki Parmar, Jakob Uszkoreit, Llion Jones, Aidan Gomez, Lukasz Kaiser, Illia Polosukhin
    Tags: 3DV · 130,942 citations · 12 Jun 2017

19. Low-Dose CT with a Residual Encoder-Decoder Convolutional Neural Network (RED-CNN)
    Hu Chen, Yi Zhang, Mannudeep K. Kalra, Feng Lin, Yang Chen, Peixi Liao, Jiliu Zhou, Ge Wang
    Tags: MedIm · 1,306 citations · 01 Feb 2017