ResearchTrend.AI
© 2025 ResearchTrend.AI, All rights reserved.

Star-Transformer

25 February 2019
Qipeng Guo, Xipeng Qiu, Pengfei Liu, Yunfan Shao, Xiangyang Xue, Zheng Zhang
arXiv: 1902.09113

Papers citing "Star-Transformer"

34 / 34 papers shown

  • Snuffy: Efficient Whole Slide Image Classifier
    Hossein Jafarinia, Alireza Alipanah, Danial Hamdi, Saeed Razavi, Nahal Mirzaie, M. Rohban (15 Aug 2024) [3DH]
  • SoK: Leveraging Transformers for Malware Analysis
    Pradip Kunwar, Kshitiz Aryal, Maanak Gupta, Mahmoud Abdelsalam, Elisa Bertino (27 May 2024)
  • Automated Fusion of Multimodal Electronic Health Records for Better Medical Predictions
    Suhan Cui, Jiaqi Wang, Yuan Zhong, Han Liu, Ting Wang, Fenglong Ma (20 Jan 2024)
  • BLSTM-Based Confidence Estimation for End-to-End Speech Recognition
    A. Ogawa, Naohiro Tawara, Takatomo Kano, Marc Delcroix (22 Dec 2023)
  • FIT: Far-reaching Interleaved Transformers
    Ting-Li Chen, Lala Li (22 May 2023)
  • Escaping the sentence-level paradigm in machine translation
    Matt Post, Marcin Junczys-Dowmunt (25 Apr 2023)
  • Scaling Transformer to 1M tokens and beyond with RMT
    Aydar Bulatov, Yuri Kuratov, Yermek Kapushev, Mikhail Burtsev (19 Apr 2023) [LRM]
  • Local spectral attention for full-band speech enhancement
    Zhongshu Hou, Qi Hu, Kai-Jyun Chen, Jing Lu (11 Feb 2023)
  • How to choose "Good" Samples for Text Data Augmentation
    Xiaotian Lin, Nankai Lin, Yingwen Fu, Ziyu Yang, Shengyi Jiang (02 Feb 2023)
  • CAB: Comprehensive Attention Benchmarking on Long Sequence Modeling
    Jinchao Zhang, Shuyang Jiang, Jiangtao Feng, Lin Zheng, Lingpeng Kong (14 Oct 2022) [3DV]
  • Searching a High-Performance Feature Extractor for Text Recognition Network
    Hui Zhang, Quanming Yao, James T. Kwok, X. Bai (27 Sep 2022)
  • NUWA-Infinity: Autoregressive over Autoregressive Generation for Infinite Visual Synthesis
    Chenfei Wu, Jian Liang, Xiaowei Hu, Zhe Gan, Jianfeng Wang, Lijuan Wang, Zicheng Liu, Yuejian Fang, Nan Duan (20 Jul 2022) [VGen]
  • KARL-Trans-NER: Knowledge Aware Representation Learning for Named Entity Recognition using Transformers
    Avi Chawla, Nidhi Mulay, Vikas Bishnoi, Gaurav Dhama (30 Nov 2021) [ViT]
  • MotifClass: Weakly Supervised Text Classification with Higher-order Metadata Information
    Yu Zhang, Shweta Garg, Yu Meng, Xiusi Chen, Jiawei Han (07 Nov 2021)
  • StoryDB: Broad Multi-language Narrative Dataset
    Alexey Tikhonov, Igor Samenko, Ivan P. Yamshchikov (29 Sep 2021)
  • Pre-Trained Models: Past, Present and Future
    Xu Han, Zhengyan Zhang, Ning Ding, Yuxian Gu, Xiao Liu, ..., Jie Tang, Ji-Rong Wen, Jinhui Yuan, Wayne Xin Zhao, Jun Zhu (14 Jun 2021) [AIFin, MQ, AI4MH]
  • A Survey of Transformers
    Tianyang Lin, Yuxin Wang, Xiangyang Liu, Xipeng Qiu (08 Jun 2021) [ViT]
  • Poolingformer: Long Document Modeling with Pooling Attention
    Hang Zhang, Yeyun Gong, Yelong Shen, Weisheng Li, Jiancheng Lv, Nan Duan, Weizhu Chen (10 May 2021)
  • Code Structure Guided Transformer for Source Code Summarization
    Shuzheng Gao, Cuiyun Gao, Yulan He, Jichuan Zeng, L. Nie, Xin Xia, Michael R. Lyu (19 Apr 2021)
  • Syntax-BERT: Improving Pre-trained Transformers with Syntax Trees
    Jiangang Bai, Yujing Wang, Yiren Chen, Yaming Yang, Jing Bai, J. Yu, Yunhai Tong (07 Mar 2021)
  • SparseBERT: Rethinking the Importance Analysis in Self-attention
    Han Shi, Jiahui Gao, Xiaozhe Ren, Hang Xu, Xiaodan Liang, Zhenguo Li, James T. Kwok (25 Feb 2021)
  • Match-Ignition: Plugging PageRank into Transformer for Long-form Text Matching
    Liang Pang, Yanyan Lan, Xueqi Cheng (16 Jan 2021)
  • A Survey on Recent Advances in Sequence Labeling from Deep Learning Models
    Zhiyong He, Zanbo Wang, Wei Wei, Shanshan Feng, Xian-Ling Mao, Sheng Jiang (13 Nov 2020) [VLM]
  • SMYRF: Efficient Attention using Asymmetric Clustering
    Giannis Daras, Nikita Kitaev, Augustus Odena, A. Dimakis (11 Oct 2020)
  • Efficient Transformers: A Survey
    Yi Tay, Mostafa Dehghani, Dara Bahri, Donald Metzler (14 Sep 2020) [VLM]
  • Pre-trained Models for Natural Language Processing: A Survey
    Xipeng Qiu, Tianxiang Sun, Yige Xu, Yunfan Shao, Ning Dai, Xuanjing Huang (18 Mar 2020) [LM&MA, VLM]
  • A Survey of Deep Learning Techniques for Neural Machine Translation
    Shu Yang, Yuxin Wang, X. Chu (18 Feb 2020) [VLM, AI4TS, AI4CE]
  • Residual Attention Net for Superior Cross-Domain Time Sequence Modeling
    Seth H. Huang, Lingjie Xu, Congwei Jiang (13 Jan 2020) [AI4TS]
  • Linking Social Media Posts to News with Siamese Transformers
    Jacob Danovitch (10 Jan 2020)
  • Hierarchical Contextualized Representation for Named Entity Recognition
    Ying Luo, Fengshun Xiao, Zhao Hai (06 Nov 2019)
  • Transformers without Tears: Improving the Normalization of Self-Attention
    Toan Q. Nguyen, Julian Salazar (14 Oct 2019)
  • Automatically Extracting Challenge Sets for Non-local Phenomena in Neural Machine Translation
    Leshem Choshen, Omri Abend (15 Sep 2019)
  • Graph Star Net for Generalized Multi-Task Learning
    H. Lu, Seth H. Huang, Tian Ye, Xiuyan Guo (21 Jun 2019) [GNN]
  • Convolutional Neural Networks for Sentence Classification
    Yoon Kim (25 Aug 2014) [AILaw, VLM]