ResearchTrend.AI
Hi-Transformer: Hierarchical Interactive Transformer for Efficient and Effective Long Document Modeling

2 June 2021
Chuhan Wu
Fangzhao Wu
Tao Qi
Yongfeng Huang
arXiv:2106.01040

Papers citing "Hi-Transformer: Hierarchical Interactive Transformer for Efficient and Effective Long Document Modeling"

18 / 18 papers shown

  1. Enhancing Keyphrase Extraction from Academic Articles Using Section Structure Information — Chengzhi Zhang, Xinyi Yan, Lei Zhao, Yingyi Zhang (20 May 2025)
  2. ELITR-Bench: A Meeting Assistant Benchmark for Long-Context Language Models — Thibaut Thonet, Jos Rozen, Laurent Besacier (20 Jan 2025) [RALM]
  3. HDT: Hierarchical Document Transformer — Haoyu He, Markus Flicke, Jan Buchmann, Iryna Gurevych, Andreas Geiger (11 Jul 2024)
  4. DEPTH: Discourse Education through Pre-Training Hierarchically — Zachary Bamberger, Ofek Glick, Chaim Baskin, Yonatan Belinkov (13 May 2024)
  5. Collaborative-Enhanced Prediction of Spending on Newly Downloaded Mobile Games under Consumption Uncertainty — Peijie Sun, Yifan Wang, Min Zhang, Chuhan Wu, Yan Fang, Hong Zhu, Yuan Fang, Meng Wang (12 Apr 2024) [OffRL]
  6. Fovea Transformer: Efficient Long-Context Modeling with Structured Fine-to-Coarse Attention — Ziwei He, Jian Yuan, Le Zhou, Jingwen Leng, Bo Jiang (13 Nov 2023)
  7. Attention over pre-trained Sentence Embeddings for Long Document Classification — Amine Abdaoui, Sourav Dutta (18 Jul 2023)
  8. SpeechFormer++: A Hierarchical Efficient Framework for Paralinguistic Speech Processing — Weidong Chen, Xiaofen Xing, Xiangmin Xu, Jianxin Pang, Lan Du (27 Feb 2023)
  9. Finding the Law: Enhancing Statutory Article Retrieval via Graph Neural Networks — Antoine Louis, Gijs van Dijck, Gerasimos Spanakis (30 Jan 2023) [AILaw]
  10. A Survey on Natural Language Processing for Programming — Qingfu Zhu, Xianzhen Luo, Fang Liu, Cuiyun Gao, Wanxiang Che (12 Dec 2022)
  11. R²F: A General Retrieval, Reading and Fusion Framework for Document-level Natural Language Inference — Hao Wang, Yixin Cao, Yangguang Li, Zhen Huang, Kun Wang, Jing Shao (22 Oct 2022) [FedML]
  12. Transformer-based Entity Typing in Knowledge Graphs — Zhiwei Hu, Víctor Gutiérrez-Basulto, Zhiliang Xiang, Ru Li, Jeff Z. Pan (20 Oct 2022)
  13. ConReader: Exploring Implicit Relations in Contracts for Contract Clause Extraction — Weiwen Xu, Yang Deng, Wenqiang Lei, Wenlong Zhao, Tat-Seng Chua, W. Lam (17 Oct 2022) [AILaw]
  14. An Exploration of Hierarchical Attention Transformers for Efficient Long Document Classification — Ilias Chalkidis, Xiang Dai, Manos Fergadiotis, Prodromos Malakasiotis, Desmond Elliott (11 Oct 2022)
  15. ERNIE-SPARSE: Learning Hierarchical Efficient Transformer Through Regularized Self-Attention — Yang Liu, Jiaxiang Liu, L. Chen, Yuxiang Lu, Shi Feng, Zhida Feng, Yu Sun, Hao Tian, Huancheng Wu, Hai-feng Wang (23 Mar 2022)
  16. Fastformer: Additive Attention Can Be All You Need — Chuhan Wu, Fangzhao Wu, Tao Qi, Yongfeng Huang, Xing Xie (20 Aug 2021)
  17. A Survey of Transformers — Tianyang Lin, Yuxin Wang, Xiangyang Liu, Xipeng Qiu (08 Jun 2021) [ViT]
  18. Big Bird: Transformers for Longer Sequences — Manzil Zaheer, Guru Guruganesh, Kumar Avinava Dubey, Joshua Ainslie, Chris Alberti, ..., Philip Pham, Anirudh Ravula, Qifan Wang, Li Yang, Amr Ahmed (28 Jul 2020) [VLM]