DoT: An efficient Double Transformer for NLP tasks with tables

1 June 2021
Syrine Krichene, Thomas Müller, Julian Martin Eisenschlos

Papers citing "DoT: An efficient Double Transformer for NLP tasks with tables"

3 / 3 papers shown
Table Retrieval May Not Necessitate Table-specific Model Design
Zhiruo Wang, Zhengbao Jiang, Eric Nyberg, Graham Neubig
19 May 2022

Table Pre-training: A Survey on Model Architectures, Pre-training Objectives, and Downstream Tasks
Haoyu Dong, Zhoujun Cheng, Xinyi He, Mengyuan Zhou, Anda Zhou, Fan Zhou, Ao Liu, Shi Han, Dongmei Zhang
24 Jan 2022

MATE: Multi-view Attention for Table Transformer Efficiency
Julian Martin Eisenschlos, Maharshi Gor, Thomas Müller, William W. Cohen
09 Sep 2021