Semi-Structured Object Sequence Encoders
arXiv 2301.01015 · 3 January 2023
V. Rudramurthy, Riyaz Ahmad Bhat, Chulaka Gunasekara, Siva Sankalp Patel, H. Wan, Tejas I. Dhamecha, Danish Contractor, Marina Danilevsky

Papers citing "Semi-Structured Object Sequence Encoders"

37 papers, listed most recent first.

One Transformer for All Time Series: Representing and Training with Time-Dependent Heterogeneous Tabular Data
Simone Luetto, Fabrizio Garuti, E. Sangineto, L. Forni, Rita Cucchiara
LMTD, AI4TS · 107 · 12 · 0 · 13 Feb 2023

BLOOM: A 176B-Parameter Open-Access Multilingual Language Model
BigScience Workshop: Teven Le Scao, Angela Fan, Christopher Akiki, ..., Zhongli Xie, Zifan Ye, M. Bras, Younes Belkada, Thomas Wolf
VLM · 376 · 2,382 · 0 · 09 Nov 2022

MultiHiertt: Numerical Reasoning over Multi Hierarchical Tabular and Textual Data
Yilun Zhao, Yunxiang Li, Chenying Li, Rui Zhang
AIMat · 50 · 102 · 0 · 03 Jun 2022

Sequence-to-Sequence Knowledge Graph Completion and Question Answering
Apoorv Saxena, Adrian Kochsiek, Rainer Gemulla
AIMat · 101 · 128 · 0 · 19 Mar 2022

TableFormer: Robust Transformer Modeling for Table-Text Encoding
Jingfeng Yang, Aditya Gupta, Shyam Upadhyay, Luheng He, Rahul Goel, Shachi Paul
LMTD · 65 · 115 · 0 · 01 Mar 2022

The GatedTabTransformer. An enhanced deep learning architecture for tabular modeling
Radostin Cholakov, T. Kolev
LMTD · 39 · 13 · 0 · 01 Jan 2022

High-Resolution Image Synthesis with Latent Diffusion Models
Robin Rombach, A. Blattmann, Dominik Lorenz, Patrick Esser, Bjorn Ommer
3DV · 403 · 15,486 · 0 · 20 Dec 2021

Anomaly Transformer: Time Series Anomaly Detection with Association Discrepancy
Jiehui Xu, Haixu Wu, Jianmin Wang, Mingsheng Long
AI4TS · 63 · 497 · 0 · 06 Oct 2021

Long-Range Transformers for Dynamic Spatiotemporal Forecasting
J. E. Grigsby, Zhe Wang, Nam Nguyen, Yanjun Qi
AI4TS · 77 · 92 · 0 · 24 Sep 2021

FinQA: A Dataset of Numerical Reasoning over Financial Data
Zhiyu Chen, Wenhu Chen, Charese Smiley, Sameena Shah, Iana Borova, ..., Reema N Moussa, Matthew I. Beane, Ting-Hao 'Kenneth' Huang, Bryan R. Routledge, Wenjie Wang
AIMat · 107 · 338 · 0 · 01 Sep 2021

Dual Reader-Parser on Hybrid Textual and Tabular Evidence for Open Domain Question Answering
Alexander Hanbo Li, Patrick Ng, Peng Xu, Henghui Zhu, Zhiguo Wang, Bing Xiang
LMTD · 154 · 32 · 0 · 05 Aug 2021

Autoformer: Decomposition Transformers with Auto-Correlation for Long-Term Series Forecasting
Haixu Wu, Jiehui Xu, Jianmin Wang, Mingsheng Long
AI4TS · 101 · 2,245 · 0 · 24 Jun 2021

Nested Hierarchical Transformer: Towards Accurate, Data-Efficient and Interpretable Visual Understanding
Zizhao Zhang, Han Zhang, Long Zhao, Ting Chen, Sercan O. Arik, Tomas Pfister
ViT · 52 · 173 · 0 · 26 May 2021

TAT-QA: A Question Answering Benchmark on a Hybrid of Tabular and Textual Content in Finance
Fengbin Zhu, Wenqiang Lei, Youcheng Huang, Chao Wang, Shuo Zhang, Jiancheng Lv, Fuli Feng, Tat-Seng Chua
AIMat · 104 · 293 · 0 · 17 May 2021

TABBIE: Pretrained Representations of Tabular Data
H. Iida, Dung Ngoc Thai, Varun Manjunatha, Mohit Iyyer
LMTD, SSL, VLM · 65 · 175 · 0 · 06 May 2021

Retrieving Complex Tables with Multi-Granular Graph Representation Learning
Fei Wang, Kexuan Sun, Muhao Chen, Jay Pujara, Pedro A. Szekely
LMTD · 40 · 44 · 0 · 04 May 2021

RetaGNN: Relational Temporal Attentive Graph Neural Networks for Holistic Sequential Recommendation
Cheng-Mao Hsu, Cheng-Te Li
42 · 72 · 0 · 29 Jan 2021

Representations for Question Answering from Documents with Tables and Text
Vicky Zayats, Kristina Toutanova, Mari Ostendorf
LMTD · 58 · 37 · 0 · 26 Jan 2021

DialogBERT: Discourse-Aware Response Generation via Learning to Recover and Rank Utterances
X. Gu, Kang Min Yoo, Jung-Woo Ha
47 · 73 · 0 · 03 Dec 2020

Tabular Transformers for Modeling Multivariate Time Series
Inkit Padhi, Yair Schiff, Igor Melnyk, Mattia Rigotti, Youssef Mroueh, Pierre Dognin, Jerret Ross, Ravi Nair, Erik Altman
LMTD, AI4TS · 51 · 92 · 0 · 03 Nov 2020

A Transformer-based Framework for Multivariate Time Series Representation Learning
George Zerveas, Srideepika Jayaraman, Dhaval Patel, A. Bhamidipaty, Carsten Eickhoff
AI4TS · 78 · 918 · 0 · 06 Oct 2020

Big Bird: Transformers for Longer Sequences
Manzil Zaheer, Guru Guruganesh, Kumar Avinava Dubey, Joshua Ainslie, Chris Alberti, ..., Philip Pham, Anirudh Ravula, Qifan Wang, Li Yang, Amr Ahmed
VLM · 538 · 2,081 · 0 · 28 Jul 2020

Language Models are Few-Shot Learners
Tom B. Brown, Benjamin Mann, Nick Ryder, Melanie Subbiah, Jared Kaplan, ..., Christopher Berner, Sam McCandlish, Alec Radford, Ilya Sutskever, Dario Amodei
BDL · 743 · 41,894 · 0 · 28 May 2020

Scheduled DropHead: A Regularization Method for Transformer Models
Wangchunshu Zhou, Tao Ge, Ke Xu, Furu Wei, Ming Zhou
47 · 36 · 0 · 28 Apr 2020

HybridQA: A Dataset of Multi-Hop Question Answering over Tabular and Textual Data
Wenhu Chen, Hanwen Zha, Zhiyu Zoey Chen, Wenhan Xiong, Hong Wang, Wenjie Wang
61 · 302 · 0 · 15 Apr 2020

Longformer: The Long-Document Transformer
Iz Beltagy, Matthew E. Peters, Arman Cohan
RALM, VLM · 155 · 4,061 · 0 · 10 Apr 2020

AdaShare: Learning What To Share For Efficient Deep Multi-Task Learning
Ximeng Sun, Yikang Shen, Rogerio Feris, Kate Saenko
69 · 266 · 0 · 27 Nov 2019

BART: Denoising Sequence-to-Sequence Pre-training for Natural Language Generation, Translation, and Comprehension
M. Lewis, Yinhan Liu, Naman Goyal, Marjan Ghazvininejad, Abdel-rahman Mohamed, Omer Levy, Veselin Stoyanov, Luke Zettlemoyer
AIMat, VLM · 246 · 10,819 · 0 · 29 Oct 2019

XLNet: Generalized Autoregressive Pretraining for Language Understanding
Zhilin Yang, Zihang Dai, Yiming Yang, J. Carbonell, Ruslan Salakhutdinov, Quoc V. Le
AI4CE · 227 · 8,424 · 0 · 19 Jun 2019

HIBERT: Document Level Pre-training of Hierarchical Bidirectional Transformers for Document Summarization
Xingxing Zhang, Furu Wei, M. Zhou
75 · 379 · 0 · 16 May 2019

Attentive Single-Tasking of Multiple Tasks
Kevis-Kokitsi Maninis, Ilija Radosavovic, Iasonas Kokkinos
170 · 249 · 0 · 18 Apr 2019

Branched Multi-Task Networks: Deciding What Layers To Share
Simon Vandenhende, Stamatios Georgoulis, Bert De Brabandere, Luc Van Gool
73 · 145 · 0 · 05 Apr 2019

BERT: Pre-training of Deep Bidirectional Transformers for Language Understanding
Jacob Devlin, Ming-Wei Chang, Kenton Lee, Kristina Toutanova
VLM, SSL, SSeg · 1.7K · 94,770 · 0 · 11 Oct 2018

End-to-End Multi-Task Learning with Attention
Shikun Liu, Edward Johns, Andrew J. Davison
CVBM · 53 · 1,052 · 0 · 28 Mar 2018

Graph Attention Networks
Petar Velickovic, Guillem Cucurull, Arantxa Casanova, Adriana Romero, Pietro Lio, Yoshua Bengio
GNN · 468 · 20,124 · 0 · 30 Oct 2017

Attention Is All You Need
Ashish Vaswani, Noam M. Shazeer, Niki Parmar, Jakob Uszkoreit, Llion Jones, Aidan Gomez, Lukasz Kaiser, Illia Polosukhin
3DV · 687 · 131,526 · 0 · 12 Jun 2017

Fully-adaptive Feature Sharing in Multi-Task Networks with Applications in Person Attribute Classification
Y. Lu, Abhishek Kumar, Shuangfei Zhai, Yu Cheng, T. Javidi, Rogerio Feris
3DH · 52 · 387 · 0 · 16 Nov 2016