ResearchTrend.AI

Structure Inducing Pre-Training
arXiv:2103.10334 · 18 March 2021
Matthew B. A. McDermott, Brendan Yap, Peter Szolovits, Marinka Zitnik

Papers citing "Structure Inducing Pre-Training" (10 / 10 papers shown)
  1. Event Stream GPT: A Data Pre-processing and Modeling Library for Generative, Pre-trained Transformers over Continuous-time Sequences of Complex Events (20 Jun 2023)
     Matthew B. A. McDermott, Bret A. Nestor, Peniel Argaw, I. Kohane
  2. A 3D-Shape Similarity-based Contrastive Approach to Molecular Representation Learning (03 Nov 2022)
     Austin O. Atsango, N. Diamant, Ziqing Lu, Tommaso Biancalani, Gabriele Scalia, Kangway V. Chuang
  3. Self-Supervised Contrastive Pre-Training For Time Series via Time-Frequency Consistency (17 Jun 2022)
     Xiang Zhang, Ziyuan Zhao, Theodoros Tsiligkaridis, Marinka Zitnik
  4. Multitask Prompted Training Enables Zero-Shot Task Generalization (15 Oct 2021)
     Victor Sanh, Albert Webson, Colin Raffel, Stephen H. Bach, Lintang Sutawika, ..., T. Bers, Stella Biderman, Leo Gao, Thomas Wolf, Alexander M. Rush
  5. Dict-BERT: Enhancing Language Model Pre-training with Dictionary (13 Oct 2021)
     W. Yu, Chenguang Zhu, Yuwei Fang, Donghan Yu, Shuohang Wang, Yichong Xu, Michael Zeng, Meng Jiang
  6. COCO-LM: Correcting and Contrasting Text Sequences for Language Model Pretraining (16 Feb 2021)
     Yu Meng, Chenyan Xiong, Payal Bajaj, Saurabh Tiwary, Paul N. Bennett, Jiawei Han, Xia Song
  7. ERICA: Improving Entity and Relation Understanding for Pre-trained Language Models via Contrastive Learning (30 Dec 2020)
     Yujia Qin, Yankai Lin, Ryuichi Takanobu, Zhiyuan Liu, Peng Li, Heng Ji, Minlie Huang, Maosong Sun, Jie Zhou
  8. A Mutual Information Maximization Perspective of Language Representation Learning (18 Oct 2019)
     Lingpeng Kong, Cyprien de Masson d'Autume, Wang Ling, Lei Yu, Zihang Dai, Dani Yogatama
  9. K-BERT: Enabling Language Representation with Knowledge Graph (17 Sep 2019)
     Weijie Liu, Peng Zhou, Zhe Zhao, Zhiruo Wang, Qi Ju, Haotang Deng, Ping Wang
  10. Knowledge Enhanced Contextual Word Representations (09 Sep 2019)
      Matthew E. Peters, Mark Neumann, Robert L. Logan IV, Roy Schwartz, Vidur Joshi, Sameer Singh, Noah A. Smith