arXiv:2110.15527
Pre-training Co-evolutionary Protein Representation via A Pairwise Masked Language Model
29 October 2021
Liang He, Shizhuo Zhang, Lijun Wu, Huanhuan Xia, Fusong Ju, He Zhang, Siyuan Liu, Yingce Xia, Jianwei Zhu, Pan Deng, Jia Zhang, Tao Qin, Tie-Yan Liu
Papers citing "Pre-training Co-evolutionary Protein Representation via A Pairwise Masked Language Model" (14 papers):
- Modeling Protein Using Large-scale Pretrain Language Model. Yijia Xiao, J. Qiu, Ziang Li, Chang-Yu Hsieh, Jie Tang. 17 Aug 2021.
- Profile Prediction: An Alignment-Based Pre-Training Task for Protein Sequence Models. Pascal Sturmfels, Jesse Vig, Ali Madani, Nazneen Rajani. 01 Dec 2020.
- ProtTrans: Towards Cracking the Language of Life's Code Through Self-Supervised Deep Learning and High Performance Computing. Ahmed Elnaggar, M. Heinzinger, Christian Dallago, Ghalia Rehawi, Yu Wang, ..., Tamas B. Fehér, Christoph Angerer, Martin Steinegger, D. Bhowmik, B. Rost. 13 Jul 2020.
- BERTology Meets Biology: Interpreting Attention in Protein Language Models. Jesse Vig, Ali Madani, Lav Varshney, Caiming Xiong, R. Socher, Nazneen Rajani. 26 Jun 2020.
- ProGen: Language Modeling for Protein Generation. Ali Madani, Bryan McCann, Nikhil Naik, N. Keskar, N. Anand, Raphael R. Eguchi, Po-Ssu Huang, R. Socher. 08 Mar 2020.
- Pre-Training of Deep Bidirectional Protein Sequence Representations with Structural Information. Seonwoo Min, Seunghyun Park, Siwon Kim, Hyun-Soo Choi, Byunghan Lee, Sungroh Yoon. 25 Nov 2019.
- RoBERTa: A Robustly Optimized BERT Pretraining Approach. Yinhan Liu, Myle Ott, Naman Goyal, Jingfei Du, Mandar Joshi, Danqi Chen, Omer Levy, M. Lewis, Luke Zettlemoyer, Veselin Stoyanov. 26 Jul 2019.
- Evaluating Protein Transfer Learning with TAPE. Roshan Rao, Nicholas Bhattacharya, Neil Thomas, Yan Duan, Xi Chen, John F. Canny, Pieter Abbeel, Yun S. Song. 19 Jun 2019.
- Learning protein sequence embeddings using information from structure. Tristan Bepler, Bonnie Berger. 22 Feb 2019.
- BERT: Pre-training of Deep Bidirectional Transformers for Language Understanding. Jacob Devlin, Ming-Wei Chang, Kenton Lee, Kristina Toutanova. 11 Oct 2018.
- Deep contextualized word representations. Matthew E. Peters, Mark Neumann, Mohit Iyyer, Matt Gardner, Christopher Clark, Kenton Lee, Luke Zettlemoyer. 15 Feb 2018.
- Deep generative models of genetic variation capture mutation effects. Adam J. Riesselman, John Ingraham, D. Marks. 18 Dec 2017.
- Attention Is All You Need. Ashish Vaswani, Noam M. Shazeer, Niki Parmar, Jakob Uszkoreit, Llion Jones, Aidan Gomez, Lukasz Kaiser, Illia Polosukhin. 12 Jun 2017.
- Accurate De Novo Prediction of Protein Contact Map by Ultra-Deep Learning Model. Sheng Wang, S. Sun, Zerui Li, Renyu Zhang, Jinbo Xu. 02 Sep 2016.