xTrimoGene: An Efficient and Scalable Representation Learner for Single-Cell RNA-Seq Data
arXiv:2311.15156 (v2, latest) · 26 November 2023
Jing Gong, Minsheng Hao, Xingyi Cheng, Xin Zeng, Chiming Liu, Jianzhu Ma, Xuegong Zhang, Taifeng Wang, Leo T. Song
Papers citing "xTrimoGene: An Efficient and Scalable Representation Learner for Single-Cell RNA-Seq Data" (11 of 11 shown)
Bidirectional Mamba for Single-Cell Data: Efficient Context Learning with Biological Fidelity
Cong Qi, Hanzhang Fang, Tianxing Hu, Siqi Jiang, Wei Zhi
22 Apr 2025 · Mamba

Scaling Laws vs Model Architectures: How does Inductive Bias Influence Scaling?
Yi Tay, Mostafa Dehghani, Samira Abnar, Hyung Won Chung, W. Fedus, J. Rao, Sharan Narang, Vinh Q. Tran, Dani Yogatama, Donald Metzler
21 Jul 2022 · AI4CE

On Embeddings for Numerical Features in Tabular Deep Learning
Yura Gorishniy, Ivan Rubachev, Artem Babenko
10 Mar 2022 · LMTD

OntoProtein: Protein Pretraining With Gene Ontology Embedding
Ningyu Zhang, Zhen Bi, Xiaozhuan Liang, Shuyang Cheng, Haosen Hong, Shumin Deng, J. Lian, Qiang Zhang, Huajun Chen
23 Jan 2022

Masked Autoencoders Are Scalable Vision Learners
Kaiming He, Xinlei Chen, Saining Xie, Yanghao Li, Piotr Dollár, Ross B. Girshick
11 Nov 2021 · ViT, TPM

Modeling Protein Using Large-scale Pretrain Language Model
Yijia Xiao, J. Qiu, Ziang Li, Chang-Yu Hsieh, Jie Tang
17 Aug 2021

DeepDDS: deep graph neural network with attention mechanism to predict synergistic drug combinations
Jinxian Wang, Xuejun Liu, Siyuan Shen, L. Deng, Hui Liu
06 Jul 2021 · GNN

Pre-Trained Models: Past, Present and Future
Xu Han, Zhengyan Zhang, Ning Ding, Yuxian Gu, Xiao Liu, ..., Jie Tang, Ji-Rong Wen, Jinhui Yuan, Wayne Xin Zhao, Jun Zhu
14 Jun 2021 · AIFin, MQ, AI4MH

Pre-trained Models for Natural Language Processing: A Survey
Xipeng Qiu, Tianxiang Sun, Yige Xu, Yunfan Shao, Ning Dai, Xuanjing Huang
18 Mar 2020 · LM&MA, VLM

Scaling Laws for Neural Language Models
Jared Kaplan, Sam McCandlish, T. Henighan, Tom B. Brown, B. Chess, R. Child, Scott Gray, Alec Radford, Jeff Wu, Dario Amodei
23 Jan 2020

Using Pre-Training Can Improve Model Robustness and Uncertainty
Dan Hendrycks, Kimin Lee, Mantas Mazeika
28 Jan 2019 · NoLa