arXiv: 2203.05877
A Sentence is Worth 128 Pseudo Tokens: A Semantic-Aware Contrastive Learning Framework for Sentence Embeddings
11 March 2022
Haochen Tan, Wei Shao, Han Wu, Ke Yang, Linqi Song
Papers citing
"A Sentence is Worth 128 Pseudo Tokens: A Semantic-Aware Contrastive Learning Framework for Sentence Embeddings"
6 / 6 papers shown
A Comprehensive Survey of Sentence Representations: From the BERT Epoch to the ChatGPT Era and Beyond
Abhinav Ramesh Kashyap, Thanh-Tung Nguyen, Viktor Schlegel, Stefan Winkler, See-Kiong Ng, Soujanya Poria
Tags: AI4TS, 3DV, SSL
34 · 6 · 0
22 May 2023
Unsupervised Sentence Representation Learning with Frequency-induced Adversarial Tuning and Incomplete Sentence Filtering
Bing Wang, Ximing Li, Zhiyao Yang, Yuanyuan Guan, Jiayin Li, Sheng-sheng Wang
35 · 6 · 0
15 May 2023
The Power of Scale for Parameter-Efficient Prompt Tuning
Brian Lester, Rami Al-Rfou, Noah Constant
Tags: VPVLM
280 · 3,858 · 0
18 Apr 2021
COCO-LM: Correcting and Contrasting Text Sequences for Language Model Pretraining
Yu Meng, Chenyan Xiong, Payal Bajaj, Saurabh Tiwary, Paul N. Bennett, Jiawei Han, Xia Song
125 · 203 · 0
16 Feb 2021
Making Pre-trained Language Models Better Few-shot Learners
Tianyu Gao, Adam Fisch, Danqi Chen
243 · 1,924 · 0
31 Dec 2020
Language Models as Knowledge Bases?
Fabio Petroni, Tim Rocktäschel, Patrick Lewis, A. Bakhtin, Yuxiang Wu, Alexander H. Miller, Sebastian Riedel
Tags: KELM, AI4MH
419 · 2,588 · 0
03 Sep 2019