General Purpose Text Embeddings from Pre-trained Language Models for Scalable Inference
arXiv:2004.14287 · 29 April 2020
Jingfei Du, Myle Ott, Haoran Li, Xing Zhou, Veselin Stoyanov
Papers citing "General Purpose Text Embeddings from Pre-trained Language Models for Scalable Inference" (7 papers shown)
Robust Concept Erasure via Kernelized Rate-Distortion Maximization
Somnath Basu Roy Chowdhury, Nicholas Monath, Kumar Avinava Dubey, Amr Ahmed, Snigdha Chaturvedi
30 Nov 2023

Plug-and-Play Document Modules for Pre-trained Models
Chaojun Xiao, Zhengyan Zhang, Xu Han, Chi-Min Chan, Yankai Lin, Zhiyuan Liu, Xiangyang Li, Zhonghua Li, Bo Zhao, Maosong Sun
28 May 2023

Learning Easily Updated General Purpose Text Representations with Adaptable Task-Specific Prefixes
Kuan-Hao Huang, L. Tan, Rui Hou, Sinong Wang, Amjad Almahairi, Ruty Rinott
22 May 2023

State-of-the-art generalisation research in NLP: A taxonomy and review
Dieuwke Hupkes, Mario Giulianelli, Verna Dankers, Mikel Artetxe, Yanai Elazar, ..., Leila Khalatbari, Maria Ryskina, Rita Frieske, Ryan Cotterell, Zhijing Jin
06 Oct 2022

AI and the Everything in the Whole Wide World Benchmark
Inioluwa Deborah Raji, Emily M. Bender, Amandalynne Paullada, Emily L. Denton, A. Hanna
26 Nov 2021

Question Answering Infused Pre-training of General-Purpose Contextualized Representations
Robin Jia, M. Lewis, Luke Zettlemoyer
15 Jun 2021

GLUE: A Multi-Task Benchmark and Analysis Platform for Natural Language Understanding
Alex Wang, Amanpreet Singh, Julian Michael, Felix Hill, Omer Levy, Samuel R. Bowman
20 Apr 2018