ResearchTrend.AI

Language Models are Few-Shot Learners (arXiv:2005.14165)

28 May 2020
Tom B. Brown
Benjamin Mann
Nick Ryder
Melanie Subbiah
Jared Kaplan
Prafulla Dhariwal
Arvind Neelakantan
Pranav Shyam
Girish Sastry
Amanda Askell
Sandhini Agarwal
Ariel Herbert-Voss
Gretchen Krueger
T. Henighan
R. Child
Aditya A. Ramesh
Daniel M. Ziegler
Jeff Wu
Clemens Winter
Christopher Hesse
Mark Chen
Eric Sigler
Mateusz Litwin
Scott Gray
B. Chess
Jack Clark
Christopher Berner
Sam McCandlish
Alec Radford
Ilya Sutskever
Dario Amodei
    BDL

Papers citing "Language Models are Few-Shot Learners"

50 / 11,497 papers shown
SiT: Self-supervised vIsion Transformer
Sara Atito Ali Ahmed
Muhammad Awais
J. Kittler
ViT
39
139
0
08 Apr 2021
Low-Regret Active learning
Cenk Baykal
Lucas Liebenwein
Dan Feldman
Daniela Rus
UQCV
38
3
0
06 Apr 2021
Comparing Transfer and Meta Learning Approaches on a Unified Few-Shot Classification Benchmark
Vincent Dumoulin
N. Houlsby
Utku Evci
Xiaohua Zhai
Ross Goroshin
Sylvain Gelly
Hugo Larochelle
38
26
0
06 Apr 2021
Enabling Inference Privacy with Adaptive Noise Injection
Sanjay Kariyappa
Ousmane Amadou Dia
Moinuddin K. Qureshi
26
5
0
06 Apr 2021
Distributed Learning in Wireless Networks: Recent Progress and Future Challenges
Mingzhe Chen
Deniz Gündüz
Kaibin Huang
Walid Saad
M. Bennis
Aneta Vulgarakis Feljan
H. Vincent Poor
45
402
0
05 Apr 2021
What Will it Take to Fix Benchmarking in Natural Language Understanding?
Samuel R. Bowman
George E. Dahl
ELM
ALM
30
156
0
05 Apr 2021
Compressing Visual-linguistic Model via Knowledge Distillation
Zhiyuan Fang
Jianfeng Wang
Xiaowei Hu
Lijuan Wang
Yezhou Yang
Zicheng Liu
VLM
39
97
0
05 Apr 2021
Efficient Transformers in Reinforcement Learning using Actor-Learner Distillation
Emilio Parisotto
Ruslan Salakhutdinov
42
44
0
04 Apr 2021
Recommending Metamodel Concepts during Modeling Activities with Pre-Trained Language Models
Martin Weyssow
H. Sahraoui
Eugene Syriani
16
50
0
04 Apr 2021
Deepfake Detection Scheme Based on Vision Transformer and Distillation
Young-Jin Heo
Y. Choi
Young-Woon Lee
Byung-Gyu Kim
ViT
17
55
0
03 Apr 2021
Towards General Purpose Vision Systems
Tanmay Gupta
Amita Kamath
Aniruddha Kembhavi
Derek Hoiem
11
50
0
01 Apr 2021
Going deeper with Image Transformers
Hugo Touvron
Matthieu Cord
Alexandre Sablayrolles
Gabriel Synnaeve
Hervé Jégou
ViT
27
988
0
31 Mar 2021
Learning Generalizable Robotic Reward Functions from "In-The-Wild" Human Videos
Annie S. Chen
Suraj Nair
Chelsea Finn
38
137
0
31 Mar 2021
BASE Layers: Simplifying Training of Large, Sparse Models
M. Lewis
Shruti Bhosale
Tim Dettmers
Naman Goyal
Luke Zettlemoyer
MoE
33
274
0
30 Mar 2021
EnergyVis: Interactively Tracking and Exploring Energy Consumption for ML Models
Omar Shaikh
Jon Saad-Falcon
Austin P. Wright
Nilaksh Das
Scott Freitas
O. Asensio
Duen Horng Chau
27
18
0
30 Mar 2021
Retraining DistilBERT for a Voice Shopping Assistant by Using Universal Dependencies
P. Jayarao
Arpit Sharma
21
2
0
29 Mar 2021
ViViT: A Video Vision Transformer
Anurag Arnab
Mostafa Dehghani
G. Heigold
Chen Sun
Mario Lucic
Cordelia Schmid
ViT
30
2,093
0
29 Mar 2021
On the Adversarial Robustness of Vision Transformers
Rulin Shao
Zhouxing Shi
Jinfeng Yi
Pin-Yu Chen
Cho-Jui Hsieh
ViT
33
138
0
29 Mar 2021
Machine Learning Meets Natural Language Processing -- The story so far
N. Galanis
P. Vafiadis
K.-G. Mirzaev
G. Papakostas
38
7
0
27 Mar 2021
Data Augmentation in Natural Language Processing: A Novel Text Generation Approach for Long and Short Text Classifiers
Markus Bayer
M. Kaufhold
Björn Buchhold
Marcel Keller
J. Dallmeyer
Christian A. Reuter
31
114
0
26 Mar 2021
Vision Transformers for Dense Prediction
René Ranftl
Alexey Bochkovskiy
V. Koltun
ViT
MDE
45
1,667
0
24 Mar 2021
FastMoE: A Fast Mixture-of-Expert Training System
Jiaao He
J. Qiu
Aohan Zeng
Zhilin Yang
Jidong Zhai
Jie Tang
ALM
MoE
45
94
0
24 Mar 2021
Representing Numbers in NLP: a Survey and a Vision
Avijit Thawani
Jay Pujara
Pedro A. Szekely
Filip Ilievski
32
114
0
24 Mar 2021
Finetuning Pretrained Transformers into RNNs
Jungo Kasai
Hao Peng
Yizhe Zhang
Dani Yogatama
Gabriel Ilharco
Nikolaos Pappas
Yi Mao
Weizhu Chen
Noah A. Smith
44
63
0
24 Mar 2021
Thinking Aloud: Dynamic Context Generation Improves Zero-Shot Reasoning Performance of GPT-2
Gregor Betz
Kyle Richardson
Christian Voigt
ReLM
LRM
24
30
0
24 Mar 2021
Multi-view 3D Reconstruction with Transformer
Dan Wang
Xinrui Cui
Xun Chen
Zhengxia Zou
Tianyang Shi
Septimiu Salcudean
Z. J. Wang
Rabab Ward
ViT
22
87
0
24 Mar 2021
The NLP Cookbook: Modern Recipes for Transformer based Deep Learning Architectures
Sushant Singh
A. Mahmood
AI4TS
60
94
0
23 Mar 2021
How to decay your learning rate
Aitor Lewkowycz
41
24
0
23 Mar 2021
Are Neural Language Models Good Plagiarists? A Benchmark for Neural Paraphrase Detection
Jan Philip Wahle
Terry Ruas
Norman Meuschke
Bela Gipp
30
34
0
23 Mar 2021
Detecting Hate Speech with GPT-3
Ke-Li Chiu
Annie Collins
Rohan Alexander
AILaw
25
108
0
23 Mar 2021
Tiny Transformers for Environmental Sound Classification at the Edge
David Elliott
Carlos E. Otero
Steven Wyatt
Evan Martino
21
15
0
22 Mar 2021
End-to-End Trainable Multi-Instance Pose Estimation with Transformers
Lucas Stoffl
Maxime Vidal
Alexander Mathis
ViT
23
49
0
22 Mar 2021
Quality at a Glance: An Audit of Web-Crawled Multilingual Datasets
Julia Kreutzer
Isaac Caswell
Lisa Wang
Ahsan Wahab
D. Esch
...
Duygu Ataman
Orevaoghene Ahia
Oghenefego Ahia
Sweta Agrawal
Mofetoluwa Adeyemi
20
269
0
22 Mar 2021
Improving and Simplifying Pattern Exploiting Training
Derek Tam
Rakesh R Menon
Joey Tianyi Zhou
Shashank Srivastava
Colin Raffel
21
149
0
22 Mar 2021
DeepViT: Towards Deeper Vision Transformer
Daquan Zhou
Bingyi Kang
Xiaojie Jin
Linjie Yang
Xiaochen Lian
Zihang Jiang
Qibin Hou
Jiashi Feng
ViT
42
510
0
22 Mar 2021
Attribute Alignment: Controlling Text Generation from Pre-trained Language Models
Dian Yu
Zhou Yu
Kenji Sagae
21
37
0
20 Mar 2021
Paint by Word
A. Andonian
David Bau
Audrey Cui
YeonHwan Park
Ali Jahanian
Antonio Torralba
A. Oliva
DiffM
20
125
0
19 Mar 2021
GPT Understands, Too
Xiao Liu
Yanan Zheng
Zhengxiao Du
Ming Ding
Yujie Qian
Zhilin Yang
Jie Tang
VLM
87
1,146
0
18 Mar 2021
GLM: General Language Model Pretraining with Autoregressive Blank Infilling
Zhengxiao Du
Yujie Qian
Xiao Liu
Ming Ding
J. Qiu
Zhilin Yang
Jie Tang
BDL
AI4CE
36
1,492
0
18 Mar 2021
Towards Few-Shot Fact-Checking via Perplexity
Nayeon Lee
Yejin Bang
Andrea Madotto
Madian Khabsa
Pascale Fung
AAML
13
90
0
17 Mar 2021
How Many Data Points is a Prompt Worth?
Teven Le Scao
Alexander M. Rush
VLM
66
296
0
15 Mar 2021
A Whole Brain Probabilistic Generative Model: Toward Realizing Cognitive Architectures for Developmental Robots
T. Taniguchi
Hiroshi Yamakawa
Takayuki Nagai
Kenji Doya
M. Sakagami
Masahiro Suzuki
Tomoaki Nakamura
Akira Taniguchi
28
23
0
15 Mar 2021
Revisiting ResNets: Improved Training and Scaling Strategies
Irwan Bello
W. Fedus
Xianzhi Du
E. D. Cubuk
A. Srinivas
Nayeon Lee
Jonathon Shlens
Barret Zoph
31
298
0
13 Mar 2021
Inductive Relation Prediction by BERT
H. Zha
Zhiyu Zoey Chen
Xifeng Yan
29
54
0
12 Mar 2021
CANINE: Pre-training an Efficient Tokenization-Free Encoder for Language Representation
J. Clark
Dan Garrette
Iulia Turc
John Wieting
36
210
0
11 Mar 2021
Integration of Convolutional Neural Networks in Mobile Applications
Roger Creus Castanyer
Silverio Martínez-Fernández
Xavier Franch
29
12
0
11 Mar 2021
BYOL for Audio: Self-Supervised Learning for General-Purpose Audio Representation
Daisuke Niizumi
Daiki Takeuchi
Yasunori Ohishi
N. Harada
K. Kashino
SSL
38
175
0
11 Mar 2021
CUAD: An Expert-Annotated NLP Dataset for Legal Contract Review
Dan Hendrycks
Collin Burns
Anya Chen
Spencer Ball
ELM
AILaw
23
184
0
10 Mar 2021
Pretrained Transformers as Universal Computation Engines
Kevin Lu
Aditya Grover
Pieter Abbeel
Igor Mordatch
28
217
0
09 Mar 2021
Knowledge Evolution in Neural Networks
Ahmed Taha
Abhinav Shrivastava
L. Davis
49
21
0
09 Mar 2021